Science.gov

Sample records for additive Poisson models

  1. Relaxed Poisson cure rate models.

    PubMed

    Rodrigues, Josemar; Cordeiro, Gauss M; Cancho, Vicente G; Balakrishnan, N

    2016-03-01

    The purpose of this article is to make the standard promotion cure rate model (Yakovlev and Tsodikov, ) more flexible by assuming that the number of lesions or altered cells after a treatment follows a fractional Poisson distribution (Laskin, ). It is proved that the well-known Mittag-Leffler relaxation function (Berberan-Santos, ) is a simple way to obtain a new cure rate model that is a compromise between the promotion and geometric cure rate models allowing for superdispersion. So, the relaxed cure rate model developed here can be considered as a natural and less restrictive extension of the popular Poisson cure rate model at the cost of an additional parameter, but a competitor to negative-binomial cure rate models (Rodrigues et al., ). Some mathematical properties of a proper relaxed Poisson density are explored. A simulation study and an illustration of the proposed cure rate model from the Bayesian point of view are finally presented. PMID:26686485
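
    As a hedged sketch of the standard promotion (Poisson) cure rate model that this record generalizes: a Poisson(theta) number of latent lesions, each activating at a time with CDF F, gives the population survival S_pop(t) = exp(-theta * F(t)), so a cured fraction exp(-theta) never experiences the event. The exponential activation-time CDF below is an illustrative assumption, not taken from the paper.

```python
import math

def promotion_survival(t, theta, F):
    # Promotion (Poisson) cure rate model: N ~ Poisson(theta) latent lesions,
    # each activating at a time with CDF F; S_pop(t) = exp(-theta * F(t))
    return math.exp(-theta * F(t))

# Illustrative exponential activation-time CDF (an assumption for this sketch)
def F_exp(t):
    return 1.0 - math.exp(-t)

cured_fraction = math.exp(-2.0)  # limiting survival as t grows, for theta = 2
```

    The survival curve starts at 1 and levels off at the cured fraction rather than decaying to zero, which is the defining feature of cure rate models.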

  2. Modelling of filariasis in East Java with Poisson regression and generalized Poisson regression models

    NASA Astrophysics Data System (ADS)

    Darnah

    2016-04-01

    Poisson regression is used when the response variable is count data following the Poisson distribution. The Poisson distribution assumes equidispersion, i.e., variance equal to the mean. In practice, count data are often overdispersed or underdispersed, and Poisson regression is then inappropriate: it may underestimate the standard errors and overstate the significance of the regression parameters, and consequently give misleading inference about them. This paper suggests the generalized Poisson regression model to handle overdispersion and underdispersion in the Poisson regression model. The Poisson regression model and the generalized Poisson regression model are applied to the number of filariasis cases in East Java. Based on the Poisson regression model, the factors influencing filariasis are the percentage of families who do not practice clean and healthy living and the percentage of families who do not have a healthy house. The Poisson regression model exhibits overdispersion, so generalized Poisson regression is used instead. The best generalized Poisson regression model shows that the factor influencing filariasis is the percentage of families who do not have a healthy house. According to the model, each additional 1 percent of families without a healthy house corresponds to one additional filariasis patient.
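
    The overdispersion check that motivates the generalized Poisson model can be sketched as a variance-to-mean ratio; the counts below are made-up illustrative numbers, not the East Java data.

```python
from statistics import mean, pvariance

def dispersion_index(counts):
    # Variance-to-mean ratio: ~1 for Poisson data, >1 overdispersed, <1 underdispersed
    m = mean(counts)
    return pvariance(counts, m) / m

counts = [0, 1, 1, 2, 9, 0, 0, 3]  # made-up case counts
overdispersed = dispersion_index(counts) > 1.0
```

    A ratio well above 1, as here, is the informal signal that a plain Poisson fit will understate standard errors.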

  3. Impact of Influenza on Outpatient Visits, Hospitalizations, and Deaths by Using a Time Series Poisson Generalized Additive Model

    PubMed Central

    Guo, Ru-ning; Zheng, Hui-zhen; Ou, Chun-quan; Huang, Li-qun; Zhou, Yong; Zhang, Xin; Liang, Can-kun; Lin, Jin-yan; Zhong, Hao-jie; Song, Tie; Luo, Hui-ming

    2016-01-01

    Background The disease burden associated with influenza in developing tropical and subtropical countries is poorly understood owing to the lack of a comprehensive disease surveillance system and information-exchange mechanisms. The impact of influenza on outpatient visits, hospital admissions, and deaths has not been fully demonstrated to date in south China. Methods A time series Poisson generalized additive model was used to quantitatively assess influenza-like illness (ILI) and influenza disease burden by using influenza surveillance data in Zhuhai City from 2007 to 2009, combined with the outpatient, inpatient, and respiratory disease mortality data of the same period. Results The influenza activity in Zhuhai City demonstrated a typical subtropical seasonal pattern; however, each influenza virus subtype showed a specific transmission variation. The weekly ILI case number and virus isolation rate had a very close positive correlation (r = 0.774, P < 0.0001). The impact of ILI and influenza on weekly outpatient visits was statistically significant (P < 0.05). We determined that 10.7% of outpatient visits were associated with ILI and 1.88% were associated with influenza. ILI also had a significant influence on the hospitalization rates (P < 0.05), but mainly in populations <25 years of age. No statistically significant effect of influenza on hospital admissions was found (P > 0.05). The impact of ILI on chronic obstructive pulmonary disease (COPD) was most significant (P < 0.05), with 33.1% of COPD-related deaths being attributable to ILI. The impact of influenza on the mortality rate requires further evaluation. Conclusions ILI is a feasible indicator of influenza activity. Both ILI and influenza have a large impact on outpatient visits. Although ILI affects the number of hospital admissions and deaths, we found no consistent influence of influenza, which requires further assessment. PMID:26894876

  4. Estimation of count data using mixed Poisson, generalized Poisson and finite Poisson mixture regression models

    NASA Astrophysics Data System (ADS)

    Zamani, Hossein; Faroughi, Pouya; Ismail, Noriszura

    2014-06-01

    This study relates the Poisson, mixed Poisson (MP), generalized Poisson (GP) and finite Poisson mixture (FPM) regression models through the mean-variance relationship, and suggests the application of these models for overdispersed count data. As an illustration, the regression models are fitted to the US skin care count data. The results indicate that the FPM regression model is the best model since it provides the largest log likelihood and the smallest AIC, followed by the Poisson-Inverse Gaussian (PIG), GP and negative binomial (NB) regression models. The results also show that the NB, PIG and GP regression models provide similar results.
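
    Model selection by smallest AIC, as used above, can be sketched as follows; the log-likelihoods and parameter counts are hypothetical placeholders, not the paper's fitted values.

```python
def aic(loglik, n_params):
    # Akaike information criterion: 2k - 2 ln L; smaller is better
    return 2.0 * n_params - 2.0 * loglik

# Hypothetical fitted values (placeholders, not the paper's numbers)
models = {"Poisson": (-250.0, 3), "GP": (-240.0, 4), "FPM": (-235.0, 6)}
best = min(models, key=lambda name: aic(*models[name]))
```

    Note that AIC penalizes extra parameters, so a richer mixture model only wins if its log-likelihood gain outweighs the penalty.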

  5. MODELING PAVEMENT DETERIORATION PROCESSES BY POISSON HIDDEN MARKOV MODELS

    NASA Astrophysics Data System (ADS)

    Nam, Le Thanh; Kaito, Kiyoyuki; Kobayashi, Kiyoshi; Okizuka, Ryosuke

    In pavement management, it is important to estimate lifecycle cost, which is composed of the expenses for repairing local damages, including potholes, and repairing and rehabilitating the surface and base layers of pavements, including overlays. In this study, a model is produced under the assumption that the deterioration process of pavement is a complex one that includes local damages, which occur frequently, and the deterioration of the surface and base layers of pavement, which progresses slowly. The variation in pavement soundness is expressed by the Markov deterioration model, and the Poisson hidden Markov deterioration model, in which the frequency of local damage depends on the distribution of pavement soundness, is formulated. In addition, the authors suggest a model estimation method using the Markov Chain Monte Carlo (MCMC) method, and attempt to demonstrate the applicability of the proposed Poisson hidden Markov deterioration model by studying concrete application cases.
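
    A minimal sketch of likelihood evaluation for a hidden Markov model with Poisson emissions (the scaled forward algorithm); the states, transition matrix and rates below are illustrative, and the paper's actual estimation uses MCMC rather than this direct evaluation.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def forward_loglik(obs, pi, A, lams):
    # Scaled forward algorithm: pi = initial state probabilities,
    # A[i][j] = transition probability i -> j, lams[i] = Poisson rate in state i
    n = len(pi)
    alpha = [pi[i] * poisson_pmf(obs[0], lams[i]) for i in range(n)]
    c = sum(alpha)
    loglik = math.log(c)
    alpha = [a / c for a in alpha]
    for y in obs[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * poisson_pmf(y, lams[i])
                 for i in range(n)]
        c = sum(alpha)
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
    return loglik

# Two hidden states: "sound" (low damage rate) and "deteriorated" (high rate)
ll = forward_loglik([2, 0, 5, 4, 0], [0.5, 0.5],
                    [[0.9, 0.1], [0.2, 0.8]], [0.5, 4.0])
```

    The scaling step keeps the recursion numerically stable on long observation sequences.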

  6. Application of Poisson random effect models for highway network screening.

    PubMed

    Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer

    2014-02-01

    In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data became popular in traffic safety research. This study employs random effect Poisson Log-Normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. The Potential for Safety Improvement (PSI) was adopted as a measure of the crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson Log-Normal model. A series of method examination tests was conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and the modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only a temporal random effect, and both are superior to the conventional Poisson Log-Normal model (PLN) and the EB model in fitting the crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification. PMID:24269863

  7. Nonlocal Poisson-Fermi model for ionic solvent

    NASA Astrophysics Data System (ADS)

    Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob

    2016-07-01

    We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution.

  9. A Poisson model for random multigraphs

    PubMed Central

    Ranola, John M. O.; Ahn, Sangtae; Sehl, Mary; Smith, Desmond J.; Lange, Kenneth

    2010-01-01

    Motivation: Biological networks are often modeled by random graphs. A better modeling vehicle is a multigraph where each pair of nodes is connected by a Poisson number of edges. In the current model, the mean number of edges equals the product of two propensities, one for each node. In this context it is possible to construct a simple and effective algorithm for rapid maximum likelihood estimation of all propensities. Given estimated propensities, it is then possible to test statistically for functionally connected nodes that show an excess of observed edges over expected edges. The model extends readily to directed multigraphs. Here, propensities are replaced by outgoing and incoming propensities. Results: The theory is applied to real data on neuronal connections, interacting genes in radiation hybrids, interacting proteins in a literature-curated database, and letter and word pairs in seven Shakespearean plays. Availability: All data used are fully available online from their respective sites. Source code and software are available from http://code.google.com/p/poisson-multigraph/ Contact: klange@ucla.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20554690
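
    Setting the score equations of the Poisson multigraph likelihood to zero gives the fixed point p_i = d_i / sum_{j != i} p_j, where d_i is the total edge count at node i; this suggests a simple cyclic-update fit. A sketch under that model, with toy edge counts (not the paper's algorithm in detail):

```python
def fit_propensities(x, sweeps=200):
    # x[i][j]: observed edge count between nodes i and j (symmetric, no
    # self-loops), modeled as Poisson(p[i] * p[j]). The likelihood score gives
    # the fixed point p[i] = d[i] / sum_{j != i} p[j], iterated cyclically here.
    n = len(x)
    d = [sum(x[i][j] for j in range(n) if j != i) for i in range(n)]
    p = [1.0] * n
    for _ in range(sweeps):
        for i in range(n):
            s = sum(p) - p[i]
            p[i] = d[i] / s if s > 0 else 0.0
    return p

p2 = fit_propensities([[0, 4], [4, 0]])                     # two nodes, 4 edges
p3 = fit_propensities([[0, 2, 2], [2, 0, 2], [2, 2, 0]])    # symmetric toy graph
```

    For the symmetric three-node example the fixed point is p_i = sqrt(2) for every node, so each pairwise mean p_i * p_j recovers the observed count of 2.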

  10. Poisson-Boltzmann-Nernst-Planck model

    NASA Astrophysics Data System (ADS)

    Zheng, Qiong; Wei, Guo-Wei

    2011-05-01

    The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce the number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary, and relaxation-based iterative procedure, to ensure efficient solution of the proposed PBNP equations. 
Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external
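
    The paper's 3D solvers are far more involved, but the flavor of a relaxation-based electrostatics solve can be shown on the linearized Poisson-Boltzmann equation in 1D; the grid size, screening constant kappa, and boundary values below are arbitrary illustrative choices, not the paper's setup.

```python
import math

def solve_linear_pb_1d(n=50, kappa=1.0, h=0.1, phi0=1.0, sweeps=5000):
    # Gauss-Seidel sweeps for phi'' = kappa^2 * phi on [0, n*h],
    # with phi(0) = phi0 and phi(n*h) = 0 (linearized Poisson-Boltzmann)
    phi = [0.0] * (n + 1)
    phi[0] = phi0
    for _ in range(sweeps):
        for i in range(1, n):
            phi[i] = (phi[i - 1] + phi[i + 1]) / (2.0 + (kappa * h) ** 2)
    return phi

phi = solve_linear_pb_1d()
# Analytic solution for comparison: phi0 * sinh(kappa*(L - x)) / sinh(kappa*L)
```

    The converged profile shows the characteristic exponential screening of the potential away from the charged boundary.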

  13. Analyzing Historical Count Data: Poisson and Negative Binomial Regression Models.

    ERIC Educational Resources Information Center

    Beck, E. M.; Tolnay, Stewart E.

    1995-01-01

    Asserts that traditional approaches to multivariate analysis, including standard linear regression techniques, ignore the special character of count data. Explicates three suitable alternatives to standard regression techniques, a simple Poisson regression, a modified Poisson regression, and a negative binomial model. (MJP)

  14. Collision prediction models using multivariate Poisson-lognormal regression.

    PubMed

    El-Basyouny, Karim; Sayed, Tarek

    2009-07-01

    This paper advocates the use of multivariate Poisson-lognormal (MVPLN) regression to develop models for collision count data. The MVPLN approach presents an opportunity to incorporate the correlations across collision severity levels and their influence on safety analyses. The paper introduces a new multivariate hazardous location identification technique, which generalizes the univariate posterior probability of excess that has been commonly proposed and applied in the literature. In addition, the paper presents an alternative approach for quantifying the effect of the multivariate structure on the precision of expected collision frequency. The MVPLN approach is compared with the independent (separate) univariate Poisson-lognormal (PLN) models with respect to model inference, goodness-of-fit, identification of hot spots and precision of expected collision frequency. The MVPLN is modeled using the WinBUGS platform which facilitates computation of posterior distributions as well as providing a goodness-of-fit measure for model comparisons. The results indicate that the estimates of the extra Poisson variation parameters were considerably smaller under MVPLN leading to higher precision. The improvement in precision is due mainly to the fact that MVPLN accounts for the correlation between the latent variables representing property damage only (PDO) and injuries plus fatalities (I+F). This correlation was estimated at 0.758, which is highly significant, suggesting that higher PDO rates are associated with higher I+F rates, as the collision likelihood for both types is likely to rise due to similar deficiencies in roadway design and/or other unobserved factors. In terms of goodness-of-fit, the MVPLN model provided a better fit than the independent univariate models. The multivariate hazardous location identification results demonstrated that some hazardous locations could be overlooked if the analysis was restricted to the univariate models. PMID:19540972

  15. Modeling laser velocimeter signals as triply stochastic Poisson processes

    NASA Technical Reports Server (NTRS)

    Mayo, W. T., Jr.

    1976-01-01

    Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.
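
    A nonhomogeneous Poisson process of the kind described can be sampled by Lewis-Shedler thinning; the burst-like intensity below is an illustrative stand-in, not a calibrated LDV signal model.

```python
import math
import random

def sample_nhpp(rate, t_max, rate_max, rng):
    # Lewis-Shedler thinning: candidate events at rate_max, accepted with
    # probability rate(t) / rate_max, give a nonhomogeneous Poisson process
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)
        if t > t_max:
            return events
        if rng.random() < rate(t) / rate_max:
            events.append(t)

rng = random.Random(0)
burst_rate = lambda t: 5.0 * (1.0 + 0.5 * math.sin(t))  # illustrative envelope
events = sample_nhpp(burst_rate, 100.0, 7.5, rng)
```

    Driving the intensity itself by a random process, as the paper does, would make this a doubly (or triply) stochastic Poisson process.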

  16. Periodic Poisson model for beam dynamics simulation

    NASA Astrophysics Data System (ADS)

    Dohlus, M.; Henning, Ch.

    2016-03-01

    A method is described to solve the Poisson problem for a three dimensional source distribution that is periodic into one direction. Perpendicular to the direction of periodicity a free space (or open) boundary condition is realized. In beam physics, this approach allows us to calculate the space charge field of a continualized charged particle distribution with periodic pattern. The method is based on a particle-mesh approach with equidistant grid and fast convolution with a Green's function. The periodic approach uses only one period of the source distribution, but a periodic extension of the Green's function. The approach is numerically efficient and allows the investigation of periodic- and pseudoperiodic structures with period lengths that are small compared to the source dimensions, for instance of laser modulated beams or of the evolution of micro bunch structures. Applications for laser modulated beams are given.
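
    The idea of solving a Poisson problem with a periodic source can be illustrated in 1D; the paper works in 3D via a Green's-function convolution, so the Jacobi iteration below is only a minimal sketch. For a single cosine mode on a grid of n points, the discrete solution amplitude is h^2 / (2 - 2*cos(2*pi/n)).

```python
import math

def solve_periodic_poisson_1d(rho, h=1.0, sweeps=20000):
    # Jacobi sweeps for phi'' = -rho with periodic boundary conditions.
    # rho must have zero mean (periodic solvability); the free additive
    # constant is fixed by removing the mean of phi after every sweep.
    n = len(rho)
    phi = [0.0] * n
    for _ in range(sweeps):
        phi = [(phi[(i - 1) % n] + phi[(i + 1) % n] + h * h * rho[i]) / 2.0
               for i in range(n)]
        m = sum(phi) / n
        phi = [p - m for p in phi]
    return phi

n = 16
rho = [math.cos(2.0 * math.pi * i / n) for i in range(n)]  # one source period
phi = solve_periodic_poisson_1d(rho)
```

    As in the paper, only one period of the source is stored; the periodic images enter through the wrap-around indexing.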

  17. Validation of the Poisson Stochastic Radiative Transfer Model

    NASA Technical Reports Server (NTRS)

    Zhuravleva, Tatiana; Marshak, Alexander

    2004-01-01

    A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure, the cloud aspect ratio, is determined entirely by matching measurements and calculations of the direct solar radiation. If measurements of the direct solar radiation are unavailable, it is shown that there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud-top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.

  18. On supermatrix models, Poisson geometry, and noncommutative supersymmetric gauge theories

    SciTech Connect

    Klimčík, Ctirad

    2015-12-15

    We construct a new supermatrix model which represents a manifestly supersymmetric noncommutative regularisation of the UOSp(2|1) supersymmetric Schwinger model on the supersphere. Our construction is much simpler than those already existing in the literature and it was found by using Poisson geometry in a substantial way.

  19. The Poisson-Boltzmann model for tRNA

    PubMed Central

    Gruziel, Magdalena; Grochowski, Pawel; Trylska, Joanna

    2008-01-01

    Using tRNA molecule as an example, we evaluate the applicability of the Poisson-Boltzmann model to highly charged systems such as nucleic acids. Particularly, we describe the effect of explicit crystallographic divalent ions and water molecules, ionic strength of the solvent, and the linear approximation to the Poisson-Boltzmann equation on the electrostatic potential and electrostatic free energy. We calculate and compare typical similarity indices and measures, such as Hodgkin index and root mean square deviation. Finally, we introduce a modification to the nonlinear Poisson-Boltzmann equation, which accounts in a simple way for the finite size of mobile ions, by applying a cutoff in the concentration formula for ionic distribution at regions of high electrostatic potentials. We test the influence of this ionic concentration cutoff on the electrostatic properties of tRNA. PMID:18432617
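
    The concentration cutoff described above can be sketched directly; the values are illustrative, with the potential expressed in thermal units (q*phi/kT).

```python
import math

def ionic_conc(c_bulk, z, phi, c_max=None):
    # Boltzmann concentration c_bulk * exp(-z * phi), with phi the potential in
    # thermal units; the optional c_max cutoff crudely enforces finite ion size,
    # as in the modification to the nonlinear Poisson-Boltzmann equation above
    c = c_bulk * math.exp(-z * phi)
    return c if c_max is None else min(c, c_max)
```

    Without the cutoff, the Boltzmann factor predicts unphysically large counterion concentrations near the highly charged tRNA backbone; the cutoff caps them at a packing limit.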

  20. Wide-area traffic: The failure of Poisson modeling

    SciTech Connect

    Paxson, V.; Floyd, S.

    1994-08-01

    Network arrivals are often modeled as Poisson processes for analytic simplicity, even though a number of traffic studies have shown that packet interarrivals are not exponentially distributed. The authors evaluate 21 wide-area traces, investigating a number of wide-area TCP arrival processes (session and connection arrivals, FTPDATA connection arrivals within FTP sessions, and TELNET packet arrivals) to determine the error introduced by modeling them using Poisson processes. The authors find that user-initiated TCP session arrivals, such as remote-login and file-transfer, are well-modeled as Poisson processes with fixed hourly rates, but that other connection arrivals deviate considerably from Poisson; that modeling TELNET packet interarrivals as exponential grievously underestimates the burstiness of TELNET traffic, but using the empirical Tcplib[DJCME92] interarrivals preserves burstiness over many time scales; and that FTPDATA connection arrivals within FTP sessions come bunched into "connection bursts", the largest of which are so large that they completely dominate FTPDATA traffic. Finally, they offer some preliminary results regarding how the findings relate to the possible self-similarity of wide-area traffic.
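
    The paper's core point, that Poisson traffic is not bursty across aggregation scales, can be illustrated with a variance-to-mean ratio of interval counts at several window sizes (synthetic homogeneous Poisson arrivals, not trace data):

```python
import random

def dispersion_by_scale(arrivals, t_max, windows):
    # Variance-to-mean ratio of per-window counts. For a homogeneous Poisson
    # process this stays near 1 at every scale; self-similar traffic stays
    # bursty, with the ratio growing as the window widens.
    out = {}
    for w in windows:
        nbins = int(t_max / w)
        counts = [0] * nbins
        for a in arrivals:
            counts[min(int(a / w), nbins - 1)] += 1
        m = sum(counts) / nbins
        v = sum((c - m) ** 2 for c in counts) / nbins
        out[w] = v / m
    return out

rng = random.Random(1)
t, arrivals = 0.0, []
while t < 1000.0:
    t += rng.expovariate(10.0)      # homogeneous Poisson, rate 10 per second
    if t < 1000.0:
        arrivals.append(t)
idx = dispersion_by_scale(arrivals, 1000.0, [1.0, 5.0])
```

    Running the same diagnostic on real TELNET traces is exactly where the exponential-interarrival assumption breaks down.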

  1. Studying Resist Stochastics with the Multivariate Poisson Propagation Model

    DOE PAGES Beta

    Naulleau, Patrick; Anderson, Christopher; Chao, Weilun; Bhattarai, Suchit; Neureuther, Andrew

    2014-01-01

    Progress in the ultimate performance of extreme ultraviolet resist has arguably decelerated in recent years suggesting an approach to stochastic limits both in photon counts and material parameters. Here we report on the performance of a variety of leading extreme ultraviolet resist both with and without chemical amplification. The measured performance is compared to stochastic modeling results using the Multivariate Poisson Propagation Model. The results show that the best materials are indeed nearing modeled performance limits.
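
    The photon-count stochastics underlying such limits can be sketched with per-pixel Poisson shot noise (Knuth's sampler; the dose value is an arbitrary illustration, not a measured resist parameter). Relative noise scales as 1/sqrt(mean photons).

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's multiplicative method (adequate for modest lam)
    limit, k, p = math.exp(-lam), 0, rng.random()
    while p > limit:
        k += 1
        p *= rng.random()
    return k

rng = random.Random(0)
lam = 10.0  # mean absorbed photons per pixel (illustrative dose)
pixels = [poisson_sample(lam, rng) for _ in range(2000)]
mean_count = sum(pixels) / len(pixels)
# shot-noise prediction: variance ~ mean, relative noise ~ 1/sqrt(lam)
```

    At low doses this counting noise sets a floor on line-edge roughness that no resist chemistry can remove.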

  2. Bayesian spatial modeling of HIV mortality via zero-inflated Poisson models.

    PubMed

    Musal, Muzaffer; Aktekin, Tevfik

    2013-01-30

    In this paper, we investigate the effects of poverty and inequality on the number of HIV-related deaths in 62 New York counties via Bayesian zero-inflated Poisson models that exhibit spatial dependence. We quantify inequality via the Theil index and poverty via the ratios of two Census 2000 variables, the number of people under the poverty line and the number of people for whom poverty status is determined, in each Zip Code Tabulation Area. The purpose of this study was to investigate the effects of inequality and poverty in addition to spatial dependence between neighboring regions on HIV mortality rate, which can lead to improved health resource allocation decisions. In modeling county-specific HIV counts, we propose Bayesian zero-inflated Poisson models whose rates are functions of both covariate and spatial/random effects. To show how the proposed models work, we used three different publicly available data sets: TIGER Shapefiles, Census 2000, and mortality index files. In addition, we introduce parameter estimation issues of Bayesian zero-inflated Poisson models and discuss MCMC method implications. PMID:22807006
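
    A zero-inflated Poisson pmf, the building block of the models above, is a two-component mixture with mean (1 - pi0) * lambda. A minimal sketch with illustrative parameters:

```python
import math

def zip_pmf(k, pi0, lam):
    # Zero-inflated Poisson: structural zero with probability pi0,
    # otherwise an ordinary Poisson(lam) count
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return (pi0 if k == 0 else 0.0) + (1.0 - pi0) * poisson

mean = sum(k * zip_pmf(k, 0.3, 2.0) for k in range(60))  # equals (1 - pi0) * lam
```

    The extra mass at zero is what lets the model absorb counties with no recorded HIV deaths without distorting the Poisson rate elsewhere; the spatial models above additionally let pi0 and lam depend on covariates and random effects.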

  3. Numerical Poisson-Boltzmann Model for Continuum Membrane Systems.

    PubMed

    Botello-Smith, Wesley M; Liu, Xingping; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray

    2013-01-01

    Membrane protein systems are important computational research topics due to their roles in rational drug design. In this study, we developed a continuum membrane model utilizing a level set formulation under the numerical Poisson-Boltzmann framework within the AMBER molecular mechanics suite for applications such as protein-ligand binding affinity and docking pose predictions. Two numerical solvers were adapted for periodic systems to alleviate possible edge effects. Validation on systems ranging from organic molecules to membrane proteins of up to 200 residues demonstrated good numerical properties. This lays the foundation for sophisticated models with variable dielectric treatments and second-order accurate modeling of solvation interactions. PMID:23439886

  4. Modeling Repeated Count Data: Some Extensions of the Rasch Poisson Counts Model.

    ERIC Educational Resources Information Center

    Duijn, Marijtje A. J. van; Jansen, Margo G. H.

    1995-01-01

    The Rasch Poisson Counts Model, a unidimensional latent trait model for tests that postulates that intensity parameters are products of test difficulty and subject ability parameters, is expanded into the Dirichlet-Gamma-Poisson model that takes into account variation between subjects and interaction between subjects and tests. (SLD)
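
    The multiplicative intensity assumption of the Rasch Poisson Counts Model (rate = subject ability times test parameter) yields the log-likelihood sketched below; note the invariance under rescaling abilities up and test parameters down by the same constant, which is why one parameter must be fixed for identifiability in practice.

```python
import math

def rpcm_loglik(counts, abilities, easiness):
    # counts[i][j] ~ Poisson(abilities[i] * easiness[j]): the Rasch Poisson
    # Counts Model intensity is a product of a subject and a test parameter
    ll = 0.0
    for i, row in enumerate(counts):
        for j, x in enumerate(row):
            lam = abilities[i] * easiness[j]
            ll += x * math.log(lam) - lam - math.lgamma(x + 1)
    return ll
```

    The Dirichlet-Gamma-Poisson extension described above replaces the fixed parameters with random ones to capture between-subject variation and subject-test interaction.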

  5. A bivariate survival model with compound Poisson frailty

    PubMed Central

    Wienke, A.; Ripatti, S.; Palmgren, J.; Yashin, A.

    2015-01-01

    A correlated frailty model is suggested for analysis of bivariate time-to-event data. The model is an extension of the correlated power variance function (PVF) frailty model (correlated three-parameter frailty model). It is based on a bivariate extension of the compound Poisson frailty model in univariate survival analysis. It allows for a non-susceptible fraction (of zero frailty) in the population, overcoming the common assumption in survival analysis that all individuals are susceptible to the event under study. The model contains the correlated gamma frailty model and the correlated inverse Gaussian frailty model as special cases. A maximum likelihood estimation procedure for the parameters is presented and its properties are studied in a small simulation study. This model is applied to breast cancer incidence data of Swedish twins. The proportion of women susceptible to breast cancer is estimated to be 15 per cent. PMID:19856276
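
    The zero-frailty (non-susceptible) mass of a compound Poisson frailty can be seen by direct simulation: Z is a Poisson(rho) sum of i.i.d. gamma terms, so P(Z = 0) = exp(-rho). The parameters below are illustrative, not the breast cancer estimates.

```python
import math
import random

def compound_poisson_frailty(rho, shape, scale, rng):
    # Z = sum of N i.i.d. Gamma(shape, scale) variables, N ~ Poisson(rho);
    # P(Z = 0) = exp(-rho) is the non-susceptible (zero-frailty) fraction
    limit, n, p = math.exp(-rho), 0, rng.random()
    while p > limit:  # Knuth's Poisson sampler
        n += 1
        p *= rng.random()
    return sum(rng.gammavariate(shape, scale) for _ in range(n))

rng = random.Random(0)
zs = [compound_poisson_frailty(1.0, 2.0, 1.0, rng) for _ in range(5000)]
zero_frac = sum(z == 0.0 for z in zs) / len(zs)  # should be near exp(-1)
```

    Individuals drawing Z = 0 have zero hazard and can never experience the event, which is exactly the cure fraction the model builds in.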

  6. On population size estimators in the Poisson mixture model.

    PubMed

    Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua

    2013-09-01

    Estimating population sizes via capture-recapture experiments has enormous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound of the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. PMID:23865502
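
    The Chao lower bound from a single list uses only the singleton and doubleton counts; a sketch with made-up input frequencies:

```python
def chao_estimator(freq_of_freqs):
    # freq_of_freqs[k] = number of individuals observed exactly k times (k >= 1).
    # Chao lower bound: N_hat = n_obs + f1^2 / (2 * f2),
    # with f1 singletons and f2 doubletons.
    f1 = freq_of_freqs.get(1, 0)
    f2 = freq_of_freqs.get(2, 0)
    n_obs = sum(freq_of_freqs.values())
    if f2 == 0:
        return float("inf")  # undefined without doubletons; bias-corrected variants exist
    return n_obs + f1 * f1 / (2.0 * f2)
```

    As the abstract notes, this targets a lower bound on the population size, unlike the Zelterman, jackknife and bootstrap alternatives it is compared against.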

  7. Poisson, Poisson-gamma and zero-inflated regression models of motor vehicle crashes: balancing statistical fit and theory.

    PubMed

    Lord, Dominique; Washington, Simon P; Ivan, John N

    2005-01-01

    There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data.
A simulation experiment is then conducted to demonstrate
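
The dual-state idea maps directly onto the ZIP probability mass function, which mixes a degenerate distribution at zero ("perfectly safe" entities) with a Poisson count. A minimal numerical sketch, with parameters invented for illustration:

```python
import math

def zip_pmf(k, pi, lam):
    """P(Y = k) under a zero-inflated Poisson: a point mass at zero with
    probability pi, mixed with a Poisson(lam) count with probability 1 - pi."""
    pois = math.exp(-lam) * lam ** k / math.factorial(k)
    return (pi + (1 - pi) * pois) if k == 0 else (1 - pi) * pois

# With pi = 0.3 and lam = 2, zeros are far more frequent than a plain
# Poisson(2) would predict:
print(round(zip_pmf(0, 0.3, 2.0), 4))        # ZIP     P(Y = 0)
print(round(math.exp(-2.0), 4))              # Poisson P(Y = 0)
print(round(sum(zip_pmf(k, 0.3, 2.0) for k in range(60)), 6))  # pmf sums to ~1
```

The gap between the two zero probabilities is exactly the "excess zeros" that the ZIP and ZINB formulations are designed to absorb.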

  8. Polyelectrolyte Microcapsules: Ion Distributions from a Poisson-Boltzmann Model

    NASA Astrophysics Data System (ADS)

    Tang, Qiyun; Denton, Alan R.; Rozairo, Damith; Croll, Andrew B.

    2014-03-01

    Recent experiments have shown that polystyrene-polyacrylic-acid-polystyrene (PS-PAA-PS) triblock copolymers in a solvent mixture of water and toluene can self-assemble into spherical microcapsules. Suspended in water, the microcapsules have a toluene core surrounded by an elastomer triblock shell. The longer, hydrophilic PAA blocks remain near the outer surface of the shell, becoming charged through dissociation of OH functional groups in water, while the shorter, hydrophobic PS blocks form a networked (glass or gel) structure. Within a mean-field Poisson-Boltzmann theory, we model these polyelectrolyte microcapsules as spherical charged shells, assuming different dielectric constants inside and outside the capsule. By numerically solving the nonlinear Poisson-Boltzmann equation, we calculate the radial distribution of anions and cations and the osmotic pressure within the shell as a function of salt concentration. Our predictions, which can be tested by comparison with experiments, may guide the design of microcapsules for practical applications, such as drug delivery. This work was supported by the National Science Foundation under Grant No. DMR-1106331.

  9. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    PubMed

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. PMID:25385093
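
For reference, the hyper-Poisson pmf is P(Y = y) = λ^y / ((γ)_y · 1F1(1; γ; λ)), where (γ)_y is the ascending factorial and 1F1 is the confluent hypergeometric function; γ = 1 recovers the ordinary Poisson, γ > 1 gives overdispersion and γ < 1 underdispersion. A pure-Python sketch of this standard form (not code from the study):

```python
import math

def rising(g, y):
    """Ascending factorial (Pochhammer symbol) (g)_y = g (g+1) ... (g+y-1)."""
    out = 1.0
    for i in range(y):
        out *= g + i
    return out

def hyper_poisson_pmf(y, lam, gamma, terms=100):
    """P(Y = y) = (lam^y / (gamma)_y) / 1F1(1; gamma; lam), with the
    hypergeometric normalizer evaluated as a truncated series.

    With gamma = 1, (1)_y = y! and 1F1(1; 1; lam) = e^lam, so the pmf
    reduces to the ordinary Poisson(lam)."""
    norm = sum(lam ** k / rising(gamma, k) for k in range(terms))
    return (lam ** y / rising(gamma, y)) / norm

# gamma = 1 recovers the Poisson pmf:
print(hyper_poisson_pmf(3, 2.0, 1.0))
print(math.exp(-2.0) * 2.0 ** 3 / math.factorial(3))
```

Because the dispersion is governed by γ alone, the GLM formulation described in the abstract can let γ vary with covariates while the mean structure stays log-linear.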

  10. Identifying Seismicity Levels via Poisson Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Orfanogiannaki, K.; Karlis, D.; Papadopoulos, G. A.

    2010-08-01

    Poisson Hidden Markov models (PHMMs) are introduced to model temporal seismicity changes. In a PHMM the unobserved sequence of states is a finite-state Markov chain and the distribution of the observation at any time is Poisson with rate depending only on the current state of the chain. Thus, PHMMs allow a region to have varying seismicity rate. We applied the PHMM to model earthquake frequencies in the seismogenic area of Killini, Ionian Sea, Greece, in the period 1990-2006. Simulations of data from the assumed model showed that it describes the real data quite well. The earthquake catalogue is dominated by main shocks occurring in 1993, 1997 and 2002. The time plot of PHMM seismicity states not only reproduces the three seismicity clusters but also quantifies the seismicity level and underlines the strength of the serial dependence of the events at any point in time. Foreshock activity becomes quite evident before the three sequences with the gradual transition to states of cascade seismicity. Traditional analysis, based on the determination of highly significant changes of seismicity rates, failed to recognize foreshocks before the 1997 main shock due to the low number of events preceding that main shock. The PHMM thus performs better than traditional analysis, since the transition from one state to another depends not only on the total number of events involved but also on the current state of the system. Therefore, the PHMM recognizes significant changes of seismicity soon after they start, which is of particular importance for real-time recognition of foreshock activities and other seismicity changes.
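
The likelihood of a PHMM is evaluated with the standard forward recursion over the hidden states. A generic two-state sketch with invented rates and transition probabilities (not the authors' fitted Killini model):

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def phmm_loglik(counts, init, trans, rates):
    """Log-likelihood of a count sequence under a Poisson HMM via the
    forward recursion.

    init[i]     : P(state_0 = i)
    trans[i][j] : P(state_{t+1} = j | state_t = i)
    rates[i]    : Poisson rate while in state i
    """
    alpha = [p * poisson_pmf(counts[0], r) for p, r in zip(init, rates)]
    for c in counts[1:]:
        alpha = [poisson_pmf(c, rates[j]) *
                 sum(alpha[i] * trans[i][j] for i in range(len(rates)))
                 for j in range(len(rates))]
    return math.log(sum(alpha))

# Two states: quiescence (rate 1 event/period) and an active state (rate 8).
init, trans, rates = [0.9, 0.1], [[0.95, 0.05], [0.2, 0.8]], [1.0, 8.0]
print(phmm_loglik([0, 1, 7, 9, 2, 0], init, trans, rates))
```

Decoding the most probable state sequence (e.g. with the Viterbi algorithm) then yields the kind of seismicity-level time plot described in the abstract.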

  11. A Boussinesq-scaled, pressure-Poisson water wave model

    NASA Astrophysics Data System (ADS)

    Donahue, Aaron S.; Zhang, Yao; Kennedy, Andrew B.; Westerink, Joannes J.; Panda, Nishant; Dawson, Clint

    2015-02-01

    Through the use of Boussinesq scaling we develop and test a model for resolving non-hydrostatic pressure profiles in nonlinear wave systems over varying bathymetry. A Green-Naghdi type polynomial expansion is used to resolve the pressure profile along the vertical axis; this expansion is then inserted into the pressure-Poisson equation, retaining terms up to a prescribed order, and solved using a weighted residual approach. The model shows rapid convergence properties with increasing order of polynomial expansion, which can be greatly improved through the application of asymptotic rearrangement. Models of Boussinesq scaling of the fully nonlinear O(μ^2) and weakly nonlinear O(μ^N) are presented, and the analytical and numerical properties of the O(μ^2) and O(μ^4) models are discussed. Optimal basis functions in the Green-Naghdi expansion are determined through manipulation of the free parameters which arise from the Boussinesq scaling. The optimal O(μ^2) model has dispersion accuracy equivalent to a Padé [2,2] approximation with one extra free parameter. The optimal O(μ^4) model obtains dispersion accuracy equivalent to a Padé [4,4] approximation with two free parameters which can be used to optimize shoaling or nonlinear properties. The O(μ^4) model shows excellent agreement with experimental data.
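
The Padé [2,2] benchmark refers to approximating the exact linear dispersion relation for water waves, ω² = gk tanh(kh). For reference (a standard result, not quoted from the paper), the [2,2] Padé approximant of tanh(q)/q in the depth parameter q = kh is:

```latex
\frac{\tanh q}{q} \approx \frac{1 + q^2/15}{1 + 2q^2/5},
\qquad q = kh,
\quad\Longrightarrow\quad
\omega^2 = g k \tanh(kh) \approx g h k^2 \,
\frac{1 + (kh)^2/15}{1 + 2(kh)^2/5}.
```

This approximant matches the Taylor expansion of tanh(q)/q = 1 - q²/3 + 2q⁴/15 - ... through O(q⁴), which is why Padé-type Boussinesq models remain accurate into moderately deep water.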

  12. The Dependent Poisson Race Model and Modeling Dependence in Conjoint Choice Experiments

    ERIC Educational Resources Information Center

    Ruan, Shiling; MacEachern, Steven N.; Otter, Thomas; Dean, Angela M.

    2008-01-01

    Conjoint choice experiments are used widely in marketing to study consumer preferences amongst alternative products. We develop a class of choice models, belonging to the class of Poisson race models, that describe a "random utility" which lends itself to a process-based description of choice. The models incorporate a dependence structure which…

  13. Poisson-Lie T-duals of the bi-Yang-Baxter models

    NASA Astrophysics Data System (ADS)

    Klimčík, Ctirad

    2016-09-01

    We prove the conjecture of Sfetsos, Siampos and Thompson that suitable analytic continuations of the Poisson-Lie T-duals of the bi-Yang-Baxter sigma models coincide with the recently introduced generalized λ-models. We then generalize this result by showing that the analytic continuation of a generic σ-model of "universal WZW-type" introduced by Tseytlin in 1993 is nothing but the Poisson-Lie T-dual of a generic Poisson-Lie symmetric σ-model introduced by Klimčík and Ševera in 1995.

  14. Functional Generalized Additive Models.

    PubMed

    McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David

    2014-01-01

    We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t} where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position, t, along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online. PMID:24729671

  15. Recovering doping profiles in semiconductor devices with the Boltzmann-Poisson model

    NASA Astrophysics Data System (ADS)

    Cheng, Yingda; Gamba, Irene M.; Ren, Kui

    2011-05-01

    We investigate numerically an inverse problem related to the Boltzmann-Poisson system of equations for transport of electrons in semiconductor devices. The objective of the (ill-posed) inverse problem is to recover the doping profile of a device, presented as a source function in the mathematical model, from its current-voltage characteristics. To reduce the degree of ill-posedness of the inverse problem, we propose to parameterize the unknown doping profile function to limit the number of unknowns in the inverse problem. We show by numerical examples that the reconstruction of a few low moments of the doping profile is possible when relatively accurate time-dependent or time-independent measurements are available, even though the latter reconstruction is less accurate than the former. We also compare reconstructions from the Boltzmann-Poisson (BP) model to those from the classical drift-diffusion-Poisson (DDP) model, assuming that measurements are generated with the BP model. We show that the two types of reconstructions can be significantly different in regimes where the drift-diffusion-Poisson equation fails to model the physics accurately. However, when the noise present in the measured data is high, no difference between the reconstructions can be observed.

  16. Poisson-Based Inference for Perturbation Models in Adaptive Spelling Training

    ERIC Educational Resources Information Center

    Baschera, Gian-Marco; Gross, Markus

    2010-01-01

    We present an inference algorithm for perturbation models based on Poisson regression. The algorithm is designed to handle unclassified input with multiple errors described by independent mal-rules. This knowledge representation provides an intelligent tutoring system with local and global information about a student, such as error classification…

  17. Poisson Growth Mixture Modeling of Intensive Longitudinal Data: An Application to Smoking Cessation Behavior

    ERIC Educational Resources Information Center

    Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David

    2012-01-01

    Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively…

  18. Testing prediction methods: Earthquake clustering versus the Poisson model

    USGS Publications Warehouse

    Michael, A.J.

    1997-01-01

    Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.

  19. Hidden Markov Models for Zero-Inflated Poisson Counts with an Application to Substance Use

    PubMed Central

    DeSantis, Stacia M.; Bandyopadhyay, Dipankar

    2011-01-01

    Paradigms for substance abuse cue-reactivity research involve short term pharmacological or stressful stimulation designed to elicit stress and craving responses in cocaine-dependent subjects. It is unclear as to whether stress induced from participation in such studies increases drug-seeking behavior. We propose a 2-state Hidden Markov model to model the number of cocaine abuses per week before and after participation in a stress- and cue-reactivity study. The hypothesized latent state corresponds to ‘high’ or ‘low’ use. To account for a preponderance of zeros, we assume a zero-inflated Poisson model for the count data. Transition probabilities depend on the prior week’s state, fixed demographic variables, and time-varying covariates. We adopt a Bayesian approach to model fitting, and use the conditional predictive ordinate statistic to demonstrate that the zero-inflated Poisson hidden Markov model outperforms other models for longitudinal count data. PMID:21538455

  20. Automatic active model initialization via Poisson inverse gradient.

    PubMed

    Li, Bing; Acton, Scott T

    2008-08-01

    Active models have been widely used in image processing applications. A crucial stage that affects the ultimate active model performance is initialization. This paper proposes a novel automatic initialization approach for parametric active models in both 2-D and 3-D. The Poisson inverse gradient (PIG) initialization method exploits a novel technique that essentially estimates the external energy field from the external force field and determines the most likely initial segmentation. Examples and comparisons with two state-of-the-art automatic initialization methods are presented to illustrate the advantages of this innovation, including the ability to choose the number of active models deployed, rapid convergence, accommodation of broken edges, superior noise robustness, and segmentation accuracy. PMID:18632349

  1. How does Poisson kriging compare to the popular BYM model for mapping disease risks?

    PubMed Central

    Goovaerts, Pierre; Gebreab, Samson

    2008-01-01

    Background Geostatistical techniques are now available to account for spatially varying population sizes and spatial patterns in the mapping of disease rates. At first glance, Poisson kriging represents an attractive alternative to increasingly popular Bayesian spatial models in that: 1) it is easier to implement and less CPU intensive, and 2) it accounts for the size and shape of geographical units, avoiding the limitations of conditional auto-regressive (CAR) models commonly used in Bayesian algorithms while allowing for the creation of isopleth risk maps. Both approaches, however, have never been compared in simulation studies, and there is a need to better understand their merits in terms of accuracy and precision of disease risk estimates. Results Besag, York and Mollie's (BYM) model and Poisson kriging (point and area-to-area implementations) were applied to age-adjusted lung and cervix cancer mortality rates recorded for white females in two contrasted county geographies: 1) state of Indiana that consists of 92 counties of fairly similar size and shape, and 2) four states in the Western US (Arizona, California, Nevada and Utah) forming a set of 118 counties that are vastly different geographical units. The spatial support (i.e. point versus area) has a much smaller impact on the results than the statistical methodology (i.e. geostatistical versus Bayesian models). Differences between methods are particularly pronounced in the Western US dataset: BYM model yields smoother risk surface and prediction variance that changes mainly as a function of the predicted risk, while the Poisson kriging variance increases in large sparsely populated counties. Simulation studies showed that the geostatistical approach yields smaller prediction errors, more precise and accurate probability intervals, and allows a better discrimination between counties with high and low mortality risks. 
The benefit of area-to-area Poisson kriging increases as the county geography becomes more

  2. An Application of the Poisson Race Model to Confidence Calibration

    ERIC Educational Resources Information Center

    Merkle, Edgar C.; Van Zandt, Trisha

    2006-01-01

    In tasks as diverse as stock market predictions and jury deliberations, a person's feelings of confidence in the appropriateness of different choices often impact that person's final choice. The current study examines the mathematical modeling of confidence calibration in a simple dual-choice task. Experiments are motivated by an accumulator…

  3. Study of non-Hodgkin's lymphoma mortality associated with industrial pollution in Spain, using Poisson models

    PubMed Central

    Ramis, Rebeca; Vidal, Enrique; García-Pérez, Javier; Lope, Virginia; Aragonés, Nuria; Pérez-Gómez, Beatriz; Pollán, Marina; López-Abente, Gonzalo

    2009-01-01

    Background Non-Hodgkin's lymphomas (NHLs) have been linked to proximity to industrial areas, but evidence regarding the health risk posed by residence near pollutant industries is very limited. The European Pollutant Emission Register (EPER) is a public register that furnishes valuable information on industries that release pollutants to air and water, along with their geographical location. This study sought to explore the relationship between NHL mortality in small areas in Spain and environmental exposure to pollutant emissions from EPER-registered industries, using three Poisson-regression-based mathematical models. Methods Observed cases were drawn from mortality registries in Spain for the period 1994–2003. Industries were grouped into the following sectors: energy; metal; mineral; organic chemicals; waste; paper; food; and use of solvents. Populations having an industry within a radius of 1, 1.5, or 2 kilometres from the municipal centroid were deemed to be exposed. Municipalities outside those radii were considered as reference populations. The relative risks (RRs) associated with proximity to pollutant industries were estimated using the following methods: Poisson Regression; mixed Poisson model with random provincial effect; and spatial autoregressive modelling (BYM model). Results Only proximity of paper industries to population centres (>2 km) could be associated with a greater risk of NHL mortality (mixed model: RR:1.24, 95% CI:1.09–1.42; BYM model: RR:1.21, 95% CI:1.01–1.45; Poisson model: RR:1.16, 95% CI:1.06–1.27). Spatial models yielded higher estimates. Conclusion The reported association between exposure to air pollution from the paper, pulp and board industry and NHL mortality is independent of the model used. Inclusion of spatial random effects terms in the risk estimate improves the study of associations between environmental exposures and mortality. The EPER could be of great utility when studying the effects of industrial pollution

  4. A marginalized zero-inflated Poisson regression model with overall exposure effects.

    PubMed

    Long, D Leann; Preisser, John S; Herring, Amy H; Golin, Carol E

    2014-12-20

    The zero-inflated Poisson (ZIP) regression model is often employed in public health research to examine the relationships between exposures of interest and a count outcome exhibiting many zeros, in excess of the amount expected under sampling from a Poisson distribution. The regression coefficients of the ZIP model have latent class interpretations, which correspond to a susceptible subpopulation at risk for the condition with counts generated from a Poisson distribution and a non-susceptible subpopulation that provides the extra or excess zeros. The ZIP model parameters, however, are not well suited for inference targeted at marginal means, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. We develop a marginalized ZIP model approach for independent responses to model the population mean count directly, allowing straightforward inference for overall exposure effects and empirical robust variance estimation for overall log-incidence density ratios. Through simulation studies, the performance of maximum likelihood estimation of the marginalized ZIP model is assessed and compared with other methods of estimating overall exposure effects. The marginalized ZIP model is applied to a recent study of a motivational interviewing-based safer sex counseling intervention, designed to reduce unprotected sexual act counts. PMID:25220537
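
Under the ZIP parameterization above, the overall (marginal) mean is E[Y] = (1 - π)λ; the marginalized ZIP model parameterizes this mean directly, so that regression coefficients are overall log-incidence-density ratios rather than latent-class effects. A small numerical check of the identity, with illustrative values rather than the study's data:

```python
import math

def zip_mean(pi, lam, terms=60):
    """E[Y] under a zero-inflated Poisson, computed directly from the pmf.

    Analytically E[Y] = (1 - pi) * lam, since the inflation component
    contributes only zeros; this sum verifies that identity numerically."""
    def pmf(k):
        pois = math.exp(-lam) * lam ** k / math.factorial(k)
        return (pi + (1 - pi) * pois) if k == 0 else (1 - pi) * pois
    return sum(k * pmf(k) for k in range(terms))

pi, lam = 0.4, 3.0
print(zip_mean(pi, lam))     # equals (1 - 0.4) * 3.0 = 1.8 up to truncation
```

Modeling log E[Y] = log(1 - π) + log λ as a single linear predictor is what allows the straightforward overall-exposure inference described in the abstract.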

  5. Full Bayes Poisson gamma, Poisson lognormal, and zero inflated random effects models: Comparing the precision of crash frequency estimates.

    PubMed

    Aguero-Valverde, Jonathan

    2013-01-01

    In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero inflated models. This research compares random effects, zero inflated, and zero inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of precision of posterior crash frequency estimates, since the precision of these estimates is vital for ranking sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset being analyzed, it was found that once the random effects are included in the zero inflated models, the probability of being in the zero state is drastically reduced, and the zero inflated models degenerate to their non-zero-inflated counterparts. Also, by fixing the random effects over time, both the fit of the models and the precision of the crash frequency estimates are significantly increased. It was found that the rankings of the fixed-over-time random effects models are very consistent among themselves. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments at the top of the ranking. PMID:22633143

  6. Formulation of the Multi-Hit Model With a Non-Poisson Distribution of Hits

    SciTech Connect

    Vassiliev, Oleg N.

    2012-07-15

    Purpose: We proposed a formulation of the multi-hit single-target model in which the Poisson distribution of hits was replaced by a combination of two distributions: one for the number of particles entering the target and one for the number of hits a particle entering the target produces. Such an approach reflects the fact that radiation damage is a result of two different random processes: particle emission by a radiation source and interaction of particles with matter inside the target. Methods and Materials: The Poisson distribution is well justified for the first of the two processes. The second distribution depends on how a hit is defined. To test our approach, we assumed that the second distribution was also a Poisson distribution. The two distributions combined resulted in a non-Poisson distribution. We tested the proposed model by comparing it with previously reported data for DNA single- and double-strand breaks induced by protons and electrons, for survival of a range of cell lines, and variation of the initial slopes of survival curves with radiation quality for heavy-ion beams. Results: Analysis of cell survival equations for this new model showed that they had realistic properties overall, such as the initial and high-dose slopes of survival curves, the shoulder, and relative biological effectiveness (RBE). In most cases tested, a better fit of survival curves was achieved with the new model than with the linear-quadratic model. The results also suggested that the proposed approach may extend the multi-hit model beyond its traditional role in analysis of survival curves to predicting effects of radiation quality and analysis of DNA strand breaks. Conclusions: Our model, although conceptually simple, performed well in all tests. The model was able to consistently fit data for both cell survival and DNA single- and double-strand breaks. It correctly predicted the dependence of radiation effects on parameters of radiation quality.
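
When both component distributions are Poisson, the total hit count follows a Poisson mixture of Poissons (a compound distribution of Neyman type). A deterministic numerical sketch with assumed parameters shows that the combined distribution is overdispersed, hence non-Poisson:

```python
import math

def pois(k, lam):
    if lam == 0:
        return 1.0 if k == 0 else 0.0
    return math.exp(-lam) * lam ** k / math.factorial(k)

def hit_pmf(n, nu, mu, mmax=40):
    """P(total hits = n) when Poisson(nu) particles enter the target and
    each entering particle independently produces Poisson(mu) hits.
    Parameters here are illustrative, not fitted values from the paper."""
    return sum(pois(m, nu) * pois(n, m * mu) for m in range(mmax))

nu, mu = 3.0, 2.0
mean = sum(n * hit_pmf(n, nu, mu) for n in range(120))
var = sum(n * n * hit_pmf(n, nu, mu) for n in range(120)) - mean ** 2
print(mean, var)   # mean = nu*mu = 6; variance = nu*mu*(1 + mu) = 18 > mean
```

By the law of total variance, Var(N) = νμ(1 + μ) exceeds the mean νμ whenever μ > 0, which is exactly the departure from Poisson statistics the model exploits.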

  7. A Local Poisson Graphical Model for inferring networks from sequencing data.

    PubMed

    Allen, Genevera I; Liu, Zhandong

    2013-09-01

    Gaussian graphical models, a class of undirected graphs or Markov Networks, are often used to infer gene networks based on microarray expression data. Many scientists, however, have begun using high-throughput sequencing technologies such as RNA-sequencing or next generation sequencing to measure gene expression. As the resulting data consists of counts of sequencing reads for each gene, Gaussian graphical models are not optimal for this discrete data. In this paper, we propose a novel method for inferring gene networks from sequencing data: the Local Poisson Graphical Model. Our model assumes a Local Markov property where each variable conditional on all other variables is Poisson distributed. We develop a neighborhood selection algorithm to fit our model locally by performing a series of l1 penalized Poisson, or log-linear, regressions. This yields a fast parallel algorithm for estimating networks from next generation sequencing data. In simulations, we illustrate the effectiveness of our methods for recovering network structure from count data. A case study on breast cancer microRNAs (miRNAs), a novel application of graphical models, finds known regulators of breast cancer genes and discovers novel miRNA clusters and hubs that are targets for future research. PMID:23955777
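
Each neighborhood regression in this approach is an l1-penalized Poisson (log-linear) GLM of one gene's counts on all others. A minimal proximal-gradient (ISTA) sketch of one such regression on simulated counts; the data, step size, and penalty level are assumptions for illustration, not values or code from the paper:

```python
import numpy as np

def l1_poisson_regression(X, y, lam=0.1, step=0.05, iters=2000):
    """Fit an l1-penalized Poisson regression by proximal gradient (ISTA):
    a gradient step on the negative log-likelihood followed by
    soft-thresholding, which drives irrelevant coefficients to zero."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)                  # Poisson mean under current fit
        grad = X.T @ (mu - y) / len(y)         # gradient of the neg. log-lik.
        beta = beta - step * grad
        beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam, 0.0)
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))              # counts for 3 "neighbor" genes
y = rng.poisson(np.exp(0.8 * X[:, 0]))         # target depends on gene 0 only
beta = l1_poisson_regression(X, y)
print(np.round(beta, 2))   # coefficient 0 large; 1 and 2 shrunk toward zero
```

Repeating this regression for every node and keeping the nonzero coefficients as edges is the neighborhood-selection step; the per-node fits are independent, which is what makes the algorithm parallel.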

  8. Investigation of time and weather effects on crash types using full Bayesian multivariate Poisson lognormal models.

    PubMed

    El-Basyouny, Karim; Barua, Sudip; Islam, Md Tazul

    2014-12-01

    Previous research shows that various weather elements have significant effects on crash occurrence and risk; however, little is known about how these elements affect different crash types. Consequently, this study investigates the impact of weather elements and sudden extreme snow or rain weather changes on crash type. Multivariate models were used for seven crash types using five years of daily weather and crash data collected for the entire City of Edmonton. In addition, the yearly trend and random variation of parameters across the years were analyzed by using four different modeling formulations. The proposed models were estimated in a full Bayesian context via Markov Chain Monte Carlo simulation. The multivariate Poisson lognormal model with yearly varying coefficients provided the best fit for the data according to Deviance Information Criteria. Overall, results showed that temperature and snowfall were statistically significant with intuitive signs (crashes decrease with increasing temperature; crashes increase as snowfall intensity increases) for all crash types, while rainfall was mostly insignificant. Previous snow showed mixed results, being statistically significant and positively related to certain crash types, while negatively related or insignificant in other cases. Maximum wind gust speed was found mostly insignificant with a few exceptions that were positively related to crash type. Major snow or rain events following a dry weather condition were highly significant and positively related to three crash types: Follow-Too-Close, Stop-Sign-Violation, and Ran-Off-Road crashes. The day-of-the-week dummy variables were statistically significant, indicating a possible weekly variation in exposure. Transportation authorities might use the above results to improve road safety by providing drivers with information regarding the risk of certain crash types for a particular weather condition. PMID:25190632

  9. Scaling the Poisson Distribution

    ERIC Educational Resources Information Center

    Farnsworth, David L.

    2014-01-01

    We derive the additive property of Poisson random variables directly from the probability mass function. An important application of the additive property to quality testing of computer chips is presented.
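
The additive property can also be verified numerically by convolving probability mass functions; a short self-contained check (the rates 2 and 3 are arbitrary choices for illustration):

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Convolving Poisson(2) with Poisson(3) reproduces Poisson(5):
# P(X + Y = n) = sum_k P(X = k) P(Y = n - k).
for n in range(6):
    conv = sum(poisson_pmf(k, 2.0) * poisson_pmf(n - k, 3.0)
               for k in range(n + 1))
    assert abs(conv - poisson_pmf(n, 5.0)) < 1e-12
print("sum of independent Poissons is Poisson with the summed rate")
```

This is the property used in quality testing: defect counts pooled across independent chips remain Poisson, with rate equal to the sum of the individual rates.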

  10. Kinetic models in n-dimensional Euclidean spaces: From the Maxwellian to the Poisson kernel.

    PubMed

    Zadehgol, Abed

    2015-06-01

    In this work, minimal kinetic theories based on unconventional entropy functions, H ∼ ln f (Burg entropy) for 2D and H ∼ f^(1−2/n) (Tsallis entropy) for nD with n ≥ 3, are studied. These entropy functions were originally derived by Boghosian et al. [Phys. Rev. E 68, 025103 (2003)] as a basis for discrete-velocity and lattice Boltzmann models for incompressible fluid dynamics. The present paper extends the entropic models of Boghosian et al. and shows that the explicit form of the equilibrium distribution function (EDF) of their models, in the continuous-velocity limit, can be identified with the Poisson kernel of the Poisson integral formula. The conservation and Navier-Stokes equations are recovered at low Mach numbers, and it is shown that rest particles can be used to rectify the speed of sound of the extended models. Fourier series expansion of the EDF is used to evaluate the discretization errors of the model. It is shown that the expansion coefficients of the Fourier series coincide with the velocity moments of the model. Employing two-, three-, and four-dimensional (2D, 3D, and 4D) complex systems, the real velocity space is mapped into the hypercomplex spaces and it is shown that the velocity moments can be evaluated, using the Poisson integral formula, in the hypercomplex space. For the practical applications, a 3D projection of the 4D model is presented, and the existence of an H theorem for the discrete model is investigated. The theoretical results have been verified by simulating the following benchmark problems: (1) the Kelvin-Helmholtz instability of thin shear layers in a doubly periodic domain and (2) the 3D flow of incompressible fluid in a lid-driven cubic cavity. The present results are in agreement with the previous works, while they show better stability of the proposed kinetic model, as compared with the BGK type (with single relaxation time) lattice Boltzmann models. PMID:26172826

  11. Poisson-Fokker-Planck model for biomolecules translocation through nanopore driven by electroosmotic flow

    NASA Astrophysics Data System (ADS)

    Lin, XiaoHui; Zhang, ChiBin; Gu, Jun; Jiang, ShuYun; Yang, JueKuan

    2014-11-01

    A non-continuous electroosmotic flow model (PFP model) is built based on the Poisson equation, the Fokker-Planck equation and the Navier-Stokes equation, and used to predict DNA molecule translocation through a nanopore. The PFP model discards the continuum assumption of ion translocation and considers ions as discrete particles. In addition, this model includes the contributions of the Coulomb electrostatic potential between ions, the Brownian motion of ions and viscous friction to ion transport. No ionic diffusion coefficient or other phenomenological parameters are needed in the PFP model. It is worth noting that the PFP model can describe non-equilibrium electroosmotic transport of ions in a channel of a size comparable with the mean free path of the ions. A modified clustering method is proposed for the numerical solution of the PFP model, and the ion current through a nanopore with a radius of 1 nm is simulated using the modified clustering method. The external electric field, wall charge density of the nanopore, surface charge density of the DNA, as well as the ion average number density, influence the electroosmotic velocity profile of the electrolyte solution, the velocity of DNA translocation through the nanopore and the ion current blockade. Results show that the ion average number density of the electrolyte and the surface charge density of the nanopore have a significant effect on the translocation velocity of DNA and the ion current blockade. The translocation velocity of DNA is proportional to the surface charge density of the nanopore, and is inversely proportional to the ion average number density of the electrolyte solution. Thus, the translocation velocity of DNA can be controlled to improve the accuracy of sequencing by adjusting the external electric field, the ion average number density of the electrolyte and the surface charge density of the nanopore. Ion current decreases when the ion average number density is larger than the critical value and increases when the ion average number density is lower than the

  12. Electronic monitoring device event modelling on an individual-subject basis using adaptive Poisson regression.

    PubMed

    Knafl, George J; Fennie, Kristopher P; Bova, Carol; Dieckhaus, Kevin; Williams, Ann B

    2004-03-15

    An adaptive approach to Poisson regression modelling is presented for analysing event data from electronic devices monitoring medication-taking. The emphasis is on applying this approach to data for individual subjects although it also applies to data for multiple subjects. This approach provides for visualization of adherence patterns as well as for objective comparison of actual device use with prescribed medication-taking. Example analyses are presented using data on openings of electronic pill bottle caps monitoring adherence of subjects with HIV undergoing highly active antiretroviral therapies. The modelling approach consists of partitioning the observation period, computing grouped event counts/rates for intervals in this partition, and modelling these event counts/rates in terms of elapsed time after entry into the study using Poisson regression. These models are based on adaptively selected sets of power transforms of elapsed time determined by rule-based heuristic search through arbitrary sets of parametric models, thereby effectively generating a smooth non-parametric regression fit to the data. Models are compared using k-fold likelihood cross-validation. PMID:14981675
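The core step of the approach, Poisson regression of grouped event counts on power transforms of elapsed time, can be sketched with a small Newton-Raphson (IRLS) fit. This is an illustrative reconstruction, not the authors' code: the data are simulated, a single power p = 0.5 is fixed rather than adaptively selected, and the rule-based heuristic model search and k-fold likelihood cross-validation are omitted.

```python
import numpy as np

def poisson_glm(X, y, iters=25):
    """Newton-Raphson (IRLS) fit of a log-link Poisson regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)              # score vector
        hess = X.T @ (mu[:, None] * X)     # Fisher information
        beta = beta + np.linalg.solve(hess, grad)
    return beta

# Simulated grouped daily event counts whose log-rate follows a power of
# elapsed time since entry into the study.
rng = np.random.default_rng(1)
t = np.arange(1.0, 101.0)                  # elapsed days
y = rng.poisson(np.exp(1.0 - 0.16 * t**0.5))

# One candidate from a power-transform family: log mu = b0 + b1 * t**0.5.
X = np.column_stack([np.ones_like(t), t**0.5])
beta = poisson_glm(X, y)
print(beta)  # estimates of (b0, b1); the simulating values were (1.0, -0.16)
```

In the adaptive approach described in the abstract, many such parametric fits over different sets of powers would be compared by k-fold likelihood cross-validation, yielding an effectively nonparametric smooth fit.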

  13. Mixed additive models

    NASA Astrophysics Data System (ADS)

    Carvalho, Francisco; Covas, Ricardo

    2016-06-01

    We consider mixed models y = ∑_{i=0}^{w} X_i β_i with V(y) = ∑_{i=1}^{w} θ_i M_i, where M_i = X_i X_iᵀ, i = 1, ..., w, and μ = X_0 β_0. For these we will estimate the variance components θ_1, ..., θ_w, as well as estimable vectors, through the decomposition of the initial model into sub-models y(h), h ∈ Γ, with V(y(h)) = γ(h) I_{g(h)}, h ∈ Γ. Moreover we will consider L extensions of these models, i.e., ẙ = Ly + ε, where L = D(1_{n_1}, ..., 1_{n_w}) and ε, independent of y, has null mean vector and variance-covariance matrix θ_{w+1} I_n, where n = ∑_{i=1}^{w} n_i.

  14. Micromechanical poroelastic finite element and shear-lag models of tendon predict large strain dependent Poisson's ratios and fluid expulsion under tensile loading.

    PubMed

    Ahmadzadeh, Hossein; Freedman, Benjamin R; Connizzo, Brianne K; Soslowsky, Louis J; Shenoy, Vivek B

    2015-08-01

    As tendons are loaded, they reduce in volume and exude fluid to the surrounding medium. Experimental studies have shown that tendon stretching results in a Poisson's ratio greater than 0.5, with a maximum value at small strains followed by a nonlinear decay. Here we present a computational model that attributes this macroscopic observation to the microscopic mechanism of the load transfer between fibrils under stretch. We develop a finite element model based on the mechanical role of the interfibrillar-linking elements, such as thin fibrils that bridge the aligned fibrils or macromolecules such as glycosaminoglycans (GAGs) in the interfibrillar sliding, and verify it with a theoretical shear-lag model. We showed the existence of a previously unappreciated structure-function mechanism whereby the Poisson's ratio in tendon is affected by the applied strain and the interfibrillar-linker properties, and together these features predict tendon volume shrinkage under tensile loading. During loading, the interfibrillar-linkers pulled fibrils toward each other and squeezed the matrix, leading to a Poisson's ratio larger than 0.5 and to fluid expulsion. In addition, the rotation of the interfibrillar-linkers with respect to the fibrils at large strains caused a reduction in the volume shrinkage and the eventual nonlinear decay in Poisson's ratio. Our model also predicts a fluid flow that has a radial pattern toward the surrounding medium, with larger fluid velocities in proportion to the interfibrillar sliding. PMID:25934322

  15. Simulation of high tensile Poisson's ratios of articular cartilage with a finite element fibril-reinforced hyperelastic model.

    PubMed

    García, José Jaime

    2008-06-01

    Analyses with a finite element fibril-reinforced hyperelastic model were undertaken in this study to simulate high tensile Poisson's ratios that have been consistently documented in experimental studies of articular cartilage. The solid phase was represented by an isotropic matrix reinforced with four sets of fibrils, two of them aligned in orthogonal directions and two oblique fibrils in a symmetric configuration with respect to the orthogonal axes. Two distinct hyperelastic functions were used to represent the matrix and the fibrils. Results of the analyses showed that only by considering non-orthogonal fibrils was it possible to represent Poisson's ratios higher than one. Constraints in the grips and finite deformations played a minor role in the calculated Poisson's ratio. This study also showed that the model with oblique fibrils at 45 degrees was able to represent significant differences in Poisson's ratios near 1 documented in experimental studies. However, even considering constraints in the grips, this model was not capable of simulating Poisson's ratios near 2 that have been reported in other studies. The study also confirmed that only with a high ratio between the stiffness of the fibers and that of the matrix was it possible to obtain high Poisson's ratios for the tissue. Results suggest that analytical models with a finite number of fibrils are appropriate to represent the main mechanical effects of articular cartilage. PMID:17690001

  16. Application of spatial Poisson process models to air mass thunderstorm rainfall

    NASA Technical Reports Server (NTRS)

    Eagleson, P. S.; Fennessy, N. M.; Wang, Qinliang; Rodriguez-Iturbe, I.

    1987-01-01

    Eight years of summer storm rainfall observations from 93 stations in and around the 154 sq km Walnut Gulch catchment of the Agricultural Research Service, U.S. Department of Agriculture, in Arizona are processed to yield the total station depths of 428 storms. Statistical analysis of these random fields yields the first two moments, the spatial correlation and variance functions, and the spatial distribution of total rainfall for each storm. The absolute and relative worth of three Poisson models are evaluated by comparing their prediction of the spatial distribution of storm rainfall with observations from the second half of the sample. The effect of interstorm parameter variation is examined.

  17. Evolving Scale-Free Networks by Poisson Process: Modeling and Degree Distribution.

    PubMed

    Feng, Minyu; Qu, Hong; Yi, Zhang; Xie, Xiurui; Kurths, Jurgen

    2016-05-01

    Since the great mathematician Leonhard Euler initiated the study of graph theory, networks have been one of the most significant research subjects across many disciplines. In recent years, the proposition of the small-world and scale-free properties of complex networks in statistical physics made network science intriguing again for many researchers. One of the challenges of network science is to propose rational models for complex networks. In this paper, in order to reveal the influence of the vertex generating mechanism of complex networks, we propose three novel models based on the homogeneous Poisson, nonhomogeneous Poisson and birth-death processes, respectively, which can be regarded as typical scale-free networks and utilized to simulate practical networks. The degree distribution and exponent are analyzed and explained mathematically by different approaches. In the simulation, we display the modeling process and the degree distribution of empirical data by statistical methods, and assess the reliability of the proposed networks; the results show that our models follow the features of typical complex networks. Finally, some future challenges for complex systems are discussed. PMID:25956002

  18. Estimating effectiveness in HIV prevention trials with a Bayesian hierarchical compound Poisson frailty model.

    PubMed

    Coley, Rebecca Yates; Brown, Elizabeth R

    2016-07-10

    Inconsistent results in recent HIV prevention trials of pre-exposure prophylactic interventions may be due to heterogeneity in risk among study participants. Intervention effectiveness is most commonly estimated with the Cox model, which compares event times between populations. When heterogeneity is present, this population-level measure underestimates intervention effectiveness for individuals who are at risk. We propose a likelihood-based Bayesian hierarchical model that estimates the individual-level effectiveness of candidate interventions by accounting for heterogeneity in risk with a compound Poisson-distributed frailty term. This model reflects the mechanisms of HIV risk and allows that some participants are not exposed to HIV and, therefore, have no risk of seroconversion during the study. We assess model performance via simulation and apply the model to data from an HIV prevention trial. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26869051
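A minimal sketch (not the authors' code; all parameter values are illustrative) of why a compound Poisson frailty leaves some subjects with no risk: the frailty Z is a Poisson-distributed sum of i.i.d. Gamma terms, so Z carries a point mass P(Z = 0) = exp(−ρ) at zero, corresponding to participants who are never exposed and thus cannot seroconvert.

```python
import numpy as np

rng = np.random.default_rng(2)
rho, shape, scale = 1.2, 2.0, 0.5   # illustrative values, not from the paper
n = 100_000

# Compound Poisson frailty: Z = G_1 + ... + G_N with N ~ Poisson(rho)
# and G_j i.i.d. Gamma(shape, scale); Z = 0 exactly when N = 0.
N = rng.poisson(rho, n)
Z = np.array([rng.gamma(shape, scale, k).sum() if k > 0 else 0.0 for k in N])

# The point mass at zero equals exp(-rho): both values about 0.30.
print((Z == 0).mean(), np.exp(-rho))
```

Subjects with Z = 0 contribute no hazard, which is how the model separates the unexposed subpopulation from population-level event-time comparisons such as the Cox model.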

  19. Kinetic models in n -dimensional Euclidean spaces: From the Maxwellian to the Poisson kernel

    NASA Astrophysics Data System (ADS)

    Zadehgol, Abed

    2015-06-01

    In this work, minimal kinetic theories based on unconventional entropy functions, H ∼ ln f (Burg entropy) for 2D and H ∼ f^(1−2/n) (Tsallis entropy) for nD with n ≥ 3, are studied. These entropy functions were originally derived by Boghosian et al. [Phys. Rev. E 68, 025103 (2003), 10.1103/PhysRevE.68.025103] as a basis for discrete-velocity and lattice Boltzmann models for incompressible fluid dynamics. The present paper extends the entropic models of Boghosian et al. and shows that the explicit form of the equilibrium distribution function (EDF) of their models, in the continuous-velocity limit, can be identified with the Poisson kernel of the Poisson integral formula. The conservation and Navier-Stokes equations are recovered at low Mach numbers, and it is shown that rest particles can be used to rectify the speed of sound of the extended models. Fourier series expansion of the EDF is used to evaluate the discretization errors of the model. It is shown that the expansion coefficients of the Fourier series coincide with the velocity moments of the model. Employing two-, three-, and four-dimensional (2D, 3D, and 4D) complex systems, the real velocity space is mapped into the hypercomplex spaces and it is shown that the velocity moments can be evaluated, using the Poisson integral formula, in the hypercomplex space. For the practical applications, a 3D projection of the 4D model is presented, and the existence of an H theorem for the discrete model is investigated. The theoretical results have been verified by simulating the following benchmark problems: (1) the Kelvin-Helmholtz instability of thin shear layers in a doubly periodic domain and (2) the 3D flow of incompressible fluid in a lid-driven cubic cavity. The present results are in agreement with the previous works, while they show better stability of the proposed kinetic model, as compared with the BGK type (with single relaxation time) lattice Boltzmann models.

  20. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet (α, −β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 − α, α − β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step. PMID:23408125

  1. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    PubMed

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes. PMID:22684587

  2. Poisson's ratio and crustal seismology

    SciTech Connect

    Christensen, N.I.

    1996-02-10

    This report discusses the use of Poisson's ratio to place constraints on continental crustal composition. A summary of Poisson's ratios for many common rock formations is also included with emphasis on igneous and metamorphic rock properties.

  3. WAITING TIME DISTRIBUTION OF SOLAR ENERGETIC PARTICLE EVENTS MODELED WITH A NON-STATIONARY POISSON PROCESS

    SciTech Connect

    Li, C.; Su, W.; Fang, C.; Zhong, S. J.; Wang, L.

    2014-09-10

    We present a study of the waiting time distributions (WTDs) of solar energetic particle (SEP) events observed with the spacecraft WIND and GOES. The WTDs of both solar electron events (SEEs) and solar proton events (SPEs) display a power-law tail of ∼Δt^(−γ). The SEEs display a broken power-law WTD. The power-law index is γ_1 = 0.99 for the short waiting times (<70 hr) and γ_2 = 1.92 for large waiting times (>100 hr). The break of the WTD of SEEs is probably due to the modulation of the corotating interaction regions. The power-law index, γ ∼ 1.82, is derived for the WTD of the SPEs, which is consistent with the WTD of type II radio bursts, indicating a close relationship between the shock wave and the production of energetic protons. The WTDs of SEP events can be modeled with a non-stationary Poisson process, which was proposed to understand the waiting time statistics of solar flares. We generalize the method and find that, if the SEP event rate λ = 1/Δt varies as the time distribution of event rate f(λ) = Aλ^(−α) exp(−βλ), the time-dependent Poisson distribution can produce a power-law tail WTD of ∼Δt^(α−3), where 0 ≤ α < 2.

  4. Comparing INLA and OpenBUGS for hierarchical Poisson modeling in disease mapping.

    PubMed

    Carroll, R; Lawson, A B; Faes, C; Kirby, R S; Aregay, M; Watjou, K

    2015-01-01

    The recently developed R package INLA (Integrated Nested Laplace Approximation) is becoming a more widely used package for Bayesian inference. The INLA software has been promoted as a fast alternative to MCMC for disease mapping applications. Here, we compare the INLA package to the MCMC approach by way of the BRugs package in R, which calls OpenBUGS. We focus on the Poisson data model commonly used for disease mapping. Ultimately, INLA is a computationally efficient way of implementing Bayesian methods and returns nearly identical estimates for fixed parameters in comparison to OpenBUGS, but falls short in recovering the true estimates for the random effects, their precisions, and model goodness of fit measures under the default settings. We assumed default settings for ground truth parameters, and through altering these default settings in our simulation study, we were able to recover estimates comparable to those produced in OpenBUGS under the same assumptions. PMID:26530822

  5. Electrical Circuit Modeling Considering a Transient Space Charge for Nonsteady Poisson-Nernst-Planck Equations

    NASA Astrophysics Data System (ADS)

    Sugioka, Hideyuki

    2015-10-01

    Transient space charge phenomena at high step voltages are interesting since they play a central role in many exotic nonequilibrium phenomena of ion dynamics in an electrolyte. However, the fundamental equations [i.e., the nonsteady Poisson-Nernst-Planck (PNP) equations] have not been solved analytically at high applied voltages because of their large nonlinearity. In this study, on the basis of the steady PNP solution, we propose an electrical circuit model that considers transient space charge effects and find that the dc and ac responses of the total charge of the electrical double layer are in fairly good agreement with the numerical results even at large applied voltages. Furthermore, on the basis of this model, we find approximate analytical solutions for the nonsteady PNP equations that are in good agreement with the numerical solutions of the concentration, charge density, and potential distribution at high applied voltages at each time in a surface region.

  6. Elastic-plastic cube model for ultrasonic friction reduction via Poisson's effect.

    PubMed

    Dong, Sheng; Dapino, Marcelo J

    2014-01-01

    Ultrasonic friction reduction has been studied experimentally and theoretically. This paper presents a new elastic-plastic cube model which can be applied to various ultrasonic lubrication cases. A cube is used to represent all the contacting asperities of two surfaces. Friction force is considered as the product of the tangential contact stiffness and the deformation of the cube. Ultrasonic vibrations are projected onto three orthogonal directions, separately changing the contact parameters and deformations, and hence the overall friction force. Experiments are conducted to examine ultrasonic friction reduction using different materials under normal loads that vary from 40 N to 240 N. Ultrasonic vibrations are generated both in the longitudinal and the vertical (out-of-plane) directions by way of the Poisson effect. The tests show up to 60% friction reduction; model simulations describe the trends observed experimentally. PMID:23850422

  7. Labour and residential accessibility: a Bayesian analysis based on Poisson gravity models with spatial effects

    NASA Astrophysics Data System (ADS)

    Alonso, M. P.; Beamonte, M. A.; Gargallo, P.; Salvador, M. J.

    2014-10-01

    In this study, we measure jointly the labour and the residential accessibility of a basic spatial unit using a Bayesian Poisson gravity model with spatial effects. The accessibility measures are broken down into two components: the attractiveness component, which is related to its socio-economic and demographic characteristics, and the impedance component, which reflects the ease of communication within and between basic spatial units. For illustration purposes, the methodology is applied to a data set containing information about commuters from the Spanish region of Aragón. We identify the areas with better labour and residential accessibility, and we also analyse the attractiveness and the impedance components of a set of chosen localities which allows us to better understand their mobility patterns.

  8. Testing a Poisson Counter Model for Visual Identification of Briefly Presented, Mutually Confusable Single Stimuli in Pure Accuracy Tasks

    ERIC Educational Resources Information Center

    Kyllingsbaek, Soren; Markussen, Bo; Bundesen, Claus

    2012-01-01

    The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is…

  9. Advanced 3D Poisson solvers and particle-in-cell methods for accelerator modeling

    NASA Astrophysics Data System (ADS)

    Serafini, David B.; McCorquodale, Peter; Colella, Phillip

    2005-01-01

    We seek to improve on the conventional FFT-based algorithms for solving the Poisson equation with infinite-domain (open) boundary conditions for large problems in accelerator modeling and related areas. In particular, improvements in both accuracy and performance are possible by combining several technologies: the method of local corrections (MLC); the James algorithm; and adaptive mesh refinement (AMR). The MLC enables the parallelization (by domain decomposition) of problems with large domains and many grid points. This improves on the FFT-based Poisson solvers typically used as it doesn't require the all-to-all communication pattern that parallel 3d FFT algorithms require, which tends to be a performance bottleneck on current (and foreseeable) parallel computers. In initial tests, good scalability up to 1000 processors has been demonstrated for our new MLC solver. An essential component of our approach is a new version of the James algorithm for infinite-domain boundary conditions for the case of three dimensions. By using a simplified version of the fast multipole method in the boundary-to-boundary potential calculation, we improve on the performance of the Hockney algorithm typically used by reducing the number of grid points by a factor of 8, and the CPU costs by a factor of 3. This is particularly important for large problems where computer memory limits are a consideration. The MLC allows for the use of adaptive mesh refinement, which reduces the number of grid points and increases the accuracy in the Poisson solution. This improves on the uniform grid methods typically used in PIC codes, particularly in beam problems where the halo is large. Also, the number of particles per cell can be controlled more closely with adaptivity than with a uniform grid. To use AMR with particles is more complicated than using uniform grids. 
It affects depositing particles on the non-uniform grid, reassigning particles when the adaptive grid changes and maintaining the load

  10. Semiparametric bivariate zero-inflated Poisson models with application to studies of abundance for multiple species

    USGS Publications Warehouse

    Arab, Ali; Holan, Scott H.; Wikle, Christopher K.; Wildhaber, Mark L.

    2012-01-01

    Ecological studies involving counts of abundance, presence–absence or occupancy rates often produce data having a substantial proportion of zeros. Furthermore, these types of processes are typically multivariate and only adequately described by complex nonlinear relationships involving externally measured covariates. Ignoring these aspects of the data and implementing standard approaches can lead to models that fail to provide adequate scientific understanding of the underlying ecological processes, possibly resulting in a loss of inferential power. One method of dealing with data having excess zeros is to consider the class of univariate zero-inflated generalized linear models. However, this class of models fails to address the multivariate and nonlinear aspects associated with the data usually encountered in practice. Therefore, we propose a semiparametric bivariate zero-inflated Poisson model that takes into account both of these data attributes. The general modeling framework is hierarchical Bayes and is suitable for a broad range of applications. We demonstrate the effectiveness of our model through a motivating example on modeling catch per unit area for multiple species using data from the Missouri River Benthic Fishes Study, implemented by the United States Geological Survey.
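The univariate zero-inflated Poisson building block that this bivariate model extends mixes a point mass at zero with an ordinary Poisson count. A minimal sketch with illustrative parameters (the paper's semiparametric bivariate structure and hierarchical Bayes machinery are not shown):

```python
from math import exp, factorial

def zip_pmf(k, pi, lam):
    """Zero-inflated Poisson: a point mass pi at zero mixed with Poisson(lam)."""
    pois = exp(-lam) * lam**k / factorial(k)
    return pi * (k == 0) + (1 - pi) * pois

pi, lam = 0.3, 2.5                 # illustrative parameters
pmf = [zip_pmf(k, pi, lam) for k in range(50)]
print(pmf[0])    # 0.3 + 0.7 * exp(-2.5), about 0.357
print(sum(pmf))  # the pmf sums to 1.0
```

The extra mass at zero (0.357 versus exp(−2.5) ≈ 0.082 for a plain Poisson) is what lets such models absorb the substantial proportion of zeros in abundance and occupancy data.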

  11. Probabilistic prediction of cyanobacteria abundance in a Korean reservoir using a Bayesian Poisson model

    NASA Astrophysics Data System (ADS)

    Cha, YoonKyung; Park, Seok Soon; Kim, Kyunghyun; Byeon, Myeongseop; Stow, Craig A.

    2014-03-01

    There have been increasing reports of harmful algal blooms (HABs) worldwide. However, the factors that influence cyanobacteria dominance and HAB formation can be site-specific and idiosyncratic, making prediction challenging. The drivers of cyanobacteria blooms in Lake Paldang, South Korea, the summer climate of which is strongly affected by the East Asian monsoon, may differ from those in well-studied North American lakes. Using the observational data sampled during the growing season in 2007-2011, a Bayesian hurdle Poisson model was developed to predict cyanobacteria abundance in the lake. The model allowed cyanobacteria absence (zero count) and nonzero cyanobacteria counts to be modeled as functions of different environmental factors. The model predictions demonstrated that the principal factor that determines the success of cyanobacteria was temperature. Combined with high temperature, increased residence time indicated by low outflow rates appeared to increase the probability of cyanobacteria occurrence. A stable water column, represented by low suspended solids, and high temperature were the requirements for high abundance of cyanobacteria. Our model results had management implications; the model can be used to forecast cyanobacteria watch or alert levels probabilistically and develop mitigation strategies of cyanobacteria blooms.
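The hurdle structure in this abstract separates the zero/nonzero decision from the size of nonzero counts. A hedged sketch of the count layer with illustrative parameters (the covariate links to temperature, outflow and suspended solids, and the Bayesian fitting, are omitted): zeros occur with probability p_zero, and positive counts follow a zero-truncated Poisson.

```python
from math import exp, factorial

def hurdle_pmf(k, p_zero, lam):
    """Hurdle Poisson: P(0) = p_zero; positive counts follow a
    zero-truncated Poisson(lam) scaled by (1 - p_zero)."""
    if k == 0:
        return p_zero
    pois = exp(-lam) * lam**k / factorial(k)
    return (1 - p_zero) * pois / (1 - exp(-lam))

p_zero, lam = 0.4, 3.0             # illustrative parameters
pmf = [hurdle_pmf(k, p_zero, lam) for k in range(60)]
print(pmf[0], sum(pmf))  # 0.4, and the pmf sums to 1.0
```

Unlike zero-inflation, the hurdle form models absence and abundance with fully separate components, which matches the abstract's use of different environmental predictors for the zero and nonzero parts.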

  12. Exact protein distributions for stochastic models of gene expression using partitioning of Poisson processes

    NASA Astrophysics Data System (ADS)

    Pendar, Hodjat; Platini, Thierry; Kulkarni, Rahul V.

    2013-04-01

    Stochasticity in gene expression gives rise to fluctuations in protein levels across a population of genetically identical cells. Such fluctuations can lead to phenotypic variation in clonal populations; hence, there is considerable interest in quantifying noise in gene expression using stochastic models. However, obtaining exact analytical results for protein distributions has been an intractable task for all but the simplest models. Here, we invoke the partitioning property of Poisson processes to develop a mapping that significantly simplifies the analysis of stochastic models of gene expression. The mapping leads to exact protein distributions using results for mRNA distributions in models with promoter-based regulation. Using this approach, we derive exact analytical results for steady-state and time-dependent distributions for the basic two-stage model of gene expression. Furthermore, we show how the mapping leads to exact protein distributions for extensions of the basic model that include the effects of posttranscriptional and posttranslational regulation. The approach developed in this work is widely applicable and can contribute to a quantitative understanding of stochasticity in gene expression and its regulation.
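The partitioning property the mapping relies on states that independently labeling each event of a Poisson process with probability p splits it into two independent Poisson processes with rates pλ and (1 − p)λ. A quick numerical check (illustrative rates, not tied to the gene-expression model itself):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, p, n = 10.0, 0.3, 100_000   # illustrative rate and marking probability

# Count events per unit interval, then independently mark each event.
total = rng.poisson(lam, n)
marked = rng.binomial(total, p)      # events assigned to sub-stream A
unmarked = total - marked            # events assigned to sub-stream B

# Each sub-stream is Poisson (mean equals variance) at the thinned rate,
# and the two sub-streams are uncorrelated.
print(marked.mean(), marked.var())        # both about 3.0
print(unmarked.mean(), unmarked.var())    # both about 7.0
print(np.corrcoef(marked, unmarked)[0, 1])  # about 0.0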

  13. Modeling of fatigue crack growth closure considering the integrative effect of cyclic stress ratio, specimen thickness and Poisson's ratio

    NASA Astrophysics Data System (ADS)

    Liu, Jiantao; Du, Pingan; Liu, Xiaobao; Du, Qiang

    2012-07-01

    Key components of large aeronautical structures are required to be light yet have sufficiently long fatigue lives, so estimating their fatigue life accurately is of vital importance. Since the fatigue crack growth (FCG) process is affected by many factors, no universal model exists due to the complexity of the mechanisms; most existing models are obtained by fitting experimental data and can hardly describe the integrative effect of the relevant factors simultaneously. To account for the combined effect of specimen parameters, material properties, and loading conditions on the FCG process, a new model named the integrative influence factor (IIF) model is proposed based on plasticity-induced crack closure theory. Comparing predictions of the crack opening ratio (γ) and the effective stress intensity factor range ratio (U) for different materials under various loading conditions, the IIF model's predictions of γ and U are identical to the theoretical results from the plane stress state to the plane strain state when Poisson's ratio equals 1/3. When Poisson's ratio equals 0.3, the IIF predictions of γ and U are larger than those of the existing model and closer to the theoretical results. In addition, the model describes the influence of the stress ratio R on γ and U effectively over the whole range from -1.0 to 1.0. Moreover, several sets of FCG-rate test data for five aluminum alloys with various specimen thicknesses under different loading conditions are used to validate the IIF model: most of the test data fall on the predicted curves or between the two curves corresponding to specimens of different thicknesses at the same stress ratio. Some test data depart slightly from the IIF predictions because of surface roughness and measurement error. Finally, based on analysis of the physical behavior of crack opening ratios, a relative specimen thickness is defined

  14. SnIPRE: selection inference using a Poisson random effects model.

    PubMed

    Eilertson, Kirsten E; Booth, James G; Bustamante, Carlos D

    2012-01-01

    We present an approach for identifying genes under natural selection using polymorphism and divergence data from synonymous and non-synonymous sites within genes. A generalized linear mixed model is used to model the genome-wide variability among categories of mutations and estimate its functional consequence. We demonstrate how the model's estimated fixed and random effects can be used to identify genes under selection. The parameter estimates from our generalized linear model can be transformed to yield population genetic parameter estimates for quantities including the average selection coefficient for new mutations at a locus, the synonymous and non-synonymous mutation rates, and species divergence times. Furthermore, our approach incorporates stochastic variation due to the evolutionary process and can be fit using standard statistical software. The model is fit in both the empirical Bayes and Bayesian settings using the lme4 package in R, and Markov chain Monte Carlo methods in WinBUGS. Using simulated data we compare our method to existing approaches for detecting genes under selection: the McDonald-Kreitman test, and two versions of the Poisson random field based method MKprf. Overall, we find our method universally outperforms existing methods for detecting genes subject to selection using polymorphism and divergence data. PMID:23236270

  15. SnIPRE: Selection Inference Using a Poisson Random Effects Model

    PubMed Central

    Eilertson, Kirsten E.; Booth, James G.; Bustamante, Carlos D.

    2012-01-01

    We present an approach for identifying genes under natural selection using polymorphism and divergence data from synonymous and non-synonymous sites within genes. A generalized linear mixed model is used to model the genome-wide variability among categories of mutations and estimate its functional consequence. We demonstrate how the model's estimated fixed and random effects can be used to identify genes under selection. The parameter estimates from our generalized linear model can be transformed to yield population genetic parameter estimates for quantities including the average selection coefficient for new mutations at a locus, the synonymous and non-synonymous mutation rates, and species divergence times. Furthermore, our approach incorporates stochastic variation due to the evolutionary process and can be fit using standard statistical software. The model is fit in both the empirical Bayes and Bayesian settings using the lme4 package in R, and Markov chain Monte Carlo methods in WinBUGS. Using simulated data we compare our method to existing approaches for detecting genes under selection: the McDonald-Kreitman test, and two versions of the Poisson random field based method MKprf. Overall, we find our method universally outperforms existing methods for detecting genes subject to selection using polymorphism and divergence data. PMID:23236270

  16. Analytical solutions of nonlocal Poisson dielectric models with multiple point charges inside a dielectric sphere.

    PubMed

    Xie, Dexuan; Volkmer, Hans W; Ying, Jinyong

    2016-04-01

    The nonlocal dielectric approach has led to new models and solvers for predicting electrostatics of proteins (or other biomolecules), but how to validate and compare them remains a challenge. To promote such a study, in this paper, two typical nonlocal dielectric models are revisited. Their analytical solutions are then found in the form of simple series for a dielectric sphere containing any number of point charges. As a special case, the analytical solution of the corresponding Poisson dielectric model is also derived in simple series, which significantly improves the well-known Kirkwood double series expansion. Furthermore, a convolution of one nonlocal dielectric solution with a commonly used nonlocal kernel function is obtained, along with the reaction parts of these local and nonlocal solutions. To turn these new series solutions into a valuable research tool, they are programmed as a free Fortran software package, which can input point charge data directly from a Protein Data Bank file. Consequently, different validation tests can be quickly done on different proteins. Finally, a test example for a protein with 488 atomic charges is reported to demonstrate the differences between the local and nonlocal models as well as the importance of using the reaction parts to develop local and nonlocal dielectric solvers. PMID:27176425

  17. Analytical solutions of nonlocal Poisson dielectric models with multiple point charges inside a dielectric sphere

    NASA Astrophysics Data System (ADS)

    Xie, Dexuan; Volkmer, Hans W.; Ying, Jinyong

    2016-04-01

    The nonlocal dielectric approach has led to new models and solvers for predicting electrostatics of proteins (or other biomolecules), but how to validate and compare them remains a challenge. To promote such a study, in this paper, two typical nonlocal dielectric models are revisited. Their analytical solutions are then found in the form of simple series for a dielectric sphere containing any number of point charges. As a special case, the analytical solution of the corresponding Poisson dielectric model is also derived in simple series, which significantly improves the well-known Kirkwood double series expansion. Furthermore, a convolution of one nonlocal dielectric solution with a commonly used nonlocal kernel function is obtained, along with the reaction parts of these local and nonlocal solutions. To turn these new series solutions into a valuable research tool, they are programmed as a free Fortran software package, which can input point charge data directly from a Protein Data Bank file. Consequently, different validation tests can be quickly done on different proteins. Finally, a test example for a protein with 488 atomic charges is reported to demonstrate the differences between the local and nonlocal models as well as the importance of using the reaction parts to develop local and nonlocal dielectric solvers.

  18. Statistical Inference of Selection and Divergence from a Time-Dependent Poisson Random Field Model

    PubMed Central

    Amei, Amei; Sawyer, Stanley

    2012-01-01

    We apply a recently developed time-dependent Poisson random field model to aligned DNA sequences from two related biological species to estimate selection coefficients and divergence time. We use Markov chain Monte Carlo methods to estimate species divergence time and selection coefficients for each locus. The model assumes that the selective effects of non-synonymous mutations are normally distributed across genetic loci but constant within loci, and synonymous mutations are selectively neutral. In contrast with previous models, we do not assume that the individual species are at population equilibrium after divergence. Using a data set of 91 genes in two Drosophila species, D. melanogaster and D. simulans, we estimate the species divergence time (or 1.68 million years, assuming the haploid effective population size years) and a mean selection coefficient per generation . Although the average selection coefficient is positive, the magnitude of the selection is quite small. Results from numerical simulations are also presented as an accuracy check for the time-dependent model. PMID:22509300

  19. Introduction of effective dielectric constant to the Poisson-Nernst-Planck model

    NASA Astrophysics Data System (ADS)

    Sawada, Atsushi

    2016-05-01

    The Poisson-Nernst-Planck (PNP) model has been widely used for analyzing impedance or dielectric spectra observed for dilute electrolytic cells. In the analysis, the behavior of mobile ions in the cell under an external electric field has been explained by a conductive nature regardless of ionic concentrations. However, if the cell has parallel-plate blocking electrodes, the mobile ions may also play a role as a dielectric medium in the cell by the effect of space-charge polarization when the ionic concentration is sufficiently low. Thus the mobile ions confined between the blocking electrodes can have conductive and dielectric natures simultaneously, and their intensities are affected by the ionic concentration and the adsorption of solvent molecules on the electrodes. The balance of the conductive and dielectric natures is quantitatively determined by introducing an effective dielectric constant to the PNP model in the data analysis. The generalized PNP model with the effective dielectric constant successfully explains the anomalous frequency-dependent dielectric behaviors brought about by the mobile ions in dilute electrolytic cells, for which the conventional PNP model fails in interpretation.

  20. An efficient parallel sampling technique for Multivariate Poisson-Lognormal model: Analysis with two crash count datasets

    DOE PAGESBeta

    Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.

    2015-11-19

    Our study investigates the multivariate Poisson-lognormal (MVPLN) model that jointly models crash frequency and severity, accounting for their correlations. Ordinary univariate count models analyze crashes of different severity levels separately, ignoring the correlations among severity levels. The MVPLN model can incorporate a general correlation structure and accounts for overdispersion in the data, which leads to a superior fit. However, the traditional estimation approach for the MVPLN model is computationally expensive, which often limits its use in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach of the MVPLN model, which significantly reduces the model estimation time. Two MVPLN models are developed using pedestrian-vehicle crash data collected in New York City from 2002 to 2006 and highway-injury data from Washington State (5-year data from 1990 to 1994). The Deviance Information Criterion (DIC) is used to evaluate model fit. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and negative binomial models. Moreover, the correlations among the latent effects of different severity levels are found significant in both datasets, which justifies the importance of jointly modeling crash frequency and severity accounting for correlations.
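A minimal sketch of the MVPLN data-generating mechanism may clarify why it captures both overdispersion and cross-severity correlation; the latent means and covariance below are illustrative, not estimates from the crash datasets.

```python
import numpy as np

rng = np.random.default_rng(7)

# MVPLN sketch: counts for two severity levels share correlated lognormal
# latent effects, inducing overdispersion and cross-severity correlation.
n = 100_000
mu = np.array([1.0, 0.5])                    # log-scale means (illustrative)
cov = np.array([[0.3, 0.2], [0.2, 0.3]])     # latent covariance (illustrative)
rates = np.exp(rng.multivariate_normal(mu, cov, size=n))
counts = rng.poisson(rates)

rho = float(np.corrcoef(counts[:, 0], counts[:, 1])[0, 1])
print(counts.mean(axis=0), counts.var(axis=0))   # variance exceeds the mean
print(round(rho, 2))                             # positive cross-correlation
```

The latent covariance plays the role of the "correlations among the latent effects" that the article finds significant.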

  1. An efficient parallel sampling technique for Multivariate Poisson-Lognormal model: Analysis with two crash count datasets

    SciTech Connect

    Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.

    2015-11-19

    Our study investigates the Multivariate Poisson-lognormal (MVPLN) model that jointly models crash frequency and severity accounting for correlations. The ordinary univariate count models analyze crashes of different severity level separately ignoring the correlations among severity levels. The MVPLN model is capable to incorporate the general correlation structure and takes account of the over dispersion in the data that leads to a superior data fitting. But, the traditional estimation approach for MVPLN model is computationally expensive, which often limits the use of MVPLN model in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach of the MVPLN model, which significantly reduces the model estimation time. Two MVPLN models are developed using the pedestrian vehicle crash data collected in New York City from 2002 to 2006, and the highway-injury data from Washington State (5-year data from 1990 to 1994) The Deviance Information Criteria (DIC) is used to evaluate the model fitting. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and Negative Binomial models. Moreover, the correlations among the latent effects of different severity levels are found significant in both datasets that justifies the importance of jointly modeling crash frequency and severity accounting for correlations.

  2. Modeling both the number of paucibacillary and multibacillary leprosy patients using bivariate Poisson regression

    NASA Astrophysics Data System (ADS)

    Winahju, W. S.; Mukarromah, A.; Putri, S.

    2015-03-01

    Leprosy is a chronic infectious disease caused by the leprosy bacterium (Mycobacterium leprae). Leprosy has become an important public-health issue in Indonesia because its morbidity is quite high. According to 2014 WHO data, in 2012 Indonesia had the highest number of new leprosy patients after India and Brazil, contributing 18,994 cases (8.7% of the world total). This places Indonesia as the ASEAN country with the highest leprosy morbidity. The province that contributes most to the number of leprosy patients in Indonesia is East Java. There are two kinds of leprosy: paucibacillary and multibacillary. The morbidity of multibacillary leprosy is higher than that of paucibacillary leprosy. This paper discusses modeling the numbers of multibacillary and paucibacillary leprosy patients jointly as response variables. Both responses are count variables, so modeling is conducted with the bivariate Poisson regression method. The observational units are in East Java, and the predictors involved are environment, demography, and poverty. The model uses data from 2012, and the results indicate that all predictors have a significant influence.
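A standard construction underlying bivariate Poisson regression is trivariate reduction, where a shared Poisson component induces the correlation between the two counts. A hedged sketch with arbitrary rates, not the fitted East Java model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Trivariate reduction: Y1 = A + C and Y2 = B + C with independent Poisson
# components, so cov(Y1, Y2) = lam_c. Rates are illustrative.
lam_a, lam_b, lam_c, n = 2.0, 3.0, 1.5, 200_000
a = rng.poisson(lam_a, n)
b = rng.poisson(lam_b, n)
c = rng.poisson(lam_c, n)
y1, y2 = a + c, b + c

print(round(float(y1.mean()), 2), round(float(y2.mean()), 2))  # lam_a+lam_c, lam_b+lam_c
print(round(float(np.cov(y1, y2)[0, 1]), 2))                   # ~lam_c
```

In the regression version, each rate is given a log-linear predictor in the covariates (environment, demography, poverty).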

  3. ADAPTIVE FINITE ELEMENT MODELING TECHNIQUES FOR THE POISSON-BOLTZMANN EQUATION

    PubMed Central

    HOLST, MICHAEL; MCCAMMON, JAMES ANDREW; YU, ZEYUN; ZHOU, YOUNGCHENG; ZHU, YUNRONG

    2011-01-01

    We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme

  4. Effect of air pollution on lung cancer: A poisson regression model based on vital statistics

    SciTech Connect

    Tango, Toshiro

    1994-11-01

    This article describes a Poisson regression model for time trends of mortality to detect the long-term effects of common levels of air pollution on lung cancer, in which the adjustment for cigarette smoking is not always necessary. The main hypothesis to be tested in the model is that if long-term, common-level air pollution had an effect on lung cancer, the death rate from lung cancer could be expected to increase gradually at a higher rate in the region with relatively high levels of air pollution than in the region with low levels, and that this trend would not be expected for other control diseases in which cigarette smoking is a risk factor. Using this approach, we analyzed the trend of mortality in females aged 40 to 79, from lung cancer and two control diseases, ischemic heart disease and cerebrovascular disease, based on vital statistics in 23 wards of the Tokyo metropolitan area for 1972 to 1988. Ward-specific mean daily levels of SO2 and NO2 from 1974 through 1976, estimated by Makino (1978), were used as the ward-specific exposure measure of air pollution. No data on tobacco consumption in each ward are available. Our analysis supported the existence of long-term effects of air pollution on lung cancer. 14 refs., 5 figs., 2 tabs.
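The kind of log-linear Poisson trend model described above can be sketched as follows; the population, baseline rate, and trend are simulated stand-ins, not the Tokyo vital statistics.

```python
import numpy as np

rng = np.random.default_rng(3)

# Log-linear Poisson trend model for mortality time series:
# log E[deaths_t] = log(pop_t) + b0 + b1 * t, fit by Newton-Raphson (IRLS).
years = np.arange(17, dtype=float)            # 17 annual observations
pop = np.full(17, 50_000.0)                   # person-years at risk (offset)
b_true = np.array([-8.0, 0.03])               # baseline log-rate and trend
X = np.column_stack([np.ones_like(years), years])
deaths = rng.poisson(pop * np.exp(X @ b_true))

beta = np.zeros(2)
for _ in range(25):                           # Newton-Raphson updates
    mu = pop * np.exp(X @ beta)               # fitted means under the offset
    beta += np.linalg.solve((X.T * mu) @ X, X.T @ (deaths - mu))

print(np.round(beta, 3))                      # should be close to b_true
```

Comparing the fitted trend coefficient between high- and low-pollution regions is the essence of the test the article proposes.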

  5. Survival analysis of clinical mastitis data using a nested frailty Cox model fit as a mixed-effects Poisson model.

    PubMed

    Elghafghuf, Adel; Dufour, Simon; Reyher, Kristen; Dohoo, Ian; Stryhn, Henrik

    2014-12-01

    Mastitis is a complex disease affecting dairy cows and is considered to be the most costly disease of dairy herds. The hazard of mastitis is a function of many factors, both managerial and environmental, making its control a difficult issue for milk producers. Observational studies of clinical mastitis (CM) often generate datasets with a number of characteristics which influence the analysis of those data: the outcome of interest may be the time to occurrence of a case of mastitis, predictors may change over time (time-dependent predictors), the effects of factors may change over time (time-dependent effects), there are usually multiple hierarchical levels, and datasets may be very large. Analysis of such data often requires expansion of the data into the counting-process format - leading to larger datasets - thus complicating the analysis and requiring excessive computing time. In this study, a nested frailty Cox model with time-dependent predictors and effects was applied to Canadian Bovine Mastitis Research Network data in which 10,831 lactations of 8035 cows from 69 herds were followed through lactation until the first occurrence of CM. The model was fit to the data as a Poisson model with nested normally distributed random effects at the cow and herd levels. Risk factors associated with the hazard of CM during the lactation were identified, such as parity, calving season, herd somatic cell score, pasture access, fore-stripping, and proportion of treated cases of CM in a herd. The analysis showed that most of the predictors had a strong effect early in lactation and also demonstrated substantial variation in the baseline hazard among cows and between herds. A small simulation study for a setting similar to the real data was conducted to evaluate the Poisson maximum likelihood estimation approach with both the Gaussian quadrature method and the Laplace approximation. 
Further, the performance of the two methods was compared with the performance of a widely used estimation

  6. Poisson-Fermi Modeling of the Ion Exchange Mechanism of the Sodium/Calcium Exchanger.

    PubMed

    Liu, Jinn-Liang; Hsieh, Hann-Jeng; Eisenberg, Bob

    2016-03-17

    The ion exchange mechanism of the sodium/calcium exchanger (NCX) crystallized by Liao et al. in 2012 is studied using the Poisson-Fermi theory developed by Liu and Eisenberg in 2014. A cycle of binding and unbinding is proposed to account for the Na(+)/Ca(2+) exchange function of the NCX molecule. Outputs of the theory include electric and steric fields of ions with different sizes, correlations of ions of different charges, and polarization of water, along with number densities of ions, water molecules, and interstitial voids. We calculate the electrostatic and steric potentials of the four binding sites in NCX, i.e., three Na(+) binding sites and one Ca(2+) binding site, with protein charges provided by the software PDB2PQR. The energy profiles of Na(+) and Ca(2+) ions along their respective Na(+) and Ca(2+) pathways in experimental conditions enable us to explain the fundamental mechanism of NCX that extrudes intracellular Ca(2+) across the cell membrane against its chemical gradient by using the downhill gradient of Na(+). Atomic and numerical details of the binding sites are given to illustrate the 3 Na(+):1 Ca(2+) stoichiometry of NCX. The protein NCX is a catalyst. It does not provide (free) energy for transport. All energy for transport in our model comes from the ions in surrounding baths. PMID:26906748

  7. Poisson type models and descriptive statistics of computer network information flows

    SciTech Connect

    Downing, D.; Fedorov, V.; Dunigan, T.; Batsell, S.

    1997-08-01

    Many contemporary publications on network traffic gravitate to ideas of self-similarity and long-range dependence. The corresponding elegant and parsimonious mathematical techniques have proved efficient for describing a wide class of aggregated processes. While sharing the enthusiasm for these ideas, the authors also believe that, whenever possible, a problem should be considered at the most basic level in an attempt to understand the driving forces of the processes under analysis. Consequently, they try to show that some behavioral patterns of descriptive statistics typical of long-memory processes (a particular case of long-range dependence) can also be explained within the traditional Poisson process paradigm. Applying the concepts of inhomogeneity, compoundness, and double stochasticity, they propose a simple and intuitively transparent approach to explaining the expected shape of the observed histograms of counts and the expected behavior of the sample covariance functions. Matching the images of these two descriptive statistics allows them to infer the presence of trends or double stochasticity in the analyzed time series. Only statistics based on counts are considered; a similar approach may be applied to waiting or inter-arrival time sequences and will be discussed in other publications. The authors hope that combining the reported results with statistical methods based on aggregated models may lead to computationally affordable on-line techniques for compact, visualized analysis of network flows.
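The role of double stochasticity mentioned above is easy to demonstrate: randomizing the Poisson rate inflates the variance-to-mean ratio of the counts, mimicking the heavy-looking count histograms often read as long-memory signatures. A sketch with an arbitrary gamma mixing distribution:

```python
import numpy as np

rng = np.random.default_rng(5)

# Doubly stochastic (mixed) Poisson counts: randomizing the rate inflates the
# variance above the mean, without any long-range dependence in play.
n = 200_000
plain = rng.poisson(10.0, n)                     # homogeneous Poisson counts
rate = rng.gamma(shape=2.0, scale=5.0, size=n)   # random rate with mean 10
mixed = rng.poisson(rate)                        # gamma-mixed Poisson counts

r_plain = float(plain.var() / plain.mean())      # ~1 for a plain Poisson
r_mixed = float(mixed.var() / mixed.mean())      # well above 1: overdispersion
print(round(r_plain, 2), round(r_mixed, 2))
```

A gamma-mixed Poisson is exactly a negative binomial, so the overdispersion here is analytic: var = mean + var(rate).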

  8. Calculation of electron transfer reorganization energies using the finite difference Poisson-Boltzmann model.

    PubMed Central

    Sharp, K A

    1998-01-01

    A description is given of a method to calculate the electron transfer reorganization energy (lambda) in proteins using the linear or nonlinear Poisson-Boltzmann (PB) equation. Finite difference solutions to the linear PB equation are then used to calculate lambda for intramolecular electron transfer reactions in the photosynthetic reaction center from Rhodopseudomonas viridis and the ruthenated heme proteins cytochrome c, myoglobin, and cytochrome b and for intermolecular electron transfer between two cytochrome c molecules. The overall agreement with experiment is good considering both the experimental and computational difficulties in estimating lambda. The calculations show that acceptor/donor separation and position of the cofactors with respect to the protein/solvent boundary are equally important and, along with the overall polarizability of the protein, are the major determinants of lambda. In agreement with previous studies, the calculations show that the protein provides a low reorganization environment for electron transfer. Agreement with experiment is best if the protein polarizability is modeled with a low (<8) average effective dielectric constant. The effect of buried waters on the reorganization energy of the photosynthetic reaction center was examined and found to make a contribution ranging from 0.05 eV to 0.27 eV, depending on the donor/acceptor pair. PMID:9512022
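Finite difference solution of the (linearized) PB equation can be illustrated in one dimension, where the computed potential should reproduce the Debye-Hückel screening decay; the geometry and parameters are illustrative and unrelated to the protein calculations above.

```python
import numpy as np

# 1-D finite-difference sketch of the linearized Poisson-Boltzmann equation
# phi'' = kappa^2 * phi on [0, L], with phi(0) = 1 and phi(L) = 0. For
# kappa*L >> 1 the solution follows the Debye-Hueckel decay exp(-kappa*x).
kappa, L, n = 2.0, 10.0, 1001
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Interior equations: (phi[i-1] - 2*phi[i] + phi[i+1])/h^2 = kappa^2 * phi[i]
main = np.full(n - 2, -2.0 / h**2 - kappa**2)
off = np.full(n - 3, 1.0 / h**2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
rhs = np.zeros(n - 2)
rhs[0] = -1.0 / h**2                       # left boundary phi(0) = 1
phi = np.empty(n)
phi[0], phi[-1] = 1.0, 0.0
phi[1:-1] = np.linalg.solve(A, rhs)

val = float(np.interp(1.0, x, phi))        # compare with exp(-kappa*1) ~ 0.135
print(round(val, 3))
```

Production PB solvers work on 3-D grids with position-dependent dielectric and charge terms, but the discretization idea is the same.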

  9. Bayesian semi-parametric analysis of Poisson change-point regression models: application to policy making in Cali, Colombia

    PubMed Central

    Park, Taeyoung; Krafty, Robert T.; Sánchez, Alvaro I.

    2012-01-01

    A Poisson regression model with an offset assumes a constant baseline rate after accounting for measured covariates, which may lead to biased estimates of coefficients in an inhomogeneous Poisson process. To correctly estimate the effect of time-dependent covariates, we propose a Poisson change-point regression model with an offset that allows a time-varying baseline rate. When the nonconstant pattern of a log baseline rate is modeled with a nonparametric step function, the resulting semi-parametric model involves a model component of varying dimension and thus requires a sophisticated varying-dimensional inference to obtain correct estimates of model parameters of fixed dimension. To fit the proposed varying-dimensional model, we devise a state-of-the-art MCMC-type algorithm based on partial collapse. The proposed model and methods are used to investigate an association between daily homicide rates in Cali, Colombia and policies that restrict the hours during which the legal sale of alcoholic beverages is permitted. While simultaneously identifying the latent changes in the baseline homicide rate which correspond to the incidence of sociopolitical events, we explore the effect of policies governing the sale of alcohol on homicide rates and seek a policy that balances the economic and cultural dependencies on alcohol sales to the health of the public. PMID:23393408
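The core idea of a change-point in a piecewise-constant Poisson baseline can be sketched by profiling the likelihood over candidate split points; the article's Bayesian partial-collapse machinery, covariates, and multiple change-points are omitted, and the rates below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

# Single change-point in a piecewise-constant Poisson rate, located by
# profiling the log-likelihood over candidate split points.
n, cp_true = 300, 180
y = np.concatenate([rng.poisson(4.0, cp_true), rng.poisson(7.0, n - cp_true)])

def seg_loglik(seg):
    lam = seg.mean()                  # segment MLE of the rate
    return float(seg.sum() * np.log(lam) - lam * seg.size)

best_cp = max(range(10, n - 10),
              key=lambda c: seg_loglik(y[:c]) + seg_loglik(y[c:]))
print(best_cp)                        # should land near cp_true
```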

  10. Ionic screening of charged impurities in electrolytically gated graphene: A partially linearized Poisson-Boltzmann model.

    PubMed

    Sharma, P; Mišković, Z L

    2015-10-01

    We present a model describing the electrostatic interactions across a structure that consists of a single layer of graphene with large area, lying above an oxide substrate of finite thickness, with its surface exposed to a thick layer of liquid electrolyte containing salt ions. Our goal is to analyze the co-operative screening of the potential fluctuation in a doped graphene due to randomness in the positions of fixed charged impurities in the oxide by the charge carriers in graphene and by the mobile ions in the diffuse layer of the electrolyte. In order to account for a possibly large potential drop in the diffuse layer that may arise in an electrolytically gated graphene, we use a partially linearized Poisson-Boltzmann (PB) model of the electrolyte, in which we solve a fully nonlinear PB equation for the surface average of the potential in one dimension, whereas the lateral fluctuations of the potential in graphene are tackled by linearizing the PB equation about the average potential. In this way, we are able to describe the regime of equilibrium doping of graphene to large densities for arbitrary values of the ion concentration without restrictions to the potential drop in the electrolyte. We evaluate the electrostatic Green's function for the partially linearized PB model, which is used to express the screening contributions of the graphene layer and the nearby electrolyte by means of an effective dielectric function. We find that, while the screened potential of a single charged impurity at large in-graphene distances exhibits a strong dependence on the ion concentration in the electrolyte and on the doping density in graphene, in the case of a spatially correlated two-dimensional ensemble of impurities, this dependence is largely suppressed in the autocovariance of the fluctuating potential. PMID:26450303

  11. A Poisson equation formulation for pressure calculations in penalty finite element models for viscous incompressible flows

    NASA Technical Reports Server (NTRS)

    Sohn, J. L.; Heinrich, J. C.

    1990-01-01

    The calculation of pressures when the penalty-function approximation is used in finite-element solutions of laminar incompressible flows is addressed. A Poisson equation for the pressure is formulated that involves third derivatives of the velocity field. The second derivatives appearing in the weak formulation of the Poisson equation are calculated from the C0 velocity approximation using a least-squares method. The present scheme is shown to be efficient, free of spurious oscillations, and accurate. Examples of applications are given and compared with results obtained using mixed formulations.

  12. Poisson-Nernst-Planck-Fermi theory for modeling biological ion channels.

    PubMed

    Liu, Jinn-Liang; Eisenberg, Bob

    2014-12-14

    A Poisson-Nernst-Planck-Fermi (PNPF) theory is developed for studying ionic transport through biological ion channels. Our goal is to deal with the finite size of particles using a Fermi-like distribution without calculating the forces between the particles, because they are both expensive and tricky to compute. We include the steric effect of ions and water molecules with nonuniform sizes and interstitial voids, the correlation effect of crowded ions with different valences, and the screening effect of water molecules in an inhomogeneous aqueous electrolyte. Including the finite volume of water and the voids between particles is an important new part of the theory presented here. Fermi-like distributions of all particle species are derived from the volume exclusion of classical particles. Volume exclusion and the resulting saturation phenomena are especially important to describe the binding and permeation mechanisms of ions in a narrow channel pore. The Gibbs free energy of the Fermi distribution reduces to that of a Boltzmann distribution when these effects are not considered. The classical Gibbs entropy is extended to a new entropy form - called Gibbs-Fermi entropy - that describes mixing configurations of all finite size particles and voids in a thermodynamic system where microstates do not have equal probabilities. The PNPF model describes the dynamic flow of ions, water molecules, as well as voids with electric fields and protein charges. The model also provides a quantitative mean-field description of the charge/space competition mechanism of particles within the highly charged and crowded channel pore. The PNPF results are in good accord with experimental currents recorded in a 10^8-fold range of Ca^2+ concentrations. The results illustrate the anomalous mole fraction effect, a signature of L-type calcium channels. Moreover, numerical results concerning water density, dielectric permittivity, void volume, and steric energy provide useful details to study

  13. Poisson-Gaussian Noise Reduction Using the Hidden Markov Model in Contourlet Domain for Fluorescence Microscopy Images

    PubMed Central

    Yang, Sejung; Lee, Byung-Uk

    2015-01-01

    In certain image acquisition processes, such as fluorescence microscopy or astronomy, only a limited number of photons can be collected due to various physical constraints. The resulting images suffer from signal-dependent noise, which can be modeled as a Poisson distribution, and a low signal-to-noise ratio. However, the majority of research on noise reduction algorithms focuses on signal-independent Gaussian noise. In this paper, we model noise as a combination of Poisson and Gaussian probability distributions to construct a more accurate model and adopt the contourlet transform, which provides a sparse representation of the directional components in images. We also apply hidden Markov models with a framework that neatly describes the spatial and interscale dependencies which are the properties of transformation coefficients of natural images. Here, an effective denoising algorithm for Poisson-Gaussian noise is proposed using the contourlet transform, hidden Markov models and noise estimation in the transform domain. We supplement the algorithm by cycle spinning and Wiener filtering for further improvements. We finally show experimental results with simulations and fluorescence microscopy images which demonstrate the improved performance of the proposed approach. PMID:26352138
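
    As the abstract notes, the observed signal can be modeled as a Poisson-distributed photon count plus signal-independent Gaussian read noise. A minimal sketch of that forward model (the gain and noise values are invented for illustration, not taken from the paper):

```python
import numpy as np

def add_poisson_gaussian_noise(image, gain=1.0, sigma=2.0, rng=None):
    """Simulate signal-dependent Poisson (shot) noise plus additive
    Gaussian read noise. Hypothetical parameters, for illustration only."""
    rng = np.random.default_rng(rng)
    shot = rng.poisson(image * gain) / gain          # variance grows with the signal
    read = rng.normal(0.0, sigma, image.shape)       # signal-independent component
    return shot + read

clean = np.full((64, 64), 50.0)   # flat synthetic frame, 50 expected photons/pixel
noisy = add_poisson_gaussian_noise(clean, rng=0)
# total variance is roughly mean/gain + sigma**2 (here about 54)
```

    Denoising methods such as the one above must account for the fact that the shot-noise variance scales with the signal, unlike the constant-variance Gaussian case.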

  14. Relative age and birthplace effect in Japanese professional sports: a quantitative evaluation using a Bayesian hierarchical Poisson model.

    PubMed

    Ishigami, Hideaki

    2016-01-01

    Relative age effect (RAE) in sports has been well documented. Recent studies investigate the effect of birthplace in addition to the RAE. The first objective of this study was to show the magnitude of the RAE in two major professional sports in Japan, baseball and soccer. Second, we examined the birthplace effect and compared its magnitude with that of the RAE. The effect sizes were estimated using a Bayesian hierarchical Poisson model with the number of players as the dependent variable. The RAEs were 9.0% and 7.7% per month for soccer and baseball, respectively. These estimates imply that children born in the first month of a school year have about three times greater chance of becoming a professional player than those born in the last month of the year. Over half of the difference in likelihoods of becoming a professional player between birthplaces was accounted for by weather conditions, with the likelihood decreasing by 1% per snow day. An effect of population size was not detected in the data. By investigating different samples, we demonstrated that using quarterly data leads to underestimation and that the age range of sampled athletes should be set carefully. PMID:25917193

  15. Numerical methods for a Poisson-Nernst-Planck-Fermi model of biological ion channels.

    PubMed

    Liu, Jinn-Liang; Eisenberg, Bob

    2015-07-01

    Numerical methods are proposed for an advanced Poisson-Nernst-Planck-Fermi (PNPF) model for studying ion transport through biological ion channels. PNPF contains many more correlations than most models and simulations of channels, because it includes water and calculates dielectric properties consistently as outputs. This model accounts for the steric effect of ions and water molecules with different sizes and interstitial voids, the correlation effect of crowded ions with different valences, and the screening effect of polarized water molecules in an inhomogeneous aqueous electrolyte. The steric energy is shown to be comparable to the electrical energy under physiological conditions, demonstrating the crucial role of the excluded volume of particles and the voids in the natural function of channel proteins. Water is shown to play a critical role in both correlation and steric effects in the model. We extend the classical Scharfetter-Gummel (SG) method for semiconductor devices to include the steric potential for ion channels, which is a fundamental physical property not present in semiconductors. Together with a simplified matched interface and boundary (SMIB) method for treating molecular surfaces and singular charges of channel proteins, the extended SG method is shown to exhibit important features in flow simulations such as optimal convergence, efficient nonlinear iterations, and physical conservation. The generalized SG stability condition shows why the standard discretization (without SG exponential fitting) of NP equations may fail and that divalent Ca^2+ may cause more unstable discrete Ca^2+ fluxes than those of monovalent Na^+. Two different methods - called the SMIB and multiscale methods - are proposed for two different types of channels, namely, the gramicidin A channel and an L-type calcium channel, depending on whether water is allowed to pass through the channel. Numerical methods are first validated with constructed models whose exact solutions are

  16. Numerical methods for a Poisson-Nernst-Planck-Fermi model of biological ion channels

    NASA Astrophysics Data System (ADS)

    Liu, Jinn-Liang; Eisenberg, Bob

    2015-07-01

    Numerical methods are proposed for an advanced Poisson-Nernst-Planck-Fermi (PNPF) model for studying ion transport through biological ion channels. PNPF contains many more correlations than most models and simulations of channels, because it includes water and calculates dielectric properties consistently as outputs. This model accounts for the steric effect of ions and water molecules with different sizes and interstitial voids, the correlation effect of crowded ions with different valences, and the screening effect of polarized water molecules in an inhomogeneous aqueous electrolyte. The steric energy is shown to be comparable to the electrical energy under physiological conditions, demonstrating the crucial role of the excluded volume of particles and the voids in the natural function of channel proteins. Water is shown to play a critical role in both correlation and steric effects in the model. We extend the classical Scharfetter-Gummel (SG) method for semiconductor devices to include the steric potential for ion channels, which is a fundamental physical property not present in semiconductors. Together with a simplified matched interface and boundary (SMIB) method for treating molecular surfaces and singular charges of channel proteins, the extended SG method is shown to exhibit important features in flow simulations such as optimal convergence, efficient nonlinear iterations, and physical conservation. The generalized SG stability condition shows why the standard discretization (without SG exponential fitting) of NP equations may fail and that divalent Ca^2+ may cause more unstable discrete Ca^2+ fluxes than those of monovalent Na^+. Two different methods - called the SMIB and multiscale methods - are proposed for two different types of channels, namely, the gramicidin A channel and an L-type calcium channel, depending on whether water is allowed to pass through the channel. Numerical methods are first validated with constructed models whose exact solutions are

  17. Zero-inflated generalized Poisson regression mixture model for mapping quantitative trait loci underlying count trait with many zeros.

    PubMed

    Cui, Yuehua; Yang, Wenzhao

    2009-01-21

    Phenotypes measured in counts are commonly observed in nature. Statistical methods for mapping quantitative trait loci (QTL) underlying count traits are documented in the literature. The majority of them assume that the count phenotype follows a Poisson distribution with appropriate techniques being applied to handle data dispersion. When a count trait has a genetic basis, "naturally occurring" zero status also reflects the underlying gene effects. Simply ignoring or mishandling the zero data may lead to wrong QTL inference. In this article, we propose an interval mapping approach for mapping QTL underlying count phenotypes containing many zeros. The effects of QTLs on the zero-inflated count trait are modelled through the zero-inflated generalized Poisson regression mixture model, which can handle the zero inflation and Poisson dispersion in the same distribution. We implement the approach using the EM algorithm with the Newton-Raphson algorithm embedded in the M-step, and provide a genome-wide scan for testing and estimating the QTL effects. The performance of the proposed method is evaluated through extensive simulation studies. Extensions to composite and multiple interval mapping are discussed. The utility of the developed approach is illustrated through a mouse F2 intercross data set. Significant QTLs are detected to control mouse cholesterol gallstone formation. PMID:18977361
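
    The zero-inflated generalized Poisson distribution at the core of this mixture model combines Consul's generalized Poisson pmf, whose dispersion parameter handles over- or under-dispersion, with an extra probability mass at zero. A sketch of the density (an illustrative implementation, not the authors' EM code; parameter values are arbitrary):

```python
import math

def gen_poisson_pmf(y, theta, lam):
    """Generalized Poisson pmf: lam = 0 recovers the ordinary Poisson(theta);
    lam > 0 models over-dispersion, lam < 0 under-dispersion."""
    return theta * (theta + lam * y) ** (y - 1) * math.exp(-theta - lam * y) / math.factorial(y)

def zigp_pmf(y, p_zero, theta, lam):
    """Zero-inflated generalized Poisson: extra point mass p_zero at y = 0."""
    base = (1 - p_zero) * gen_poisson_pmf(y, theta, lam)
    return p_zero + base if y == 0 else base
```

    With p_zero = 0 the mixture collapses to the plain generalized Poisson, so the zero inflation and the dispersion are handled by separate parameters of one distribution.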

  18. Poisson-Nernst-Planck-Fermi theory for modeling biological ion channels

    SciTech Connect

    Liu, Jinn-Liang; Eisenberg, Bob

    2014-12-14

    A Poisson-Nernst-Planck-Fermi (PNPF) theory is developed for studying ionic transport through biological ion channels. Our goal is to deal with the finite size of particles using a Fermi-like distribution without calculating the forces between the particles, because they are both expensive and tricky to compute. We include the steric effect of ions and water molecules with nonuniform sizes and interstitial voids, the correlation effect of crowded ions with different valences, and the screening effect of water molecules in an inhomogeneous aqueous electrolyte. Including the finite volume of water and the voids between particles is an important new part of the theory presented here. Fermi-like distributions of all particle species are derived from the volume exclusion of classical particles. Volume exclusion and the resulting saturation phenomena are especially important to describe the binding and permeation mechanisms of ions in a narrow channel pore. The Gibbs free energy of the Fermi distribution reduces to that of a Boltzmann distribution when these effects are not considered. The classical Gibbs entropy is extended to a new entropy form - called Gibbs-Fermi entropy - that describes mixing configurations of all finite size particles and voids in a thermodynamic system where microstates do not have equal probabilities. The PNPF model describes the dynamic flow of ions, water molecules, as well as voids with electric fields and protein charges. The model also provides a quantitative mean-field description of the charge/space competition mechanism of particles within the highly charged and crowded channel pore. The PNPF results are in good accord with experimental currents recorded in a 10^8-fold range of Ca^2+ concentrations. The results illustrate the anomalous mole fraction effect, a signature of L-type calcium channels. Moreover, numerical results concerning water density, dielectric permittivity, void volume, and steric energy provide useful

  19. Shear wave velocity and Poisson's ratio models across the southern Chile convergent margin at 38°15'S

    NASA Astrophysics Data System (ADS)

    Ramos, C.; Mechie, J.; Feng, M.

    2016-03-01

    Using active and passive seismology data we derive a shear (S) wave velocity model and a Poisson's ratio (σ) model across the Chilean convergent margin along a profile at 38°15'S, where the Mw 9.5 Valdivia earthquake occurred in 1960. The derived S-wave velocity model was constructed using three independently obtained velocity models that were merged together. In the upper part of the profile (0-2 km depth), controlled source data from explosions were used to obtain an S-wave traveltime tomogram. For the middle part (2-20 km depth), data from a temporary seismology array were used to carry out a dispersion analysis. The resulting dispersion curves were used to obtain a 3-D S-wave velocity model. In the lower part (20-75 km depth, depending on the longitude), an existing local earthquake tomographic image was merged with the other two sections. This final S-wave velocity model and existing compressional (P) wave velocity models along the same transect allowed us to obtain a Poisson's ratio model. The results of this study show that the velocities and Poisson's ratios in the continental crust of this part of the Chilean convergent margin are in agreement with geological features inferred from other studies and can be explained in terms of normal rock types. There is no requirement to call on the existence of measurable amounts of present-day fluids, in terms of seismic velocities, above the plate interface in the continental crust of the Coastal Cordillera and the Central Valley in this part of the Chilean convergent margin. This is in agreement with a recent model of water being transported down and released from the subduction zone.

  20. A stochastic model for the polygonal tundra based on Poisson-Voronoi Diagrams

    NASA Astrophysics Data System (ADS)

    Cresto Aleina, F.; Brovkin, V.; Muster, S.; Boike, J.; Kutzbach, L.; Sachs, T.; Zuyev, S.

    2012-12-01

    Sub-grid and small scale processes occur in various ecosystems and landscapes (e.g., periglacial ecosystems, peatlands and vegetation patterns). These local heterogeneities are often important or even fundamental to better understand general and large scale properties of the system, but they are either ignored or poorly parameterized in regional and global models. Because of their small scale, the underlying generating processes can be well explained and resolved only by local mechanistic models, which, on the other hand, fail to consider the regional or global influences of those features. A challenging problem is then how to deal with these interactions across different spatial scales, and how to improve our understanding of the role played by local soil heterogeneities in the climate system. This is of particular interest in the northern peatlands, because of the huge amount of carbon stored in these regions. Land-atmosphere greenhouse gas fluxes vary dramatically within these environments. Therefore, to correctly estimate the fluxes a description of the small scale soil variability is needed. Applications of statistical physics methods could be useful tools to upscale local features of the landscape, relating them to large-scale properties. To test this approach we considered a case study: the polygonal tundra. Cryogenic polygons, consisting mainly of elevated dry rims and wet low centers, pattern the terrain of many subarctic regions and are generated by complex crack-and-growth processes. Methane, carbon dioxide and water vapor fluxes vary largely within the environment, as an effect of the small scale processes that characterize the landscape. It is then essential to consider the local heterogeneous behavior of the system components, such as the water table level inside the polygon wet centers, or the depth at which frozen soil thaws. We developed a stochastic model for this environment using Poisson-Voronoi diagrams, which is able to upscale statistical
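
    A Poisson-Voronoi diagram of the kind used above can be sketched by drawing a Poisson-distributed number of uniformly placed seed points (the polygon centers) and assigning every location to its nearest seed; the intensity and grid resolution below are arbitrary illustration choices, not the authors' parameterization:

```python
import numpy as np

def poisson_voronoi_labels(intensity, grid=128, size=1.0, rng=None):
    """Rasterized Poisson-Voronoi tessellation on a size x size square:
    each cell of a grid x grid raster is labeled by its nearest seed,
    where the seeds form a homogeneous Poisson point process."""
    rng = np.random.default_rng(rng)
    n = max(1, rng.poisson(intensity * size ** 2))   # Poisson number of polygon centers
    seeds = rng.uniform(0.0, size, (n, 2))
    xs = (np.arange(grid) + 0.5) * size / grid       # raster cell-center coordinates
    gx, gy = np.meshgrid(xs, xs)
    cells = np.stack([gx.ravel(), gy.ravel()], axis=1)
    d2 = ((cells[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1).reshape(grid, grid), seeds

labels, seeds = poisson_voronoi_labels(intensity=50.0, rng=7)
```

    Each labeled region then plays the role of one tundra polygon, to which local properties (water table level, thaw depth) can be attached and averaged for upscaling.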

  1. A global spectral element model for poisson equations and advective flow over a sphere

    NASA Astrophysics Data System (ADS)

    Mei, Huan; Wang, Faming; Zeng, Zhong; Qiu, Zhouhua; Yin, Linmao; Li, Liang

    2016-03-01

    A global spherical Fourier-Legendre spectral element method is proposed to solve Poisson equations and advective flow over a sphere. In the meridional direction, Legendre polynomials are used and the region is divided into several elements. In order to avoid coordinate singularities at the north and south poles in the meridional direction, Legendre-Gauss-Radau points are chosen at the elements involving the two poles. Fourier polynomials are applied in the zonal direction for its periodicity, with only one element. Then, the partial differential equations are solved on the longitude-latitude meshes without coordinate transformation between spherical and Cartesian coordinates. For verification of the proposed method, a few Poisson equations and advective flows are tested. Firstly, the method is found to be valid for test cases with smooth solutions. The results of the Poisson equations demonstrate that the present method exhibits high accuracy and exponential convergence. High-precision solutions are also obtained with near negligible numerical diffusion during the time evolution for advective flow with smooth shape. Secondly, the results of advective flow with non-smooth shape and deformational flow are also shown to be reasonable and effective. As a result, the present method is proved to be capable of solving flow through different types of elements, and is thereby a desirable method with reliability and high accuracy for solving partial differential equations over a sphere.

  2. Boundary Lax pairs from non-ultra-local Poisson algebras

    SciTech Connect

    Avan, Jean; Doikou, Anastasia

    2009-11-15

    We consider non-ultra-local linear Poisson algebras on a continuous line. Suitable combinations of representations of these algebras yield representations of novel generalized linear Poisson algebras or 'boundary' extensions. They are parametrized by a boundary scalar matrix and depend, in addition, on the choice of an antiautomorphism. The new algebras are the classical-linear counterparts of the known quadratic quantum boundary algebras. For any choice of parameters, the non-ultra-local contribution of the original Poisson algebra disappears. We also systematically construct the associated classical Lax pair. The classical boundary principal chiral model is examined as a physical example.

  3. Mapping species abundance by a spatial zero-inflated Poisson model: a case study in the Wadden Sea, the Netherlands.

    PubMed

    Lyashevska, Olga; Brus, Dick J; van der Meer, Jaap

    2016-01-01

    The objective of the study was to provide a general procedure for mapping species abundance when data are zero-inflated and spatially correlated counts. The bivalve species Macoma balthica was observed on a 500×500 m grid in the Dutch part of the Wadden Sea. In total, 66% of the 3451 counts were zeros. A zero-inflated Poisson mixture model was used to relate counts to environmental covariates. Two models were considered, one with relatively fewer covariates (model "small") than the other (model "large"). The models contained two processes: a Bernoulli (species prevalence) and a Poisson (species intensity, when the Bernoulli process predicts presence). The model was used to make predictions for sites where only environmental data are available. Predicted prevalences and intensities show that the model "small" predicts lower mean prevalence and higher mean intensity than the model "large". Yet, the product of prevalence and intensity, which might be called the unconditional intensity, is very similar. Cross-validation showed that the model "small" performed slightly better, but the difference was small. The proposed methodology might be generally applicable, but is computationally intensive. PMID:26843936
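
    The two-part structure of the model (a Bernoulli prevalence process, then a Poisson intensity where the species is present) can be sketched with a small simulation; all parameter values are invented, not estimates from the Wadden Sea data:

```python
import numpy as np

def simulate_zip(prevalence, intensity, n, rng=None):
    """Zero-inflated Poisson counts: Bernoulli presence times Poisson
    abundance given presence. Hypothetical parameters for illustration."""
    rng = np.random.default_rng(rng)
    present = rng.random(n) < prevalence              # Bernoulli (prevalence) process
    return np.where(present, rng.poisson(intensity, n), 0)

counts = simulate_zip(prevalence=0.34, intensity=8.0, n=100_000, rng=1)
# unconditional intensity = prevalence * intensity = 2.72; a model with lower
# prevalence and higher intensity can imply the same product, which is why
# two differently parameterized models can predict very similar mean abundance
```
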

  4. A Poisson-lognormal conditional-autoregressive model for multivariate spatial analysis of pedestrian crash counts across neighborhoods.

    PubMed

    Wang, Yiyi; Kockelman, Kara M

    2013-11-01

    This work examines the relationship between 3-year pedestrian crash counts across Census tracts in Austin, Texas, and various land use, network, and demographic attributes, such as land use balance, residents' access to commercial land uses, sidewalk density, lane-mile densities (by roadway class), and population and employment densities (by type). The model specification allows for region-specific heterogeneity, correlation across response types, and spatial autocorrelation via a Poisson-based multivariate conditional auto-regressive (CAR) framework and is estimated using Bayesian Markov chain Monte Carlo methods. Least-squares regression estimates of walk-miles traveled per zone serve as the exposure measure. Here, the Poisson-lognormal multivariate CAR model outperforms an aspatial Poisson-lognormal multivariate model and a spatial model (without cross-severity correlation), both in terms of fit and inference. Positive spatial autocorrelation emerges across neighborhoods, as expected (due to latent heterogeneity or missing variables that trend in space, resulting in spatial clustering of crash counts). In comparison, the positive aspatial, bivariate cross correlation of severe (fatal or incapacitating) and non-severe crash rates reflects latent covariates that have impacts across severity levels but are more local in nature (such as lighting conditions and local sight obstructions), along with spatially lagged cross correlation. Results also suggest greater mixing of residences and commercial land uses is associated with higher pedestrian crash risk across different severity levels, ceteris paribus, presumably since such access produces more potential conflicts between pedestrian and vehicle movements. Interestingly, network densities show variable effects, and sidewalk provision is associated with lower severe-crash rates. PMID:24036167

  5. Integrated analysis of transcriptomic and proteomic data of Desulfovibrio vulgaris: Zero-Inflated Poisson regression models to predict abundance of undetected proteins

    SciTech Connect

    Nie, Lei; Wu, Gang; Brockman, Fred J.; Zhang, Weiwen

    2006-05-04

    Advances in DNA microarray and proteomics technologies have enabled high-throughput measurement of mRNA expression and protein abundance. Parallel profiling of mRNA and protein on a global scale and integrative analysis of these two data types could provide additional insight into the metabolic mechanisms underlying complex biological systems. However, because protein abundance and mRNA expression are affected by many cellular and physical processes, there have been conflicting results on the correlation of these two measurements. In addition, as current proteomic methods can detect only a small fraction of proteins present in cells, no correlation study of these two data types has been done thus far at the whole-genome level. In this study, we describe a novel data-driven statistical model to integrate whole-genome microarray and proteomic data collected from Desulfovibrio vulgaris grown under three different conditions. Based on the Poisson distribution pattern of proteomic data and the fact that a large number of proteins were undetected (excess zeros), zero-inflated Poisson models were used to define the correlation pattern of mRNA and protein abundance. The models assumed that there is a probability mass at zero representing some of the undetected proteins because of technical limitations. The models thus use abundance measurements of transcripts and proteins experimentally detected as input to generate predictions of protein abundances as output for all genes in the genome. We demonstrated the statistical models by comparatively analyzing D. vulgaris grown on lactate-based versus formate-based media. The increased expressions of Ech hydrogenase and alcohol dehydrogenase (Adh)-periplasmic Fe-only hydrogenase (Hyd) pathway for ATP synthesis were predicted for D. vulgaris grown on formate.

  6. Determination of Diffusion Coefficients in Cement-Based Materials: An Inverse Problem for the Nernst-Planck and Poisson Models

    NASA Astrophysics Data System (ADS)

    Szyszkiewicz-Warzecha, Krzysztof; Jasielec, Jerzy J.; Fausek, Janusz; Filipek, Robert

    2016-06-01

    Transport properties of ions have a significant impact on the risk of rebar corrosion; thus, knowledge of the diffusion coefficient is important for reinforced-concrete durability. Numerous tests for the determination of diffusion coefficients have been proposed, but analysis of some of these tests shows that they are too simplistic or even not valid. Hence, more rigorous models to calculate the coefficients should be employed. Here we propose the Nernst-Planck and Poisson equations, which take into account the concentration and electric potential fields. Based on this model a special inverse method is presented for determination of a chloride diffusion coefficient. It requires the measurement of concentration profiles or of the flux on the boundary, and solution of the NPP model to define the goal function. Finding the global minimum is equivalent to the determination of the diffusion coefficients. Typical examples of the application of the presented method are given.

  7. Climatology of Station Storm Rainfall in the Continental United States: Parameters of the Bartlett-Lewis and Poisson Rectangular Pulses Models

    NASA Technical Reports Server (NTRS)

    Hawk, Kelly Lynn; Eagleson, Peter S.

    1992-01-01

    The parameters of two stochastic models of point rainfall, the Bartlett-Lewis model and the Poisson rectangular pulses model, are estimated for each month of the year from the historical records of hourly precipitation at more than seventy first-order stations in the continental United States. The parameters are presented both in tabular form and as isopleths on maps. The Poisson rectangular pulses parameters are useful in implementing models of the land surface water balance. The Bartlett-Lewis parameters are useful in disaggregating precipitation to a time period shorter than that of existing observations. Information is also included on a floppy disk.
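
    A Poisson rectangular pulses process of the kind parameterized here can be sketched as follows: storm arrivals follow a Poisson process, and each storm contributes a rectangular pulse with an exponentially distributed duration and intensity. All rates below are invented for illustration, not values from the station records:

```python
import numpy as np

def poisson_rectangular_pulses(rate, mean_dur, mean_int, t_end, dt=0.1, rng=None):
    """Rainfall series from the Poisson rectangular pulses model:
    `rate` storms per hour, exponential storm durations (hours) and
    intensities (mm/h); overlapping pulses superpose."""
    rng = np.random.default_rng(rng)
    t = np.arange(0.0, t_end, dt)
    rain = np.zeros_like(t)
    n = rng.poisson(rate * t_end)                 # Poisson number of storm arrivals
    starts = rng.uniform(0.0, t_end, n)
    durations = rng.exponential(mean_dur, n)
    intensities = rng.exponential(mean_int, n)
    for s, d, i in zip(starts, durations, intensities):
        rain[(t >= s) & (t < s + d)] += i         # rectangular pulse over [s, s+d)
    return t, rain

t, rain = poisson_rectangular_pulses(rate=0.1, mean_dur=3.0, mean_int=2.0, t_end=1000.0, rng=3)
# long-run mean rainfall rate is roughly rate * mean_dur * mean_int = 0.6 mm/h
```
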

  8. Predictors of the number of under-five malnourished children in Bangladesh: application of the generalized poisson regression model

    PubMed Central

    2013-01-01

    Background: Malnutrition is one of the principal causes of child mortality in developing countries including Bangladesh. To our knowledge, most of the available studies that addressed the issue of malnutrition among under-five children considered categorical (dichotomous/polychotomous) outcome variables and applied logistic regression (binary/multinomial) to find their predictors. In this study the malnutrition variable (i.e. the outcome) is defined as the number of under-five malnourished children in a family, which is a non-negative count variable. The purposes of the study are (i) to demonstrate the applicability of the generalized Poisson regression (GPR) model as an alternative to other statistical methods and (ii) to find some predictors of this outcome variable. Methods: The data are extracted from the Bangladesh Demographic and Health Survey (BDHS) 2007. Briefly, this survey employs a nationally representative sample based on a two-stage stratified sample of households. A total of 4,460 under-five children are analysed using various statistical techniques, namely the Chi-square test and the GPR model. Results: The GPR model (as compared to the standard Poisson regression and negative binomial regression) is found to be justified for the above-mentioned outcome variable because of its under-dispersion (variance < mean) property. Our study also identifies several significant predictors of the outcome variable, namely mother's education, father's education, wealth index, sanitation status, source of drinking water, and total number of children ever born to a woman. Conclusions: Consistencies of our findings in light of many other studies suggest that the GPR model is an ideal alternative to other statistical models for analysing the number of under-five malnourished children in a family. Strategies based on significant predictors may improve the nutritional status of children in Bangladesh. PMID:23297699
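
    The under-dispersion criterion that motivates the GPR model here can be checked directly from data with a variance-to-mean ratio; the counts below are hypothetical, not BDHS values:

```python
import numpy as np

def dispersion_index(counts):
    """Variance-to-mean ratio of count data: 1 for a Poisson distribution,
    below 1 for under-dispersion, above 1 for over-dispersion."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# hypothetical per-family counts of malnourished under-five children
counts = [0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1]
ratio = dispersion_index(counts)   # below 1, so under-dispersed
```

    A ratio well below 1, as in this toy sample, is the situation in which a standard Poisson regression understates uncertainty and the generalized Poisson is preferable.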

  9. Spatio-energetic cross-talks in photon counting detectors: detector model and correlated Poisson data generator

    NASA Astrophysics Data System (ADS)

    Taguchi, Katsuyuki; Polster, Christoph; Lee, Okkyun; Kappler, Steffen

    2016-03-01

    An x-ray photon interacts with photon counting detectors (PCDs) and generates an electron charge cloud or multiple clouds. The clouds (thus, the photon energy) may be split between two adjacent PCD pixels when the interaction occurs near pixel boundaries, producing a count at both of the two pixels. This is called double-counting with charge sharing. The output of an individual PCD pixel is Poisson distributed integer counts; however, the outputs of adjacent pixels are correlated due to double-counting. The major problems are the lack of a detector noise model for the spatio-energetic crosstalk and the lack of an efficient simulation tool. Monte Carlo simulation can accurately simulate these phenomena and produce noisy data; however, it is not computationally efficient. In this study, we developed a new detector model and implemented it in an efficient software simulator which uses a Poisson random number generator to produce correlated noisy integer counts. The detector model takes the following effects into account: (1) detection efficiency and incomplete charge collection; (2) photoelectric effect with total absorption; (3) photoelectric effect with fluorescence x-ray emission and re-absorption; (4) photoelectric effect with fluorescence x-ray emission which leaves the PCD completely; and (5) electronic noise. The model produced a total detector spectrum similar to previous MC simulation data. The model can be used to predict spectra and correlations with various different settings. The simulated noisy data demonstrated the expected performance: (a) data were integers; (b) the mean and covariance matrix were close to the target values; and (c) noisy data generation was very efficient.
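
    A toy version of such a correlated-Poisson generator (a schematic stand-in for the detector model, not the authors' simulator) lets each photon register in its own pixel and, with some probability, also in a neighbor:

```python
import numpy as np

def correlated_pcd_counts(lam, p_share, n_pixels, rng=None):
    """Integer PCD counts with double-counting: each pixel's true photon
    count is Poisson(lam); a Binomial(p_share) fraction of those events
    also registers in the right-hand neighbor (charge sharing)."""
    rng = np.random.default_rng(rng)
    primary = rng.poisson(lam, n_pixels)          # true counts, independent Poisson
    shared = rng.binomial(primary, p_share)       # events split across a boundary
    counts = primary.copy()
    counts[1:] += shared[:-1]                     # neighbor records an extra count
    return counts

counts = correlated_pcd_counts(lam=100.0, p_share=0.2, n_pixels=10_000, rng=5)
# outputs are integers with inflated per-pixel means, and adjacent pixels
# are positively correlated through the shared events
```
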

  10. An improved dynamic Monte Carlo model coupled with Poisson equation to simulate the performance of organic photovoltaic devices

    NASA Astrophysics Data System (ADS)

    Meng, Lingyi; Wang, Dong; Li, Qikai; Yi, Yuanping; Brédas, Jean-Luc; Shuai, Zhigang

    2011-03-01

    We describe a new dynamic Monte Carlo model to simulate the operation of a polymer-blend solar cell; this model provides major improvements with respect to the one we developed earlier [J. Phys. Chem. B 114, 36 (2010)] by incorporating the Poisson equation and a charge thermoactivation mechanism. The advantage of the present approach is its capacity to deal with a nonuniform electrostatic potential that dynamically depends on the charge distribution. In this way, the unbalance in electron and hole mobilities and the space-charge induced potential distribution can be treated explicitly. Simulations reproduce well the experimental I-V curve in the dark and the open-circuit voltage under illumination of a polymer-blend solar cell. The dependence of the photovoltaic performance on the difference in electron and hole mobilities is discussed.

  11. Poisson structures for the Aristotelian model of three-body motion

    NASA Astrophysics Data System (ADS)

    Abadoğlu, E.; Gümral, H.

    2011-08-01

    We present explicitly Poisson structures of a dynamical system with three degrees of freedom introduced and studied by Calogero et al (2005 J. Phys. A: Math. Gen. 38 8873-96). We first show the construction of a formal Hamiltonian structure for a time-dependent Hamiltonian function. We then cast the dynamical equations into the form of a gradient flow by means of a potential function. By reducing the number of equations, we obtain the second time-independent Hamiltonian function which includes all parameters of the system. This extends the result of Calogero et al (2009 J. Phys. A: Math. Theor. 42 015205) for semi-symmetrical motion. We present bi-Hamiltonian structures for two special cases of the cited references. It turns out that the case of three bodies two of which are not interacting with each other but are coupled through the interaction of a third one requires a separate treatment. We conclude with a discussion on generic form of the second time-independent Hamiltonian function.

  12. Random transitions described by the stochastic Smoluchowski-Poisson system and by the stochastic Keller-Segel model.

    PubMed

    Chavanis, P H; Delfini, L

    2014-03-01

    We study random transitions between two metastable states that appear below a critical temperature in a one-dimensional self-gravitating Brownian gas with a modified Poisson equation experiencing a second order phase transition from a homogeneous phase to an inhomogeneous phase [P. H. Chavanis and L. Delfini, Phys. Rev. E 81, 051103 (2010)]. We numerically solve the N-body Langevin equations and the stochastic Smoluchowski-Poisson system, which takes fluctuations (finite N effects) into account. The system switches back and forth between the two metastable states (bistability) and the particles accumulate successively at the center or at the boundary of the domain. We explicitly show that these random transitions exhibit the phenomenology of the ordinary Kramers problem for a Brownian particle in a double-well potential. The distribution of the residence time is Poissonian and the average lifetime of a metastable state is given by the Arrhenius law; i.e., it is proportional to the exponential of the barrier of free energy ΔF divided by the energy of thermal excitation kBT. Since the free energy is proportional to the number of particles N for a system with long-range interactions, the lifetime of metastable states scales as eN and is considerable for N≫1. As a result, in many applications, metastable states of systems with long-range interactions can be considered as stable states. However, for moderate values of N, or close to a critical point, the lifetime of the metastable states is reduced since the barrier of free energy decreases. In that case, the fluctuations become important and the mean field approximation is no more valid. This is the situation considered in this paper. By an appropriate change of notations, our results also apply to bacterial populations experiencing chemotaxis in biology. Their dynamics can be described by a stochastic Keller-Segel model that takes fluctuations into account and goes beyond the usual mean field approximation. PMID

  13. Marginal regression models for clustered count data based on zero-inflated Conway-Maxwell-Poisson distribution with applications.

    PubMed

    Choo-Wosoba, Hyoyoung; Levy, Steven M; Datta, Somnath

    2016-06-01

    Community water fluoridation is an important public health measure to prevent dental caries, but it continues to be somewhat controversial. The Iowa Fluoride Study (IFS) is a longitudinal study on a cohort of Iowa children that began in 1991. The main purposes of this study (http://www.dentistry.uiowa.edu/preventive-fluoride-study) were to quantify fluoride exposures from both dietary and nondietary sources and to associate longitudinal fluoride exposures with dental fluorosis (spots on teeth) and dental caries (cavities). We analyze a subset of the IFS data by a marginal regression model with a zero-inflated version of the Conway-Maxwell-Poisson (ZICMP) distribution for count data exhibiting excessive zeros and a wide range of dispersion patterns. In general, we introduce two estimation methods for fitting a ZICMP marginal regression model. Finite sample behaviors of the estimators and the resulting confidence intervals are studied using extensive simulation studies. We apply our methodologies to the dental caries data. Our novel modeling incorporating zero inflation, clustering, and overdispersion sheds some new light on the effect of community water fluoridation and other factors. We also include a second application of our methodology to a genomic (next-generation sequencing) dataset that exhibits underdispersion. PMID:26575079
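For readers unfamiliar with the Conway-Maxwell-Poisson (CMP) family the abstract builds on, here is a small sketch of its pmf and a zero-inflated variant. The function names and the series truncation bound `kmax` are illustrative assumptions, not from the paper:

```python
import math

def cmp_pmf(k, lam, nu, kmax=200):
    """Conway-Maxwell-Poisson pmf P(X=k) = lam**k / ((k!)**nu * Z).
    nu < 1 gives overdispersion, nu > 1 underdispersion, nu = 1 is Poisson."""
    # Normalizing constant Z = sum_j lam**j / (j!)**nu, computed in log space.
    log_terms = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(kmax)]
    m = max(log_terms)
    log_z = m + math.log(sum(math.exp(t - m) for t in log_terms))
    return math.exp(k * math.log(lam) - nu * math.lgamma(k + 1) - log_z)

def zicmp_pmf(k, pi, lam, nu):
    """Zero-inflated CMP: extra point mass pi at zero, as in ZICMP models."""
    p = (1 - pi) * cmp_pmf(k, lam, nu)
    return p + pi if k == 0 else p
```

The single extra parameter `nu` is what lets one model fit both the zero-heavy caries counts and the underdispersed sequencing data mentioned in the abstract.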

  14. A multivariate Poisson-lognormal regression model for prediction of crash counts by severity, using Bayesian methods.

    PubMed

    Ma, Jianming; Kockelman, Kara M; Damien, Paul

    2008-05-01

    Numerous efforts have been devoted to investigating crash occurrence as related to roadway design features, environmental factors and traffic conditions. However, most of the research has relied on univariate count models; that is, traffic crash counts at different levels of severity are estimated separately, which may neglect shared information in unobserved error terms, reduce efficiency in parameter estimates, and lead to potential biases in sample databases. This paper offers a multivariate Poisson-lognormal (MVPLN) specification that simultaneously models crash counts by injury severity. The MVPLN specification allows for a more general correlation structure as well as overdispersion. This approach addresses several questions that are difficult to answer when estimating crash counts separately. Thanks to recent advances in crash modeling and Bayesian statistics, parameter estimation is done within the Bayesian paradigm, using a Gibbs Sampler and the Metropolis-Hastings (M-H) algorithms for crashes on Washington State rural two-lane highways. Estimation results from the MVPLN approach show statistically significant correlations between crash counts at different levels of injury severity. The non-zero diagonal elements suggest overdispersion in crash counts at all levels of severity. The results lend themselves to several recommendations for highway safety treatments and design policies. For example, wide lanes and shoulders are key for reducing crash frequencies, as are longer vertical curves. PMID:18460364
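The multivariate Poisson-lognormal structure the abstract describes is easy to illustrate generatively: correlated normal errors enter the log-rates, which induces both overdispersion and cross-severity correlation in the counts. A hedged sketch (the function name and parameter values are invented for illustration, and no Bayesian estimation is attempted here):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mvpln(n, beta, sigma):
    """Draw correlated severity counts: log-rates are multivariate normal,
    counts are conditionally independent Poisson given the rates."""
    k = len(beta)
    eps = rng.multivariate_normal(np.zeros(k), sigma, size=n)
    rates = np.exp(beta + eps)   # lognormal rates, correlated across severities
    return rng.poisson(rates)

# Two injury-severity levels whose unobserved heterogeneity is positively
# correlated, mimicking the shared error terms univariate models ignore.
Sigma = np.array([[0.5, 0.3], [0.3, 0.5]])
counts = simulate_mvpln(20000, np.array([1.0, 0.5]), Sigma)
```

The off-diagonal element of `Sigma` plays the role of the statistically significant cross-severity correlation reported in the paper, while the positive diagonal elements produce the overdispersion.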

  15. Zero-inflated Poisson regression models for QTL mapping applied to tick-resistance in a Gyr × Holstein F2 population.

    PubMed

    Silva, Fabyano Fonseca; Tunin, Karen P; Rosa, Guilherme J M; da Silva, Marcos V B; Azevedo, Ana Luisa Souza; da Silva Verneque, Rui; Machado, Marco Antonio; Packer, Irineu Umberto

    2011-10-01

    Nowadays, an important and interesting alternative in the control of tick infestation in cattle is to select resistant animals and identify the respective quantitative trait loci (QTLs) and DNA markers for subsequent use in breeding programs. The number of ticks per animal is a discrete counting trait that could potentially follow a Poisson distribution. However, in the case of an excess of zeros, due to the occurrence of several noninfected animals, the zero-inflated Poisson (ZIP) and generalized zero-inflated Poisson (GZIP) distributions may provide a better description of the data. Thus, the objective here was to compare, through simulation, Poisson and ZIP models (simple and generalized) with classical approaches for QTL mapping with counting phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr × Holstein) population. It was concluded that, when working with zero-inflated data, it is advisable to use the generalized or simple ZIP model for analysis. On the other hand, when working with data that contain zeros but are not zero-inflated, the Poisson model or a data transformation approach, such as the square-root or Box-Cox transformation, is applicable. PMID:22215960
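The zero-inflation mechanism in the abstract (a mixture of structural zeros from noninfected animals and ordinary Poisson counts) can be made concrete with a short simulation. This is a generic ZIP sketch, not the paper's QTL analysis; the function name and parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_zip(n, pi, lam):
    """Zero-inflated Poisson: structural zero with probability pi
    (e.g. a noninfected animal), otherwise a Poisson(lam) tick count."""
    structural = rng.random(n) < pi
    counts = rng.poisson(lam, n)
    counts[structural] = 0
    return counts

ticks = simulate_zip(50000, 0.4, 3.0)
# Under ZIP, P(0) = pi + (1 - pi) * exp(-lam): zeros come from both sources.
expected_zero = 0.4 + 0.6 * np.exp(-3.0)
```

Comparing the empirical zero fraction with `exp(-lam)` alone shows why a plain Poisson model underpredicts zeros on data like these.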

  16. Zero-inflated Poisson regression models for QTL mapping applied to tick-resistance in a Gyr × Holstein F2 population

    PubMed Central

    Silva, Fabyano Fonseca; Tunin, Karen P.; Rosa, Guilherme J.M.; da Silva, Marcos V.B.; Azevedo, Ana Luisa Souza; da Silva Verneque, Rui; Machado, Marco Antonio; Packer, Irineu Umberto

    2011-01-01

    Nowadays, an important and interesting alternative in the control of tick infestation in cattle is to select resistant animals and identify the respective quantitative trait loci (QTLs) and DNA markers for subsequent use in breeding programs. The number of ticks per animal is a discrete counting trait that could potentially follow a Poisson distribution. However, in the case of an excess of zeros, due to the occurrence of several noninfected animals, the zero-inflated Poisson (ZIP) and generalized zero-inflated Poisson (GZIP) distributions may provide a better description of the data. Thus, the objective here was to compare, through simulation, Poisson and ZIP models (simple and generalized) with classical approaches for QTL mapping with counting phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr × Holstein) population. It was concluded that, when working with zero-inflated data, it is advisable to use the generalized or simple ZIP model for analysis. On the other hand, when working with data that contain zeros but are not zero-inflated, the Poisson model or a data transformation approach, such as the square-root or Box-Cox transformation, is applicable. PMID:22215960

  17. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.

  18. Poisson's spot with molecules

    SciTech Connect

    Reisinger, Thomas; Holst, Bodil; Patel, Amil A.; Smith, Henry I.; Reingruber, Herbert; Fladischer, Katrin; Ernst, Wolfgang E.; Bracco, Gianangelo

    2009-05-15

    In the Poisson-spot experiment, waves emanating from a source are blocked by a circular obstacle. Due to their positive on-axis interference an image of the source (the Poisson spot) is observed within the geometrical shadow of the obstacle. In this paper we report the observation of Poisson's spot using a beam of neutral deuterium molecules. The wavelength independence and the weak constraints on angular alignment and position of the circular obstacle make Poisson's spot a promising candidate for applications ranging from the study of large molecule diffraction to patterning with molecules.

  19. Fuzzy classifier based support vector regression framework for Poisson ratio determination

    NASA Astrophysics Data System (ADS)

    Asoodeh, Mojtaba; Bagheripour, Parisa

    2013-09-01

    Poisson ratio is considered as one of the most important rock mechanical properties of hydrocarbon reservoirs. Determination of this parameter through laboratory measurement is time-, cost-, and labor-intensive. Furthermore, laboratory measurements do not provide continuous data along the reservoir intervals. Hence, a fast, accurate, and inexpensive way of determining Poisson ratio which produces continuous data over the whole reservoir interval is desirable. For this purpose, support vector regression (SVR) method based on statistical learning theory (SLT) was employed as a supervised learning algorithm to estimate Poisson ratio from conventional well log data. SVR is capable of accurately extracting the implicit knowledge contained in conventional well logs and converting the gained knowledge into Poisson ratio data. The structural risk minimization (SRM) principle, which is embedded in the SVR structure in addition to the empirical risk minimization (ERM) principle, provides a robust model for finding quantitative formulation between conventional well log data and Poisson ratio. Although satisfying results were obtained from an individual SVR model, it had flaws of overestimation in low Poisson ratios and underestimation in high Poisson ratios. These errors were eliminated through implementation of fuzzy classifier based SVR (FCBSVR). The FCBSVR significantly improved accuracy of the final prediction. This strategy was successfully applied to data from carbonate reservoir rocks of an Iranian oil field. Results indicated that SVR predicted Poisson ratio values are in good agreement with measured values.

  20. How many dinosaur species were there? Fossil bias and true richness estimated using a Poisson sampling model

    PubMed Central

    Starrfelt, Jostein; Liow, Lee Hsiang

    2016-01-01

    The fossil record is a rich source of information about biological diversity in the past. However, the fossil record is not only incomplete but has also inherent biases due to geological, physical, chemical and biological factors. Our knowledge of past life is also biased because of differences in academic and amateur interests and sampling efforts. As a result, not all individuals or species that lived in the past are equally likely to be discovered at any point in time or space. To reconstruct temporal dynamics of diversity using the fossil record, biased sampling must be explicitly taken into account. Here, we introduce an approach that uses the variation in the number of times each species is observed in the fossil record to estimate both sampling bias and true richness. We term our technique TRiPS (True Richness estimated using a Poisson Sampling model) and explore its robustness to violation of its assumptions via simulations. We then venture to estimate sampling bias and absolute species richness of dinosaurs in the geological stages of the Mesozoic. Using TRiPS, we estimate that 1936 (1543–2468) species of dinosaurs roamed the Earth during the Mesozoic. We also present improved estimates of species richness trajectories of the three major dinosaur clades: the sauropodomorphs, ornithischians and theropods, casting doubt on the Jurassic–Cretaceous extinction event and demonstrating that all dinosaur groups are subject to considerable sampling bias throughout the Mesozoic. PMID:26977060
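The core of the TRiPS idea, as the abstract presents it, is that the number of times each observed species appears in the record constrains the sampling rate, which in turn lets one correct observed richness for species never sampled. A simplified sketch of that logic under a zero-truncated Poisson assumption; the function name, the bisection solver, and the toy counts are illustrative, and the published method is more elaborate:

```python
import math

def trips_richness(occurrence_counts):
    """Sketch of the TRiPS idea: fit a zero-truncated Poisson to per-species
    occurrence counts, then correct observed richness for unseen species."""
    s_obs = len(occurrence_counts)
    m = sum(occurrence_counts) / s_obs          # mean count among observed species
    # Zero-truncated Poisson MLE: solve lam / (1 - exp(-lam)) = m by bisection
    # (the left side is increasing in lam, so bisection on (0, m] works).
    lo, hi = 1e-9, m
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid / (1 - math.exp(-mid)) < m:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    p_detect = 1 - math.exp(-lam)               # prob. a species is seen at least once
    return lam, s_obs / p_detect                # bias-corrected richness estimate

lam_hat, n_hat = trips_richness([1, 1, 2, 1, 3, 1, 1, 2])
```

The corrected estimate exceeds the observed species count exactly when many species are seen only once, which is the signature of strong sampling bias.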

  1. Between algorithm and model: different Molecular Surface definitions for the Poisson-Boltzmann based electrostatic characterization of biomolecules in solution.

    PubMed

    Decherchi, Sergio; Colmenares, José; Catalano, Chiara Eva; Spagnuolo, Michela; Alexov, Emil; Rocchia, Walter

    2013-01-01

    The definition of a molecular surface which is physically sound and computationally efficient is a very interesting and long standing problem in the implicit solvent continuum modeling of biomolecular systems as well as in the molecular graphics field. In this work, two molecular surfaces are evaluated with respect to their suitability for electrostatic computation as alternatives to the widely used Connolly-Richards surface: the blobby surface, an implicit Gaussian atom centered surface, and the skin surface. As figures of merit, we considered surface differentiability and surface area continuity with respect to atom positions, and the agreement with explicit solvent simulations. Geometric analysis seems to privilege the skin to the blobby surface, and points to an unexpected relationship between the non connectedness of the surface, caused by interstices in the solute volume, and the surface area dependence on atomic centers. In order to assess the ability to reproduce explicit solvent results, specific software tools have been developed to enable the use of the skin surface in Poisson-Boltzmann calculations with the DelPhi solver. Results indicate that the skin and Connolly surfaces have a comparable performance from this last point of view. PMID:23519863

  2. How many dinosaur species were there? Fossil bias and true richness estimated using a Poisson sampling model.

    PubMed

    Starrfelt, Jostein; Liow, Lee Hsiang

    2016-04-01

    The fossil record is a rich source of information about biological diversity in the past. However, the fossil record is not only incomplete but has also inherent biases due to geological, physical, chemical and biological factors. Our knowledge of past life is also biased because of differences in academic and amateur interests and sampling efforts. As a result, not all individuals or species that lived in the past are equally likely to be discovered at any point in time or space. To reconstruct temporal dynamics of diversity using the fossil record, biased sampling must be explicitly taken into account. Here, we introduce an approach that uses the variation in the number of times each species is observed in the fossil record to estimate both sampling bias and true richness. We term our technique TRiPS (True Richness estimated using a Poisson Sampling model) and explore its robustness to violation of its assumptions via simulations. We then venture to estimate sampling bias and absolute species richness of dinosaurs in the geological stages of the Mesozoic. Using TRiPS, we estimate that 1936 (1543-2468) species of dinosaurs roamed the Earth during the Mesozoic. We also present improved estimates of species richness trajectories of the three major dinosaur clades: the sauropodomorphs, ornithischians and theropods, casting doubt on the Jurassic-Cretaceous extinction event and demonstrating that all dinosaur groups are subject to considerable sampling bias throughout the Mesozoic. PMID:26977060

  3. Poisson's Spot with Molecules

    NASA Astrophysics Data System (ADS)

    Reisinger, Thomas; Patel, Amil; Reingruber, Herbert; Fladischer, Katrin; Ernst, Wolfgang E.; Bracco, Gianangelo; Smith, Henry I.; Holst, Bodil

    2009-03-01

    In the Poisson-Spot experiment, waves emanating from a source are blocked by a circular obstacle. Due to their positive on-axis interference an image of the source (the Poisson spot) is observed within the geometrical shadow of the obstacle. The Poisson spot is the last of the classical optics experiments to be realized with neutral matter waves. In this paper we report the observation of Poisson's Spot using a beam of neutral deuterium molecules. The wavelength-independence and the weak constraints on angular alignment and position of the circular obstacle make Poisson's spot a promising candidate for applications ranging from the study of large-molecule diffraction and coherence in atom-lasers to patterning with large molecules.

  4. S-wave velocity and Poisson's ratio model in Southern Chile along a transect at 38°15'S from active and passive TIPTEQ data

    NASA Astrophysics Data System (ADS)

    Ramos, Catalina; Mechie, James

    2015-04-01

    Using active and passive seismological data from project TIPTEQ (from The Incoming Plate to mega-Thrust EarthQuake processes), we derive a shear (S) wave velocity and Poisson's ratio (σ) model across the Chilean convergent margin along 38°15'S, where the Mw 9.5 Valdivia earthquake is believed to have occurred. The obtained S-wave velocity model consists of three tomographic images that were merged together. In the upper part (0 - 5 km depth), controlled-source data from explosions were used to obtain an S-wave travel-time tomography. In the middle part (5 - 20 km depth), a dispersion analysis and a noise tomography were carried out in two ways: the first used the dispersion curves to obtain a 3D S-wave velocity model in one step; the second used the dispersion curves to obtain surface-wave velocity tomographic images for different periods, extracted 1D S-wave velocity profiles every 10 km along the profile from the surface-wave velocity values, and interpolated them into a 2D S-wave tomography. Both methods produce similar S-wave travel-times. In the lower part (20 - 75 km depth, depending on the longitude), an existing S-wave velocity model from local earthquake tomography was merged with the other two sections. The final S-wave velocity model and existing compressional (P) wave velocity models along the same transect allowed us to obtain a Poisson's ratio model. The results show that the velocities and Poisson's ratios in this part of the Chilean convergent margin can all be explained in terms of normal rock types. There is no requirement to call on the existence of significant amounts of present-day fluids in the continental lithosphere above the plate interface in this part of the Chilean convergent margin to explain the derived velocities and Poisson's ratios.

  5. Detection of Gaussian signals in Poisson-modulated interference.

    PubMed

    Streit, R L

    2000-10-01

    Passive broadband detection of target signals by an array of hydrophones in the presence of multiple discrete interferers is analyzed under Gaussian statistics and low signal-to-noise ratio conditions. A nonhomogeneous Poisson-modulated interference process is used to model the ensemble of possible arrival directions of the discrete interferers. Closed-form expressions are derived for the recognition differential of the passive-sonar equation in the presence of Poisson-modulated interference. The interference-compensated recognition differential differs from the classical recognition differential by an additive positive term that depends on the interference-to-noise ratio, the directionality of the Poisson-modulated interference, and the array beam pattern. PMID:11051502

  6. Poisson's ratio model derived from P- and S-wave reflection seismic data at the CO2CRC Otway Project pilot site, Australia

    NASA Astrophysics Data System (ADS)

    Beilecke, Thies; Krawczyk, Charlotte M.; Tanner, David C.; Ziesch, Jennifer; Research Group Protect

    2014-05-01

    Compressional wave (P-wave) reflection seismic field measurements are a standard tool for subsurface exploration. 2-D seismic measurements are often used for overview measurements, but also as near-surface supplement to fill gaps that often exist in 3-D seismic data sets. Such supplementing 2-D measurements are typically simple with respect to field layout. This is an opportunity for the use of shear waves (S-waves). Within the last years, S-waves have become more and more important. One reason is that P- and S-waves are differently sensitive to fluids and pore fill so that the additional S-wave information can be used to enhance lithological studies. Another reason is that S-waves have the advantage of higher spatial resolution. Within the same signal bandwidth they typically have about half the wavelength of P-waves. In near-surface unconsolidated sediments they can even enhance the structural resolution by one order of magnitude. We make use of these capabilities within the PROTECT project. In addition to already existing 2-D P-wave data, we carried out a near surface 2-D S-wave field survey at the CO2CRC Otway Project pilot site, close to Warrnambool, Australia in November 2013. The combined analysis of P-wave and S-wave data is used to construct a Poisson's Ratio 2-D model down to roughly 600 m depth. The Poisson's ratio values along a 1 km long profile at the site are surprisingly high, ranging from 0.47 in the carbonate-dominated near surface to 0.4 at depth. In the literature, average lab measurements of 0.22 for unfissured carbonates and 0.37 for fissured examples have been reported. The high values that we found may indicate areas of rather unconsolidated or fractured material, or enhanced fluid contents, and will be subject of further studies. This work is integrated in a larger workflow towards prediction of CO2 leakage and monitoring strategies for subsurface storage in general. Acknowledgement: This work was sponsored in part by the Australian

  7. Adapting Poisson-Boltzmann to the self-consistent mean field theory: Application to protein side-chain modeling

    NASA Astrophysics Data System (ADS)

    Koehl, Patrice; Orland, Henri; Delarue, Marc

    2011-08-01

    We present an extension of the self-consistent mean field theory for protein side-chain modeling in which solvation effects are included based on the Poisson-Boltzmann (PB) theory. In this approach, the protein is represented with multiple copies of its side chains. Each copy is assigned a weight that is refined iteratively based on the mean field energy generated by the rest of the protein, until self-consistency is reached. At each cycle, the variational free energy of the multi-copy system is computed; this free energy includes the internal energy of the protein that accounts for vdW and electrostatics interactions and a solvation free energy term that is computed using the PB equation. The method converges in only a few cycles and takes only minutes of central processing unit time on a commodity personal computer. The predicted conformation of each residue is then set to be its copy with the highest weight after convergence. We have tested this method on a database of hundred highly refined NMR structures to circumvent the problems of crystal packing inherent to x-ray structures. The use of the PB-derived solvation free energy significantly improves prediction accuracy for surface side chains. For example, the prediction accuracies for χ1 for surface cysteine, serine, and threonine residues improve from 68%, 35%, and 43% to 80%, 53%, and 57%, respectively. A comparison with other side-chain prediction algorithms demonstrates that our approach is consistently better in predicting the conformations of exposed side chains.
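The iterative reweighting at the heart of the self-consistent mean field scheme (copies weighted by a Boltzmann factor of their mean-field energy, recomputed until the weights stop changing) can be sketched generically. This toy uses random pairwise energies in place of the vdW, electrostatic, and PB-solvation terms of the paper, and every name and parameter below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, n_copies, kT = 4, 3, 0.6

# Toy pairwise energies between copy i of residue r and copy j of residue s,
# standing in for the physical energy terms of the real method.
E = rng.normal(size=(n_res, n_copies, n_res, n_copies))
for r in range(n_res):
    E[r, :, r, :] = 0.0                     # no interaction within a residue

w = np.full((n_res, n_copies), 1.0 / n_copies)   # uniform starting weights
for _ in range(200):
    # Mean-field energy of each copy in the weighted field of all others.
    mf = np.einsum('risj,sj->ri', E, w)
    new_w = np.exp(-mf / kT)
    new_w /= new_w.sum(axis=1, keepdims=True)    # normalize per residue
    step = 0.5 * (new_w - w)                     # damped update for stability
    w = w + step
    if np.abs(step).max() < 1e-12:
        break

predicted = w.argmax(axis=1)   # highest-weight copy = predicted conformation
```

As in the paper, the predicted conformation per residue is simply the copy carrying the largest converged weight.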

  8. Cumulative Poisson Distribution Program

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert

    1990-01-01

    Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for X (sup2) distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.
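The overflow/underflow concern CUMPOIS addresses comes from naively evaluating mu**j / j! for large mu. A hedged sketch of the same idea in Python (not the original C program): accumulate the pmf recurrence in log space so individual factors never overflow.

```python
import math

def cumpois(k, mu):
    """Cumulative Poisson P(X <= k) for rate mu, accumulated in log space so
    that neither mu**j nor j! is ever formed explicitly."""
    log_pmf = -mu                   # log P(X = 0)
    total = math.exp(log_pmf)
    for j in range(1, k + 1):
        log_pmf += math.log(mu) - math.log(j)   # log P(X=j) from log P(X=j-1)
        total += math.exp(log_pmf)
    return total
```

As the abstract notes, the same routine evaluates related cdfs: for a gamma distribution with integer shape n (unit scale), P(G <= x) = 1 - cumpois(n - 1, x), and for a chi-squared with even degrees of freedom 2n, P(X2 <= x) = 1 - cumpois(n - 1, x / 2).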

  9. Effect of Nutritional Habits on Dental Caries in Permanent Dentition among Schoolchildren Aged 10–12 Years: A Zero-Inflated Generalized Poisson Regression Model Approach

    PubMed Central

    ALMASI, Afshin; RAHIMIFOROUSHANI, Abbas; ESHRAGHIAN, Mohammad Reza; MOHAMMAD, Kazem; PASDAR, Yahya; TARRAHI, Mohammad Javad; MOGHIMBEIGI, Abbas; AHMADI JOUYBARI, Touraj

    2016-01-01

    Background: The aim of this study was to assess the associations between nutrition and dental caries in permanent dentition among schoolchildren. Methods: A cross-sectional survey was undertaken on 698 schoolchildren aged 10 to 12 yr from a random sample of primary schools in Kermanshah, western Iran, in 2014. The study was based on data obtained from a questionnaire containing information on nutritional habits and the outcome of the decayed/missing/filled teeth (DMFT) index. The association between predictors and dental caries was modeled using the Zero-Inflated Generalized Poisson (ZIGP) regression model. Results: Fourteen percent of the children were caries free. The model showed that in female children, the odds of being in the caries-susceptible subgroup were 1.23 (95% CI: 1.08–1.51) times higher than in boys (P=0.041). Additionally, the mean caries count in children who consumed fizzy soft beverages and sweet biscuits more than once daily was 1.41 (95% CI: 1.19–1.63) and 1.27 (95% CI: 1.18–1.37) times that of children who consumed them less than three times a week or never, respectively. Conclusions: Girls were at a higher risk of caries than boys. Since our study showed that nutritional status may have a significant effect on caries in permanent teeth, we recommend that health promotion activities in schools emphasize healthful eating practices, especially limiting sugar-containing beverages to only occasionally between meals. PMID:27141498
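The generalized Poisson distribution underlying the ZIGP model in this abstract adds a single dispersion parameter to the Poisson. A sketch of Consul's generalized Poisson pmf and its zero-inflated version; the function names are illustrative and this is not the authors' fitting code:

```python
import math

def gp_pmf(k, theta, delta):
    """Consul's generalized Poisson pmf
    P(X=k) = theta * (theta + k*delta)**(k-1) * exp(-theta - k*delta) / k!.
    delta > 0 gives over-, delta < 0 underdispersion; delta = 0 is Poisson."""
    return (theta * (theta + k * delta) ** (k - 1)
            * math.exp(-theta - k * delta) / math.factorial(k))

def zigp_pmf(k, pi, theta, delta):
    """Zero-inflated generalized Poisson: extra mass pi at zero, matching the
    caries-free subgroup in DMFT-type count outcomes."""
    p = (1 - pi) * gp_pmf(k, theta, delta)
    return p + pi if k == 0 else p
```

The zero-inflation probability `pi` corresponds to the caries-free class (14% here), while `delta` absorbs dispersion the plain Poisson cannot.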

  10. Modeling motor vehicle crashes using Poisson-gamma models: examining the effects of low sample mean values and small sample size on the estimation of the fixed dispersion parameter.

    PubMed

    Lord, Dominique

    2006-07-01

    There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or Negative Binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model of choice most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attributes of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. 
Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum
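The simulation setup the abstract describes (Poisson counts with gamma-distributed rates, then dispersion estimation) can be sketched briefly. The parameterization below (Var = mu + alpha*mu^2, i.e. NB2) and the method-of-moments estimator are standard choices, but the function names and parameter values are assumptions, not the study's code:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_poisson_gamma(n, mu, alpha):
    """Poisson-gamma counts: rates ~ Gamma(shape=1/alpha, scale=alpha*mu),
    so counts have mean mu and variance mu + alpha * mu**2 (NB2)."""
    rates = rng.gamma(shape=1.0 / alpha, scale=alpha * mu, size=n)
    return rng.poisson(rates)

def mom_dispersion(x):
    """Method-of-moments estimate of alpha from Var = mu + alpha * mu**2."""
    m, v = x.mean(), x.var(ddof=1)
    return (v - m) / m**2

# Low sample mean and small n, as in the "low mean problem" (LMP)...
small = simulate_poisson_gamma(50, 0.5, 1.0)
# ...versus a large sample where the estimator behaves well.
large = simulate_poisson_gamma(50000, 0.5, 1.0)
```

Rerunning the small-sample case repeatedly shows exactly the instability the paper studies: with a low mean and few observations, the moment estimate of `alpha` swings widely and can even go negative.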

  11. Essential Variational Poisson Cohomology

    NASA Astrophysics Data System (ADS)

    De Sole, Alberto; Kac, Victor G.

    2012-08-01

    In our recent paper "The variational Poisson cohomology" (2011) we computed the dimension of the variational Poisson cohomology H•_K(V) for any quasiconstant coefficient ℓ × ℓ matrix differential operator K of order N with invertible leading coefficient, provided that V is a normal algebra of differential functions over a linearly closed differential field. In the present paper we show that, for K skewadjoint, the Z-graded Lie superalgebra H•_K(V) is isomorphic to the finite-dimensional Lie superalgebra H̃(Nℓ, S). We also prove that the subalgebra of "essential" variational Poisson cohomology, consisting of classes vanishing on the Casimirs of K, is zero. This vanishing result has applications to the theory of bi-Hamiltonian structures and their deformations. At the end of the paper we consider also the translation-invariant case.

  12. Poisson's spot with molecules

    NASA Astrophysics Data System (ADS)

    Reisinger, Thomas; Patel, Amil A.; Reingruber, Herbert; Fladischer, Katrin; Ernst, Wolfgang E.; Bracco, Gianangelo; Smith, Henry I.; Holst, Bodil

    2009-05-01

    In the Poisson-spot experiment, waves emanating from a source are blocked by a circular obstacle. Due to their positive on-axis interference an image of the source (the Poisson spot) is observed within the geometrical shadow of the obstacle. In this paper we report the observation of Poisson’s spot using a beam of neutral deuterium molecules. The wavelength independence and the weak constraints on angular alignment and position of the circular obstacle make Poisson’s spot a promising candidate for applications ranging from the study of large molecule diffraction to patterning with molecules.

  13. Poisson Regression Analysis of Illness and Injury Surveillance Data

    SciTech Connect

    Frome E.L., Watkins J.P., Ellis E.D.

    2012-12-12

    The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational data base, and are used to obtain stratified tables of health event counts and person time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in a tabular and graphical form and interpretation of model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine if interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. 
In the second example the score test indicates considerable over-dispersion and a more detailed analysis attributes the over-dispersion to extra-Poisson
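
    The analysis pipeline described above can be sketched numerically: a log-linear Poisson main-effects model with a person-time offset is fit by Newton-Raphson, and a score statistic flags extra-Poisson variation. The simulated data, function names, and the Dean-style form of the score statistic below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def poisson_glm(X, y, offset=None, n_iter=25):
    """Fit a log-linear Poisson model log(mu) = offset + X @ beta by Newton-Raphson."""
    n, p = X.shape
    offset = np.zeros(n) if offset is None else offset
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = np.exp(offset + X @ beta)
        grad = X.T @ (y - mu)               # score vector
        hess = X.T @ (X * mu[:, None])      # Fisher information, W = diag(mu)
        beta = beta + np.linalg.solve(hess, grad)
    return beta, np.exp(offset + X @ beta)

def overdispersion_score(y, mu):
    """Dean-style score statistic for extra-Poisson variation;
    large positive values indicate overdispersion."""
    num = np.sum((y - mu) ** 2 - y)
    den = np.sqrt(2.0 * np.sum(mu ** 2))
    return num / den

# Simulated event counts with person-time at risk as exposure and one covariate
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
pt = rng.uniform(1.0, 3.0, size=n)          # person-time at risk
y = rng.poisson(pt * np.exp(0.5 + 0.3 * x))

X = np.column_stack([np.ones(n), x])
beta, mu = poisson_glm(X, y, offset=np.log(pt))
z = overdispersion_score(y, mu)             # near 0 here: data are truly Poisson
```

    With genuinely Poisson counts, as simulated here, the score statistic stays near zero; overdispersed data push it well above the standard-normal range.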

  14. Modeling and analysis of surface potential of single gate fully depleted SOI MOSFET using 2D-Poisson's equation

    NASA Astrophysics Data System (ADS)

    Mani, Prashant; Tyagi, Chandra Shekhar; Srivastav, Nishant

    2016-03-01

    In this paper the analytical solution of the 2D Poisson's equation for a single-gate Fully Depleted SOI (FDSOI) MOSFET is derived using a Green's function solution technique. The surface potential is calculated and the threshold voltage of the device is minimized for low power consumption. Minimizing the threshold voltage suppresses the short-channel effects of the device, and the device is observed to be kink-free. The structure and characteristics of the single-gate FDSOI MOSFET were validated using MathCAD and Silvaco, respectively.

  15. New generalized poisson mixture model for bimodal count data with drug effect: An application to rodent brief-access taste aversion experiments.

    PubMed

    Sheng, Y; Soto, J; Orlu Gul, M; Cortina-Borja, M; Tuleu, C; Standing, J F

    2016-08-01

    Pharmacodynamic (PD) count data can exhibit bimodality and nonequidispersion, complicating the inclusion of drug effect. The purpose of this study was to explore four different mixture distribution models for bimodal count data by including both drug effect and distribution truncation. An example dataset, which exhibited a bimodal pattern, was from rodent brief-access taste aversion (BATA) experiments to assess the bitterness of ascending concentrations of an aversive-tasting drug. The two generalized Poisson mixture models performed best and were flexible enough to explain both under- and overdispersion. A sigmoid maximum effect (Emax ) model with logistic transformation was introduced to link the drug effect to the data partition within each distribution. The predicted density-histogram plot is suggested as a model evaluation tool due to its capability to directly compare the model-predicted density with the histogram of the raw data. The modeling approach presented here could form a useful strategy for modeling similar count data types. PMID:27472892
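
    For readers unfamiliar with the generalized Poisson distribution used here, a brief numerical sketch (assuming Consul's parameterization, which the abstract does not spell out): the dispersion parameter lam shifts the variance above the mean (lam > 0, overdispersion) or below it (lam < 0, underdispersion), while lam = 0 recovers the ordinary Poisson.

```python
import math

def gen_poisson_pmf(x, theta, lam):
    """Consul's generalized Poisson pmf, evaluated in log space for stability.

    lam = 0 recovers the ordinary Poisson(theta); lam > 0 gives
    overdispersion and lam < 0 underdispersion."""
    logp = (math.log(theta) + (x - 1) * math.log(theta + x * lam)
            - (theta + x * lam) - math.lgamma(x + 1))
    return math.exp(logp)

# Moments computed from the pmf over a truncated support, to compare with
# the closed forms: mean = theta/(1 - lam), variance = theta/(1 - lam)**3
theta, lam = 3.0, 0.2
xs = range(200)
p = [gen_poisson_pmf(x, theta, lam) for x in xs]
mean = sum(x * px for x, px in zip(xs, p))
var = sum((x - mean) ** 2 * px for x, px in zip(xs, p))
```

    The variance exceeding the mean for lam > 0 is exactly the overdispersion the mixture models above exploit.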

  16. New generalized poisson mixture model for bimodal count data with drug effect: An application to rodent brief‐access taste aversion experiments

    PubMed Central

    Soto, J; Orlu Gul, M; Cortina‐Borja, M; Tuleu, C; Standing, JF

    2016-01-01

    Pharmacodynamic (PD) count data can exhibit bimodality and nonequidispersion, complicating the inclusion of drug effect. The purpose of this study was to explore four different mixture distribution models for bimodal count data by including both drug effect and distribution truncation. An example dataset, which exhibited a bimodal pattern, was from rodent brief‐access taste aversion (BATA) experiments to assess the bitterness of ascending concentrations of an aversive‐tasting drug. The two generalized Poisson mixture models performed best and were flexible enough to explain both under‐ and overdispersion. A sigmoid maximum effect (Emax) model with logistic transformation was introduced to link the drug effect to the data partition within each distribution. The predicted density‐histogram plot is suggested as a model evaluation tool due to its capability to directly compare the model‐predicted density with the histogram of the raw data. The modeling approach presented here could form a useful strategy for modeling similar count data types. PMID:27472892

  17. Multiphase semiclassical approximation of an electron in a one-dimensional crystalline lattice - III. From ab initio models to WKB for Schroedinger-Poisson

    SciTech Connect

    Gosse, Laurent . E-mail: mauser@univie.ac.at

    2006-01-01

    This work is concerned with the semiclassical approximation of the Schroedinger-Poisson equation modeling ballistic transport in a 1D periodic potential by means of WKB techniques. It is derived by considering the mean-field limit of an N-body quantum problem; K-multivalued solutions are then adapted to the treatment of this weakly nonlinear system obtained after homogenization, without taking Pauli's exclusion principle into account. Numerical experiments display the behaviour of self-consistent wave packets and screening effects.

  18. Network Reconstruction Using Nonparametric Additive ODE Models

    PubMed Central

    Henderson, James; Michailidis, George

    2014-01-01

    Network representations of biological systems are widespread and reconstructing unknown networks from data is a focal problem for computational biologists. For example, the series of biochemical reactions in a metabolic pathway can be represented as a network, with nodes corresponding to metabolites and edges linking reactants to products. In a different context, regulatory relationships among genes are commonly represented as directed networks with edges pointing from influential genes to their targets. Reconstructing such networks from data is a challenging problem receiving much attention in the literature. There is a particular need for approaches tailored to time-series data and not reliant on direct intervention experiments, as the former are often more readily available. In this paper, we introduce an approach to reconstructing directed networks based on dynamic systems models. Our approach generalizes commonly used ODE models based on linear or nonlinear dynamics by extending the functional class for the functions involved from parametric to nonparametric models. Concomitantly we limit the complexity by imposing an additive structure on the estimated slope functions. Thus the submodel associated with each node is a sum of univariate functions. These univariate component functions form the basis for a novel coupling metric that we define in order to quantify the strength of proposed relationships and hence rank potential edges. We show the utility of the method by reconstructing networks using simulated data from computational models for the glycolytic pathway of Lactococcus lactis and a gene network regulating the pluripotency of mouse embryonic stem cells. For purposes of comparison, we also assess reconstruction performance using gene networks from the DREAM challenges. We compare our method to those that similarly rely on dynamic systems models and use the results to attempt to disentangle the distinct roles of linearity, sparsity, and derivative

  19. Computational Process Modeling for Additive Manufacturing (OSU)

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model, which would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  20. CREATION OF THE MODEL ADDITIONAL PROTOCOL

    SciTech Connect

    Houck, F.; Rosenthal, M.; Wulf, N.

    2010-05-25

    In 1991, the international nuclear nonproliferation community was dismayed to discover that the implementation of safeguards by the International Atomic Energy Agency (IAEA) under its NPT INFCIRC/153 safeguards agreement with Iraq had failed to detect Iraq's nuclear weapon program. It was now clear that ensuring that states were fulfilling their obligations under the NPT would require not just detecting diversion but also the ability to detect undeclared materials and activities. To achieve this, the IAEA initiated what would turn out to be a five-year effort to reappraise the NPT safeguards system. The effort engaged the IAEA and its Member States and led to agreement in 1997 on a new safeguards agreement, the Model Protocol Additional to the Agreement(s) between States and the International Atomic Energy Agency for the Application of Safeguards. The Model Protocol makes explicit that one IAEA goal is to provide assurance of the absence of undeclared nuclear material and activities. The Model Protocol requires an expanded declaration that identifies a State's nuclear potential, empowers the IAEA to raise questions about the correctness and completeness of the State's declaration, and, if needed, allows IAEA access to locations. The information required and the locations available for access are much broader than those provided for under INFCIRC/153. The negotiation was completed in quite a short time because it started with a relatively complete draft of an agreement prepared by the IAEA Secretariat. This paper describes how the Model Protocol was constructed and reviews key decisions that were made both during the five-year period and in the actual negotiation.

  1. Detecting contaminated birthdates using generalized additive models

    PubMed Central

    2014-01-01

    Background Erroneous patient birthdates are common in health databases. Detection of these errors usually involves manual verification, which can be resource intensive and impractical. By identifying a frequent manifestation of birthdate errors, this paper presents a principled and statistically driven procedure to identify erroneous patient birthdates. Results Generalized additive models (GAM) enabled explicit incorporation of known demographic trends and birth patterns. With false positive rates controlled, the method identified birthdate contamination with high accuracy. In the health data set used, of the 58 actual incorrect birthdates manually identified by the domain expert, the GAM-based method identified 51, with 8 false positives (resulting in a positive predictive value of 86.4% (51/59) and a false negative rate of 12.1% (7/58)). These results outperformed linear time-series models. Conclusions The GAM-based method is an effective approach to identify systemic birthdate errors, a common data quality issue in both clinical and administrative databases, with high accuracy. PMID:24923281

  2. Lightning Climatology with a Generalized Additive Model

    NASA Astrophysics Data System (ADS)

    Simon, Thorsten; Mayr, Georg; Umlauf, Nikolaus; Zeileis, Achim

    2016-04-01

    This study presents a lightning climatology on a 1 km x 1 km grid estimated via generalized additive models (GAM). GAMs provide a framework to account for non-linear effects in time and space and for non-linear spatial-temporal interaction terms simultaneously. The degree of smoothness of the non-linear effects is selected automatically in our approach. Furthermore, the influence of topography is captured in the model by including a non-linear term. To illustrate our approach we use lightning data from the ALDIS network and select a region in Southeastern Austria, where complex terrain extends from 200 to 3800 m asl and summertime lightning activity is high compared to other parts of the Eastern Alps. The temporal effect in the GAM shows a rapid increase in lightning activity in early July and a slow decay in activity afterwards. The estimated spatial effect is not very smooth and requires approximately 225 effective degrees of freedom. It reveals that lightning is more likely in the eastern and southern parts of the region of interest. This spatial effect only accounts for variability not already explained by the topography. The topography effect shows lightning to be more likely at higher altitudes. The effect describing the spatio-temporal interactions takes approximately 200 degrees of freedom, and reveals local deviations from the climatology.

  3. Finite element model predictions of static deformation from dislocation sources in a subduction zone: Sensitivities to homogeneous, isotropic, Poisson-solid, and half-space assumptions

    USGS Publications Warehouse

    Masterlark, Timothy

    2003-01-01

    Dislocation models can simulate static deformation caused by slip along a fault. These models usually take the form of a dislocation embedded in a homogeneous, isotropic, Poisson-solid half-space (HIPSHS). However, the widely accepted HIPSHS assumptions poorly approximate subduction zone systems of converging oceanic and continental crust. This study uses three-dimensional finite element models (FEMs) that allow for any combination (including none) of the HIPSHS assumptions to compute synthetic Green's functions for displacement. Using the 1995 Mw = 8.0 Jalisco-Colima, Mexico, subduction zone earthquake and associated measurements from a nearby GPS array as an example, FEM-generated synthetic Green's functions are combined with standard linear inverse methods to estimate dislocation distributions along the subduction interface. Loading a forward HIPSHS model with dislocation distributions, estimated from FEMs that sequentially relax the HIPSHS assumptions, yields the sensitivity of predicted displacements to each of the HIPSHS assumptions. For the subduction zone models tested and the specific field situation considered, sensitivities to the individual Poisson-solid, isotropy, and homogeneity assumptions can be substantially greater than GPS measurement uncertainties. Forward modeling quantifies stress coupling between the Mw = 8.0 earthquake and a nearby Mw = 6.3 earthquake that occurred 63 days later. Coulomb stress changes predicted from static HIPSHS models cannot account for the 63-day lag time between events. Alternatively, an FEM that includes a poroelastic oceanic crust, which allows for postseismic pore fluid pressure recovery, can account for the lag time. The pore fluid pressure recovery rate puts an upper limit of 10^-17 m^2 on the bulk permeability of the oceanic crust. Copyright 2003 by the American Geophysical Union.

  4. Effects of various boundary conditions on the response of Poisson-Nernst-Planck impedance spectroscopy analysis models and comparison with a continuous-time random-walk model.

    PubMed

    Macdonald, J Ross

    2011-11-24

    Various electrode reaction rate boundary conditions suitable for mean-field Poisson-Nernst-Planck (PNP) mobile charge frequency response continuum models are defined and incorporated in the resulting Chang-Jaffe (CJ) CJPNP model, the ohmic OHPNP one, and a simplified GPNP one in order to generalize from full to partial blocking of mobile charges at the two plane parallel electrodes. Model responses using exact synthetic PNP data involving only mobile negative charges are discussed and compared for a wide range of CJ dimensionless reaction rate values. The CJPNP and OHPNP ones are shown to be fully equivalent, except possibly for the analysis of nanomaterial structures. The dielectric strengths associated with the CJPNP diffuse double layers at the electrodes were found to decrease toward 0 as the reaction rate increased, consistent with fewer blocked charges and more reacting ones. Parameter estimates from GPNP fits of CJPNP data were shown to lead to accurate calculated values of the CJ reaction rate and of some other CJPNP parameters. Best fits of CaCu3Ti4O12 (CCTO) single-crystal data, an electronic conductor, at 80 and 140 K, required the anomalous diffusion model, CJPNPA, and led to medium-size rate estimates of about 0.12 and 0.03, respectively, as well as good estimates of the values of other important CJPNPA parameters such as the independently verified concentration of neutral dissociable centers. These continuum-fit results were found to be only somewhat comparable to those obtained from a composite continuous-time random-walk hopping/trapping semiuniversal UN model. PMID:21923111

  5. Prediction of accrual closure date in multi-center clinical trials with discrete-time Poisson process models

    PubMed Central

    Tang, Gong; Kong, Yuan; Chang, Chung-Chou Ho; Kong, Lan; Costantino, Joseph P.

    2016-01-01

    In a phase III multi-center cancer clinical trial or large public health studies, sample size is predetermined to achieve desired power and study participants are enrolled from tens or hundreds of participating institutions. As the accrual nears the target size, the coordinating data center needs to project the accrual closure date based on the observed accrual pattern and notify the participating sites several weeks in advance. In the past, projections were simply based on crude assessments, and conservative measures were incorporated in order to achieve the target accrual size. This approach often resulted in excessive accrual size and subsequently unnecessary financial burden on the study sponsors. Here we proposed a discrete-time Poisson process-based method to estimate the accrual rate at time of projection and subsequently the trial closure date. To ensure that the target size would be reached with high confidence, we also proposed a conservative method for the closure date projection. The proposed method was illustrated through the analysis of the accrual data of NSABP trial B-38. The results showed that application of the proposed method could help save a considerable amount of expenditure in patient management without compromising the accrual goal in multi-center clinical trials. PMID:22411544
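
    The projection idea can be sketched as follows (an illustrative sketch, not the estimator in the paper): the weekly accrual rate is estimated from recent counts, the point projection divides the remaining target by that rate, and the conservative projection extends the horizon until the target would be reached with high probability. The counts, target size, and 0.9 assurance level below are made-up values.

```python
import math

def poisson_cdf(k, mu):
    """P(N <= k) for N ~ Poisson(mu), by direct summation in log space."""
    return sum(math.exp(-mu + i * math.log(mu) - math.lgamma(i + 1))
               for i in range(k + 1))

def project_closure(weekly_counts, target, enrolled, assurance=0.9):
    """Point and conservative projections (in weeks) of accrual closure.

    The Poisson rate is estimated from the observed weekly counts; the
    conservative projection is the smallest horizon t such that the target
    is reached with probability >= `assurance`."""
    rate = sum(weekly_counts) / len(weekly_counts)   # accruals per week
    remaining = target - enrolled
    point = math.ceil(remaining / rate)
    t = point
    while 1.0 - poisson_cdf(remaining - 1, rate * t) < assurance:
        t += 1
    return point, t

# 440 of 500 participants enrolled; five recent weeks of accrual counts
point, conservative = project_closure([12, 9, 14, 11, 10],
                                      target=500, enrolled=440)
```

    The conservative horizon is always at least as long as the point projection, which is how the method trades a slightly later announced closure date for high confidence of reaching the target.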

  6. Filtering with Marked Point Process Observations via Poisson Chaos Expansion

    SciTech Connect

    Sun Wei; Zeng Yong; Zhang Shu

    2013-06-15

    We study a general filtering problem with marked point process observations. The motivation comes from modeling financial ultra-high frequency data. First, we rigorously derive the unnormalized filtering equation with marked point process observations under mild assumptions, especially relaxing the bounded condition of stochastic intensity. Then, we derive the Poisson chaos expansion for the unnormalized filter. Based on the chaos expansion, we establish the uniqueness of solutions of the unnormalized filtering equation. Moreover, we derive the Poisson chaos expansion for the unnormalized filter density under additional conditions. To explore the computational advantage, we further construct a new consistent recursive numerical scheme based on the truncation of the chaos density expansion for a simple case. The new algorithm divides the computations into those containing solely system coefficients and those including the observations, and assigns the former to off-line computation.

  7. Poisson's ratio of high-performance concrete

    SciTech Connect

    Persson, B.

    1999-10-01

    This article outlines an experimental and numerical study on Poisson's ratio of high-performance concrete subjected to air or sealed curing. Eight qualities of concrete (about 100 cylinders and 900 cubes) were studied, both young and in the mature state. The concretes contained between 5 and 10% silica fume, and two concretes in addition contained air-entrainment. Parallel studies of strength and internal relative humidity were carried out. The results indicate that Poisson's ratio of high-performance concrete is slightly smaller than that of normal-strength concrete. Analyses of the influence of maturity, type of aggregate, and moisture on Poisson's ratio are also presented. The project was carried out from 1991 to 1998.

  8. On the Burgers-Poisson equation

    NASA Astrophysics Data System (ADS)

    Grunert, K.; Nguyen, Khai T.

    2016-09-01

    In this paper, we prove the existence and uniqueness of weak entropy solutions to the Burgers-Poisson equation for initial data in L^1(R). In addition, an Oleinik-type estimate is established and some criteria on local smoothness and wave breaking for weak entropy solutions are provided.

  9. Evolutionary inference via the Poisson Indel Process.

    PubMed

    Bouchard-Côté, Alexandre; Jordan, Michael I

    2013-01-22

    We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments. PMID:23275296

  10. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments.

    PubMed

    Fisicaro, G; Genovese, L; Andreussi, O; Marzari, N; Goedecker, S

    2016-01-01

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively within some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes. PMID:26747797
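
    A minimal sketch of the conjugate-gradient idea on the simplest possible case: an unpreconditioned CG solve of the ordinary Poisson equation -u'' = f on a 1D finite-difference grid with homogeneous Dirichlet boundaries. The solver described in the abstract is preconditioned, three-dimensional, and handles the generalized variable-coefficient operator; none of that is reproduced here.

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-8, max_iter=1000):
    """Unpreconditioned conjugate gradient for A x = b, A symmetric positive definite."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1D Poisson problem -u'' = f on (0,1), u(0) = u(1) = 0, second-order differences;
# f is chosen so the exact solution is u = sin(pi x)
n = 200
h = 1.0 / (n + 1)
x_grid = np.linspace(h, 1.0 - h, n)
f = np.pi ** 2 * np.sin(np.pi * x_grid)

def apply_laplacian(u):
    # (-u_{i-1} + 2 u_i - u_{i+1}) / h^2 with homogeneous Dirichlet BCs
    Au = 2.0 * u.copy()
    Au[1:] -= u[:-1]
    Au[:-1] -= u[1:]
    return Au / h ** 2

u = conjugate_gradient(apply_laplacian, f)
```

    The matrix is never formed: CG only needs the matrix-vector product, which is the same property the electronic-structure solvers exploit on large 3D grids.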

  11. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  12. CUMPOIS- CUMULATIVE POISSON DISTRIBUTION PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The Cumulative Poisson distribution program, CUMPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the approximate cumulative binomial distribution, evaluates the cumulative distribution function (cdf) for gamma distributions with integer shape parameters, and evaluates the cdf for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. CUMPOIS calculates the probability that n or fewer events (i.e., cumulative) will occur within any unit when the expected number of events is given as lambda. Normally, this probability is calculated by a direct summation, from i=0 to n, of terms involving the exponential function, lambda, and inverse factorials. This approach, however, eventually fails due to underflow for sufficiently large values of n. Additionally, when the exponential term is moved outside of the summation for simplification purposes, there is a risk that the terms remaining within the summation, and the summation itself, will overflow for certain values of i and lambda. CUMPOIS eliminates these possibilities by multiplying an additional exponential factor into the summation terms and the partial sum whenever overflow/underflow situations threaten. The reciprocal of this term is then multiplied into the completed sum giving the cumulative probability. The CUMPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting lambda and n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMPOIS was
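
    The rescaling that CUMPOIS describes can be imitated in a few lines by carrying each term of the direct summation in log space, which avoids both the factorial overflow and the exp(-lambda) underflow discussed in the abstract (a sketch of the idea, not the original C program):

```python
import math

def cumulative_poisson(n, lam):
    """P(N <= n) for N ~ Poisson(lam), summed term by term in log space."""
    total = 0.0
    log_term = -lam                                   # log of the i = 0 term
    for i in range(n + 1):
        total += math.exp(log_term)
        log_term += math.log(lam) - math.log(i + 1)   # advance to term i + 1
    return min(total, 1.0)                            # guard against rounding

# P(N <= 2) with lam = 2: e^-2 * (1 + 2 + 2) = 5 e^-2
p = cumulative_poisson(2, 2.0)
```

    For large lambda and n, the log-space exponents stay well within double-precision range even though the individual factorials and exponentials would not.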

  13. Fractal Poisson processes

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2008-09-01

    The Central Limit Theorem (CLT) and Extreme Value Theory (EVT) study, respectively, the stochastic limit-laws of sums and maxima of sequences of independent and identically distributed (i.i.d.) random variables via an affine scaling scheme. In this research we study the stochastic limit-laws of populations of i.i.d. random variables via nonlinear scaling schemes. The stochastic population-limits obtained are fractal Poisson processes which are statistically self-similar with respect to the scaling scheme applied, and which are characterized by two elemental structures: (i) a universal power-law structure common to all limits, and independent of the scaling scheme applied; (ii) a specific structure contingent on the scaling scheme applied. The sum-projection and the maximum-projection of the population-limits obtained are generalizations of the classic CLT and EVT results - extending them from affine to general nonlinear scaling schemes.

  14. Graphical user interface for AMOS and POISSON

    SciTech Connect

    Swatloski, T.L.

    1993-03-02

    A graphical user interface (GUI) exists for building model geometry for the time-domain field code, AMOS. This GUI has recently been modified to build models and display the results of the Poisson electrostatic solver maintained by the Los Alamos Accelerator Code Group called POISSON. Included in the GUI is a 2-D graphic editor allowing interactive construction of the model geometry. Polygons may be created by entering points with the mouse, with text input, or by reading coordinates from a file. Circular arcs have recently been added. Once polygons are entered, points may be inserted, moved, or deleted. Materials can be assigned to polygons, and are represented by different colors. The unit scale may be adjusted as well as the viewport. A rectangular mesh may be generated for AMOS or a triangular mesh for POISSON. Potentials from POISSON are represented with a contour plot and the designer is able to mouse click anywhere on the model to display the potential value at that location. This was developed under the X windowing system using the Motif look and feel.

  15. Short-Term Effects of Climatic Variables on Hand, Foot, and Mouth Disease in Mainland China, 2008–2013: A Multilevel Spatial Poisson Regression Model Accounting for Overdispersion

    PubMed Central

    Yang, Fang; Yang, Min; Hu, Yuehua; Zhang, Juying

    2016-01-01

    Background Hand, Foot, and Mouth Disease (HFMD) is a worldwide infectious disease. In China, many provinces have reported HFMD cases, especially the southern and southwestern provinces. Many studies have found a strong association between the incidence of HFMD and climatic factors such as temperature, rainfall, and relative humidity. However, few studies have analyzed cluster effects between various geographical units. Methods The nonlinear relationships and lag effects between weekly HFMD cases and climatic variables were estimated for the period of 2008–2013 using a polynomial distributed lag model. The extra-Poisson multilevel spatial polynomial model was used to model the exact relationship between weekly HFMD incidence and climatic variables after considering cluster effects, the provincial correlated structure of HFMD incidence, and overdispersion. Smoothing spline methods were used to detect threshold effects between climatic factors and HFMD incidence. Results HFMD incidence was spatially heterogeneous among provinces, and the scale measurement of overdispersion was 548.077. After controlling for long-term trends, spatial heterogeneity and overdispersion, temperature was highly associated with HFMD incidence. Weekly average temperature and weekly temperature difference showed approximately inverse-V-shaped and V-shaped relationships, respectively, with HFMD incidence. The lag effects for weekly average temperature and weekly temperature difference were 3 weeks and 2 weeks. Highly spatially correlated HFMD incidence was detected in northern, central and southern provinces. Temperature can be used to explain most of the variation of HFMD incidence in southern and northeastern provinces. After adjustment for temperature, eastern and northern provinces still had high variation in HFMD incidence. Conclusion We found a relatively strong association between weekly HFMD incidence and weekly average temperature. The association between the HFMD incidence and climatic

  16. Solves Poisson's Equation in Axisymmetric Geometry on a Rectangular Mesh

    1996-09-10

    DATHETA4.0 computes the magnetostatic field produced by multiple point current sources in the presence of perfect conductors in axisymmetric geometry. DATHETA4.0 has an interactive user interface and solves Poisson's equation using the ADI method on a rectangular finite-difference mesh. DATHETA4.0 includes models specific to applied-B ion diodes.
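
    DATHETA4.0 uses the ADI method, which is not reproduced here; as a minimal sketch of the same finite-difference problem, the Jacobi iteration below solves Poisson's equation on a rectangular mesh against a manufactured solution (grid size, iteration count, and all names are illustrative; Jacobi converges far more slowly than ADI).

```python
import numpy as np

def solve_poisson_jacobi(f, h, n_iter=5000):
    """Solve laplacian(u) = f on the unit square with zero Dirichlet
    boundary conditions by Jacobi iteration on a uniform mesh."""
    u = np.zeros_like(f)
    for _ in range(n_iter):
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                + u[1:-1, 2:] + u[1:-1, :-2]
                                - h**2 * f[1:-1, 1:-1])
    return u

# Manufactured solution u = sin(pi x) sin(pi y), so laplacian(u) = -2 pi^2 u.
n = 33
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = -2.0 * np.pi**2 * u_exact
u = solve_poisson_jacobi(f, h=x[1] - x[0])
err = np.abs(u - u_exact).max()   # small discretization + iteration error
```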

  17. Modeling techniques for gaining additional urban space

    NASA Astrophysics Data System (ADS)

    Thunig, Holger; Naumann, Simone; Siegmund, Alexander

    2009-09-01

    One of the major accompaniments of globalization is the rapid growth of urban areas. Urban sprawl is the main environmental problem affecting cities of different characters across continents. Various reasons for the increase in urban sprawl in the last 10 to 30 years have been proposed [1], and often depend on the socio-economic situation of cities. The quantitative reduction and the sustainable handling of land should be achieved through inner urban development instead of expanding urban regions. Following the principle "spare the urban fringe, develop the inner suburbs first" requires differentiated tools allowing for quantitative and qualitative appraisals of current building potentials. Using high-spatial-resolution remote sensing data within an object-based approach enables the detection of potential areas, while GIS data provide information for the quantitative valuation. This paper presents techniques for modeling the urban environment and opportunities for utilization of the retrieved information by urban planners for their special needs.

  18. Supervised Gamma Process Poisson Factorization

    SciTech Connect

    Anderson, Dylan Zachary

    2015-05-01

    This thesis develops the supervised gamma process Poisson factorization (S-GPPF) framework, a novel supervised topic model for joint modeling of count matrices and document labels. S-GPPF is fully generative and nonparametric: document labels and count matrices are modeled under a unified probabilistic framework and the number of latent topics is controlled automatically via a gamma process prior. The framework provides for multi-class classification of documents using a generative max-margin classifier. Several recent data augmentation techniques are leveraged to provide for exact inference using a Gibbs sampling scheme. The first portion of this thesis reviews supervised topic modeling and several key mathematical devices used in the formulation of S-GPPF. The thesis then introduces the S-GPPF generative model and derives the conditional posterior distributions of the latent variables for posterior inference via Gibbs sampling. The S-GPPF is shown to exhibit state-of-the-art performance for joint topic modeling and document classification on a dataset of conference abstracts, beating out competing supervised topic models. The unique properties of S-GPPF along with its competitive performance make it a novel contribution to supervised topic modeling.

  19. Poisson Spot with Magnetic Levitation

    ERIC Educational Resources Information Center

    Hoover, Matthew; Everhart, Michael; D'Arruda, Jose

    2010-01-01

    In this paper we describe a unique method for obtaining the famous Poisson spot without adding obstacles to the light path, which could interfere with the effect. A Poisson spot is the interference effect from parallel rays of light diffracting around a solid spherical object, creating a bright spot in the center of the shadow.

  20. Poisson spot with magnetic levitation

    NASA Astrophysics Data System (ADS)

    Hoover, Matthew; Everhart, Michael; D'Arruda, Jose

    2010-02-01

    In this paper we describe a unique method for obtaining the famous Poisson spot without adding obstacles to the light path, which could interfere with the effect. A Poisson spot is the interference effect from parallel rays of light diffracting around a solid spherical object, creating a bright spot in the center of the shadow.

  1. Mean-square state and parameter estimation for stochastic linear systems with Gaussian and Poisson noises

    NASA Astrophysics Data System (ADS)

    Basin, M.; Maldonado, J. J.; Zendejo, O.

    2016-07-01

    This paper proposes a new mean-square filter and parameter estimator design for linear stochastic systems with unknown parameters over linear observations, where unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely measured polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows the effectiveness of the proposed mean-square filter and parameter estimator.
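
    As a hedged illustration of the state-augmentation idea only (Gaussian noise case; the paper's handling of Poisson noise and its exact filter equations are not reproduced here), the sketch below estimates an unknown input gain by appending it to the state of a standard Kalman filter. All system values are illustrative.

```python
import numpy as np

# Scalar plant x[k+1] = a*x[k] + b*u[k] + w[k], with gain b unknown.
# Augmenting the state with b (modelled as constant) gives a linear
# time-varying system, so an ordinary Kalman filter estimates [x, b].
rng = np.random.default_rng(5)
a, b_true, q, r = 0.9, 2.0, 0.01, 0.04
n = 400
u = np.sin(0.3 * np.arange(n))       # known, persistently exciting input
x = 0.0
z = np.array([0.0, 0.0])             # filter estimate of [x, b]
P = np.diag([1.0, 10.0])             # diffuse prior on the unknown gain
H = np.array([[1.0, 0.0]])           # we observe x plus noise
for k in range(n):
    # simulate the plant and a noisy measurement
    x = a * x + b_true * u[k] + rng.normal(0.0, np.sqrt(q))
    y = x + rng.normal(0.0, np.sqrt(r))
    # predict with time-varying transition F_k = [[a, u[k]], [0, 1]]
    F = np.array([[a, u[k]], [0.0, 1.0]])
    z = F @ z
    P = F @ P @ F.T + np.diag([q, 0.0])   # no process noise on b
    # measurement update
    S = H @ P @ H.T + r
    K = P @ H.T / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P
b_hat = z[1]   # should approach b_true = 2.0
```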

  2. Poisson-Boltzmann model for protein-surface electrostatic interactions and grid-convergence study using the PyGBe code

    NASA Astrophysics Data System (ADS)

    Cooper, Christopher D.; Barba, Lorena A.

    2016-05-01

    Interactions between surfaces and proteins occur in many vital processes and are crucial in biotechnology: the ability to control specific interactions is essential in fields like biomaterials, biomedical implants and biosensors. In the latter case, biosensor sensitivity hinges on ligand proteins adsorbing on bioactive surfaces with a favorable orientation, exposing reaction sites to target molecules. Protein adsorption, being a free-energy-driven process, is difficult to study experimentally. This paper develops and evaluates a computational model to study electrostatic interactions of proteins and charged nanosurfaces, via the Poisson-Boltzmann equation. We extended the implicit-solvent model used in the open-source code PyGBe to include surfaces of imposed charge or potential. This code solves the boundary integral formulation of the Poisson-Boltzmann equation, discretized with surface elements. PyGBe has at its core a treecode-accelerated Krylov iterative solver, resulting in O(N log N) scaling, with further acceleration on hardware via multi-threaded execution on GPUs. It computes solvation and surface free energies, providing a framework for studying the effect of electrostatics on adsorption. We derived an analytical solution for a spherical charged surface interacting with a spherical dielectric cavity, and used it in a grid-convergence study to build evidence on the correctness of our approach. The study showed the error decaying with the average area of the boundary elements, i.e., the method is O(1 / N) , which is consistent with our previous verification studies using PyGBe. We also studied grid-convergence using a real molecular geometry (protein G B1 D4‧), in this case using Richardson extrapolation (in the absence of an analytical solution) and confirmed the O(1 / N) scaling. With this work, we can now access a completely new family of problems, which no other major bioelectrostatics solver, e.g. APBS, is capable of dealing with. PyGBe is open

  3. Poisson-Boltzmann model for protein-surface electrostatic interactions and grid-convergence study using the PyGBe code

    NASA Astrophysics Data System (ADS)

    Cooper, Christopher D.; Barba, Lorena A.

    2016-05-01

    Interactions between surfaces and proteins occur in many vital processes and are crucial in biotechnology: the ability to control specific interactions is essential in fields like biomaterials, biomedical implants and biosensors. In the latter case, biosensor sensitivity hinges on ligand proteins adsorbing on bioactive surfaces with a favorable orientation, exposing reaction sites to target molecules. Protein adsorption, being a free-energy-driven process, is difficult to study experimentally. This paper develops and evaluates a computational model to study electrostatic interactions of proteins and charged nanosurfaces, via the Poisson-Boltzmann equation. We extended the implicit-solvent model used in the open-source code PyGBe to include surfaces of imposed charge or potential. This code solves the boundary integral formulation of the Poisson-Boltzmann equation, discretized with surface elements. PyGBe has at its core a treecode-accelerated Krylov iterative solver, resulting in O(N log N) scaling, with further acceleration on hardware via multi-threaded execution on GPUs. It computes solvation and surface free energies, providing a framework for studying the effect of electrostatics on adsorption. We derived an analytical solution for a spherical charged surface interacting with a spherical dielectric cavity, and used it in a grid-convergence study to build evidence on the correctness of our approach. The study showed the error decaying with the average area of the boundary elements, i.e., the method is O(1 / N) , which is consistent with our previous verification studies using PyGBe. We also studied grid-convergence using a real molecular geometry (protein G B1 D4‧), in this case using Richardson extrapolation (in the absence of an analytical solution) and confirmed the O(1 / N) scaling. With this work, we can now access a completely new family of problems, which no other major bioelectrostatics solver, e.g. APBS, is capable of dealing with. PyGBe is open
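
    In the absence of an analytical solution, the observed order of convergence is obtained by Richardson extrapolation from systematically refined meshes. A generic sketch of that procedure (not PyGBe code; refinement ratio and data are illustrative):

```python
import math

def observed_order(f_coarse, f_mid, f_fine, r):
    """Observed convergence order p, assuming f(N) = f_star + C * N**(-p)
    with N refined by a factor r between successive meshes."""
    return math.log((f_coarse - f_mid) / (f_mid - f_fine)) / math.log(r)

def richardson_extrapolate(f_mid, f_fine, r, p):
    """Estimate the mesh-converged value f_star from the two finest meshes."""
    return f_fine + (f_fine - f_mid) / (r**p - 1.0)

# Synthetic energies with exact O(1/N) behavior, f(N) = 10 + 5/N:
r = 4
Ns = [128, 512, 2048]
fs = [10.0 + 5.0 / N for N in Ns]
p = observed_order(*fs, r)                         # recovers p = 1
f_star = richardson_extrapolate(fs[1], fs[2], r, p)  # recovers 10.0
```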

  4. Solution of Poisson's equation in a volume conductor using resistor mesh models: Application to event related potential imaging

    NASA Astrophysics Data System (ADS)

    Franceries, X.; Doyon, B.; Chauveau, N.; Rigaud, B.; Celsis, P.; Morucci, J.-P.

    2003-03-01

    In electroencephalography (EEG) and event related potentials (ERP), localizing the electrical sources at the origin of scalp potentials (inverse problem) imposes, in a first step, the computation of scalp potential distribution from the simulation of sources (forward problem). This article proposes an alternative method for mimicking both the electrical and geometrical properties of the head, including brain, skull, and scalp tissue with resistors. Two resistor mesh models have been designed to reproduce the three-sphere reference model (analytical model). The first one (spherical resistor mesh) closely mimics the geometrical and electrical properties of the analytical model. The second one (cubic resistor mesh) is designed to conveniently handle anatomical data from magnetic resonance imaging. Both models have been validated, in reference to the analytical solution calculated on the three-sphere model, by computing the magnification factor and the relative difference measure. Results suggest that the mesh models can be used as robust and user-friendly simulation or exploration tools in EEG/ERP.
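
    As a hedged toy example of the resistor-mesh idea (a small uniform 2-D mesh, not the paper's three-sphere head model), nodal analysis assembles a conductance matrix and solves G·V = I for the node potentials:

```python
import numpy as np

# 5x5 uniform resistor mesh: inject unit current at one corner, extract it
# at the opposite corner, ground the centre node, then solve G @ V = I.
n = 5
idx = lambda i, j: i * n + j
G = np.zeros((n * n, n * n))
g = 1.0                              # conductance of each mesh resistor
for i in range(n):
    for j in range(n):
        for di, dj in ((1, 0), (0, 1)):       # right and down neighbours
            if i + di < n and j + dj < n:
                a, b = idx(i, j), idx(i + di, j + dj)
                G[a, a] += g; G[b, b] += g
                G[a, b] -= g; G[b, a] -= g
I = np.zeros(n * n)
I[idx(0, 0)], I[idx(4, 4)] = 1.0, -1.0        # current source and sink
ref = idx(2, 2)                               # grounded reference node
keep = [k for k in range(n * n) if k != ref]
V = np.zeros(n * n)
V[keep] = np.linalg.solve(G[np.ix_(keep, keep)], I[keep])
```

By the mesh's 180-degree symmetry about the grounded centre, the potential at the source equals minus the potential at the sink, a quick sanity check on the solve.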

  5. The transport exponent in percolation models with additional loops

    NASA Astrophysics Data System (ADS)

    Babalievski, F.

    1994-10-01

    Several percolation models with additional loops were studied. The transport exponents for these models were estimated numerically by means of a transfer-matrix approach. It was found that the transport exponent has a drastically changed value for some of the models. This result supports some previous numerical studies on the vibrational properties of similar models (with additional loops).

  6. Resources allocation in healthcare for cancer: a case study using generalised additive mixed models.

    PubMed

    Musio, Monica; Sauleau, Erik A; Augustin, Nicole H

    2012-11-01

    Our aim is to develop a method for helping resources re-allocation in healthcare linked to cancer, in order to replan the allocation of providers. Ageing of the population has a considerable impact on the use of health resources because aged people require more specialised medical care due notably to cancer. We propose a method useful to monitor changes of cancer incidence in space and time taking into account two age categories, according to the general organisation of healthcare. We use generalised additive mixed models with a Poisson response, according to the methodology presented in Wood, Generalised additive models: an introduction with R. Chapman and Hall/CRC, 2006. Besides one-dimensional smooth functions accounting for non-linear effects of covariates, the space-time interaction can be modelled using scale invariant smoothers. Incidence data collected by a general cancer registry between 1992 and 2007 in a specific area of France are studied. Our best model exhibits a strong increase of the incidence of cancer along time and an obvious spatial pattern for people older than 70 years, with a higher incidence in the central band of the region. This is a strong argument for re-allocating resources for cancer care of older people in this sub-region. PMID:23242683
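
    The model above has a Poisson response. As a minimal sketch of the underlying fitting machinery only, the code below fits a plain Poisson log-linear model by iteratively reweighted least squares; the smooth terms and random effects of a full GAMM are omitted, and the data are illustrative (counts are set to their exact expectations so the fit recovers the true coefficients).

```python
import numpy as np

def fit_poisson_irls(X, y, n_iter=25):
    """Fit a Poisson GLM with log link, E[y] = exp(X @ beta), by
    iteratively reweighted least squares (Fisher scoring)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        W = mu                            # Poisson variance function: Var = mu
        z = X @ beta + (y - mu) / mu      # working response
        WX = X * W[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ z)
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.uniform(-1.0, 1.0, 200)])
beta_true = np.array([0.5, 1.2])
y = np.exp(X @ beta_true)        # exact means, so IRLS recovers beta_true
beta_hat = fit_poisson_irls(X, y)
```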

  7. Deformation mechanisms in negative Poisson's ratio materials - Structural aspects

    NASA Technical Reports Server (NTRS)

    Lakes, R.

    1991-01-01

    Poisson's ratio in materials is governed by the following aspects of the microstructure: the presence of rotational degrees of freedom, non-affine deformation kinematics, or anisotropic structure. Several structural models are examined. The non-affine kinematics are seen to be essential for the production of negative Poisson's ratios for isotropic materials containing central force linkages of positive stiffness. Non-central forces combined with pre-load can also give rise to a negative Poisson's ratio in isotropic materials. A chiral microstructure with non-central force interaction or non-affine deformation can also exhibit a negative Poisson's ratio. Toughness and damage resistance in these materials may be affected by the Poisson's ratio itself, as well as by generalized continuum aspects associated with the microstructure.

  8. Efficient self-consistent Schrödinger–Poisson-rate equation iteration method for the modeling of strained quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Li, Jian; Ma, Xunpeng; Wei, Xin; Jiang, Yu; Fu, Dong; Wu, Haoyue; Song, Guofeng; Chen, Lianghui

    2016-05-01

    We present an efficient method for the calculation of the transmission characteristic of quantum cascade lasers (QCLs). A fully Schrödinger–Poisson-rate equation iteration with strained term is presented in our calculation. The two-band strained term of the Schrödinger equation is derived from the eight-band Hamiltonian. The equivalent strain energy that affects the effective mass and raises the energy level is introduced to include the biaxial strain into the conduction band profile. We simplified the model of the electron–electron scattering process and improved the calculation efficiency by about two orders of magnitude. The thermobackfilling effect is optimized by replacing the lattice temperature with the electron temperature. The quasi-subband-Fermi level is used to calculate the electron density of laser subbands. Compared with the experiment results, our method gives reasonable threshold current (depends on the assumption of waveguide loss and scattering processes) and more accurate wavelength, making the method efficient and practical for QCL simulations.
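
    As a hedged sketch of one half of the Schrödinger–Poisson iteration described above (a single 1-D Schrödinger solve on a finite-difference grid, without strain, Poisson coupling, or rate equations), in units where ħ = m = 1:

```python
import numpy as np

# 1-D time-independent Schrodinger equation on a uniform grid:
# H = -(1/2) d^2/dx^2 + V(x), hard walls at x = 0 and x = 1.
N = 200                          # interior grid points
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)
V = np.zeros(N)                  # infinite square well: V = 0 inside
main = 1.0 / h**2 + V            # diagonal of the Hamiltonian matrix
off = -0.5 / h**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
E, psi = np.linalg.eigh(H)       # energies ascending, columns are states
# analytic ground-state energy of the well: pi**2 / 2
```

In a self-consistent loop, the resulting densities would feed a Poisson solve whose potential is added back into `V`.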

  9. Simulations of Cyclic Voltammetry for Electric Double Layers in Asymmetric Electrolytes: A Generalized Modified Poisson-Nernst-Planck Model

    SciTech Connect

    Wang, Hainan; Thiele, Alexander; Pilon, Laurent

    2013-11-15

    This paper presents a generalized modified Poisson–Nernst–Planck (MPNP) model derived from first principles based on excess chemical potential and Langmuir activity coefficient to simulate electric double-layer dynamics in asymmetric electrolytes. The model accounts simultaneously for (1) asymmetric electrolytes with (2) multiple ion species, (3) finite ion sizes, and (4) Stern and diffuse layers along with Ohmic potential drop in the electrode. It was used to simulate cyclic voltammetry (CV) measurements for binary asymmetric electrolytes. The results demonstrated that the current density increased significantly with decreasing ion diameter and/or increasing valency |zi| of either ion species. By contrast, the ion diffusion coefficients affected the CV curves and capacitance only at large scan rates. Dimensional analysis was also performed, and 11 dimensionless numbers were identified to govern the CV measurements of the electric double layer in binary asymmetric electrolytes between two identical planar electrodes of finite thickness. A self-similar behavior was identified for the electric double-layer integral capacitance estimated from CV measurement simulations. Two regimes were identified by comparing the half cycle period τCV and the “RC time scale” τRC corresponding to the characteristic time of ions’ electrodiffusion. For τRC ≪ τCV, quasi-equilibrium conditions prevailed and the capacitance was diffusion-independent, while for τRC ≫ τCV, the capacitance was diffusion-limited. The effect of the electrode was captured by the dimensionless electrode electrical conductivity representing the ratio of characteristic times associated with charge transport in the electrolyte and that in the electrode. The model developed here will be useful for simulating and designing various practical electrochemical, colloidal, and biological systems for a wide range of applications.

  10. On classification of discrete, scalar-valued Poisson brackets

    NASA Astrophysics Data System (ADS)

    Parodi, E.

    2012-10-01

    We address the problem of classifying discrete differential-geometric Poisson brackets (dDGPBs) of any fixed order on a target space of dimension 1. We prove that these Poisson brackets (PBs) are in one-to-one correspondence with the intersection points of certain projective hypersurfaces. In addition, they can be reduced to a cubic PB of the standard Volterra lattice by discrete Miura-type transformations. Finally, by improving a lattice consolidation procedure, we obtain new families of non-degenerate, vector-valued and first-order dDGPBs that can be considered in the framework of admissible Lie-Poisson group theory.

  11. Background stratified Poisson regression analysis of cohort data

    PubMed Central

    Langholz, Bryan

    2012-01-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as ‘nuisance’ variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this ‘conditional’ regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. PMID:22193911
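
    The equivalence claimed above can be checked numerically: the stratum intercepts are profiled out in closed form, leaving a one-dimensional score for the log rate ratio that Newton's method solves. A hedged sketch with illustrative person-time and a binary exposure (counts are set to their exact expectations, so the estimates recover the true parameters exactly):

```python
import numpy as np

def profile_beta(y, T, x, strata, n_iter=50):
    """Background-stratified Poisson regression, rate = exp(alpha_s + beta*x).
    The intercepts alpha_s are profiled out in closed form; the remaining
    1-D profile score for beta is solved by Newton's method."""
    beta = 0.0
    for _ in range(n_iter):
        score, info = 0.0, 0.0
        for s in np.unique(strata):
            m = strata == s
            w = T[m] * np.exp(beta * x[m])
            p = w / w.sum()                   # multinomial cell probabilities
            n_s = y[m].sum()
            xbar = (p * x[m]).sum()
            score += (x[m] * y[m]).sum() - n_s * xbar
            info += n_s * ((p * x[m] ** 2).sum() - xbar ** 2)
        beta += score / info
    return beta

# Two strata, binary exposure, counts fixed at their exact expectations
T = np.array([100.0, 50.0, 80.0, 40.0])        # person-time per cell
x = np.array([0.0, 1.0, 0.0, 1.0])             # exposure indicator
strata = np.array([0, 0, 1, 1])
alpha = np.array([np.log(0.02), np.log(0.05)]) # background rates per stratum
beta_true = np.log(1.8)                        # true rate ratio 1.8
y = T * np.exp(alpha[strata] + beta_true * x)  # expected counts
beta_hat = profile_beta(y, T, x, strata)
alpha_hat = np.array([np.log(y[strata == s].sum()
                      / (T[strata == s] * np.exp(beta_hat * x[strata == s])).sum())
                      for s in (0, 1)])        # recovered stratum intercepts
```

Substituting the closed-form intercepts back shows the full unconditional score is also zero at `beta_hat`, which is the identity the abstract describes.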

  12. Background stratified Poisson regression analysis of cohort data.

    PubMed

    Richardson, David B; Langholz, Bryan

    2012-03-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. PMID:22193911

  13. Criteria for deviation from predictions by the concentration addition model.

    PubMed

    Takeshita, Jun-Ichi; Seki, Masanori; Kamo, Masashi

    2016-07-01

    Loewe's additivity (concentration addition) is a well-known model for predicting the toxic effects of chemical mixtures under the additivity assumption of toxicity. However, from the perspective of chemical risk assessment and/or management, it is important to identify chemicals whose toxicities are additive when present concurrently, that is, it should be established whether there are chemical mixtures to which the concentration addition predictive model can be applied. The objective of the present study was to develop criteria for judging test results that deviated from the predictions by the concentration addition chemical mixture model. These criteria were based on the confidence interval of the concentration addition model's prediction and on estimation of errors of the predicted concentration-effect curves by toxicity tests after exposure to single chemicals. A log-logit model with 2 parameters was assumed for the concentration-effect curve of each individual chemical. These parameters were determined by the maximum-likelihood method, and the criteria were defined using the variances and the covariance of the parameters. In addition, the criteria were applied to a toxicity test of a binary mixture of p-n-nonylphenol and p-n-octylphenol using the Japanese killifish, medaka (Oryzias latipes). Consequently, the concentration addition model using confidence interval was capable of predicting the test results at any level, and no reason for rejecting the concentration addition was found. Environ Toxicol Chem 2016;35:1806-1814. © 2015 SETAC. PMID:26660330
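
    Under concentration addition, the mixture effect x at component concentrations c_i solves Σ c_i / EC_{x,i} = 1, with each EC taken from a 2-parameter log-logit curve as in the abstract. A minimal sketch solving this by bisection (all parameters are illustrative); a sham "mixture" of a chemical with itself must reproduce the single-chemical curve, which serves as the check below.

```python
import math

def ec(x, a, b):
    """Effect concentration for effect level x under a 2-parameter log-logit
    curve p(c) = 1 / (1 + exp(-(a + b*log(c))))."""
    return math.exp((math.log(x / (1.0 - x)) - a) / b)

def ca_effect(concs, params, lo=1e-6, hi=1 - 1e-6, n_iter=100):
    """Mixture effect x predicted by concentration addition:
    solve sum_i c_i / EC_{x,i} = 1 for x by bisection."""
    def toxic_units(x):
        return sum(c / ec(x, a, b) for c, (a, b) in zip(concs, params))
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if toxic_units(mid) > 1.0:    # doses exceed EC for this level: x is higher
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sham mixture: the same chemical split into two equal halves.
params = [(2.0, 1.0), (2.0, 1.0)]     # illustrative log-logit parameters
x_mix = ca_effect([0.1, 0.1], params)
x_single = 1.0 / (1.0 + math.exp(-(2.0 + math.log(0.2))))  # p(0.2) directly
```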

  14. Lamb wave propagation in negative Poisson's ratio composites

    NASA Astrophysics Data System (ADS)

    Remillat, Chrystel; Wilcox, Paul; Scarpa, Fabrizio

    2008-03-01

    Lamb wave propagation is evaluated for cross-ply laminate composites exhibiting through-the-thickness negative Poisson's ratio. The laminates are mechanically modeled using the Classical Laminate Theory, while the propagation of Lamb waves is investigated using a combination of semi-analytical models and finite element time-stepping techniques. The auxetic laminates exhibit well spaced bending, shear and symmetric fundamental modes, while featuring normal stresses for the A0 mode 3 times lower than composite laminates with positive Poisson's ratio.

  15. A new Green's function Monte Carlo algorithm for the solution of the two-dimensional nonlinear Poisson-Boltzmann equation: Application to the modeling of the communication breakdown problem in space vehicles during re-entry

    NASA Astrophysics Data System (ADS)

    Chatterjee, Kausik; Roadcap, John R.; Singh, Surendra

    2014-11-01

    The objective of this paper is the exposition of a recently developed, novel Green's function Monte Carlo (GFMC) algorithm for the solution of nonlinear partial differential equations and its application to the modeling of the plasma sheath region around a cylindrical conducting object, carrying a potential and moving at low speeds through an otherwise neutral medium. The plasma sheath is modeled in equilibrium through the GFMC solution of the nonlinear Poisson-Boltzmann (NPB) equation. The traditional Monte Carlo based approaches for the solution of nonlinear equations are iterative in nature, involving branching stochastic processes which are used to calculate linear functionals of the solution of nonlinear integral equations. Over the last several years, one of the authors of this paper, K. Chatterjee, has been developing a philosophically different approach, where the linearization of the equation of interest is not required and hence there is no need for iteration and the simulation of branching processes. Instead, an approximate expression for the Green's function is obtained using perturbation theory, which is used to formulate the random walk equations within the problem sub-domains where the random walker makes its walks. However, as a trade-off, the dimensions of these sub-domains have to be restricted by the limitations imposed by perturbation theory. The greatest advantage of this approach is the ease and simplicity of parallelization stemming from the lack of the need for iteration, as a result of which the parallelization procedure is identical to the parallelization procedure for the GFMC solution of a linear problem. The application area of interest is in the modeling of the communication breakdown problem during a space vehicle's re-entry into the atmosphere. However, additional application areas are being explored in the modeling of electromagnetic propagation through the atmosphere/ionosphere in UHF/GPS applications.
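
    As a hedged illustration of the random-walk idea that Green's function Monte Carlo builds on (not the authors' perturbation-based nonlinear algorithm), here is a walk-on-spheres estimator for the linear Laplace equation on the unit square; the test function and all tolerances are illustrative.

```python
import numpy as np

def walk_on_spheres(p0, g, dist, eps=1e-3, n_walks=4000, seed=1):
    """Estimate u(p0) for laplacian(u) = 0 with boundary data g: jump to a
    uniform point on the largest circle inscribed at the current position,
    repeating until within eps of the boundary, then score the boundary
    value there (O(eps) bias)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_walks):
        p = np.array(p0, dtype=float)
        while (r := dist(p)) >= eps:
            theta = rng.uniform(0.0, 2.0 * np.pi)
            p += r * np.array([np.cos(theta), np.sin(theta)])
        total += g(p)
    return total / n_walks

# Unit square; u(x, y) = x*y is harmonic, so it is its own boundary data.
dist = lambda p: min(p[0], 1.0 - p[0], p[1], 1.0 - p[1])
g = lambda p: p[0] * p[1]
u_est = walk_on_spheres((0.3, 0.4), g, dist)   # exact value: 0.3 * 0.4 = 0.12
```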

  16. Periodicity characterization of orbital prediction error and Poisson series fitting

    NASA Astrophysics Data System (ADS)

    Bai, Xian-Zong; Chen, Lei; Tang, Guo-Jin

    2012-09-01

    Publicly available Two-Line Element Sets (TLE) contain no associated error or accuracy information. The historical-data-based method is a feasible choice for objects for which only TLE data are available. Most current TLE error analysis methods use polynomial fitting, which cannot represent periodic characteristics. This paper presents a methodology for periodicity characterization and Poisson series fitting for orbital prediction error based on historical orbital data. As an error-fitting function, the Poisson series can describe the variation of error with respect to propagation duration and the on-orbit position of objects. The Poisson coefficient matrices of each error component are fitted using the least squares method. Effects of polynomial terms, trigonometric terms, and mixed terms of the Poisson series are discussed. Substituting the time difference and mean anomaly into the Poisson series, one can obtain the error information at a specific time. Four satellites (Cosmos-2251, GPS-62, SLOSHSAT, TelStar-10) from four orbital types (LEO, MEO, HEO, GEO, respectively) were selected as examples to demonstrate and validate the method. The results indicated that periodic characteristics exist in all three components for the four objects, especially HEO and MEO. The periodicity characterization and Poisson series fitting could improve the accuracy of orbit covariance information. The Poisson series is a common form for describing orbital prediction error; the commonly used polynomial fitting is a special case of Poisson series fitting. The Poisson coefficient matrices can be obtained before close approach analysis. This method does not require any knowledge about how the state vectors are generated, so it can handle not only TLE data but also other orbit models and elements.
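
    A Poisson series in the propagation interval Δt and mean anomaly M combines polynomial, trigonometric, and mixed terms, and its coefficients can be fitted by ordinary least squares. A generic low-order sketch (not the paper's full formulation; the data are synthetic and the series truncation is illustrative):

```python
import numpy as np

def poisson_design(dt, M):
    """Low-order Poisson series basis in time difference dt and mean
    anomaly M: polynomial, trigonometric, and mixed terms."""
    return np.column_stack([
        np.ones_like(dt), dt, dt**2,        # polynomial part
        np.cos(M), np.sin(M),               # periodic part
        dt * np.cos(M), dt * np.sin(M),     # mixed terms
    ])

rng = np.random.default_rng(3)
dt = rng.uniform(0.0, 7.0, 500)             # days of propagation
M = rng.uniform(0.0, 2.0 * np.pi, 500)      # mean anomaly, rad
coef_true = np.array([0.1, 0.5, 0.02, 0.3, -0.2, 0.05, 0.08])
err = poisson_design(dt, M) @ coef_true     # synthetic prediction error
coef_hat, *_ = np.linalg.lstsq(poisson_design(dt, M), err, rcond=None)
```

With noiseless synthetic data the coefficients are recovered exactly; with real residuals the same solve gives the least-squares Poisson coefficient matrix.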

  17. Analysis of Time to Event Outcomes in Randomized Controlled Trials by Generalized Additive Models

    PubMed Central

    Argyropoulos, Christos; Unruh, Mark L.

    2015-01-01

    Background Randomized Controlled Trials almost invariably utilize the hazard ratio calculated with a Cox proportional hazard model as a treatment efficacy measure. Despite the widespread adoption of HRs, these provide a limited understanding of the treatment effect and may even provide a biased estimate when the assumption of proportional hazards in the Cox model is not verified by the trial data. Additional treatment effect measures on the survival probability or the time scale may be used to supplement HRs, but a framework for the simultaneous generation of these measures is lacking. Methods By splitting follow-up time at the nodes of a Gauss-Lobatto numerical quadrature rule, techniques for Poisson Generalized Additive Models (PGAM) can be adopted for flexible hazard modeling. Straightforward simulation post-estimation transforms PGAM estimates for the log hazard into estimates of the survival function. These in turn were used to calculate relative and absolute risks, or even differences in restricted mean survival time, between treatment arms. We illustrate our approach with extensive simulations and in two trials: IPASS (in which the proportionality of hazards was violated) and HEMO, a long-duration study conducted under evolving standards of care on a heterogeneous patient population. Findings PGAM can generate estimates of the survival function and the hazard ratio that are essentially identical to those obtained by Kaplan-Meier curve analysis and the Cox model. PGAMs can simultaneously provide multiple measures of treatment efficacy after a single data pass. Furthermore, they supported not only unadjusted (overall treatment effect) but also subgroup and adjusted analyses, while incorporating multiple time scales and accounting for non-proportional hazards in survival data. Conclusions By augmenting the HR conventionally reported, PGAMs have the potential to support the inferential goals of multiple stakeholders involved in the evaluation and appraisal of clinical trial
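
    The time-splitting device behind PGAM rests on the Poisson likelihood for piecewise-constant hazards: after splitting follow-up at cut points, the Poisson MLE of each interval hazard is events over person-time. A minimal sketch of that trick (splitting at two fixed cuts rather than quadrature nodes; the survival data are simulated and illustrative):

```python
import numpy as np

def split_follow_up(times, events, cuts):
    """Split each subject's follow-up at the cut points, returning total
    person-time T_k and event count d_k per interval (the 'Poisson trick'
    data layout for piecewise-exponential hazard models)."""
    T = np.zeros(len(cuts) - 1)
    d = np.zeros(len(cuts) - 1)
    for t, e in zip(times, events):
        for k, (a, b) in enumerate(zip(cuts[:-1], cuts[1:])):
            if t <= a:
                break                      # no exposure in later intervals
            T[k] += min(t, b) - a          # person-time spent in [a, b)
            d[k] += e * (a < t <= b)       # event, if it fell in this interval
    return T, d

# Exponential survival with true hazard 0.5, administrative censoring at t = 2
rng = np.random.default_rng(7)
t_event = rng.exponential(scale=2.0, size=20000)   # mean 1 / 0.5
times = np.minimum(t_event, 2.0)
events = (t_event <= 2.0).astype(int)
T, d = split_follow_up(times, events, cuts=[0.0, 1.0, 2.0])
hazard = d / T   # interval Poisson MLEs; both should sit near the true 0.5
```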

  18. Auxetic materials with large negative Poisson's ratios based on highly oriented carbon nanotube structures

    NASA Astrophysics Data System (ADS)

    Chen, Luzhuo; Liu, Changhong; Wang, Jiaping; Zhang, Wei; Hu, Chunhua; Fan, Shoushan

    2009-06-01

    Auxetic materials with large negative Poisson's ratios are fabricated by highly oriented carbon nanotube structures. The Poisson's ratio can be obtained down to -0.50. Furthermore, negative Poisson's ratios can be maintained in the carbon nanotube/polymer composites when the nanotubes are embedded, while the composites show much better mechanical properties including larger strain-to-failure (˜22%) compared to the pristine nanotube thin film (˜3%). A theoretical model is developed to predict the Poisson's ratios. It indicates that the large negative Poisson's ratios are caused by the realignment of curved nanotubes during stretching and the theoretical predictions agree well with the experimental results.

  19. Poisson's ratio over two centuries: challenging hypotheses

    PubMed Central

    Greaves, G. Neville

    2013-01-01

    This article explores Poisson's ratio, starting with the controversy concerning its magnitude and uniqueness in the context of the molecular and continuum hypotheses competing in the development of elasticity theory in the nineteenth century, moving on to its place in the development of materials science and engineering in the twentieth century, and concluding with its recent re-emergence as a universal metric for the mechanical performance of materials on any length scale. During these episodes France lost its scientific pre-eminence as paradigms switched from mathematical to observational, and accurate experiments became the prerequisite for scientific advance. The emergence of the engineering of metals followed, and subsequently the invention of composites—both somewhat separated from the discovery of quantum mechanics and crystallography, and illustrating the bifurcation of technology and science. Nowadays disciplines are reconnecting in the face of new scientific demands. During the past two centuries, though, the shape versus volume concept embedded in Poisson's ratio has remained invariant, but its application has exploded from its origins in describing the elastic response of solids and liquids, into areas such as materials with negative Poisson's ratio, brittleness, glass formation, and a re-evaluation of traditional materials. Moreover, the two contentious hypotheses have been reconciled in their complementarity within the hierarchical structure of materials and through computational modelling. PMID:24687094

  20. Complex Modelling Scheme Of An Additive Manufacturing Centre

    NASA Astrophysics Data System (ADS)

    Popescu, Liliana Georgeta

    2015-09-01

    This paper presents a modelling scheme sustaining the development of an additive manufacturing research centre model and its processes. This modelling is performed using IDEF0, the resulting process model representing the basic processes required in developing such a centre in any university. While the activities presented in this study are those recommended in general, they may be adapted to the specific circumstances of an existing research centre.

  1. Graded geometry and Poisson reduction

    SciTech Connect

    Cattaneo, A. S.; Zambon, M.

    2009-02-02

    The main result extends the Marsden-Ratiu reduction theorem in Poisson geometry, and is proven by means of graded geometry. In this note we provide the background material about graded geometry necessary for the proof. Further, we provide an alternative algebraic proof for the main result.

  2. Tuning the Poisson's Ratio of Biomaterials for Investigating Cellular Response

    PubMed Central

    Meggs, Kyle; Qu, Xin; Chen, Shaochen

    2013-01-01

    Cells sense and respond to mechanical forces, regardless of whether the source is from a normal tissue matrix, an adjacent cell or a synthetic substrate. In recent years, cell response to surface rigidity has been extensively studied by modulating the elastic modulus of poly(ethylene glycol) (PEG)-based hydrogels. In the context of biomaterials, Poisson's ratio, another fundamental material property parameter, has not been explored, primarily because of challenges involved in tuning the Poisson's ratio in biological scaffolds. Two-photon polymerization is used to fabricate suspended web structures that exhibit positive and negative Poisson's ratio (NPR), based on analytical models. NPR webs demonstrate biaxial expansion/compression behavior, as one or multiple cells apply local forces and move the structures. Unusual cell division on NPR structures is also demonstrated. This methodology can be used to tune the Poisson's ratio of several photocurable biomaterials and could have potential implications in the field of mechanobiology. PMID:24076754

  3. Comprehensive European dietary exposure model (CEDEM) for food additives.

    PubMed

    Tennant, David R

    2016-05-01

    European methods for assessing dietary exposures to nutrients, additives and other substances in food are limited by the availability of detailed food consumption data for all member states. A proposed comprehensive European dietary exposure model (CEDEM) applies summary data published by the European Food Safety Authority (EFSA) in a deterministic model based on an algorithm from the EFSA intake method for food additives. The proposed approach can predict estimates of food additive exposure provided in previous EFSA scientific opinions that were based on the full European food consumption database. PMID:26987377
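
    As a rough illustration of the deterministic approach described above, the Python sketch below sums consumption amounts multiplied by additive use levels and normalizes by body weight. All food categories, use levels, and the body weight are hypothetical placeholders, not values from the EFSA database or the CEDEM model itself.

```python
# All foods, use levels, and the body weight below are hypothetical placeholders.
foods = {                      # (g food per day, mg additive per kg food)
    "soft drinks": (250.0, 150.0),
    "desserts":    (120.0, 300.0),
    "sauces":      ( 30.0, 500.0),
}
body_weight_kg = 70.0

# Deterministic exposure: sum of (consumption x use level), per kg body weight.
mg_per_day = sum(g / 1000.0 * level for g, level in foods.values())
exposure = mg_per_day / body_weight_kg    # mg per kg body weight per day
print(f"estimated exposure: {exposure:.2f} mg/kg bw/day")
```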

  4. Additive-multiplicative rates model for recurrent events.

    PubMed

    Liu, Yanyan; Wu, Yuanshan; Cai, Jianwen; Zhou, Haibo

    2010-07-01

    Recurrent events are frequently encountered in biomedical studies. Evaluating the covariates effects on the marginal recurrent event rate is of practical interest. There are mainly two types of rate models for the recurrent event data: the multiplicative rates model and the additive rates model. We consider a more flexible additive-multiplicative rates model for analysis of recurrent event data, wherein some covariate effects are additive while others are multiplicative. We formulate estimating equations for estimating the regression parameters. The estimators for these regression parameters are shown to be consistent and asymptotically normally distributed under appropriate regularity conditions. Moreover, the estimator of the baseline mean function is proposed and its large sample properties are investigated. We also conduct simulation studies to evaluate the finite sample behavior of the proposed estimators. A medical study of patients with cystic fibrosis who suffered from recurrent pulmonary exacerbations is provided for illustration of the proposed method. PMID:20229314

  5. Accelerated Nucleation Due to Trace Additives: A Fluctuating Coverage Model.

    PubMed

    Poon, Geoffrey G; Peters, Baron

    2016-03-01

    We develop a theory to account for variable coverage of trace additives that lower the interfacial free energy for nucleation. The free energy landscape is based on classical nucleation theory and a statistical mechanical model for Langmuir adsorption. Dynamics are modeled by diffusion-controlled attachment and detachment of solutes and adsorbing additives. We compare the mechanism and kinetics from a mean-field model, a projection of the dynamics and free energy surface onto nucleus size, and a full two-dimensional calculation using Kramers-Langer-Berezhkovskii-Szabo theory. The fluctuating coverage model predicts rates more accurately than mean-field models of the same process primarily because it more accurately estimates the potential of mean force along the size coordinate. PMID:26485064

  6. Calculation of the Poisson cumulative distribution function

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert G.; Scheuer, Ernest M.

    1990-01-01

    A method for calculating the Poisson cdf (cumulative distribution function) is presented. The method avoids computer underflow and overflow during the process. The computer program uses this technique to calculate the Poisson cdf for arbitrary inputs. An algorithm that determines the Poisson parameter required to yield a specified value of the cdf is presented.
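
    A minimal Python sketch of the general idea: summing the Poisson pmf with terms formed in log space avoids underflow/overflow, and the inverse problem (finding the parameter that yields a specified cdf value) can be solved by bisection because the cdf is monotone in the parameter. This illustrates the technique, not the actual algorithm of the program described above.

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam); pmf terms are formed in log space
    (log-gamma for the factorial), so large lam neither under- nor overflows."""
    if k < 0:
        return 0.0
    total = 0.0
    for i in range(int(k) + 1):
        log_pmf = -lam + i * math.log(lam) - math.lgamma(i + 1)
        total += math.exp(log_pmf)
    return min(total, 1.0)

def poisson_param_for_cdf(k, p):
    """Find lam such that P(X <= k) = p. The cdf is decreasing in lam,
    so a simple bisection suffices."""
    lo, hi = 1e-9, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(k, mid) > p:
            lo = mid        # cdf too high: lam is too small
        else:
            hi = mid
    return 0.5 * (lo + hi)
```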

  7. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
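
    The distinction between the two error models can be sketched on synthetic data (the data below are invented, not the satellite measurements studied in the letter): the multiplicative model is fitted as an ordinary linear regression in log space, while the additive residuals show the non-constant spread the letter describes.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.gamma(shape=2.0, scale=5.0, size=500)            # "true" daily precip (mm)
measured = 0.8 * truth**1.1 * np.exp(rng.normal(0.0, 0.3, size=500))

# Additive view: measured = truth + e. The residual spread grows with
# rainfall amount (systematic error leaking into the random part).
e_add = measured - truth
upper_half = truth > np.median(truth)

# Multiplicative view: log(measured) = log(a) + b*log(truth) + eps,
# i.e. an ordinary linear regression in log space.
X = np.column_stack([np.ones_like(truth), np.log(truth)])
(ln_a, b), *_ = np.linalg.lstsq(X, np.log(measured), rcond=None)
eps = np.log(measured) - X @ np.array([ln_a, b])
```

    Under the multiplicative fit the residuals `eps` have roughly constant spread, whereas the additive residuals are much noisier for large rainfall amounts.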

  8. Modelling the behaviour of additives in gun barrels

    NASA Astrophysics Data System (ADS)

    Rhodes, N.; Ludwig, J. C.

    1986-01-01

    A mathematical model which predicts the flow and heat transfer in a gun barrel is described. The model is transient and two-dimensional; equations are solved for the velocities and enthalpies of the gas phase, which arises from the combustion of the propellant and cartridge case, for the particle additives released from the case, and for the volume fractions of the gas and particles. Closure of the equations is obtained using a two-equation turbulence model. Preliminary calculations are described in which the proportion of particle additives in the cartridge case was altered. The model gives a good prediction of the ballistic performance and the gas-to-wall heat transfer. However, the expected magnitude of the reduction in heat transfer when particles are present is not predicted. The predictions of gas flow invalidate some of the assumptions made regarding case and propellant behavior during combustion, and further work is required to investigate these effects and other possible interactions, both chemical and physical, between gas and particles.

  9. A generalized gyrokinetic Poisson solver

    SciTech Connect

    Lin, Z.; Lee, W.W.

    1995-03-01

    A generalized gyrokinetic Poisson solver has been developed, which employs local operations in the configuration space to compute the polarization density response. The new technique is based on the actual physical process of gyrophase-averaging. It is useful for nonlocal simulations using general geometry equilibrium. Since it utilizes local operations rather than the global ones such as FFT, the new method is most amenable to massively parallel algorithms.

  10. An Additional Symmetry in the Weinberg-Salam Model

    SciTech Connect

    Bakker, B.L.G.; Veselov, A.I.; Zubkov, M.A.

    2005-06-01

    An additional Z{sub 6} symmetry hidden in the fermion and Higgs sectors of the Standard Model has been found recently. It has a singular nature and is connected to the centers of the SU(3) and SU(2) subgroups of the gauge group. A lattice regularization of the Standard Model was constructed that possesses this symmetry. In this paper, we report our results on the numerical simulation of its electroweak sector.

  11. Modeling uranium transport in acidic contaminated groundwater with base addition

    SciTech Connect

    Zhang, Fan; Luo, Wensui; Parker, Jack C.; Brooks, Scott C; Watson, David B; Jardine, Philip; Gu, Baohua

    2011-01-01

    This study investigates reactive transport modeling in a column of uranium(VI)-contaminated sediments with base additions in the circulating influent. The groundwater and sediment exhibit oxic conditions with low pH, high concentrations of NO{sub 3}{sup -}, SO{sub 4}{sup 2-}, U and various metal cations. Preliminary batch experiments indicate that additions of strong base induce rapid immobilization of U for this material. In the column experiment that is the focus of the present study, effluent groundwater was titrated with NaOH solution in an inflow reservoir before reinjection to gradually increase the solution pH in the column. An equilibrium hydrolysis, precipitation and ion exchange reaction model developed through simulation of the preliminary batch titration experiments predicted faster reduction of aqueous Al than observed in the column experiment. The model was therefore modified to consider reaction kinetics for the precipitation and dissolution processes which are the major mechanism for Al immobilization. The combined kinetic and equilibrium reaction model adequately described variations in pH, aqueous concentrations of metal cations (Al, Ca, Mg, Sr, Mn, Ni, Co), sulfate and U(VI). The experimental and modeling results indicate that U(VI) can be effectively sequestered with controlled base addition due to sorption by slowly precipitated Al with pH-dependent surface charge. The model may prove useful to predict field-scale U(VI) sequestration and remediation effectiveness.

  12. Using Set Model for Learning Addition of Integers

    ERIC Educational Resources Information Center

    Lestari, Umi Puji; Putri, Ratu Ilma Indra; Hartono, Yusuf

    2015-01-01

    This study aims to investigate how a set model can help students' understanding of addition of integers in fourth grade. The study was carried out with 23 students and a teacher of IVC SD Iba Palembang in January 2015. This study is a design research that also promotes PMRI as the underlying design context and activity. Results showed that the…

  13. Numerical Solution of 3D Poisson-Nernst-Planck Equations Coupled with Classical Density Functional Theory for Modeling Ion and Electron Transport in a Confined Environment

    SciTech Connect

    Meng, Da; Zheng, Bin; Lin, Guang; Sushko, Maria L.

    2014-08-29

    We have developed efficient numerical algorithms for the solution of 3D steady-state Poisson-Nernst-Planck equations (PNP) with excess chemical potentials described by the classical density functional theory (cDFT). The coupled PNP equations are discretized by a finite difference scheme and solved iteratively by the Gummel method with relaxation. The Nernst-Planck equations are transformed into Laplace equations through the Slotboom transformation. An algebraic multigrid method is then applied to efficiently solve the Poisson equation and the transformed Nernst-Planck equations. A novel strategy for calculating excess chemical potentials through fast Fourier transforms is proposed, which reduces computational complexity from O(N^2) to O(N log N), where N is the number of grid points. Integrals involving the Dirac delta function are evaluated directly by coordinate transformation, which yields a more accurate result compared to applying numerical quadrature to an approximated delta function. Numerical results for ion and electron transport in solid electrolyte for Li ion batteries are shown to be in good agreement with the experimental data and the results from previous studies.
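
    The O(N^2)-to-O(N log N) reduction comes from evaluating convolution-type integrals with FFTs. A minimal 1D Python sketch (with an invented density profile and kernel, not the cDFT weight functions of the paper) shows the equivalence of direct and FFT-based circular convolution:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
rho = rng.random(n)                          # a density profile on a periodic 1D grid
w = np.exp(-np.linspace(-5.0, 5.0, n)**2)    # a smoothing kernel (stand-in weight)

# Direct circular convolution: O(N^2) operations.
direct = np.array([sum(rho[j] * w[(i - j) % n] for j in range(n))
                   for i in range(n)])

# FFT-based convolution: O(N log N) operations, same result.
fast = np.real(np.fft.ifft(np.fft.fft(rho) * np.fft.fft(w)))
```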

  14. Estimating soil water retention using soil component additivity model

    NASA Astrophysics Data System (ADS)

    Zeiliger, A.; Ermolaeva, O.; Semenov, V.

    2009-04-01

    Soil water retention is a major soil hydraulic property that governs soil functioning in ecosystems and greatly affects soil management. Data on soil water retention are used in research and applications in hydrology, agronomy, meteorology, ecology, environmental protection, and many other soil-related fields. Soil organic matter content and composition affect both soil structure and adsorption properties; therefore water retention may be affected by changes in soil organic matter that occur because of both climate change and modifications of management practices. Thus, effects of organic matter on soil water retention should be understood and quantified. Measurement of soil water retention is relatively time-consuming, and becomes impractical when soil hydrologic estimates are needed for large areas. One approach to soil water retention estimation from readily available data is based on the hypothesis that soil water retention may be estimated as an additive function obtained by summing up water retention of pore subspaces associated with soil textural and/or structural components and organic matter. The additivity model was tested with 550 soil samples from the international database UNSODA and 2667 soil samples from the European database HYPRES, containing all textural soil classes of the USDA soil texture classification. The root mean square errors (RMSEs) of the volumetric water content estimates for UNSODA vary from 0.021 m3m-3 for coarse sandy loam to 0.075 m3m-3 for sandy clay. Obtained RMSEs are at the lower end of the RMSE range for regression-based water retention estimates found in literature. Including retention estimates of organic matter significantly improved RMSEs. The attained accuracy warrants testing the 'additivity' model with additional soil data and improving this model to accommodate various types of soil structure. Keywords: soil water retention, soil components, additive model, soil texture, organic matter.
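
    The additivity hypothesis can be sketched in Python: the water retention of a mixture is estimated as the fraction-weighted sum of component retention curves. The van Genuchten parameters and fractions below are illustrative placeholders, not fitted values from UNSODA or HYPRES.

```python
def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Illustrative van Genuchten retention curve; h is suction head (cm)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h)**n)**m

# (mass fraction, (theta_r, theta_s, alpha [1/cm], n)) -- placeholder values
components = {
    "sand":    (0.60, (0.045, 0.43, 0.145, 2.68)),
    "silt":    (0.25, (0.034, 0.46, 0.016, 1.37)),
    "clay":    (0.10, (0.068, 0.38, 0.008, 1.09)),
    "organic": (0.05, (0.010, 0.70, 0.013, 1.20)),
}

def additive_retention(h):
    """Water content of the mixture: fraction-weighted sum of component curves."""
    return sum(f * van_genuchten(h, *p) for f, p in components.values())
```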

  15. Predicting stabilizing mutations in proteins using Poisson-Boltzmann based models: study of unfolded state ensemble models and development of a successful binary classifier based on residue interaction energies.

    PubMed

    Estrada, Jorge; Echenique, Pablo; Sancho, Javier

    2015-12-14

    In many cases the stability of a protein has to be increased to permit its biotechnological use. Rational methods of protein stabilization based on optimizing electrostatic interactions have provided some notable successful predictions. However, the precise calculation of stabilization energies remains challenging, one reason being that the electrostatic effects on the unfolded state are often neglected. We have explored here the feasibility of incorporating Poisson-Boltzmann model electrostatic calculations performed on representations of the unfolded state as large ensembles of geometrically optimized conformations calculated using the ProtSA server. Using a data set of 80 electrostatic mutations experimentally tested in two-state proteins, the predictive performance of several such models has been compared to that of a simple one that considers an unfolded structure of non-interacting residues. The unfolded ensemble models, while showing correlation between the predicted stabilization values and the experimental ones, are worse than the simple model, suggesting that the ensembles do not capture well the energetics of the unfolded state. A more attainable goal is classifying potential mutations as either stabilizing or non-stabilizing, rather than accurately calculating their stabilization energies. To implement a fast classification method that can assist in selecting stabilizing mutations, we have used a much simpler electrostatic model based only on the native structure and have determined its precision using different stabilizing energy thresholds. The binary classifier developed finds 7 true stabilizing mutants out of every 10 proposed candidates and can be used as a robust tool to propose stabilizing mutations. PMID:26530878

  16. Electrostatic forces in the Poisson-Boltzmann systems

    PubMed Central

    Xiao, Li; Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2013-01-01

    Continuum modeling of electrostatic interactions based upon numerical solutions of the Poisson-Boltzmann equation has been widely used in structural and functional analyses of biomolecules. A limitation of the numerical strategies is that it is conceptually difficult to incorporate these types of models into molecular mechanics simulations, mainly because of the issue in assigning atomic forces. In this theoretical study, we first derived the Maxwell stress tensor for molecular systems obeying the full nonlinear Poisson-Boltzmann equation. We further derived formulations of analytical electrostatic forces given the Maxwell stress tensor and discussed the relations of the formulations with those published in the literature. We showed that the formulations derived from the Maxwell stress tensor require a weaker condition for its validity, applicable to nonlinear Poisson-Boltzmann systems with a finite number of singularities such as atomic point charges and the existence of discontinuous dielectric as in the widely used classical piece-wise constant dielectric models. PMID:24028101

  17. Additions to Mars Global Reference Atmospheric Model (Mars-GRAM)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.

    1991-01-01

    Three major additions or modifications were made to the Mars Global Reference Atmospheric Model (Mars-GRAM): (1) in addition to the interactive version, a new batch version is available, which uses NAMELIST input, and is completely modular, so that the main driver program can easily be replaced by any calling program, such as a trajectory simulation program; (2) both the interactive and batch versions now have an option for treating local-scale dust storm effects, rather than just the global-scale dust storms in the original Mars-GRAM; and (3) the Zurek wave perturbation model was added, to simulate the effects of tidal perturbations, in addition to the random (mountain wave) perturbation model of the original Mars-GRAM. A minor modification has also been made which allows heights to go below local terrain height and return realistic pressure, density, and temperature (not the surface values) as returned by the original Mars-GRAM. This feature will allow simulations of Mars rover paths which might go into local valley areas which lie below the average height of the present, rather coarse-resolution, terrain height data used by Mars-GRAM. Sample input and output of both the interactive and batch version of Mars-GRAM are presented.

  18. Additions to Mars Global Reference Atmospheric Model (MARS-GRAM)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; James, Bonnie

    1992-01-01

    Three major additions or modifications were made to the Mars Global Reference Atmospheric Model (Mars-GRAM): (1) in addition to the interactive version, a new batch version is available, which uses NAMELIST input, and is completely modular, so that the main driver program can easily be replaced by any calling program, such as a trajectory simulation program; (2) both the interactive and batch versions now have an option for treating local-scale dust storm effects, rather than just the global-scale dust storms in the original Mars-GRAM; and (3) the Zurek wave perturbation model was added, to simulate the effects of tidal perturbations, in addition to the random (mountain wave) perturbation model of the original Mars-GRAM. A minor modification was also made which allows heights to go 'below' local terrain height and return 'realistic' pressure, density, and temperature, and not the surface values, as returned by the original Mars-GRAM. This feature will allow simulations of Mars rover paths which might go into local 'valley' areas which lie below the average height of the present, rather coarse-resolution, terrain height data used by Mars-GRAM. Sample input and output of both the interactive and batch versions of Mars-GRAM are presented.

  19. Understanding Rasch Measurement: The Rasch Model, Additive Conjoint Measurement, and New Models of Probabilistic Measurement Theory.

    ERIC Educational Resources Information Center

    Karabatsos, George

    2001-01-01

    Describes similarities and differences between additive conjoint measurement and the Rasch model, and formalizes some new nonparametric item response models that are, in a sense, probabilistic measurement theory models. Applies these new models to published and simulated data. (SLD)

  20. Electrodiffusion Models of Neurons and Extracellular Space Using the Poisson-Nernst-Planck Equations—Numerical Simulation of the Intra- and Extracellular Potential for an Axon Model

    PubMed Central

    Pods, Jurgis; Schönke, Johannes; Bastian, Peter

    2013-01-01

    In neurophysiology, extracellular signals—as measured by local field potentials (LFP) or electroencephalography—are of great significance. Their exact biophysical basis is, however, still not fully understood. We present a three-dimensional model exploiting the cylinder symmetry of a single axon in extracellular fluid based on the Poisson-Nernst-Planck equations of electrodiffusion. The propagation of an action potential along the axonal membrane is investigated by means of numerical simulations. Special attention is paid to the Debye layer, the region with strong concentration gradients close to the membrane, which is explicitly resolved by the computational mesh. We focus on the evolution of the extracellular electric potential. A characteristic up-down-up LFP waveform in the far-field is found. Close to the membrane, the potential shows a more intricate shape. A comparison with the widely used line source approximation reveals similarities and demonstrates the strong influence of membrane currents. However, the electrodiffusion model shows another signal component stemming directly from the intracellular electric field, called the action potential echo. Depending on the neuronal configuration, this might have a significant effect on the LFP. In these situations, electrodiffusion models should be used for quantitative comparisons with experimental data. PMID:23823244

  1. Backbone additivity in the transfer model of protein solvation

    PubMed Central

    Hu, Char Y; Kokubo, Hironori; Lynch, Gillian C; Bolen, D Wayne; Pettitt, B Montgomery

    2010-01-01

    The transfer model implying additivity of the peptide backbone free energy of transfer is computationally tested. Molecular dynamics simulations are used to determine the extent of change in transfer free energy (ΔGtr) with increase in chain length of oligoglycine with capped end groups. Solvation free energies of oligoglycine models of varying lengths in pure water and in the osmolyte solutions, 2M urea and 2M trimethylamine N-oxide (TMAO), were calculated from simulations of all atom models, and ΔGtr values for peptide backbone transfer from water to the osmolyte solutions were determined. The results show that the transfer free energies change linearly with increasing chain length, demonstrating the principle of additivity, and provide values in reasonable agreement with experiment. The peptide backbone transfer free energy contributions arise from van der Waals interactions in the case of transfer to urea, but from electrostatics on transfer to TMAO solution. The simulations used here allow for the calculation of the solvation and transfer free energy of longer oligoglycine models to be evaluated than is currently possible through experiment. The peptide backbone unit computed transfer free energy of −54 cal/mol/M compares quite favorably with −43 cal/mol/M determined experimentally. PMID:20306490

  2. Backbone Additivity in the Transfer Model of Protein Solvation

    SciTech Connect

    Hu, Char Y.; Kokubo, Hironori; Lynch, Gillian C.; Bolen, D Wayne; Pettitt, Bernard M.

    2010-05-01

    The transfer model implying additivity of the peptide backbone free energy of transfer is computationally tested. Molecular dynamics simulations are used to determine the extent of change in transfer free energy (ΔGtr) with increase in chain length of oligoglycine with capped end groups. Solvation free energies of oligoglycine models of varying lengths in pure water and in the osmolyte solutions, 2M urea and 2M trimethylamine N-oxide (TMAO), were calculated from simulations of all atom models, and ΔGtr values for peptide backbone transfer from water to the osmolyte solutions were determined. The results show that the transfer free energies change linearly with increasing chain length, demonstrating the principle of additivity, and provide values in reasonable agreement with experiment. The peptide backbone transfer free energy contributions arise from van der Waals interactions in the case of transfer to urea, but from electrostatics on transfer to TMAO solution. The simulations used here allow for the calculation of the solvation and transfer free energy of longer oligoglycine models to be evaluated than is currently possible through experiment. The peptide backbone unit computed transfer free energy of –54 cal/mol/M compares quite favorably with –43 cal/mol/M determined experimentally.
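
    The additivity test amounts to checking that ΔGtr grows linearly with chain length, the slope being the per-backbone-unit contribution. A short Python sketch with invented numbers (chosen only to land near the paper's −54 cal/mol/M figure, not taken from its simulations):

```python
import numpy as np

# Invented per-length transfer free energies (cal/mol) for capped oligoglycine,
# water -> 1M osmolyte; chosen only to illustrate the linearity/additivity test.
n_res = np.array([1, 2, 3, 4, 5])
dG_tr = np.array([-60.0, -112.0, -166.0, -221.0, -273.0])

# The least-squares slope is the per-backbone-unit transfer free energy.
slope, intercept = np.polyfit(n_res, dG_tr, 1)
print(f"per-backbone-unit contribution: {slope:.1f} cal/mol/M")
```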

  3. Multiscale and Multiphysics Modeling of Additive Manufacturing of Advanced Materials

    NASA Technical Reports Server (NTRS)

    Liou, Frank; Newkirk, Joseph; Fan, Zhiqiang; Sparks, Todd; Chen, Xueyang; Fletcher, Kenneth; Zhang, Jingwei; Zhang, Yunlu; Kumar, Kannan Suresh; Karnati, Sreekar

    2015-01-01

    The objective of this proposed project is to research and develop a prediction tool for advanced additive manufacturing (AAM) processes for advanced materials and develop experimental methods to provide fundamental properties and establish validation data. Aircraft structures and engines demand materials that are stronger, useable at much higher temperatures, provide less acoustic transmission, and enable more aeroelastic tailoring than those currently used. Significant improvements in properties can only be achieved by processing the materials under nonequilibrium conditions, such as AAM processes. AAM processes encompass a class of processes that use a focused heat source to create a melt pool on a substrate. Examples include Electron Beam Freeform Fabrication and Direct Metal Deposition. These types of additive processes enable fabrication of parts directly from CAD drawings. To achieve the desired material properties and geometries of the final structure, assessing the impact of process parameters and predicting optimized conditions with numerical modeling as an effective prediction tool is necessary. The targets for the processing are multiple and at different spatial scales, and the physical phenomena associated occur in multiphysics and multiscale. In this project, the research work has been developed to model AAM processes in a multiscale and multiphysics approach. A macroscale model was developed to investigate the residual stresses and distortion in AAM processes. A sequentially coupled, thermomechanical, finite element model was developed and validated experimentally. The results showed the temperature distribution, residual stress, and deformation within the formed deposits and substrates. A mesoscale model was developed to include heat transfer, phase change with mushy zone, incompressible free surface flow, solute redistribution, and surface tension. Because of excessive computing time needed, a parallel computing approach was also tested. 

  4. Tunable negative Poisson's ratio in hydrogenated graphene.

    PubMed

    Jiang, Jin-Wu; Chang, Tienchong; Guo, Xingming

    2016-09-21

    We perform molecular dynamics simulations to investigate the effect of hydrogenation on the Poisson's ratio of graphene. It is found that the value of the Poisson's ratio of graphene can be effectively tuned from positive to negative by varying the percentage of hydrogenation. Specifically, the Poisson's ratio decreases with an increase in the percentage of hydrogenation, and reaches a minimum value of -0.04 when the percentage of hydrogenation is about 50%. The Poisson's ratio starts to increase upon a further increase of the percentage of hydrogenation. The appearance of a minimum negative Poisson's ratio in the hydrogenated graphene is attributed to the suppression of the hydrogenation-induced ripples during the stretching of graphene. Our results demonstrate that hydrogenation is a valuable approach for tuning the Poisson's ratio from positive to negative in graphene. PMID:27536878
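
    Poisson's ratio is the negative ratio of transverse to axial strain; from simulation strain data it can be estimated as a least-squares slope. A Python sketch with an invented strain response (not the MD data of the paper), constructed so the ratio comes out at the −0.04 minimum reported above:

```python
import numpy as np

def poisson_ratio(axial_strain, transverse_strain):
    """nu = -d(eps_transverse)/d(eps_axial), estimated by a least-squares slope."""
    slope = np.polyfit(axial_strain, transverse_strain, 1)[0]
    return -slope

# Invented strain response mimicking a sheet that widens slightly under tension:
eps_axial = np.linspace(0.0, 0.02, 11)
eps_trans = 0.04 * eps_axial            # lateral expansion -> negative nu
```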

  5. Addition Table of Colours: Additive and Subtractive Mixtures Described Using a Single Reasoning Model

    ERIC Educational Resources Information Center

    Mota, A. R.; Lopes dos Santos, J. M. B.

    2014-01-01

    Students' misconceptions concerning colour phenomena and the apparent complexity of the underlying concepts--due to the different domains of knowledge involved--make its teaching very difficult. We have developed and tested a teaching device, the addition table of colours (ATC), that encompasses additive and subtractive mixtures in a single…

  6. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. PMID:25649961

  7. Additive Manufacturing of Medical Models--Applications in Rhinology.

    PubMed

    Raos, Pero; Klapan, Ivica; Galeta, Tomislav

    2015-09-01

In the paper we are introducing guidelines and suggestions for the use of 3D image processing software in head pathology diagnostics, and procedures for obtaining a physical medical model by additive manufacturing/rapid prototyping techniques, bearing in mind the improvement of surgery performance, its maximum security and faster postoperative recovery of patients. This approach has been verified in two case reports. In the treatment we used intelligent classifier-schemes for abnormal patterns using a computer-based system for 3D-virtual and endoscopic assistance in rhinology, with appropriate visualization of anatomy and pathology within the nose, paranasal sinuses, and skull base area. PMID:26898064

  8. Surface reconstruction through poisson disk sampling.

    PubMed

    Hou, Wenguang; Xu, Zekai; Qin, Nannan; Xiong, Dongping; Ding, Mingyue

    2015-01-01

This paper intends to generate the approximate Voronoi diagram in the geodesic metric for some unbiased samples selected from the original points. The mesh model of the seeds is then constructed on the basis of the Voronoi diagram. Rather than constructing the Voronoi diagram for all original points, the proposed strategy works around the obstacle that geodesic distances among neighboring points are sensitive to the nearest-neighbor definition. The reconstructed model is thus a level-of-detail representation of the original points. Hence, our main motivation is to deal with redundant scattered points. In implementation, Poisson disk sampling is taken to select seeds and helps to produce the Voronoi diagram. Adaptive reconstructions can be achieved by slightly changing the uniform strategy in selecting seeds. Behaviors of this method are investigated and accuracy evaluations are done. Experimental results show the proposed method is reliable and effective. PMID:25915744
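Poisson disk sampling enforces a minimum separation between accepted seeds. A minimal dart-throwing sketch in the Euclidean metric (the paper works in the geodesic metric on the scanned surface; the radius `r`, the synthetic cloud, and the greedy acceptance rule are illustrative assumptions, not the paper's implementation):

```python
import math
import random

def poisson_disk_sample(points, r):
    """Greedy dart throwing: accept a point only if it lies at least r
    away from every previously accepted seed."""
    seeds = []
    for p in points:
        if all(math.dist(p, q) >= r for q in seeds):
            seeds.append(p)
    return seeds

# A synthetic scattered point cloud in the unit square stands in for
# the redundant scanned points.
random.seed(0)
cloud = [(random.random(), random.random()) for _ in range(2000)]
seeds = poisson_disk_sample(cloud, r=0.1)
```

The accepted seed set is maximal with respect to the cloud: every rejected point lies within `r` of some seed, which is what makes the seeds suitable sites for a Voronoi diagram at a chosen level of detail.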

  9. Surface Reconstruction through Poisson Disk Sampling

    PubMed Central

    Hou, Wenguang; Xu, Zekai; Qin, Nannan; Xiong, Dongping; Ding, Mingyue

    2015-01-01

This paper intends to generate the approximate Voronoi diagram in the geodesic metric for some unbiased samples selected from the original points. The mesh model of the seeds is then constructed on the basis of the Voronoi diagram. Rather than constructing the Voronoi diagram for all original points, the proposed strategy works around the obstacle that geodesic distances among neighboring points are sensitive to the nearest-neighbor definition. The reconstructed model is thus a level-of-detail representation of the original points. Hence, our main motivation is to deal with redundant scattered points. In implementation, Poisson disk sampling is taken to select seeds and helps to produce the Voronoi diagram. Adaptive reconstructions can be achieved by slightly changing the uniform strategy in selecting seeds. Behaviors of this method are investigated and accuracy evaluations are done. Experimental results show the proposed method is reliable and effective. PMID:25915744

  10. Multiscale Modeling of Powder Bed–Based Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Markl, Matthias; Körner, Carolin

    2016-07-01

Powder bed fusion processes are additive manufacturing technologies that are expected to induce the third industrial revolution. Components are built up layer by layer in a powder bed by selectively melting confined areas, according to sliced 3D model data. This technique allows for the manufacturing of highly complex geometries that are hardly machinable with conventional technologies. However, the underlying physical phenomena are sparsely understood and difficult to observe during processing. Therefore, an intensive and expensive trial-and-error principle is applied to produce components with the desired dimensional accuracy, material characteristics, and mechanical properties. This review presents numerical modeling approaches on multiple length scales and timescales to describe different aspects of powder bed fusion processes. In combination with tailored experiments, the numerical results enlarge the process understanding of the underlying physical mechanisms and support the development of suitable process strategies and component topologies.

  11. [Critical of the additive model of the randomized controlled trial].

    PubMed

    Boussageon, Rémy; Gueyffier, François; Bejan-Angoulvant, Theodora; Felden-Dominiak, Géraldine

    2008-01-01

    Randomized, double-blind, placebo-controlled clinical trials are currently the best way to demonstrate the clinical effectiveness of drugs. Its methodology relies on the method of difference (John Stuart Mill), through which the observed difference between two groups (drug vs placebo) can be attributed to the pharmacological effect of the drug being tested. However, this additive model can be questioned in the event of statistical interactions between the pharmacological and the placebo effects. Evidence in different domains has shown that the placebo effect can influence the effect of the active principle. This article evaluates the methodological, clinical and epistemological consequences of this phenomenon. Topics treated include extrapolating results, accounting for heterogeneous results, demonstrating the existence of several factors in the placebo effect, the necessity to take these factors into account for given symptoms or pathologies, as well as the problem of the "specific" effect. PMID:18387273

  12. Alternative Derivations for the Poisson Integral Formula

    ERIC Educational Resources Information Center

    Chen, J. T.; Wu, C. S.

    2006-01-01

    Poisson integral formula is revisited. The kernel in the Poisson integral formula can be derived in a series form through the direct BEM free of the concept of image point by using the null-field integral equation in conjunction with the degenerate kernels. The degenerate kernels for the closed-form Green's function and the series form of Poisson…
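For reference, the closed-form result whose kernel the series derivation above targets is the classical Poisson integral formula for the unit disk with boundary data f (a standard statement supplied here, since the abstract is truncated):

```latex
u(r,\theta) \;=\; \frac{1}{2\pi}\int_{0}^{2\pi}
\frac{1-r^{2}}{\,1-2r\cos(\theta-\phi)+r^{2}\,}\, f(\phi)\, d\phi,
\qquad 0 \le r < 1 .
```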

  13. Metal [100] Nanowires with Negative Poisson's Ratio.

    PubMed

    Ho, Duc Tam; Kwon, Soon-Yong; Kim, Sung Youb

    2016-01-01

When materials are stretched, lateral contraction is commonly observed. This is because Poisson's ratio, the quantity that describes the relationship between lateral strain and applied strain, is positive for nearly all materials. There are some reported structures and materials having a negative Poisson's ratio. However, most of them are at the macroscale, and reentrant structures and rigid rotating units are the main mechanisms for their negative Poisson's ratio behavior. Here, with numerical and theoretical evidence, we show that metal [100] nanowires with asymmetric cross-sections such as rectangles or ellipses can exhibit negative Poisson's ratio behavior. Furthermore, the negative Poisson's ratio behavior can be further improved by introducing a hole inside the asymmetric nanowires. We show that the surface effect inducing the asymmetric stresses inside the nanowires is a main origin of the superior property. PMID:27282358

  14. Negative Poisson's Ratio in Single-Layer Graphene Ribbons.

    PubMed

    Jiang, Jin-Wu; Park, Harold S

    2016-04-13

    The Poisson's ratio characterizes the resultant strain in the lateral direction for a material under longitudinal deformation. Though negative Poisson's ratios (NPR) are theoretically possible within continuum elasticity, they are most frequently observed in engineered materials and structures, as they are not intrinsic to many materials. In this work, we report NPR in single-layer graphene ribbons, which results from the compressive edge stress induced warping of the edges. The effect is robust, as the NPR is observed for graphene ribbons with widths smaller than about 10 nm, and for tensile strains smaller than about 0.5% with NPR values reaching as large as -1.51. The NPR is explained analytically using an inclined plate model, which is able to predict the Poisson's ratio for graphene sheets of arbitrary size. The inclined plate model demonstrates that the NPR is governed by the interplay between the width (a bulk property), and the warping amplitude of the edge (an edge property), which eventually yields a phase diagram determining the sign of the Poisson's ratio as a function of the graphene geometry. PMID:26986994

  15. An Advanced Manipulator For Poisson Series With Numerical Coefficients

    NASA Astrophysics Data System (ADS)

    Biscani, Francesco; Casotto, S.

    2006-06-01

The availability of an efficient and featureful manipulator for Poisson series with numerical coefficients is a standard need for celestial mechanicians and has arisen during our work on the analytical development of the Tide-Generating-Potential (TGP). In the harmonic expansion of the TGP the Poisson series appearing in the theories of motion of the celestial bodies are subjected to a wide set of mathematical operations, ranging from simple additions and multiplications to more sophisticated operations on Legendre polynomials and spherical harmonics with Poisson series as arguments. To perform these operations we have developed an algebraic manipulator, called Piranha, structured as an object-oriented multi-platform C++ library. Piranha handles series with real and complex coefficients, and operates with an arbitrary degree of precision. It supports advanced features such as trigonometric operations and the generation of special functions from Poisson series. Piranha is provided with a proof-of-concept, multi-platform GUI, which serves as a testbed and benchmark for the library. We describe Piranha's architecture and characteristics, what it accomplishes currently and how it will be extended in the future (e.g., to handle series with symbolic coefficients in a consistent fashion with its current design).

  16. Universality of Poisson indicator and Fano factor of transport event statistics in ion channels and enzyme kinetics.

    PubMed

    Chaudhury, Srabanti; Cao, Jianshu; Sinitsyn, Nikolai A

    2013-01-17

We consider a generic stochastic model of ion transport through a single channel with arbitrary internal structure and kinetic rates of transitions between internal states. This model is also applicable to describe the kinetics of a class of enzymes in which turnover events correspond to the conversion of substrate into product by a single enzyme molecule. We show that measurement of the statistics of single-molecule transition times through the channel contains only restricted information about the internal structure of the channel. In particular, the most accessible flux fluctuation characteristics, such as the Poisson indicator (P) and the Fano factor (F) as a function of solute concentration, depend only on three parameters in addition to the parameters of the Michaelis-Menten curve that characterizes the average current through the channel. Nevertheless, measurement of the Poisson indicator or Fano factor for such renewal processes can discriminate reactions with multiple intermediate steps as well as provide valuable information about the internal kinetic rates. PMID:23198705
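The two fluctuation measures can be illustrated numerically. A sketch assuming the common count-statistics definitions F = Var(n)/⟨n⟩ and P = F − 1 for events counted in fixed time windows (the paper's renewal-theory definitions in terms of waiting-time moments may differ); for a Poisson process F → 1 and P → 0:

```python
import math
import random

def poisson_draw(rng, lam):
    """Knuth's algorithm for a Poisson-distributed variate."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def fano_and_poisson_indicator(counts):
    """Fano factor F = Var/Mean of event counts per window, and the
    Poisson indicator taken here as P = F - 1 (assumed definitions)."""
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return var / mean, var / mean - 1.0

# Transport events counted in many windows of a Poisson-like process.
rng = random.Random(1)
counts = [poisson_draw(rng, 5.0) for _ in range(20000)]
F, P = fano_and_poisson_indicator(counts)
```

Deviations of F from 1 (or P from 0) in measured data would then signal non-Poissonian transport, e.g. multiple intermediate steps.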

  17. Negative Poisson's ratio materials via isotropic interactions.

    PubMed

    Rechtsman, Mikael C; Stillinger, Frank H; Torquato, Salvatore

    2008-08-22

    We show that under tension a classical many-body system with only isotropic pair interactions in a crystalline state can, counterintuitively, have a negative Poisson's ratio, or auxetic behavior. We derive the conditions under which the triangular lattice in two dimensions and lattices with cubic symmetry in three dimensions exhibit a negative Poisson's ratio. In the former case, the simple Lennard-Jones potential can give rise to auxetic behavior. In the latter case, a negative Poisson's ratio can be exhibited even when the material is constrained to be elastically isotropic. PMID:18764632

  18. Percolation model with an additional source of disorder.

    PubMed

    Kundu, Sumanta; Manna, S S

    2016-06-01

The ranges of transmission of the mobiles in a mobile ad hoc network are not uniform in reality. They are affected by the temperature fluctuation in air, obstruction due to solid objects, even the humidity difference in the environment, etc. How the varying range of transmission of the individual active elements affects the global connectivity in the network may be an important practical question to ask. Here a model of percolation phenomena, with an additional source of disorder, is introduced for a theoretical understanding of this problem. As in ordinary percolation, sites of a square lattice are occupied randomly with probability p. Each occupied site is then assigned a circular disk of random value R for its radius. A bond is defined to be occupied if and only if the radii R_{1} and R_{2} of the disks centered at the ends satisfy a certain predefined condition. In a very general formulation, one divides the R_{1}-R_{2} plane into two regions by an arbitrary closed curve. One defines a point within one region as representing an occupied bond; otherwise it is a vacant bond. The study of three different rules under this general formulation indicates that the percolation threshold always varies continuously. This threshold has two limiting values, one is p_{c}(sq), the percolation threshold for the ordinary site percolation on the square lattice, and the other is unity. The approach of the percolation threshold to its limiting values is characterized by two exponents. In a special case, all lattice sites are occupied by disks of random radii R∈{0,R_{0}} and a percolation transition is observed with R_{0} as the control variable, similar to the site occupation probability. PMID:27415234

  19. Percolation model with an additional source of disorder

    NASA Astrophysics Data System (ADS)

    Kundu, Sumanta; Manna, S. S.

    2016-06-01

The ranges of transmission of the mobiles in a mobile ad hoc network are not uniform in reality. They are affected by the temperature fluctuation in air, obstruction due to solid objects, even the humidity difference in the environment, etc. How the varying range of transmission of the individual active elements affects the global connectivity in the network may be an important practical question to ask. Here a model of percolation phenomena, with an additional source of disorder, is introduced for a theoretical understanding of this problem. As in ordinary percolation, sites of a square lattice are occupied randomly with probability p. Each occupied site is then assigned a circular disk of random value R for its radius. A bond is defined to be occupied if and only if the radii R1 and R2 of the disks centered at the ends satisfy a certain predefined condition. In a very general formulation, one divides the R1-R2 plane into two regions by an arbitrary closed curve. One defines a point within one region as representing an occupied bond; otherwise it is a vacant bond. The study of three different rules under this general formulation indicates that the percolation threshold always varies continuously. This threshold has two limiting values, one is pc(sq), the percolation threshold for the ordinary site percolation on the square lattice, and the other is unity. The approach of the percolation threshold to its limiting values is characterized by two exponents. In a special case, all lattice sites are occupied by disks of random radii R ∈ {0, R0} and a percolation transition is observed with R0 as the control variable, similar to the site occupation probability.

  20. Numerical methods for the Poisson-Fermi equation in electrolytes

    NASA Astrophysics Data System (ADS)

    Liu, Jinn-Liang

    2013-08-01

    The Poisson-Fermi equation proposed by Bazant, Storey, and Kornyshev [Phys. Rev. Lett. 106 (2011) 046102] for ionic liquids is applied to and numerically studied for electrolytes and biological ion channels in three-dimensional space. This is a fourth-order nonlinear PDE that deals with both steric and correlation effects of all ions and solvent molecules involved in a model system. The Fermi distribution follows from classical lattice models of configurational entropy of finite size ions and solvent molecules and hence prevents the long and outstanding problem of unphysical divergence predicted by the Gouy-Chapman model at large potentials due to the Boltzmann distribution of point charges. The equation reduces to Poisson-Boltzmann if the correlation length vanishes. A simplified matched interface and boundary method exhibiting optimal convergence is first developed for this equation by using a gramicidin A channel model that illustrates challenging issues associated with the geometric singularities of molecular surfaces of channel proteins in realistic 3D simulations. Various numerical methods then follow to tackle a range of numerical problems concerning the fourth-order term, nonlinearity, stability, efficiency, and effectiveness. The most significant feature of the Poisson-Fermi equation, namely, its inclusion of steric and correlation effects, is demonstrated by showing good agreement with Monte Carlo simulation data for a charged wall model and an L type calcium channel model.
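As the abstract notes, the Poisson-Fermi equation reduces to Poisson-Boltzmann when the correlation length vanishes. A hedged 1D finite-difference sketch of that limiting equation in dimensionless form (the grid size, boundary values, and plain Newton scheme are illustrative choices, not the paper's 3D matched interface and boundary method):

```python
import math

def solve_pb_1d(phi0=2.0, n=199, newton_iters=30):
    """Newton finite-difference solver for the dimensionless 1D
    nonlinear Poisson-Boltzmann problem
        phi'' = sinh(phi),  phi(0) = phi0,  phi(1) = 0."""
    h = 1.0 / (n + 1)
    phi = [phi0 * (1.0 - (i + 1) * h) for i in range(n)]  # linear initial guess

    def residual(p):
        r = []
        for i in range(n):
            left = p[i - 1] if i > 0 else phi0
            right = p[i + 1] if i < n - 1 else 0.0
            r.append((left - 2.0 * p[i] + right) / h ** 2 - math.sinh(p[i]))
        return r

    for _ in range(newton_iters):
        F = residual(phi)
        # Tridiagonal Jacobian: off-diagonals 1/h^2, diagonal -2/h^2 - cosh(phi_i).
        b = [-2.0 / h ** 2 - math.cosh(p) for p in phi]
        c = [1.0 / h ** 2] * n
        d = [-f for f in F]
        for i in range(1, n):            # Thomas forward elimination
            m = (1.0 / h ** 2) / b[i - 1]
            b[i] -= m * c[i - 1]
            d[i] -= m * d[i - 1]
        delta = [0.0] * n
        delta[-1] = d[-1] / b[-1]
        for i in range(n - 2, -1, -1):   # back substitution
            delta[i] = (d[i] - c[i] * delta[i + 1]) / b[i]
        phi = [p + dp for p, dp in zip(phi, delta)]
    return phi, max(abs(f) for f in residual(phi))

phi, res = solve_pb_1d()
```

Since sinh is monotone, the discrete problem has a unique solution and Newton converges monotonically from the linear supersolution guess; the potential decays monotonically from the wall.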

  1. Using Generalized Additive Models to Analyze Single-Case Designs

    ERIC Educational Resources Information Center

    Shadish, William; Sullivan, Kristynn

    2013-01-01

    Many analyses for single-case designs (SCDs)--including nearly all the effect size indicators-- currently assume no trend in the data. Regression and multilevel models allow for trend, but usually test only linear trend and have no principled way of knowing if higher order trends should be represented in the model. This paper shows how Generalized…

  2. Activity of Excitatory Neuron with Delayed Feedback Stimulated with Poisson Stream is Non-Markov

    NASA Astrophysics Data System (ADS)

    Vidybida, Alexander K.

    2015-09-01

    For a class of excitatory spiking neuron models with delayed feedback fed with a Poisson stochastic process, it is proven that the stream of output interspike intervals cannot be presented as a Markov process of any order.

  3. How much additional model complexity do the use of catchment hydrological signatures, additional data and expert knowledge warrant?

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; RUIZ, L.; Gascuel-odoux, C.; Savenije, H.

    2013-12-01

In the frequent absence of sufficient suitable data to constrain hydrological models, it is not uncommon to represent catchments at a range of scales by lumped model set-ups. Although process heterogeneity can average out on the catchment scale to generate simple catchment integrated responses whose general flow features can frequently be reproduced by lumped models, these models often fail to get details of the flow pattern as well as catchment internal dynamics, such as groundwater level changes, right to a sufficient degree, resulting in considerable predictive uncertainty. Traditionally, models are constrained by only one or two objective functions, which does not warrant more than a handful of parameters to avoid elevated predictive uncertainty, thereby preventing more complex model set-ups accounting for increased process heterogeneity. In this study it was tested how much additional process heterogeneity is warranted in models when optimizing the model calibration strategy, using additional data and expert knowledge. Long-term time series of flow and groundwater levels for small nested experimental catchments in French Brittany with considerable differences in geology, topography and flow regime were used in this study to test which degree of model process heterogeneity is warranted with increased availability of information. In a first step, as a benchmark, the system was treated as one lumped entity and the model was trained based only on its ability to reproduce the hydrograph. Although it was found that the overall modelled flow generally reflects the observed flow response quite well, the internal system dynamics could not be reproduced. In further steps the complexity of this model was gradually increased, first by adding a separate riparian reservoir to the lumped set-up and then by a semi-distributed set-up, allowing for independent, parallel model structures, representing the contrasting nested catchments.
Although calibration performance increased

  4. Additive Manufacturing of Anatomical Models from Computed Tomography Scan Data.

    PubMed

    Gür, Y

    2014-12-01

The purpose of the study presented here was to investigate the manufacturability of human anatomical models from Computed Tomography (CT) scan data via a 3D desktop printer which uses fused deposition modelling (FDM) technology. First, Digital Imaging and Communications in Medicine (DICOM) CT scan data were converted to 3D Standard Triangle Language (STL) format by using the InVesalius digital imaging program. Once this STL file is obtained, a 3D physical version of the anatomical model can be fabricated by a desktop 3D FDM printer. As a case study, a patient's skull CT scan data was considered, and a tangible version of the skull was manufactured by a 3D FDM desktop printer. During the 3D printing process, the skull was built using acrylonitrile-butadiene-styrene (ABS) co-polymer plastic. The printed model showed that the 3D FDM printing technology is able to fabricate anatomical models with high accuracy. As a result, the skull model can be used for preoperative surgical planning, medical training activities, implant design and simulation to show the potential of the FDM technology in the medical field. It will also improve communication between medical staff and patients. Current results indicate that a 3D desktop printer which uses FDM technology can be used to obtain accurate anatomical models. PMID:26336695

5. The mechanical influences of the graded distribution in the cross-sectional shape, the stiffness and Poisson's ratio of palm branches.

    PubMed

    Liu, Wangyu; Wang, Ningling; Jiang, Xiaoyong; Peng, Yujian

    2016-07-01

The branching system plays an important role in maintaining the survival of palm trees. Due to the nature of monocots, no additional vascular bundles can be added in the palm tree tissue as it ages. Therefore, the changing of the cross-sectional area in the palm branch creates a graded distribution in the mechanical properties of the tissue. In the present work, this graded distribution in the tissue mechanical properties from sheath to petiole was studied with a multi-scale modeling approach. Then, the entire palm branch was reconstructed and analyzed using finite element methods. The variation of the elastic modulus can lower the level of mechanical stress in the sheath and also allow the branch to have smaller values of pressure on the other branches. Under impact loading, the enhanced frictional dissipation at the surfaces of adjacent branches benefits from the large Poisson's ratio of the sheath tissue. These findings can help to link the wind resistance ability of palm trees to their graded materials distribution in the branching system. PMID:26807774

  6. Universal Poisson Statistics of mRNAs with Complex Decay Pathways.

    PubMed

    Thattai, Mukund

    2016-01-19

    Messenger RNA (mRNA) dynamics in single cells are often modeled as a memoryless birth-death process with a constant probability per unit time that an mRNA molecule is synthesized or degraded. This predicts a Poisson steady-state distribution of mRNA number, in close agreement with experiments. This is surprising, since mRNA decay is known to be a complex process. The paradox is resolved by realizing that the Poisson steady state generalizes to arbitrary mRNA lifetime distributions. A mapping between mRNA dynamics and queueing theory highlights an identifiability problem: a measured Poisson steady state is consistent with a large variety of microscopic models. Here, I provide a rigorous and intuitive explanation for the universality of the Poisson steady state. I show that the mRNA birth-death process and its complex decay variants all take the form of the familiar Poisson law of rare events, under a nonlinear rescaling of time. As a corollary, not only steady-states but also transients are Poisson distributed. Deviations from the Poisson form occur only under two conditions, promoter fluctuations leading to transcriptional bursts or nonindependent degradation of mRNA molecules. These results place severe limits on the power of single-cell experiments to probe microscopic mechanisms, and they highlight the need for single-molecule measurements. PMID:26743048
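The memoryless birth-death baseline of the abstract can be checked directly by stochastic simulation. A sketch (rate constants, horizon, and the Gillespie-style update are illustrative choices) verifying that the steady-state copy number is Poisson, i.e. variance ≈ mean = k_syn/k_deg:

```python
import random

def gillespie_birth_death(k_syn, k_deg, t_end, seed):
    """Exact stochastic simulation of the memoryless birth-death model:
    synthesis at constant rate k_syn, degradation of each molecule at
    rate k_deg.  Returns the mRNA copy number at time t_end."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    while True:
        total_rate = k_syn + n * k_deg
        t += rng.expovariate(total_rate)
        if t > t_end:
            return n
        if rng.random() < k_syn / total_rate:
            n += 1   # synthesis event
        else:
            n -= 1   # degradation event

# Sample the steady state across independent runs: the mean should be
# k_syn/k_deg = 10 and, for a Poisson distribution, variance = mean.
samples = [gillespie_birth_death(10.0, 1.0, 15.0, seed=s) for s in range(2000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```

Per the abstract, replacing the exponential lifetime with a more complex decay pathway would leave this Poisson steady state unchanged, which is exactly the identifiability problem it highlights.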

  7. Events in time: Basic analysis of Poisson data

    SciTech Connect

    Engelhardt, M.E.

    1994-09-01

The report presents basic statistical methods for analyzing Poisson data, such as the number of events in some period of time. It gives point estimates, confidence intervals, and Bayesian intervals for the rate of occurrence per unit of time. It shows how to compare subsets of the data, both graphically and by statistical tests, and how to look for trends in time. It presents a compound model when the rate of occurrence varies randomly. Examples and SAS programs are given.
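The basic point estimate for the rate of occurrence is the event count divided by the exposure time. A sketch using the large-sample normal approximation for the confidence interval (the report also covers exact and Bayesian intervals, which differ from this approximation for small counts):

```python
import math

def poisson_rate_estimate(n_events, exposure_time, z=1.96):
    """Point estimate lambda_hat = n/T for the rate of occurrence per
    unit time, with an approximate 95% confidence interval
    lambda_hat +/- z * sqrt(n) / T (normal approximation, adequate
    only for moderately large counts)."""
    lam = n_events / exposure_time
    half = z * math.sqrt(n_events) / exposure_time
    return lam, max(0.0, lam - half), lam + half

# Example: 36 events observed over 12 units of time.
lam, lo, hi = poisson_rate_estimate(36, 12.0)
```

Here lam = 3.0 events per unit time with an approximate 95% interval of (2.02, 3.98).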

  8. On third Poisson structure of KdV equation

    SciTech Connect

    Gorsky, A.; Marshakov, A.; Orlov, A.

    1995-12-01

The third Poisson structure of the KdV equation in terms of canonical 'free fields' and the reduced WZNW model is discussed. We prove that it is "diagonalized" in the Lagrange variables which were used before in the formulation of 2d gravity. We propose a quantum path integral for the KdV equation based on this representation.

  9. Addition of Diffusion Model to MELCOR and Comparison with Data

    SciTech Connect

    Brad Merrill; Richard Moore; Chang Oh

    2004-06-01

    A chemical diffusion model was incorporated into the thermal-hydraulics package of the MELCOR Severe Accident code (Reference 1) for analyzing air ingress events for a very high temperature gas-cooled reactor.

  10. Modelling dissimilarity: generalizing ultrametric and additive tree representations.

    PubMed

    Hubert, L; Arabie, P; Meulman, J

    2001-05-01

    Methods for the hierarchical clustering of an object set produce a sequence of nested partitions such that object classes within each successive partition are constructed from the union of object classes present at the previous level. Any such sequence of nested partitions can in turn be characterized by an ultrametric. An approach to generalizing an (ultrametric) representation is proposed in which the nested character of the partition sequence is relaxed and replaced by the weaker requirement that the classes within each partition contain objects consecutive with respect to a fixed ordering of the objects. A method for fitting such a structure to a given proximity matrix is discussed, along with several alternative strategies for graphical representation. Using this same ultrametric extension, additive tree representations can also be generalized by replacing the ultrametric component in the decomposition of an additive tree (into an ultrametric and a centroid metric). A common numerical illustration is developed and maintained throughout the paper. PMID:11393895
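An ultrametric satisfies the three-point condition d(x,z) ≤ max(d(x,y), d(y,z)) for all triples. A small sketch (the matrices are invented examples) of checking that condition, and of how adding a centroid metric g_i + g_j, the second component in the additive-tree decomposition mentioned above, generally destroys the ultrametric property:

```python
def is_ultrametric(d, tol=1e-12):
    """Three-point condition: d(x,z) <= max(d(x,y), d(y,z)) for all triples."""
    idx = range(len(d))
    return all(d[x][z] <= max(d[x][y], d[y][z]) + tol
               for x in idx for y in idx for z in idx)

# Cophenetic distances induced by a nested partition sequence
# (hypothetical example): {a,b} merge at height 1, {c,d} at height 2,
# and everything merges at height 4.
U = [[0, 1, 4, 4],
     [1, 0, 4, 4],
     [4, 4, 0, 2],
     [4, 4, 2, 0]]

# Adding a centroid metric g_i + g_j to the ultrametric yields an
# additive-tree distance that is, in general, no longer ultrametric.
g = [0.5, 0.0, 1.0, 0.0]
D = [[0 if i == j else U[i][j] + g[i] + g[j] for j in range(4)]
     for i in range(4)]
```

For instance, D fails the three-point condition on the triple (a, b, c): D[a][c] = 5.5 exceeds max(D[a][b], D[b][c]) = 5.0.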

  11. Additional Research Needs to Support the GENII Biosphere Models

    SciTech Connect

    Napier, Bruce A.; Snyder, Sandra F.; Arimescu, Carmen

    2013-11-30

In the course of evaluating the current parameter needs for the GENII Version 2 code (Snyder et al. 2013), areas of possible improvement for both the data and the underlying models have been identified. As the data review was implemented, PNNL staff identified areas where the models can be improved both to accommodate the locally significant pathways identified and also to incorporate newer models. The areas are general data needs for the existing models and improved formulations for the pathway models. It is recommended that priorities be set by NRC staff to guide selection of the most useful improvements in a cost-effective manner. Suggestions are made based on relatively easy and inexpensive changes, and longer-term more costly studies. In the short term, there are several improved model formulations that could be applied to the GENII suite of codes to make them more generally useful.
• Implementation of the separation of the translocation and weathering processes
• Implementation of an improved model for carbon-14 from non-atmospheric sources
• Implementation of radon exposure pathways models
• Development of a KML processor for the output report generator module, so that data calculated on a grid could be superimposed upon digital maps for easier presentation and display
• Implementation of marine mammal models (manatees, seals, walrus, whales, etc.)
Data needs in the longer term require extensive (and potentially expensive) research. Before picking any one radionuclide or food type, NRC staff should perform an in-house review of current and anticipated environmental analyses to select "dominant" radionuclides of interest to allow setting of cost-effective priorities for radionuclide- and pathway-specific research. These include
• soil-to-plant uptake studies for oranges and other citrus fruits, and
• development of models for evaluation of radionuclide concentration in highly-processed foods such as oils and sugars.
Finally, renewed

  12. A generalized Poisson solver for first-principles device simulations

    NASA Astrophysics Data System (ADS)

    Bani-Hashemian, Mohammad Hossein; Brück, Sascha; Luisier, Mathieu; VandeVondele, Joost

    2016-01-01

    Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.
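The solver's core structure, a stationary iteration in which the variable-coefficient generalized Poisson operator is preconditioned by the constant-coefficient Laplace operator, can be sketched in 1D finite differences (the paper works with plane waves in 3D with Dirichlet constraints; the grid, dielectric profile, and source used here are illustrative assumptions):

```python
import math

def solve_generalized_poisson_1d(eps, f, h, iters=80):
    """Solve -d/dx( eps(x) * dphi/dx ) = f with zero Dirichlet BCs by
    the preconditioned Richardson iteration
        phi <- phi + M^{-1} (f - A phi),
    where A is the variable-coefficient operator and the preconditioner
    M is the constant Laplacian scaled by the mean dielectric."""
    n = len(f)
    eps_bar = sum(eps) / len(eps)   # eps given at the n+1 cell faces

    def apply_A(phi):
        out = []
        for i in range(n):
            left = phi[i - 1] if i > 0 else 0.0
            right = phi[i + 1] if i < n - 1 else 0.0
            out.append(-(eps[i + 1] * (right - phi[i])
                         - eps[i] * (phi[i] - left)) / h ** 2)
        return out

    def laplace_solve(r):
        # Thomas algorithm for M delta = r, M = -eps_bar * Laplacian.
        b = [2.0 * eps_bar / h ** 2] * n
        c = [-eps_bar / h ** 2] * n
        d = list(r)
        for i in range(1, n):
            m = (-eps_bar / h ** 2) / b[i - 1]
            b[i] -= m * c[i - 1]
            d[i] -= m * d[i - 1]
        x = [0.0] * n
        x[-1] = d[-1] / b[-1]
        for i in range(n - 2, -1, -1):
            x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
        return x

    phi = [0.0] * n
    for _ in range(iters):
        r = [fi - ai for fi, ai in zip(f, apply_A(phi))]
        phi = [p + dp for p, dp in zip(phi, laplace_solve(r))]
    return phi, max(abs(fi - ai) for fi, ai in zip(f, apply_A(phi)))

n = 99
h = 1.0 / (n + 1)
# Smooth dielectric profile eps(x) in [1, 3] sampled at the cell faces.
eps = [2.0 + math.sin(2.0 * math.pi * (i + 0.5) * h) for i in range(n + 1)]
f = [1.0] * n
phi, res = solve_generalized_poisson_1d(eps, f, h)
```

Because the dielectric stays within [1, 3] around its mean of 2, the preconditioned iteration contracts the error by at least a factor of 2 per sweep, mirroring why a smooth, bounded dielectric model keeps the paper's Laplace-preconditioned scheme convergent.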

  13. A generalized Poisson solver for first-principles device simulations.

    PubMed

    Bani-Hashemian, Mohammad Hossein; Brück, Sascha; Luisier, Mathieu; VandeVondele, Joost

    2016-01-28

    Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated. PMID:26827208

  14. Universal Negative Poisson Ratio of Self-Avoiding Fixed-Connectivity Membranes

    SciTech Connect

    Bowick, M.; Cacciuto, A.; Thorleifsson, G.; Travesset, A.

    2001-10-01

    We determine the Poisson ratio of self-avoiding fixed-connectivity membranes, modeled as impenetrable plaquettes, to be σ = -0.37(6), in statistical agreement with the Poisson ratio of phantom fixed-connectivity membranes σ = -0.32(4). Together with the equality of critical exponents, this result implies a unique universality class for fixed-connectivity membranes. Our findings thus establish that physical fixed-connectivity membranes provide a wide class of auxetic (negative Poisson ratio) materials with significant potential applications in materials science.

  15. Universal negative poisson ratio of self-avoiding fixed-connectivity membranes.

    PubMed

    Bowick, M; Cacciuto, A; Thorleifsson, G; Travesset, A

    2001-10-01

    We determine the Poisson ratio of self-avoiding fixed-connectivity membranes, modeled as impenetrable plaquettes, to be sigma = -0.37(6), in statistical agreement with the Poisson ratio of phantom fixed-connectivity membranes sigma = -0.32(4). Together with the equality of critical exponents, this result implies a unique universality class for fixed-connectivity membranes. Our findings thus establish that physical fixed-connectivity membranes provide a wide class of auxetic (negative Poisson ratio) materials with significant potential applications in materials science. PMID:11580677

  16. Sensitivities to parameterization in the size-modified Poisson-Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Harris, Robert C.; Boschitsch, Alexander H.; Fenley, Marcia O.

    2014-02-01

    Experimental results have demonstrated that the numbers of counterions surrounding nucleic acids differ from those predicted by the nonlinear Poisson-Boltzmann equation, NLPBE. Some studies have fit these data against the ion size in the size-modified Poisson-Boltzmann equation, SMPBE, but the present study demonstrates that other parameters, such as the Stern layer thickness and the molecular surface definition, can change the number of bound ions by amounts comparable to varying the ion size. These parameters will therefore have to be fit simultaneously against experimental data. In addition, the data presented here demonstrate that the derivative, SK, of the electrostatic binding free energy, ΔGel, with respect to the logarithm of the salt concentration is sensitive to these parameters, and experimental measurements of SK could be used to parameterize the model. However, although better values for the Stern layer thickness and ion size and better molecular surface definitions could improve the model's predictions of the numbers of ions around biomolecules and SK, ΔGel itself is more sensitive to parameters, such as the interior dielectric constant, which in turn do not significantly affect the distributions of ions around biomolecules. Therefore, improved estimates of the ion size and Stern layer thickness to use in the SMPBE will not necessarily improve the model's predictions of ΔGel.

  17. Sensitivities to parameterization in the size-modified Poisson-Boltzmann equation.

    PubMed

    Harris, Robert C; Boschitsch, Alexander H; Fenley, Marcia O

    2014-02-21

    Experimental results have demonstrated that the numbers of counterions surrounding nucleic acids differ from those predicted by the nonlinear Poisson-Boltzmann equation, NLPBE. Some studies have fit these data against the ion size in the size-modified Poisson-Boltzmann equation, SMPBE, but the present study demonstrates that other parameters, such as the Stern layer thickness and the molecular surface definition, can change the number of bound ions by amounts comparable to varying the ion size. These parameters will therefore have to be fit simultaneously against experimental data. In addition, the data presented here demonstrate that the derivative, SK, of the electrostatic binding free energy, ΔGel, with respect to the logarithm of the salt concentration is sensitive to these parameters, and experimental measurements of SK could be used to parameterize the model. However, although better values for the Stern layer thickness and ion size and better molecular surface definitions could improve the model's predictions of the numbers of ions around biomolecules and SK, ΔGel itself is more sensitive to parameters, such as the interior dielectric constant, which in turn do not significantly affect the distributions of ions around biomolecules. Therefore, improved estimates of the ion size and Stern layer thickness to use in the SMPBE will not necessarily improve the model's predictions of ΔGel. PMID:24559370

  18. Concentration Addition, Independent Action and Generalized Concentration Addition Models for Mixture Effect Prediction of Sex Hormone Synthesis In Vitro

    PubMed Central

    Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie

    2013-01-01

    Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment the mathematical prediction of mixture effects, using knowledge on single chemicals, is therefore desirable. We investigated pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First we measured effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone and estradiol, some chemicals were having stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. In addition, the data indicate that in non-potency adjusted mixtures the effects cannot always be

  19. Finite-size effects and percolation properties of Poisson geometries.

    PubMed

    Larmier, C; Dumonteil, E; Malvagi, F; Mazzolo, A; Zoia, A

    2016-07-01

    Random tessellations of the space represent a class of prototype models of heterogeneous media, which are central in several applications in physics, engineering, and life sciences. In this work, we investigate the statistical properties of d-dimensional isotropic Poisson geometries by resorting to Monte Carlo simulation, with special emphasis on the case d=3. We first analyze the behavior of the key features of these stochastic geometries as a function of the dimension d and the linear size L of the domain. Then, we consider the case of Poisson binary mixtures, where the polyhedra are assigned two labels with complementary probabilities. For this latter class of random geometries, we numerically characterize the percolation threshold, the strength of the percolating cluster, and the average cluster size. PMID:27575099
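
    The percolation quantities characterized here are for Poisson tessellations; as a self-contained analogue, the sketch below estimates spanning probabilities for ordinary 2-D site percolation on a square lattice, a much simpler binary-mixture geometry than the paper's Poisson polyhedra (with its own threshold, p_c ≈ 0.593).

```python
import numpy as np
from collections import deque

def spans(grid):
    """True if occupied sites connect the top row to the bottom row (4-neighbour BFS)."""
    n = grid.shape[0]
    seen = np.zeros_like(grid, dtype=bool)
    q = deque((0, j) for j in range(n) if grid[0, j])
    for cell in q:
        seen[cell] = True
    while q:
        i, j = q.popleft()
        if i == n - 1:
            return True
        for a, b in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= a < n and 0 <= b < n and grid[a, b] and not seen[a, b]:
                seen[a, b] = True
                q.append((a, b))
    return False

rng = np.random.default_rng(0)
n, trials = 64, 40
frac = {p: np.mean([spans(rng.random((n, n)) < p) for _ in range(trials)])
        for p in (0.45, 0.60, 0.75)}
print(frac)   # spanning probability jumps near the 2-D site threshold p_c ≈ 0.593
```

    As in the paper, finite-size effects smear the transition: for modest L the spanning fraction rises gradually through the threshold rather than jumping sharply.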

  20. Self-Attracting Poisson Clouds in an Expanding Universe

    NASA Astrophysics Data System (ADS)

    Bertoin, Jean

    We consider the following elementary model for clustering by ballistic aggregation in an expanding universe. At the initial time, there is a doubly infinite sequence of particles lying in a one-dimensional universe that is expanding at constant rate. We suppose that each particle p attracts points at a certain rate a(p)/2 depending only on p, and when two particles, say p and q, collide by the effect of attraction, they merge as a single particle p*q. The main purpose of this work is to point at the following remarkable property of Poisson clouds in these dynamics. Under certain technical conditions, if at the initial time the system is distributed according to a spatially stationary Poisson cloud with intensity μ0, then at any time t > 0, the system will again have a Poissonian distribution, now with intensity μt, where the family of intensities (μt, t > 0) solves a generalization of Smoluchowski's coagulation equation.

  1. Finite-size effects and percolation properties of Poisson geometries

    NASA Astrophysics Data System (ADS)

    Larmier, C.; Dumonteil, E.; Malvagi, F.; Mazzolo, A.; Zoia, A.

    2016-07-01

    Random tessellations of the space represent a class of prototype models of heterogeneous media, which are central in several applications in physics, engineering, and life sciences. In this work, we investigate the statistical properties of d-dimensional isotropic Poisson geometries by resorting to Monte Carlo simulation, with special emphasis on the case d = 3. We first analyze the behavior of the key features of these stochastic geometries as a function of the dimension d and the linear size L of the domain. Then, we consider the case of Poisson binary mixtures, where the polyhedra are assigned two labels with complementary probabilities. For this latter class of random geometries, we numerically characterize the percolation threshold, the strength of the percolating cluster, and the average cluster size.

  2. The addition of algebraic turbulence modeling to program LAURA

    NASA Technical Reports Server (NTRS)

    Cheatwood, F. Mcneil; Thompson, R. A.

    1993-01-01

    The Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) is modified to allow the calculation of turbulent flows. This is accomplished using the Cebeci-Smith and Baldwin-Lomax eddy-viscosity models in conjunction with the thin-layer Navier-Stokes options of the program. Turbulent calculations can be performed for both perfect-gas and equilibrium flows. However, a requirement of the models is that the flow be attached. It is seen that for slender bodies, adequate resolution of the boundary-layer gradients may require more cells in the normal direction than a laminar solution, even when grid stretching is employed. Results for axisymmetric and three-dimensional flows are presented. Comparison with experimental data and other numerical results reveal generally good agreement, except in the regions of detached flow.

  3. Matrix decomposition graphics processing unit solver for Poisson image editing

    NASA Astrophysics Data System (ADS)

    Lei, Zhao; Wei, Li

    2012-10-01

    In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computational and memory intensive task which makes it not suitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to settle the problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS will take full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combines both the direct and iterative techniques) and has two-level architecture. These enable MDGS to generate identical solutions with those of the common Poisson methods and achieve high convergence rate in most cases. This approach is advantageous in terms of parallelizability, enabling real-time image processing, low memory-taken and extensive applications.
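
    The Poisson equation at the heart of gradient-domain editing can be demonstrated without the paper's GPU machinery: the sketch below does a toy seamless-clone on a tiny grayscale grid, importing the Laplacian of a source patch into a target and relaxing with plain Jacobi iteration (the MDGS solver itself is not reproduced; images and region are invented).

```python
import numpy as np

n = 32
yy, xx = np.mgrid[0:n, 0:n] / (n - 1.0)
target = xx.copy()                        # smooth horizontal ramp "image"
source = 0.5 + 0.25 * np.sin(6.0 * yy)    # patch whose gradients we import

inner = np.zeros((n, n), dtype=bool)
inner[8:24, 8:24] = True                  # edited region; Dirichlet data outside

def lap(f):
    """Discrete 5-point Laplacian (periodic rolls; the region stays off the edges)."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

g = lap(source)                           # guidance field for the Poisson equation
f = target.copy()
for _ in range(2000):                     # Jacobi sweeps on interior pixels only
    nb = lap(f) + 4.0 * f                 # sum of the four neighbours
    f = np.where(inner, (nb - g) / 4.0, target)

res = lap(f) - g                          # discrete Poisson residual, Lap(f) = Lap(source)
print("max interior residual:", np.abs(res[inner]).max())
```

    Jacobi is the slowest reasonable choice; the paper's point is precisely that direct/iterative hybrids on the GPU make this large sparse solve fast enough for real-time editing.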

  4. Magnetostrictive contribution to Poisson ratio of galfenol

    NASA Astrophysics Data System (ADS)

    Paes, V. Z. C.; Mosca, D. H.

    2013-09-01

    In this work we present a detailed study of the magnetostrictive contribution to the Poisson ratio for samples under applied mechanical stress. Magnetic contributions to strain and Poisson ratio for cubic materials were derived by accounting for elastic and magneto-elastic anisotropy contributions. We apply our theoretical results to a material of interest in magnetomechanics, namely galfenol (Fe1-xGax). Our results show that there is a non-negligible magnetic contribution in the linear portion of the curve of stress versus strain. The rotation of the magnetization towards the [110] crystallographic direction upon application of mechanical stress leads to auxetic behavior, i.e., a Poisson ratio with negative values. This magnetic contribution to auxetic behavior provides a novel insight for the discussion of theoretical and experimental developments of materials that display unusual mechanical properties.

  5. Software reliability: Additional investigations into modeling with replicated experiments

    NASA Technical Reports Server (NTRS)

    Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.

    1984-01-01

    The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.

  6. Efficient gradient projection methods for edge-preserving removal of Poisson noise

    NASA Astrophysics Data System (ADS)

    Zanella, R.; Boccacci, P.; Zanni, L.; Bertero, M.

    2009-04-01

    Several methods based on different image models have been proposed and developed for image denoising. Some of them, such as total variation (TV) and wavelet thresholding, are based on the assumption of additive Gaussian noise. Recently the TV approach has been extended to the case of Poisson noise, a model describing the effect of photon counting in applications such as emission tomography, microscopy and astronomy. For the removal of this kind of noise we consider an approach based on a constrained optimization problem, with an objective function describing TV and other edge-preserving regularizations of the Kullback-Leibler divergence. We introduce a new discrepancy principle for the choice of the regularization parameter, which is justified by the statistical properties of the Poisson noise. For solving the optimization problem we propose a particular form of a general scaled gradient projection (SGP) method, recently introduced for image deblurring. We derive the form of the scaling from a decomposition of the gradient of the regularization functional into a positive and a negative part. The beneficial effect of the scaling is proved by means of numerical simulations, showing that the performance of the proposed form of SGP is superior to that of the most efficient gradient projection methods. An extended numerical analysis of the dependence of the solution on the regularization parameter is also performed to test the effectiveness of the proposed discrepancy principle.
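
    The constrained-optimization view of Poisson denoising can be sketched in one dimension: minimize the Kullback-Leibler data term sum(x - y log x) plus a smoothness penalty, subject to a positivity constraint, by projected gradient descent. This is a simplified stand-in, not the paper's scaled gradient projection method: a quadratic penalty replaces the TV-type regularizer so the gradient stays smooth, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.where(np.arange(200) < 100, 20.0, 5.0)   # piecewise-constant intensity
y = rng.poisson(truth).astype(float)                # Poisson-corrupted counts

lam, step, floor = 1.0, 0.005, 0.5

def grad_obj(x):
    # Gradient of the KL data term sum(x - y*log x) ...
    g = 1.0 - y / x
    # ... plus lam * sum of squared first differences (periodic via roll).
    g += 2.0 * lam * (2.0 * x - np.roll(x, 1) - np.roll(x, -1))
    return g

x = np.maximum(y, 1.0)                              # feasible starting point
for _ in range(20000):
    x = np.maximum(x - step * grad_obj(x), floor)   # gradient step + projection

mse = lambda a: np.mean((a - truth) ** 2)
print(f"raw MSE {mse(y):.2f} -> denoised MSE {mse(x):.2f}")
```

    The paper's contribution, the diagonal scaling derived from a split of the regularizer's gradient and the Poisson-specific discrepancy principle for choosing lam, is precisely what this unscaled fixed-step version lacks.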

  7. A model of the holographic principle: Randomness and additional dimension

    NASA Astrophysics Data System (ADS)

    Boyarsky, Abraham; Góra, Paweł; Proppe, Harald

    2010-01-01

    In recent years an idea has emerged that a system in a 3-dimensional space can be described from an information point of view by a system on its 2-dimensional boundary. This mysterious correspondence is called the Holographic Principle and has had profound effects in string theory and our perception of space-time. In this note we describe a purely mathematical model of the Holographic Principle using ideas from nonlinear dynamical systems theory. We show that a random map on the surface S of a 3-dimensional open ball B has a natural counterpart in B, and the two maps acting in different dimensional spaces have the same entropy. We can reverse this construction if we start with a special 3-dimensional map in B called a skew product. The key idea is to use the randomness, as imbedded in the parameter of the 2-dimensional random map, to define a third dimension. The main result shows that if we start with an arbitrary dynamical system in B with entropy E we can construct a random map on S whose entropy is arbitrarily close to E.

  8. Additional Developments in Atmosphere Revitalization Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Coker, Robert F.; Knox, James C.; Cummings, Ramona; Brooks, Thomas; Schunk, Richard G.

    2013-01-01

    NASA's Advanced Exploration Systems (AES) program is developing prototype systems, demonstrating key capabilities, and validating operational concepts for future human missions beyond Earth orbit. These forays beyond the confines of earth's gravity will place unprecedented demands on launch systems. They must launch the supplies needed to sustain a crew over longer periods for exploration missions beyond earth's moon. Thus all spacecraft systems, including those for the separation of metabolic carbon dioxide and water from a crewed vehicle, must be minimized with respect to mass, power, and volume. Emphasis is also placed on system robustness both to minimize replacement parts and ensure crew safety when a quick return to earth is not possible. Current efforts are focused on improving the current state-of-the-art systems utilizing fixed beds of sorbent pellets by evaluating structured sorbents, seeking more robust pelletized sorbents, and examining alternate bed configurations to improve system efficiency and reliability. These development efforts combine testing of sub-scale systems and multi-physics computer simulations to evaluate candidate approaches, select the best performing options, and optimize the configuration of the selected approach. This paper describes the continuing development of atmosphere revitalization models and simulations in support of the Atmosphere Revitalization Recovery and Environmental Monitoring (ARREM) project within the AES program.

  9. Additional Developments in Atmosphere Revitalization Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Coker, Robert F.; Knox, James C.; Cummings, Ramona; Brooks, Thomas; Schunk, Richard G.; Gomez, Carlos

    2013-01-01

    NASA's Advanced Exploration Systems (AES) program is developing prototype systems, demonstrating key capabilities, and validating operational concepts for future human missions beyond Earth orbit. These forays beyond the confines of earth's gravity will place unprecedented demands on launch systems. They must launch the supplies needed to sustain a crew over longer periods for exploration missions beyond earth's moon. Thus all spacecraft systems, including those for the separation of metabolic carbon dioxide and water from a crewed vehicle, must be minimized with respect to mass, power, and volume. Emphasis is also placed on system robustness both to minimize replacement parts and ensure crew safety when a quick return to earth is not possible. Current efforts are focused on improving the current state-of-the-art systems utilizing fixed beds of sorbent pellets by evaluating structured sorbents, seeking more robust pelletized sorbents, and examining alternate bed configurations to improve system efficiency and reliability. These development efforts combine testing of sub-scale systems and multi-physics computer simulations to evaluate candidate approaches, select the best performing options, and optimize the configuration of the selected approach. This paper describes the continuing development of atmosphere revitalization models and simulations in support of the Atmosphere Revitalization Recovery and Environmental Monitoring (ARREM) project within the AES program.

  10. Poisson Noise Removal in Spherical Multichannel Images: Application to Fermi data

    NASA Astrophysics Data System (ADS)

    Schmitt, Jérémy; Starck, Jean-Luc; Fadili, Jalal; Digel, Seth

    2012-03-01

    The Fermi Gamma-ray Space Telescope, which was launched by NASA in June 2008, is a powerful space observatory which studies the high-energy gamma-ray sky [5]. Fermi's main instrument, the Large Area Telescope (LAT), detects photons in an energy range between 20 MeV and >300 GeV. The LAT is much more sensitive than its predecessor, the Energetic Gamma Ray Experiment Telescope (EGRET) on the Compton Gamma-ray Observatory, and is expected to find several thousand gamma-ray point sources, which is an order of magnitude more than its predecessor EGRET [13]. Even with its relatively large acceptance (∼2 m² sr), the number of photons detected by the LAT outside the Galactic plane and away from intense sources is relatively low, and the sky overall has a diffuse glow from cosmic-ray interactions with interstellar gas and low-energy photons that makes a background against which point sources need to be detected. In addition, the per-photon angular resolution of the LAT is relatively poor and strongly energy dependent, ranging from >10° at 20 MeV to ∼0.1° above 100 GeV. Consequently, the spherical photon count images obtained by Fermi are degraded by fluctuations in the number of detected photons. This kind of noise is strongly signal dependent: on the brightest parts of the image, like the galactic plane or the brightest sources, there are many photons per pixel, and so the photon noise is low. Outside the galactic plane, the number of photons per pixel is low, which means that the photon noise is high. Such signal-dependent noise cannot be accurately modeled by a Gaussian distribution. The basic photon-imaging model assumes that the number of detected photons at each pixel location is Poisson distributed. More specifically, the image is considered as a realization of an inhomogeneous Poisson process. This statistical noise makes source detection more difficult, so it is highly desirable to have an efficient denoising method for spherical
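
    The signal-dependent character of Poisson noise described in this abstract is easy to reproduce: the variance of the counts equals the local intensity, so the *relative* noise scales as 1/sqrt(intensity). The intensities below are arbitrary illustrative values, not Fermi data.

```python
import numpy as np

rng = np.random.default_rng(5)

width = 100
intensity = np.full(width, 4.0)       # faint diffuse background
intensity[45:55] = 400.0              # bright band standing in for the galactic plane
counts = rng.poisson(np.tile(intensity, (200, 1)))   # inhomogeneous Poisson "image"

rel = lambda c: c.std() / c.mean()    # relative fluctuation, about 1/sqrt(mean)
print(f"relative noise  bright: {rel(counts[:, 45:55]):.3f}   faint: {rel(counts[:, :40]):.3f}")
```

    The bright band comes out roughly ten times cleaner in relative terms, which is why a single Gaussian noise model fails for such maps.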

  11. Transferability of regional permafrost disturbance susceptibility modelling using generalized linear and generalized additive models

    NASA Astrophysics Data System (ADS)

    Rudy, Ashley C. A.; Lamoureux, Scott F.; Treitz, Paul; van Ewijk, Karin Y.

    2016-07-01

    To effectively assess and mitigate risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. Resulting maps of susceptibility were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both locations and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim) with the area under the receiver operating curves (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally with AUROC's of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas. Terrain attributes associated with the initiation of disturbances were
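
    The calibrate-then-transfer workflow in this abstract can be sketched with a binomial GLM (logistic regression) fitted by iteratively reweighted least squares and scored with AUROC via the rank-sum identity. Everything below is synthetic: two invented "terrain" predictors, a shifted second site, and coefficients chosen for illustration, not the paper's data or its GAM.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_site(n, shift):
    """Synthetic site: two terrain-like predictors and a disturbance label."""
    X = rng.normal(shift, 1.0, size=(n, 2))          # e.g. slope, wetness index
    logits = 1.5 * X[:, 0] - 1.0 * X[:, 1] - 0.5
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)
    return X, y

def fit_logistic(X, y, iters=25):
    """Binomial GLM via iteratively reweighted least squares (Newton steps)."""
    A = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(A.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-A @ beta))
        W = p * (1.0 - p)
        beta += np.linalg.solve(A.T @ (A * W[:, None]), A.T @ (y - p))
    return beta

def auroc(scores, y):
    """Area under the ROC curve by the Mann-Whitney rank-sum identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

Xc, yc = make_site(2000, 0.0)        # calibration site
Xt, yt = make_site(2000, 0.3)        # transfer site with shifted terrain
beta = fit_logistic(Xc, yc)
A = np.column_stack([np.ones(len(Xt)), Xt])
print(f"transfer AUROC: {auroc(A @ beta, yt):.3f}")
```

    Because the synthetic sites share the same response mechanism, the transferred model still discriminates well; in the paper the analogous AUROC drop (0.79+ at calibration vs 0.76 at Cape Bounty) measures how much the mechanism actually differs between regions.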

  12. Easy Demonstration of the Poisson Spot

    ERIC Educational Resources Information Center

    Gluck, Paul

    2010-01-01

    Many physics teachers have a set of slides of single, double and multiple slits to show their students the phenomena of interference and diffraction. Thomas Young's historic experiments with double slits were indeed a milestone in proving the wave nature of light. But another experiment, namely the Poisson spot, was also important historically and…

  13. On the Determination of Poisson Statistics for Haystack Radar Observations of Orbital Debris

    NASA Technical Reports Server (NTRS)

    Stokely, Christopher L.; Benbrook, James R.; Horstman, Matt

    2007-01-01

    A convenient and powerful method is used to determine if radar detections of orbital debris are observed according to Poisson statistics. This is done by analyzing the time interval between detection events. For Poisson statistics, the probability distribution of the time interval between events is shown to be an exponential distribution. This distribution is a special case of the Erlang distribution that is used in estimating traffic loads on telecommunication networks. Poisson statistics form the basis of many orbital debris models but the statistical basis of these models has not been clearly demonstrated empirically until now. Interestingly, during the fiscal year 2003 observations with the Haystack radar in a fixed staring mode, there are no statistically significant deviations observed from that expected with Poisson statistics, either independent or dependent of altitude or inclination. One would potentially expect some significant clustering of events in time as a result of satellite breakups, but the presence of Poisson statistics indicates that such debris disperse rapidly with respect to Haystack's very narrow radar beam. An exception to Poisson statistics is observed in the months following the intentional breakup of the Fengyun satellite in January 2007.
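
    The test used here, inter-event times of a Poisson process are exponentially distributed, can be checked numerically with the standard library alone: scatter detections uniformly at random over an observation window (a binomial process, which approximates a Poisson process for large counts) and inspect the gaps. Rate and window length are invented values, not Haystack parameters.

```python
import random
import statistics

random.seed(3)
rate, T = 4.0, 5000.0
n = int(rate * T)

# n detection times uniform on [0, T]; sorted, their gaps should look exponential.
times = sorted(random.uniform(0.0, T) for _ in range(n))
gaps = [b - a for a, b in zip(times, times[1:])]

m = statistics.fmean(gaps)
cv = statistics.stdev(gaps) / m                       # exponential law: cv = 1
frac_below = sum(g < m for g in gaps) / len(gaps)     # exponential law: 1 - 1/e ≈ 0.632
print(f"mean gap={m:.4f} (1/rate={1 / rate}), cv={cv:.3f}, P(gap<mean)={frac_below:.3f}")
```

    A clustered arrival record, such as the post-breakup debris mentioned in the abstract, would show cv > 1 and an excess of short gaps relative to the exponential prediction.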

  14. Modeling the cardiovascular system using a nonlinear additive autoregressive model with exogenous input

    NASA Astrophysics Data System (ADS)

    Riedl, M.; Suhrbier, A.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.

    2008-07-01

    The parameters of heart rate variability and blood pressure variability have proved to be useful analytical tools in cardiovascular physics and medicine. Model-based analysis of these variabilities additionally leads to new prognostic information about mechanisms behind regulations in the cardiovascular system. In this paper, we analyze the complex interaction between heart rate, systolic blood pressure, and respiration by nonparametric fitted nonlinear additive autoregressive models with external inputs. Therefore, we consider measurements of healthy persons and patients suffering from obstructive sleep apnea syndrome (OSAS), with and without hypertension. It is shown that the proposed nonlinear models are capable of describing short-term fluctuations in heart rate as well as systolic blood pressure significantly better than similar linear ones, which confirms the assumption of nonlinear controlled heart rate and blood pressure. Furthermore, the comparison of the nonlinear and linear approaches reveals that the heart rate and blood pressure variability in healthy subjects is caused by a higher level of noise as well as nonlinearity than in patients suffering from OSAS. The residue analysis points at a further source of heart rate and blood pressure variability in healthy subjects, in addition to heart rate, systolic blood pressure, and respiration. Comparison of the nonlinear models within and among the different groups of subjects suggests the ability to discriminate the cohorts that could lead to a stratification of hypertension risk in OSAS patients.

  15. Theory of multicolor lattice gas - A cellular automaton Poisson solver

    NASA Technical Reports Server (NTRS)

    Chen, H.; Matthaeus, W. H.; Klein, L. W.

    1990-01-01

    In the present class of cellular automaton models, involving a quiescent hydrodynamic lattice gas with multiple-valued passive labels termed 'colors', the lattice collisions change individual particle colors while preserving net color. Rigorous proofs of the multicolor lattice gases' essential features are rendered more tractable by an equivalent subparticle representation in which the color is represented by underlying two-state 'spins'. Schemes for the introduction of Dirichlet and Neumann boundary conditions are described, and two illustrative numerical test cases are used to verify the theory. The lattice gas model is thus shown to act as a Poisson equation solver.

  16. Polarizable Atomic Multipole Solutes in a Poisson-Boltzmann Continuum

    PubMed Central

    Schnieders, Michael J.; Baker, Nathan A.; Ren, Pengyu; Ponder, Jay W.

    2008-01-01

    Modeling the change in the electrostatics of organic molecules upon moving from vacuum into solvent, due to polarization, has long been an interesting problem. In vacuum, experimental values for the dipole moments and polarizabilities of small, rigid molecules are known to high accuracy; however, it has generally been difficult to determine these quantities for a polar molecule in water. A theoretical approach introduced by Onsager used vacuum properties of small molecules, including polarizability, dipole moment and size, to predict experimentally known permittivities of neat liquids via the Poisson equation. Since this important advance in understanding the condensed phase, a large number of computational methods have been developed to study solutes embedded in a continuum via numerical solutions to the Poisson-Boltzmann equation (PBE). Only recently have the classical force fields used for studying biomolecules begun to include explicit polarization in their functional forms. Here we describe the theory underlying a newly developed Polarizable Multipole Poisson-Boltzmann (PMPB) continuum electrostatics model, which builds on the Atomic Multipole Optimized Energetics for Biomolecular Applications (AMOEBA) force field. As an application of the PMPB methodology, results are presented for several small folded proteins studied by molecular dynamics in explicit water as well as embedded in the PMPB continuum. The dipole moment of each protein increased on average by a factor of 1.27 in explicit water and 1.26 in continuum solvent. The essentially identical electrostatic response in both models suggests that PMPB electrostatics offers an efficient alternative to sampling explicit solvent molecules for a variety of interesting applications, including binding energies, conformational analysis, and pKa prediction. Introduction of 150 mM salt lowered the electrostatic solvation energy between 2–13 kcal/mole, depending on the formal charge of the protein, but had only a

  17. A Poisson process approximation for generalized K-S confidence regions

    NASA Technical Reports Server (NTRS)

    Arsham, H.; Miller, D. R.

    1982-01-01

    One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.

  18. Receiver function studies in the southwestern United States and correlation between stratigraphy and Poisson's ratio, southwestern Washington State

    NASA Astrophysics Data System (ADS)

    Kilbride, Fiona Elizabeth Anne

    2000-10-01

    This dissertation consists of two separate lines of research. The first uses the receiver function technique to estimate crustal thickness and Poisson's ratio for three receiver stations in the southwestern United States. One station is located in El Paso because relatively few geophysical experiments have been conducted in the southern Rio Grande rift. Two stations are located on the Colorado Plateau, in an attempt to resolve an ongoing dispute concerning the crustal thickness of this province. The results of the receiver function studies are used as additional constraints for gravity models along two regional profiles coincident with the much shorter profiles of the Pacific to Arizona Crustal Experiment (PACE) that was led by the U.S. Geological Survey on the Colorado Plateau. Because the profiles extend into adjacent provinces, these models are balanced for isostatic equilibrium and are consistent with elevations predicted by buoyancy calculations. The results are most consistent with a thick (≈50 km) crust for the Colorado Plateau and do not support the presence of large lateral thickness variations within the plateau. The second line of research presented also derives Poisson's ratio, in this case from seismic refraction data. The results are used to interpret a structural cross-section in southwest Washington State and to shed light on a feature of low resistivity (1–5 Ω·m) located in the High Cascades (the Southern Washington Cascades Conductor or SWCC). This feature is delineated by the interpretation of magnetotelluric and seismic reflection profiles and has been interpreted to be largely composed of Lower Eocene marine sedimentary rocks. Both lines of research estimate Poisson's ratio using dissimilar techniques, but have produced results consistent with one another. Poisson's ratio for quartz-rich rocks (such as sandstones and granites) generally lies between 0.23 and 0.26, as exemplified by the upper crust of the Rio Grande rift, and by sedimentary

  19. Poisson filtering of laser ranging data

    NASA Technical Reports Server (NTRS)

    Ricklefs, Randall L.; Shelus, Peter J.

    1993-01-01

    The filtering of data in a high noise, low signal strength environment is a situation encountered routinely in lunar laser ranging (LLR) and, to a lesser extent, in artificial satellite laser ranging (SLR). The use of Poisson statistics as one of the tools for filtering LLR data is described first in a historical context. The more recent application of this statistical technique to noisy SLR data is also described.

  20. Stabilities for nonisentropic Euler-Poisson equations.

    PubMed

    Cheung, Ka Luen; Wong, Sen

    2015-01-01

    We establish stability and blowup results for the nonisentropic Euler-Poisson equations by the energy method. By analysing the second inertia, we show that the classical solutions of the system with attractive forces blow up in finite time in some special dimensions when the energy is negative. Moreover, we obtain stability results for the system in the cases of attractive and repulsive forces. PMID:25861676

  1. Geographically weighted Poisson regression for disease association mapping.

    PubMed

    Nakaya, T; Fotheringham, A S; Brunsdon, C; Charlton, M

    2005-09-15

    This paper describes geographically weighted Poisson regression (GWPR) and its semi-parametric variant as a new statistical tool for analysing disease maps arising from spatially non-stationary processes. The method is a type of conditional kernel regression which uses a spatial weighting function to estimate spatial variations in Poisson regression parameters. It enables us to draw surfaces of local parameter estimates which depict spatial variations in the relationships between disease rates and socio-economic characteristics. The method therefore can be used to test the general assumption made, often without question, in the global modelling of spatial data that the processes being modelled are stationary over space. Equally, it can be used to identify parts of the study region in which 'interesting' relationships might be occurring and where further investigation might be warranted. Such exceptions can easily be missed in traditional global modelling and therefore GWPR provides disease analysts with an important new set of statistical tools. We demonstrate the GWPR approach applied to a data set of working-age deaths in the Tokyo metropolitan area, Japan. The results indicate that there are significant spatial variations (that is, variation beyond that expected from random sampling) in the relationships between working-age mortality and occupational segregation and between working-age mortality and unemployment throughout the Tokyo metropolitan area and that, consequently, the application of traditional 'global' models would yield misleading results. PMID:16118814
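    As an illustration of the idea (not code from the article), GWPR can be sketched as an ordinary Poisson GLM fitted by iteratively reweighted least squares (IRLS), with Gaussian kernel weights centred on a target location. All names and the synthetic data below are hypothetical:

```python
import numpy as np

def local_poisson_fit(X, y, coords, target, bandwidth, n_iter=25):
    """Fit a Poisson GLM at `target` using Gaussian spatial kernel weights (IRLS)."""
    d2 = np.sum((coords - target) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))           # spatial kernel weights
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean() + 0.5)                 # stable starting point
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        W = w * mu                                   # IRLS working weights
        z = X @ beta + (y - mu) / mu                 # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(0)
n = 400
coords = rng.uniform(0, 1, size=(n, 2))
x1 = rng.normal(size=n)
slope = 0.2 + 0.6 * coords[:, 0]                     # covariate effect grows eastward
y = rng.poisson(np.exp(0.5 + slope * x1))
X = np.column_stack([np.ones(n), x1])

beta_west = local_poisson_fit(X, y, coords, np.array([0.1, 0.5]), 0.2)
beta_east = local_poisson_fit(X, y, coords, np.array([0.9, 0.5]), 0.2)
```

    Because the simulated slope varies over space, the local fit at the eastern location recovers a larger coefficient than the western one, which is exactly the kind of spatial non-stationarity GWPR is designed to map.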

  2. First- and second-order Poisson spots

    NASA Astrophysics Data System (ADS)

    Kelly, William R.; Shirley, Eric L.; Migdall, Alan L.; Polyakov, Sergey V.; Hendrix, Kurt

    2009-08-01

    Although Thomas Young is generally given credit for being the first to provide evidence against Newton's corpuscular theory of light, it was Augustin Fresnel who first stated the modern theory of diffraction. We review the history surrounding Fresnel's 1818 paper and the role of the Poisson spot in the associated controversy. We next discuss the boundary-diffraction-wave approach to calculating diffraction effects and show how it can reduce the complexity of calculating diffraction patterns. We briefly discuss a generalization of this approach that reduces the dimensionality of integrals needed to calculate the complete diffraction pattern of any order diffraction effect. We repeat earlier demonstrations of the conventional Poisson spot and discuss an experimental setup for demonstrating an analogous phenomenon that we call a "second-order Poisson spot." Several features of the diffraction pattern can be explained simply by considering the path lengths of singly and doubly bent paths and distinguishing between first- and second-order diffraction effects related to such paths, respectively.

  3. Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.

    PubMed

    Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence

    2012-12-01

    A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA. PMID:23741284

  4. On the singularity of the Vlasov-Poisson system

    SciTech Connect

    Zheng, Jian; Qin, Hong

    2013-09-15

    The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.

  5. On the Singularity of the Vlasov-Poisson System

    SciTech Connect

    Zheng, Jian; Qin, Hong

    2013-04-26

    The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.

  6. Additive Manufacturing Modeling and Simulation: A Literature Review for Electron Beam Free Form Fabrication

    NASA Technical Reports Server (NTRS)

    Seufzer, William J.

    2014-01-01

    Additive manufacturing is coming into industrial use and has several desirable attributes. Control of the deposition remains a complex challenge, and so this literature review was initiated to capture current modeling efforts in the field of additive manufacturing. This paper summarizes about 10 years of modeling and simulation related to both welding and additive manufacturing. The goals were to learn who is doing what in modeling and simulation, to summarize various approaches taken to create models, and to identify research gaps. Later sections in the report summarize implications for closed-loop control of the process, implications for local research efforts, and implications for local modeling efforts.

  7. Hydrodynamic limit of Wigner-Poisson kinetic theory: Revisited

    SciTech Connect

    Akbari-Moghanjoughi, M.

    2015-02-15

    In this paper, we revisit the hydrodynamic limit of the Langmuir wave dispersion relation based on the Wigner-Poisson model in connection with that obtained directly from the original Lindhard dielectric function based on the random-phase approximation. It is observed that the (fourth-order) expansion of the exact Lindhard dielectric constant correctly reduces to the hydrodynamic dispersion relation with an additional term of fourth order, besides that caused by the quantum diffraction effect. It is also revealed that the generalized Lindhard dielectric theory accounts for the recently discovered Shukla-Eliasson attractive potential (SEAP). However, the expansion of the exact Lindhard static dielectric function leads to a k⁴ term of different magnitude than that obtained from the linearized quantum hydrodynamics model. It is shown that a correction factor of 1/9 should be included in the term arising from the quantum Bohm potential of the momentum balance equation in the fluid model in order for a correct plasma dielectric response treatment. Finally, it is observed that the long-range oscillatory screening potential (Friedel oscillations) of type cos(2k_F r)/r³, which is a consequence of the divergence of the dielectric function at the point k = 2k_F in a quantum plasma, arises due to the finiteness of the Fermi wavenumber and is smeared out in the limit of very high electron number densities, typical of white dwarfs and neutron stars. In the very low electron number-density regime, typical of semiconductors and metals, where the Friedel oscillation wavelength becomes much larger compared to the interparticle distances, the SEAP appears with a much deeper potential valley. It is remarked that the fourth-order approximate Lindhard dielectric constant approaches that of the linearized quantum hydrodynamics in the limit of very high electron number density. By evaluation of the imaginary part of the Lindhard dielectric function, it is shown that the

  8. Multiprocessing and Correction Algorithm of 3D-models for Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Anamova, R. R.; Zelenov, S. V.; Kuprikov, M. U.; Ripetskiy, A. V.

    2016-07-01

    This article addresses matters related to additive manufacturing preparation. A layer-by-layer model presentation was developed on the basis of a routing method. Methods for correction of errors in the layer-by-layer model presentation were developed. A multiprocessing algorithm for forming an additive manufacturing batch file was realized.
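    As context for what layer-by-layer model preparation involves, a minimal (hypothetical, not the authors' routing method) slicing step intersects each mesh triangle with a horizontal plane to obtain the contour segments of one layer; vertices lying exactly on the plane are ignored here for brevity:

```python
import numpy as np

def slice_triangle(tri, z):
    """Intersect one triangle (3x3 array of vertices) with the plane Z = z.
    Returns the two intersection points as a 2x3 segment, or None."""
    pts = []
    for i in range(3):
        p, q = tri[i], tri[(i + 1) % 3]
        if (p[2] - z) * (q[2] - z) < 0:              # edge strictly crosses the plane
            t = (z - p[2]) / (q[2] - p[2])           # linear interpolation parameter
            pts.append(p + t * (q - p))
    return np.array(pts) if len(pts) == 2 else None

# One triangle spanning z = 0..1, sliced at mid-height
tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
seg = slice_triangle(tri, 0.5)
```

    A full slicer would apply this to every triangle, chain the resulting segments into closed contours per layer, and then repair the kinds of errors (gaps, duplicate edges) that the article's correction methods address.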

  9. Validation analysis of probabilistic models of dietary exposure to food additives.

    PubMed

    Gilsenan, M B; Thompson, R L; Lambe, J; Gibney, M J

    2003-10-01

    The validity of a range of simple conceptual models designed specifically for the estimation of food additive intakes using probabilistic analysis was assessed. Modelled intake estimates that fell below traditional conservative point estimates of intake and above 'true' additive intakes (calculated from a reference database at brand level) were considered to be in a valid region. Models were developed for 10 food additives by combining food intake data, the probability of an additive being present in a food group and additive concentration data. Food intake and additive concentration data were entered as raw data or as a lognormal distribution, and the probability of an additive being present was entered based on the per cent brands or the per cent eating occasions within a food group that contained an additive. Since the three model components assumed two possible modes of input, the validity of eight (2³) model combinations was assessed. All model inputs were derived from the reference database. An iterative approach was employed in which the validity of individual model components was assessed first, followed by validation of full conceptual models. While the distribution of intake estimates from models fell below conservative intakes, which assume that the additive is present at maximum permitted levels (MPLs) in all foods in which it is permitted, intake estimates were not consistently above 'true' intakes. These analyses indicate the need for more complex models for the estimation of food additive intakes using probabilistic analysis. Such models should incorporate information on market share and/or brand loyalty. PMID:14555358
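    A toy Monte Carlo version of such a probabilistic exposure model, for a single food group, combines the three components named above: food intake, probability of additive presence, and additive concentration. All distributions and numbers below are invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 100_000

# Daily intake of the food (g/day), lognormal
food_intake = rng.lognormal(mean=np.log(50), sigma=0.5, size=n_sim)
# Probability that the consumed brand actually contains the additive
present = rng.random(n_sim) < 0.3
# Additive concentration when present (mg/kg), lognormal below the MPL
conc = rng.lognormal(mean=np.log(100), sigma=0.4, size=n_sim)

additive_mg = food_intake / 1000 * conc * present   # probabilistic intake, mg/day
mpl_estimate = food_intake / 1000 * 250             # conservative estimate at MPL = 250 mg/kg

p95_model = np.percentile(additive_mg, 95)
p95_mpl = np.percentile(mpl_estimate, 95)
```

    Because the modelled concentration and presence probability sit below the MPL-based worst case, the simulated high-percentile intake falls below the conservative point estimate, which is one half of the validity criterion described in the abstract.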

  10. Ductile Titanium Alloy with Low Poisson's Ratio

    SciTech Connect

    Hao, Y. L.; Li, S. J.; Sun, B. B.; Sui, M. L.; Yang, R.

    2007-05-25

    We report a ductile β-type titanium alloy with body-centered cubic (bcc) crystal structure having a low Poisson's ratio of 0.14. The almost identical ultralow bulk and shear moduli of ≈24 GPa combined with an ultrahigh strength of ≈0.9 GPa contribute to easy crystal distortion due to much-weakened chemical bonding of atoms in the crystal, leading to significant elastic softening in tension and elastic hardening in compression. The peculiar elastic and plastic deformation behaviors of the alloy are interpreted as a result of approaching the elastic limit of the bcc crystal under applied stress.
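    For isotropic elasticity, the reported numbers can be cross-checked against the standard relation ν = (3B − 2G) / (2(3B + G)); with B ≈ G ≈ 24 GPa this gives ν = 0.125, in reasonable agreement with the quoted 0.14 (the alloy is not perfectly isotropic, so exact agreement is not expected):

```python
def poisson_ratio(B, G):
    """Isotropic elasticity: nu = (3B - 2G) / (2(3B + G))."""
    return (3 * B - 2 * G) / (2 * (3 * B + G))

# Equal bulk and shear moduli of 24 GPa, as reported for the alloy
nu = poisson_ratio(24.0, 24.0)  # -> 0.125
```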

  11. Linear stability of stationary solutions of the Vlasov-Poisson system in three dimensions

    SciTech Connect

    Batt, J.; Rein, G.; Morrison, P. J.

    1993-03-01

    Rigorous results on the stability of stationary solutions of the Vlasov-Poisson system are obtained in both the plasma physics and stellar dynamics contexts. It is proven that stationary solutions in the plasma physics (stellar dynamics) case are linearly stable if they are decreasing (increasing) functions of the local, i.e. particle, energy. The main tool in the analysis is the free energy of the system, a conserved quantity. In addition, an appropriate global existence result is proven for the linearized Vlasov-Poisson system and the existence of stationary solutions that satisfy the above stability condition is established.

  12. Reentrant Origami-Based Metamaterials with Negative Poisson's Ratio and Bistability.

    PubMed

    Yasuda, H; Yang, J

    2015-05-01

    We investigate the unique mechanical properties of reentrant 3D origami structures based on the Tachi-Miura polyhedron (TMP). We explore the potential usage as mechanical metamaterials that exhibit tunable negative Poisson's ratio and structural bistability simultaneously. We show analytically and experimentally that the Poisson's ratio changes from positive to negative and vice versa during its folding motion. In addition, we verify the bistable mechanism of the reentrant 3D TMP under rigid origami configurations without relying on the buckling motions of planar origami surfaces. This study forms a foundation in designing and constructing TMP-based metamaterials in the form of bellowslike structures for engineering applications. PMID:26001009

  13. Stationary response of multi-degree-of-freedom vibro-impact systems to Poisson white noises

    NASA Astrophysics Data System (ADS)

    Wu, Y.; Zhu, W. Q.

    2008-01-01

    The stationary response of multi-degree-of-freedom (MDOF) vibro-impact (VI) systems to random pulse trains is studied. The system is formulated as a stochastically excited and dissipated Hamiltonian system. The constraints are modeled as non-linear springs according to the Hertz contact law. The random pulse trains are modeled as Poisson white noises. The approximate stationary probability density function (PDF) for the response of MDOF dissipated Hamiltonian systems to Poisson white noises is obtained by solving the fourth-order generalized Fokker-Planck-Kolmogorov (FPK) equation using perturbation approach. As examples, two-degree-of-freedom (2DOF) VI systems under external and parametric Poisson white noise excitations, respectively, are investigated. The validity of the proposed approach is confirmed by using the results obtained from Monte Carlo simulation. It is shown that the non-Gaussian behaviour depends on the product of the mean arrival rate of the impulses and the relaxation time of the oscillator.
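    The dependence of non-Gaussianity on the product of the impulse arrival rate and the relaxation time can be illustrated with a first-order relaxation system, a much simpler stand-in for the MDOF vibro-impact model, driven by a Poisson pulse train; for such shot noise the excess kurtosis of the response scales roughly as 1/(λτ):

```python
import numpy as np

def shot_noise_excess_kurtosis(lam, tau, dt, n_steps, rng):
    """Simulate x' = -x/tau + Poisson pulse train (unit-normal pulse magnitudes)
    on a uniform time grid and return the excess kurtosis of the response."""
    counts = rng.poisson(lam * dt, n_steps)                  # impulses per step
    jumps = rng.normal(0.0, 1.0, n_steps) * np.sqrt(counts)  # summed impulse sizes
    decay = np.exp(-dt / tau)
    xs = np.empty(n_steps)
    x = 0.0
    for i in range(n_steps):
        x = x * decay + jumps[i]                             # free decay plus jumps
        xs[i] = x
    xs = xs[n_steps // 10:]                                  # drop the transient
    xs = xs - xs.mean()
    return (xs**4).mean() / (xs**2).mean() ** 2 - 3.0

rng = np.random.default_rng(1)
sparse = shot_noise_excess_kurtosis(0.5, 1.0, 0.02, 200_000, rng)   # lam*tau = 0.5
dense = shot_noise_excess_kurtosis(20.0, 1.0, 0.02, 200_000, rng)   # lam*tau = 20
```

    Sparse pulses produce a strongly non-Gaussian (leptokurtic) response, while dense pulses drive the response toward the Gaussian white-noise limit, consistent with the abstract's remark about the product of mean arrival rate and relaxation time.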

  14. An introduction to modeling longitudinal data with generalized additive models: applications to single-case designs.

    PubMed

    Sullivan, Kristynn J; Shadish, William R; Steiner, Peter M

    2015-03-01

    Single-case designs (SCDs) are short time series that assess intervention effects by measuring units repeatedly over time in both the presence and absence of treatment. This article introduces a statistical technique for analyzing SCD data that has not been much used in psychological and educational research: generalized additive models (GAMs). In parametric regression, the researcher must choose a functional form to impose on the data, for example, that trend over time is linear. GAMs reverse this process by letting the data inform the choice of functional form. In this article we review the problem that trend poses in SCDs, discuss how current SCD analytic methods approach trend, describe GAMs as a possible solution, suggest a GAM model testing procedure for examining the presence of trend in SCDs, present a small simulation to show the statistical properties of GAMs, and illustrate the procedure on 3 examples of different lengths. Results suggest that GAMs may be very useful both as a form of sensitivity analysis for checking the plausibility of assumptions about trend and as a primary data analysis strategy for testing treatment effects. We conclude with a discussion of some problems with GAMs and some future directions for research on the application of GAMs to SCDs. PMID:24885341
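    A minimal penalized-spline smoother conveys the core GAM idea of letting the data choose the functional form. This sketch uses a truncated-line basis with a fixed ridge penalty; real GAM software selects the penalty automatically, so treat the names and settings as illustrative:

```python
import numpy as np

def fit_pspline(t, y, n_knots=8, lam=1.0):
    """Penalized spline: y ~ a + b*t + sum_k c_k * (t - knot_k)_+,
    with a ridge penalty on the c_k (a minimal stand-in for a GAM smoother)."""
    knots = np.linspace(t.min(), t.max(), n_knots + 2)[1:-1]
    B = np.column_stack([np.ones_like(t), t] + [np.maximum(t - k, 0) for k in knots])
    P = np.eye(B.shape[1])
    P[:2, :2] = 0                      # do not penalize the intercept and linear trend
    coef = np.linalg.solve(B.T @ B + lam * P, B.T @ y)
    return B @ coef

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 60)
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.3, t.size)   # nonlinear trend + noise
fit = fit_pspline(t, y)
```

    The fitted curve follows the nonlinear trend without the analyst having pre-specified its shape, which is the property that makes GAMs useful for checking trend assumptions in SCD data.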

  15. A technique for determining the Poisson's ratio of thin films

    SciTech Connect

    Krulevitch, P.

    1996-04-18

    The theory and experimental approach for a new technique used to determine the Poisson's ratio of thin films are presented. The method involves taking the ratio of curvatures of cantilever beams and plates micromachined out of the film of interest. Curvature is induced by a through-thickness variation in residual stress, or by depositing a thin film under residual stress onto the beams and plates. This approach is made practical by the fact that the two curvatures are the only required experimental parameters, and small calibration errors cancel when the ratio is taken. To confirm the accuracy of the technique, it was tested on a 2.5 μm thick film of single-crystal silicon. Micromachined beams 1 mm long by 100 μm wide and plates 700 μm by 700 μm were coated with 35 nm of gold and the curvatures were measured with a scanning optical profilometer. For the orientation tested ([100] film normal, [011] beam axis, [01̄1] contraction direction) silicon's Poisson's ratio is 0.064, and the measured result was 0.066 ± 0.043. The uncertainty in this technique is due primarily to variation in the measured curvatures, and should range from ±0.02 to 0.04 with proper measurement technique.

  16. DG Poisson algebra and its universal enveloping algebra

    NASA Astrophysics Data System (ADS)

    Lü, JiaFeng; Wang, XingTing; Zhuang, GuangBin

    2016-05-01

    In this paper, we introduce the notions of differential graded (DG) Poisson algebra and DG Poisson module. Let $A$ be any DG Poisson algebra. We construct the universal enveloping algebra of $A$ explicitly, which is denoted by $A^{ue}$. We show that $A^{ue}$ has a natural DG algebra structure and it satisfies certain universal property. As a consequence of the universal property, it is proved that the category of DG Poisson modules over $A$ is isomorphic to the category of DG modules over $A^{ue}$. Furthermore, we prove that the notion of universal enveloping algebra $A^{ue}$ is well-behaved under opposite algebra and tensor product of DG Poisson algebras. Practical examples of DG Poisson algebras are given throughout the paper including those arising from differential geometry and homological algebra.

  17. Fitting additive hazards models for case-cohort studies: a multiple imputation approach.

    PubMed

    Jung, Jinhyouk; Harel, Ofer; Kang, Sangwook

    2016-07-30

    In this paper, we consider fitting semiparametric additive hazards models for case-cohort studies using a multiple imputation approach. In a case-cohort study, main exposure variables are measured only on some selected subjects, but other covariates are often available for the whole cohort. We consider this as a special case of a missing covariate by design. We propose to employ a popular incomplete data method, multiple imputation, for estimation of the regression parameters in additive hazards models. For imputation models, an imputation modeling procedure based on a rejection sampling is developed. A simple imputation modeling that can naturally be applied to a general missing-at-random situation is also considered and compared with the rejection sampling method via extensive simulation studies. In addition, a misspecification aspect in imputation modeling is investigated. The proposed procedures are illustrated using a cancer data example. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26194861

  18. Anisotropy of Poisson's Ratio in Transversely Isotropic Rocks

    NASA Astrophysics Data System (ADS)

    Tokmakova, S. P.

    2008-06-01

    The Poisson's ratio of shales with different clay mineralogy and porosity, and of many shale rocks around the world, including brine-saturated Africa shales and sands, North Sea shales, and gas- and brine-saturated Canadian carbonates, was estimated from the values of Thomsen's parameters. The anisotropy of Poisson's ratio was calculated for a set of TI samples with "normal" and "anomalous" polarization, covering both "normal" and auxetic (negative) values of Poisson's ratio.
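    For reference, in the simpler isotropic case (not the TI formulas involving Thomsen's parameters used in the article), Poisson's ratio follows directly from the P- to S-wave velocity ratio:

```python
def poisson_from_velocities(vp, vs):
    """Isotropic Poisson's ratio from P- and S-wave speeds:
    nu = (vp^2 - 2*vs^2) / (2*(vp^2 - vs^2))."""
    r2 = (vp / vs) ** 2
    return (r2 - 2) / (2 * (r2 - 1))

nu = poisson_from_velocities(6.0, 3.5)  # km/s, typical crustal values -> ~0.24
```

    The classic benchmark vp/vs = √3 recovers ν = 0.25, the textbook value for a Poisson solid.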

  19. Analyzing Seasonal Variations in Suicide With Fourier Poisson Time-Series Regression: A Registry-Based Study From Norway, 1969-2007.

    PubMed

    Bramness, Jørgen G; Walby, Fredrik A; Morken, Gunnar; Røislien, Jo

    2015-08-01

    Seasonal variation in the number of suicides has long been acknowledged. It has been suggested that this seasonality has declined in recent years, but studies have generally used statistical methods incapable of confirming this. We examined all suicides occurring in Norway during 1969-2007 (more than 20,000 suicides in total) to establish whether seasonality decreased over time. Fitting of additive Fourier Poisson time-series regression models allowed for formal testing of a possible linear decrease in seasonality, or a reduction at a specific point in time, while adjusting for a possible smooth nonlinear long-term change without having to categorize time into discrete yearly units. The models were compared using Akaike's Information Criterion and analysis of variance. A model with a seasonal pattern was significantly superior to a model without one. There was a reduction in seasonality during the period. The model assuming a linear decrease in seasonality and the model assuming a change at a specific point in time were both superior to a model assuming constant seasonality, thus confirming by formal statistical testing that the magnitude of the seasonality in suicides has diminished. The additive Fourier Poisson time-series regression model would also be useful for studying other temporal phenomena with seasonal components. PMID:26081677
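    The core of such a model is a Poisson log-linear regression with sine and cosine terms at the seasonal frequency. A minimal IRLS sketch on simulated monthly counts (synthetic data, not the Norwegian registry) recovers the seasonal amplitude:

```python
import numpy as np

def poisson_glm(X, y, n_iter=25):
    """Poisson log-linear regression fitted by IRLS."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean() + 0.5)                 # stable starting point
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu                 # working response
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta

rng = np.random.default_rng(7)
month = np.arange(12 * 30)                           # 30 years of monthly counts
phase = 2 * np.pi * month / 12
y = rng.poisson(np.exp(3.0 + 0.3 * np.cos(phase)))   # true seasonal amplitude 0.3
X = np.column_stack([np.ones_like(phase), np.cos(phase), np.sin(phase)])
beta = poisson_glm(X, y)
amplitude = np.hypot(beta[1], beta[2])               # estimated seasonal amplitude
```

    A formal test of declining seasonality, as in the study, would add interactions between the Fourier terms and (smooth functions of) calendar time and compare the models by AIC.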

  20. Heterogeneous PVA hydrogels with micro-cells of both positive and negative Poisson's ratios.

    PubMed

    Ma, Yanxuan; Zheng, Yudong; Meng, Haoye; Song, Wenhui; Yao, Xuefeng; Lv, Hexiang

    2013-07-01

    Many models describing the deformation of general foam or auxetic materials are based on the assumption of homogeneity and order within the materials. However, non-uniform heterogeneity is often an inherent nature of many porous materials and composites, but difficult to measure. In this work, inspired by the structures of auxetic materials, porous PVA hydrogels with internal inby-concave pores (IICP) or interconnected pores (ICP) were designed and processed. The deformation of the PVA hydrogels under compression was tested and their Poisson's ratio was characterized. The results indicated that the size, shape and distribution of the pores in the hydrogel matrix had a strong influence on the local Poisson's ratio, which varied from positive to negative at the micro-scale. The size-dependency of the local Poisson's ratio reflected and quantified the uniformity and heterogeneity of the micro-porous structures in the PVA hydrogels. PMID:23648366

  1. Pointwise estimates of solutions for the multi-dimensional bipolar Euler-Poisson system

    NASA Astrophysics Data System (ADS)

    Wu, Zhigang; Li, Yeping

    2016-06-01

    In the paper, we consider a multi-dimensional bipolar hydrodynamic model from semiconductor devices and plasmas. This system takes the form of Euler-Poisson with electric field and frictional damping added to the momentum equations. By making a new analysis on Green's functions for the Euler system with damping and the Euler-Poisson system with damping, we obtain the pointwise estimates of the solution for the multi-dimensional bipolar Euler-Poisson system. As a by-product, we extend the decay rates of the densities ρ_i (i = 1, 2) in the usual L²-norm to the L^p-norm with p ≥ 1, and the time-decay rates of the momenta m_i (i = 1, 2) in the L²-norm to the L^p-norm with p > 1; all of the decay rates here are optimal.

  2. Dielectric Boundary Forces in Numerical Poisson-Boltzmann Methods: Theory and Numerical Strategies.

    PubMed

    Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2011-10-01

    Continuum modeling of electrostatic interactions based upon the numerical solutions of the Poisson-Boltzmann equation has been widely adopted in biomolecular applications. To extend their applications to molecular dynamics and energy minimization, robust and efficient methodologies to compute solvation forces must be developed. In this study, we have first reviewed the theory for the computation of dielectric boundary forces based on the definition of the Maxwell stress tensor. This is followed by a new formulation of the dielectric boundary force suitable for the finite-difference Poisson-Boltzmann methods. We have validated the new formulation with idealized analytical systems and realistic molecular systems. PMID:22125339

  3. Dielectric boundary force in numerical Poisson-Boltzmann methods: Theory and numerical strategies

    NASA Astrophysics Data System (ADS)

    Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2011-10-01

    Continuum modeling of electrostatic interactions based upon the numerical solutions of the Poisson-Boltzmann equation has been widely adopted in biomolecular applications. To extend their applications to molecular dynamics and energy minimization, robust and efficient methodologies to compute solvation forces must be developed. In this study, we have first reviewed the theory for the computation of dielectric boundary force based on the definition of the Maxwell stress tensor. This is followed by a new formulation of the dielectric boundary force suitable for the finite-difference Poisson-Boltzmann methods. We have validated the new formulation with idealized analytical systems and realistic molecular systems.

  4. Noise parameter estimation for poisson corrupted images using variance stabilization transforms.

    PubMed

    Jin, Xiaodan; Xu, Zhenyu; Hirakawa, Keigo

    2014-03-01

    Noise is present in all images captured by real-world image sensors. The Poisson distribution is said to model the stochastic nature of the photon arrival process and agrees with the distribution of measured pixel values. We propose a method for estimating unknown noise parameters from Poisson corrupted images using properties of variance stabilization. With a significantly lower computational complexity and improved stability, the proposed estimation technique yields noise parameters that are comparable in accuracy to the state-of-the-art methods. PMID:24723530
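    One classical variance-stabilization tool is the Anscombe transform, 2√(x + 3/8), which maps Poisson data to approximately unit variance. A toy estimator of an unknown sensor gain, a simplified stand-in for the paper's method, searches for the gain that makes the stabilized variance closest to one:

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform for Poisson data (output variance ~ 1)."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

rng = np.random.default_rng(5)
true_gain = 2.5
raw = true_gain * rng.poisson(30.0, size=100_000)    # Poisson counts scaled by gain

# Grid search: the correct gain makes the stabilized variance closest to 1,
# since var(anscombe(raw / g)) is roughly true_gain / g for large counts
gains = np.linspace(0.5, 5.0, 200)
errs = [abs(np.var(anscombe(raw / g)) - 1.0) for g in gains]
est_gain = gains[int(np.argmin(errs))]
```

    Real estimators work from a single noisy image by pooling local mean/variance statistics, but the stabilization principle is the same.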

  5. Stochastic search with Poisson and deterministic resetting

    NASA Astrophysics Data System (ADS)

    Bhat, Uttam; De Bacco, Caterina; Redner, S.

    2016-08-01

    We investigate a stochastic search process in one, two, and three dimensions in which N diffusing searchers that all start at x 0 seek a target at the origin. Each of the searchers is also reset to its starting point, either with rate r, or deterministically, with a reset time T. In one dimension and for a small number of searchers, the search time and the search cost are minimized at a non-zero optimal reset rate (or time), while for sufficiently large N, resetting always hinders the search. In general, a single searcher leads to the minimum search cost in one, two, and three dimensions. When the resetting is deterministic, several unexpected features arise for N searchers, including the search time being independent of T as 1/T → 0 and the search cost being independent of N over a suitable range of N. Moreover, deterministic resetting typically leads to a lower search cost than Poisson resetting.
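    For a single diffusive searcher in one dimension, the mean search time under Poisson resetting has a known closed form, T(r) = (e^{√(r/D)·x0} − 1)/r (the Evans-Majumdar result, stated here as background rather than taken from this article), and a quick scan over r exhibits the non-zero optimal reset rate mentioned above:

```python
import math

def mean_search_time(r, D=1.0, x0=1.0):
    """Known MFPT for 1D diffusive search with Poisson resetting at rate r."""
    a = math.sqrt(r / D)
    return (math.exp(a * x0) - 1.0) / r

rates = [0.01 * k for k in range(1, 1000)]    # grid of reset rates
times = [mean_search_time(r) for r in rates]
r_opt = rates[times.index(min(times))]        # optimal rate, ~2.54 for D = x0 = 1
```

    The minimum is interior: both very slow resetting (long excursions away from the target) and very fast resetting (no time to reach the target) increase the mean search time.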

  6. Efficient information transfer by Poisson neurons.

    PubMed

    Kostal, Lubomir; Shinomoto, Shigeru

    2016-06-01

    Recently, it has been suggested that certain neurons with Poissonian spiking statistics may communicate by discontinuously switching between two levels of firing intensity. Such a situation resembles in many ways the optimal information transmission protocol for the continuous-time Poisson channel known from information theory. In this contribution we employ the classical information-theoretic results to analyze the efficiency of such a transmission from different perspectives, emphasising the neurobiological viewpoint. We address both the ultimate limits, in terms of the information capacity under metabolic cost constraints, and the achievable bounds on performance at rates below capacity with fixed decoding error probability. In doing so we discuss optimal values of experimentally measurable quantities that can be compared with the actual neuronal recordings in a future effort. PMID:27106184

  7. On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis

    PubMed Central

    Li, Bing; Chun, Hyonho; Zhao, Hongyu

    2014-01-01

    We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use a one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the Gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis. PMID:26401064

  8. Continuum description of the Poisson's ratio of ligament and tendon under finite deformation.

    PubMed

    Swedberg, Aaron M; Reese, Shawn P; Maas, Steve A; Ellis, Benjamin J; Weiss, Jeffrey A

    2014-09-22

    Ligaments and tendons undergo volume loss when stretched along the primary fiber axis, which is evident by the large, strain-dependent Poisson's ratios measured during quasi-static tensile tests. Continuum constitutive models that have been used to describe ligament material behavior generally assume incompressibility, which does not reflect the volumetric material behavior seen experimentally. We developed a strain energy equation that describes large, strain dependent Poisson's ratios and nonlinear, transversely isotropic behavior using a novel method to numerically enforce the desired volumetric behavior. The Cauchy stress and spatial elasticity tensors for this strain energy equation were derived and implemented in the FEBio finite element software (www.febio.org). As part of this objective, we derived the Cauchy stress and spatial elasticity tensors for a compressible transversely isotropic material, which to our knowledge have not appeared previously in the literature. Elastic simulations demonstrated that the model predicted the nonlinear, upwardly concave uniaxial stress-strain behavior while also predicting a strain-dependent Poisson's ratio. Biphasic simulations of stress relaxation predicted a large outward fluid flux and substantial relaxation of the peak stress. Thus, the results of this study demonstrate that the viscoelastic behavior of ligaments and tendons can be predicted by modeling fluid movement when combined with a large Poisson's ratio. Further, the constitutive framework provides the means for accurate simulations of ligament volumetric material behavior without the need to resort to micromechanical or homogenization methods, thus facilitating its use in large scale, whole joint models. PMID:25134434

  10. Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time

    NASA Technical Reports Server (NTRS)

    Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.

    1993-01-01

    A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.
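
    The decomposition described above can be illustrated with the multiplicative ML-EM update for a Poisson linear model, a standard maximum-likelihood estimator for this likelihood (not necessarily the authors' one-count-at-a-time algorithm). The spectral templates in the usage below are invented for illustration.

```python
import numpy as np

def poisson_linear_mlem(counts, F, n_iter=500):
    """ML estimate of nonnegative component rates a in counts ~ Poisson(F @ a)
    via the multiplicative EM update (ML-EM, as used in emission tomography).
    F holds the known response of each model component in each channel."""
    a = np.ones(F.shape[1])
    col_sums = F.sum(axis=0)
    for _ in range(n_iter):
        model = np.maximum(F @ a, 1e-300)  # guard against division by zero
        a *= (F.T @ (counts / model)) / col_sums
    return a
```

    A typical use mimics a gamma-ray spectrum: two line features plus a flat background as columns of F, with the fluxes recovered from the Poisson counts.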

  11. Updating a Classic: "The Poisson Distribution and the Supreme Court" Revisited

    ERIC Educational Resources Information Center

    Cole, Julio H.

    2010-01-01

    W. A. Wallis studied vacancies in the US Supreme Court over a 96-year period (1837-1932) and found that the distribution of the number of vacancies per year could be characterized by a Poisson model. This note updates this classic study.
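
    A Wallis-style analysis can be sketched as follows: estimate the Poisson rate from the yearly counts and tabulate observed against expected frequencies. Synthetic counts are used in the example; the historical vacancy data are not reproduced here.

```python
import numpy as np
from scipy import stats

def poisson_fit_table(counts_per_year):
    """Fit a Poisson model to yearly event counts and tabulate observed
    vs expected frequencies for each count value 0, 1, ..., max."""
    counts_per_year = np.asarray(counts_per_year, dtype=int)
    n = len(counts_per_year)
    lam = counts_per_year.mean()  # ML estimate of the Poisson rate
    kmax = counts_per_year.max()
    observed = np.bincount(counts_per_year, minlength=kmax + 1)
    expected = n * stats.poisson.pmf(np.arange(kmax + 1), lam)
    return lam, observed, expected
```

    The observed and expected columns can then be compared directly, or fed into a chi-square goodness-of-fit test.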

  12. Testing a Gender Additive Model: The Role of Body Image in Adolescent Depression

    ERIC Educational Resources Information Center

    Bearman, Sarah Kate; Stice, Eric

    2008-01-01

    Despite consistent evidence that adolescent girls are at greater risk of developing depression than adolescent boys, risk factor models that account for this difference have been elusive. The objective of this research was to examine risk factors proposed by the "gender additive" model of depression that attempts to partially explain the increased…

  13. Application of the sine-Poisson equation in solar magnetostatics

    NASA Technical Reports Server (NTRS)

    Webb, G. M.; Zank, G. P.

    1990-01-01

    Solutions of the sine-Poisson equations are used to construct a class of isothermal magnetostatic atmospheres, with one ignorable coordinate corresponding to a uniform gravitational field in a plane geometry. The distributed current j in the model is directed along the x-axis, where x is the horizontal ignorable coordinate; j varies as the sine of the magnetostatic potential and falls off exponentially with distance vertical to the base with an e-folding distance equal to the gravitational scale height. Solutions for the magnetostatic potential A corresponding to the one-soliton, two-soliton, and breather solutions of the sine-Gordon equation are studied. Depending on the values of the free parameters in the soliton solutions, horizontally periodic magnetostatic structures are obtained possessing either a single X-type neutral point, multiple neutral X-points, or no X-points.

  14. Nonstationary elementary-field light randomly triggered by Poisson impulses.

    PubMed

    Fernández-Pousa, Carlos R

    2013-05-01

    A stochastic theory of nonstationary light describing the random emission of elementary pulses is presented. The emission is governed by a nonhomogeneous Poisson point process determined by a time-varying emission rate. The model describes, in the appropriate limits, stationary, cyclostationary, locally stationary, and pulsed radiation, and reduces to a Gaussian theory in the limit of dense emission rate. The first- and second-order coherence theories are solved after the computation of second- and fourth-order correlation functions by use of the characteristic function. The ergodicity of second-order correlations under various types of detectors is explored and a number of observables, including optical spectrum, amplitude, and intensity correlations, are analyzed. PMID:23695325

  15. Numerical Solution of the Gyrokinetic Poisson Equation in TEMPEST

    NASA Astrophysics Data System (ADS)

    Dorr, Milo; Cohen, Bruce; Cohen, Ronald; Dimits, Andris; Hittinger, Jeffrey; Kerbel, Gary; Nevins, William; Rognlien, Thomas; Umansky, Maxim; Xiong, Andrew; Xu, Xueqiao

    2006-10-01

    The gyrokinetic Poisson (GKP) model in the TEMPEST continuum gyrokinetic edge plasma code yields the electrostatic potential due to the charge density of electrons and an arbitrary number of ion species, including the effects of gyroaveraging in the limit kρ ≪ 1. The TEMPEST equations are integrated as a differential algebraic system involving a nonlinear system solve via Newton-Krylov iteration. The GKP preconditioner block is inverted using a multigrid preconditioned conjugate gradient (CG) algorithm. Electrons are treated as kinetic or adiabatic. The Boltzmann relation in the adiabatic option employs flux surface averaging to maintain neutrality within field lines and is solved self-consistently with the GKP equation. A decomposition procedure circumvents the near singularity of the GKP Jacobian block that otherwise degrades CG convergence.

  16. Dual Poisson-Disk Tiling: an efficient method for distributing features on arbitrary surfaces.

    PubMed

    Li, Hongwei; Lo, Kui-Yip; Leung, Man-Kang; Fu, Chi-Wing

    2008-01-01

    This paper introduces a novel surface-modeling method to stochastically distribute features on arbitrary topological surfaces. The generated distribution of features follows the Poisson disk distribution, so we can have a minimum separation guarantee between features and avoid feature overlap. With the proposed method, we not only can interactively adjust and edit features with the help of the proposed Poisson disk map, but can also efficiently re-distribute features on object surfaces. The underlying mechanism is our dual tiling scheme, known as the Dual Poisson-Disk Tiling. First, we compute the dual of a given surface parameterization, and tile the dual surface by our specially-designed dual tiles; during the pre-processing, the Poisson disk distribution has been pre-generated on these tiles. By dual tiling, we can nicely avoid the problem of corner heterogeneity when tiling arbitrary parameterized surfaces, and can also reduce the tile set complexity. Furthermore, the dual tiling scheme is non-periodic, and we can also maintain a manageable tile set. To demonstrate the applicability of this technique, we explore a number of surface-modeling applications: pattern and shape distribution, bump-mapping, illustrative rendering, mold simulation, the modeling of separable features in texture and BTF, and the distribution of geometric textures in shell space. PMID:18599912
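
    The defining minimum-separation property of a Poisson disk distribution can be illustrated with naive dart throwing; the paper's dual tiling scheme is precisely a way to avoid this brute-force approach, so the sketch below illustrates only the target distribution, not the authors' method.

```python
import numpy as np

def poisson_disk_dart_throwing(radius, n_trials=5000, seed=0):
    """Generate a 2D Poisson disk distribution in the unit square by dart
    throwing: accept a candidate point only if it keeps at least `radius`
    separation from every previously accepted point."""
    rng = np.random.default_rng(seed)
    points = []
    for _ in range(n_trials):
        p = rng.random(2)
        if points:
            d2 = ((np.asarray(points) - p) ** 2).sum(axis=1)
            if d2.min() < radius * radius:
                continue  # too close to an existing feature: reject
        points.append(p)
    return np.asarray(points)
```

    Every accepted pair of points is guaranteed to be at least `radius` apart, which is exactly the feature-overlap guarantee the paper exploits.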

  17. The relationship between truck accidents and geometric design of road sections: Poisson versus negative binomial regressions

    SciTech Connect

    Miaou, Shaw-Pin

    1993-07-01

    This paper evaluates the performance of Poisson and negative binomial (NB) regression models in establishing the relationship between truck accidents and geometric design of road sections. Three types of models are considered: Poisson regression, zero-inflated Poisson (ZIP) regression, and NB regression. The maximum likelihood (ML) method is used to estimate the unknown parameters of these models. Two other feasible estimators for estimating the dispersion parameter in the NB regression model are also examined: a moment estimator and a regression-based estimator. These models and estimators are evaluated based on their (1) estimated regression parameters, (2) overall goodness-of-fit, (3) estimated relative frequency of truck accident involvements across road sections, (4) sensitivity to the inclusion of short road sections, and (5) estimated total number of truck accident involvements. Data from the Highway Safety Information System (HSIS) are employed to examine the performance of these models in developing such relationships. The evaluation results suggest that the NB regression model estimated using the moment and regression-based methods should be used with caution. Also, under the ML method, the estimated regression parameters from all three models are quite consistent and no particular model outperforms the other two in terms of the estimated relative frequencies of truck accident involvements across road sections. It is recommended that the Poisson regression model be used as an initial model for developing the relationship. If the overdispersion of accident data is found to be moderate or high, both the NB and ZIP regression models could be explored. Overall, the ZIP regression model appears to be a serious candidate model when data exhibit excess zeros due, e.g., to underreporting.
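
    The recommended workflow, fit a Poisson model first and check for overdispersion before moving to NB or ZIP, can be sketched with a from-scratch Poisson regression fitted by iteratively reweighted least squares plus the Pearson dispersion statistic. This is an illustrative sketch on synthetic data, not the HSIS analysis.

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Poisson regression with log link fitted by iteratively reweighted
    least squares; returns the ML coefficient estimates."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        W = mu                              # Poisson GLM working weights
        z = X @ beta + (y - mu) / mu        # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

def pearson_dispersion(X, y, beta):
    """Pearson chi-square divided by residual d.o.f.; values well above 1
    indicate overdispersion, pointing towards an NB or ZIP model."""
    mu = np.exp(X @ beta)
    return float(np.sum((y - mu) ** 2 / mu) / (len(y) - X.shape[1]))
```

    On equidispersed Poisson data the statistic hovers near 1; on gamma-mixed (NB-like) counts it rises well above 1, flagging the need for a more flexible model.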

  18. Genomic prediction of growth in pigs based on a model including additive and dominance effects.

    PubMed

    Lopes, M S; Bastiaansen, J W M; Janss, L; Knol, E F; Bovenhuis, H

    2016-06-01

    Independent of whether prediction is based on pedigree or genomic information, the focus of animal breeders has been on additive genetic effects or 'breeding values'. However, when predicting phenotypes rather than breeding values of an animal, models that account for both additive and dominance effects might be more accurate. Our aim with this study was to compare the accuracy of predicting phenotypes using a model that accounts for only additive effects (MA) and a model that accounts for both additive and dominance effects simultaneously (MAD). Lifetime daily gain (DG) was evaluated in three pig populations (1424 Pietrain, 2023 Landrace, and 2157 Large White). Animals were genotyped using the Illumina SNP60K Beadchip and assigned either to a training data set to estimate the genetic parameters and SNP effects, or to a validation data set to assess the prediction accuracy. Models MA and MAD applied random regression on SNP genotypes and were implemented in the program Bayz. The additive heritability of DG across the three populations and the two models was very similar at approximately 0.26. The proportion of phenotypic variance explained by dominance effects ranged from 0.04 (Large White) to 0.11 (Pietrain), indicating that the importance of dominance might be breed-specific. Prediction accuracies were higher when predicting phenotypes using total genetic values (sum of breeding values and dominance deviations) from the MAD model compared to using breeding values from either the MA or MAD model. The highest increase in accuracy (from 0.195 to 0.222) was observed in the Pietrain, and the lowest in Large White (from 0.354 to 0.359). Predicting phenotypes using total genetic values instead of breeding values in purebred data improved prediction accuracy and reduced the bias of genomic predictions. An additional benefit of the method is expected when applied to predicting crossbred phenotypes, where dominance levels are expected to be higher. PMID:26676611
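
    The additive and dominance covariates used in MA/MAD-type models can be sketched as genotype codings. The coding below is one common parameterisation (centred allele counts plus a centred heterozygosity indicator); the paper itself fits these effects with Bayesian random regression in Bayz, which is not reproduced here.

```python
import numpy as np

def additive_dominance_design(genotypes):
    """Code SNP genotypes (0/1/2 copies of an allele, animals x SNPs) into
    additive and dominance covariates.

    Additive coding: allele count centred by 2p.
    Dominance coding: heterozygote indicator centred by its
    Hardy-Weinberg expectation 2p(1-p).
    """
    G = np.asarray(genotypes, dtype=float)
    p = G.mean(axis=0) / 2.0                              # allele frequencies
    Z_add = G - 2.0 * p                                   # additive covariates
    Z_dom = (G == 1).astype(float) - 2.0 * p * (1.0 - p)  # dominance covariates
    return Z_add, Z_dom
```

    A phenotype model then regresses on both matrices, so the total genetic value is the sum of the fitted additive and dominance terms.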

  19. The Poisson Gamma distribution for wind speed data

    NASA Astrophysics Data System (ADS)

    Ćakmakyapan, Selen; Özel, Gamze

    2016-04-01

    Wind energy is one of the most significant alternative clean energy sources and among the most rapidly developing renewable energy sources in the world. For the evaluation of wind energy potential, probability density functions (pdfs) are usually used to model wind speed distributions. Selecting an appropriate pdf reduces the wind power estimation error and better captures the characteristics of the wind regime. In the literature, various pdfs have been used to model wind speed data for wind energy applications. In this study, we propose a new probability distribution to model wind speed data. We first define the new distribution, named the Poisson-Gamma (PG) distribution, and analyze wind speed data sets recorded at five pressure levels for the station, obtained from the Turkish State Meteorological Service. We then model the data sets with the Exponential, Weibull, Lomax, three-parameter Burr, Gumbel, Gamma, and Rayleigh distributions, which are commonly used for wind speed data, as well as the PG distribution. Finally, we compare the fitted distributions to select the best model and demonstrate that the PG distribution fits the data sets best.

  20. Modeling oxygen dissolution and biological uptake during pulse oxygen additions in oenological fermentations.

    PubMed

    Saa, Pedro A; Moenne, M Isabel; Pérez-Correa, J Ricardo; Agosin, Eduardo

    2012-09-01

    Discrete oxygen additions during oenological fermentations can have beneficial effects both on yeast performance and on the resulting wine quality. However, the amount and time of the additions must be carefully chosen to avoid detrimental effects. So far, most oxygen additions are carried out empirically, since the oxygen dynamics in the fermenting must are not completely understood. To efficiently manage oxygen dosage, we developed a mass balance model of the kinetics of oxygen dissolution and biological uptake during wine fermentation on a laboratory scale. Model calibration was carried out employing a novel dynamic desorption-absorption cycle based on two optical sensors able to generate enough experimental data for the precise determination of oxygen uptake and volumetric mass transfer coefficients. A useful system for estimating the oxygen solubility in defined medium and musts was also developed and incorporated into the mass balance model. Results indicated that several factors, such as the fermentation phase, wine composition, mixing and carbon dioxide concentration, must be considered when performing oxygen addition during oenological fermentations. The present model will help develop better oxygen addition policies in wine fermentations on an industrial scale. PMID:22349928

  1. Vector generalized additive models for extreme rainfall data analysis (study case rainfall data in Indramayu)

    NASA Astrophysics Data System (ADS)

    Utami, Eka Putri Nur; Wigena, Aji Hamim; Djuraidah, Anik

    2016-02-01

    Rainfall patterns are good indicators of potential disasters. A Global Circulation Model (GCM) contains global-scale information that can be used to predict rainfall. Statistical downscaling (SD) uses this global-scale information to make inferences at the local scale; essentially, SD predicts local-scale variables from global-scale variables. SD requires a method that can accommodate nonlinear effects and extreme values. Extreme Value Theory (EVT) can be used to analyze extreme values. One method to identify extreme events is peaks over threshold, in which the exceedances follow the Generalized Pareto Distribution (GPD). The vector generalized additive model (VGAM) is an extension of the generalized additive model that can accommodate linear or nonlinear effects through more than one additive predictor; its advantage is the ability to handle multi-response models. The key ideas behind VGAM are iteratively reweighted least squares for maximum likelihood estimation, penalized smoothing, Fisher scoring, and additive models. This work aims to analyze extreme rainfall data in Indramayu using VGAM. The results show that VGAM with the GPD is able to predict extreme rainfall accurately; the prediction in February is very close to the actual value at the 75th quantile.
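
    The peaks-over-threshold step can be sketched as follows, using `scipy.stats.genpareto` for the GPD fit; the VGAM machinery itself is not reproduced here, and the threshold choice (an empirical quantile) is an illustrative convention.

```python
import numpy as np
from scipy import stats

def pot_gpd_fit(series, quantile=0.9):
    """Peaks over threshold: take exceedances above a high empirical
    quantile and fit a Generalized Pareto Distribution to the excesses
    (location fixed at zero, as is standard for POT)."""
    series = np.asarray(series, dtype=float)
    u = np.quantile(series, quantile)
    excess = series[series > u] - u
    shape, _, scale = stats.genpareto.fit(excess, floc=0.0)
    return u, shape, scale
```

    The fitted shape parameter indicates the tail heaviness: near zero for exponential-type tails, positive for heavy tails such as extreme rainfall.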

  2. Integrated reservoir characterization: Improvement in heterogeneities stochastic modelling by integration of additional external constraints

    SciTech Connect

    Doligez, B.; Eschard, R.; Geffroy, F.

    1997-08-01

    The classical approach to constructing reservoir models is to start with a fine-scale geological model populated with petrophysical properties. Scaling-up techniques then yield a reservoir model that is compatible with fluid flow simulators. Geostatistical modeling techniques are widely used to build the geological models before scaling-up. These methods provide equiprobable images of the area under investigation, which honor the well data and whose variability matches the variability computed from the data. At an appraisal phase, when few data are available, or when the wells are insufficient to describe all the heterogeneities and the behavior of the field, additional constraints are needed to obtain a more realistic geological model. For example, seismic data or stratigraphic models can provide average reservoir information with excellent areal coverage but poor vertical resolution. Recent advances in modeling techniques now allow this type of additional external information to be integrated in order to constrain the simulations. In particular, 2D or 3D seismic-derived information grids, or sand-shale ratio maps from stratigraphic models, can be used as external drifts to compute the fine-scale geological image of the reservoir. Examples are presented to illustrate the use of these new tools, their impact on the final reservoir model, and their sensitivity to some key parameters.

  3. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    PubMed

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods. PMID:26328545

  4. Generalized HPC method for the Poisson equation

    NASA Astrophysics Data System (ADS)

    Bardazzi, A.; Lugni, C.; Antuono, M.; Graziani, G.; Faltinsen, O. M.

    2015-10-01

    An efficient and innovative numerical algorithm based on the use of Harmonic Polynomials on each Cell of the computational domain (HPC method) was recently proposed by Shao and Faltinsen (2014) [1] to solve boundary value problems governed by the Laplace equation. Here, we extend the HPC method to the solution of non-homogeneous elliptic boundary value problems. The homogeneous solution, i.e. the Laplace equation, is represented through a polynomial function with harmonic polynomials, while the particular solution of the Poisson equation is provided by a bi-quadratic function. This scheme is called the generalized HPC method. The present algorithm, accurate up to 4th order, proved to be efficient, i.e. easy to implement and computationally cheap, for the solution of two-dimensional elliptic boundary value problems. Furthermore, it provides an analytical representation of the solution within each computational stencil, which allows its coupling with existing numerical algorithms within an efficient domain-decomposition strategy or within an adaptive mesh refinement algorithm.
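
    For contrast with the 4th-order HPC scheme, the standard 2nd-order finite-difference treatment of the same model problem, the Poisson equation on the unit square with Dirichlet data, can be sketched as below. This is a conventional baseline, not the HPC method itself.

```python
import numpy as np

def solve_poisson_dirichlet(n, f):
    """Second-order 5-point finite-difference solve of -laplacian(u) = f
    on the unit square with homogeneous Dirichlet boundary conditions,
    on an n x n grid of interior points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    # 1D second-difference matrix and its 2D Kronecker assembly
    T = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1))
    I = np.eye(n)
    A = (np.kron(T, I) + np.kron(I, T)) / h**2   # 5-point Laplacian
    u = np.linalg.solve(A, f(X, Y).ravel())
    return X, Y, u.reshape(n, n)
```

    Against the manufactured solution u = sin(pi x) sin(pi y), the error decays as O(h^2); the HPC representation instead yields an analytical polynomial solution within each cell and higher-order accuracy.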

  5. Experimental model and analytic solution for real-time observation of vehicle's additional steer angle

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolong; Li, Liang; Pan, Deng; Cao, Chengmao; Song, Jian

    2014-03-01

    Current research on real-time observation of a vehicle's roll steer angle and compliance steer angle (referred to collectively as the additional steer angle in this paper) mainly employs the linear vehicle dynamic model, in which only the lateral acceleration of the vehicle body is considered. The observation accuracy achievable with this method cannot meet the requirements of real-time vehicle stability control, especially under extreme driving conditions. This paper explores a solution based on an experimental method. First, a multi-body dynamic model of a passenger car is built with the ADAMS/Car software, whose dynamic accuracy is verified against roadway test data from the same vehicle's steady-state circular test. On this simulation platform, several factors influencing the additional steer angle under different driving conditions are quantitatively analyzed. The ε-SVR algorithm is then employed to build the additional steer angle prediction model, whose input vectors mainly comprise the sensor information of a standard electronic stability control (ESC) system. Typical slalom tests and FMVSS 126 tests are used in simulation to train the model and evaluate its generalization performance. The test results show that the influence of lateral acceleration on the additional steer angle is greatest (magnitude up to 1°), followed by longitudinal acceleration-deceleration and road wave amplitude (magnitude up to 0.3°). Moreover, both the prediction accuracy and the real-time performance of the model meet the control requirements of ESC. This research expands the methods for accurate observation of the additional steer angle under extreme driving conditions.

  6. Antimicrobial combinations: Bliss independence and Loewe additivity derived from mechanistic multi-hit models.

    PubMed

    Baeder, Desiree Y; Yu, Guozhi; Hozé, Nathanaël; Rolff, Jens; Regoes, Roland R

    2016-05-26

    Antimicrobial peptides (AMPs) and antibiotics reduce the net growth rate of bacterial populations they target. It is relevant to understand whether the effects of multiple antimicrobials are synergistic or antagonistic, in particular for AMP responses, because naturally occurring responses involve multiple AMPs. There are several competing proposals describing how multiple types of antimicrobials add up when applied in combination, such as Loewe additivity or Bliss independence. These additivity terms are defined ad hoc from abstract principles explaining the supposed interaction between the antimicrobials. Here, we link these ad hoc combination terms to a mathematical model that represents the dynamics of antimicrobial molecules hitting targets on bacterial cells. In this multi-hit model, bacteria are killed when a certain number of targets are hit by antimicrobials. Using this bottom-up approach reveals that Bliss independence should be the model of choice if no interaction between antimicrobial molecules is expected. Loewe additivity, on the other hand, describes scenarios in which antimicrobials affect the same components of the cell, i.e. are not acting independently. While our approach idealizes the dynamics of antimicrobials, it provides a conceptual underpinning of the additivity terms. The choice of the additivity term is essential to determine synergy or antagonism of antimicrobials. This article is part of the themed issue 'Evolutionary ecology of arthropod antimicrobial peptides'. PMID:27160596
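
    The two additivity notions can be made concrete with Hill dose-response curves. This is an illustrative parameterisation of the standard definitions, not the paper's multi-hit model.

```python
import numpy as np

def hill_effect(dose, ec50, h):
    """Fractional effect from a Hill dose-response curve."""
    return dose**h / (ec50**h + dose**h)

def bliss_expected(e_a, e_b):
    """Bliss independence: expected combined fractional effect of two
    independently acting antimicrobials."""
    return e_a + e_b - e_a * e_b

def loewe_combination_index(a, b, effect, ec50_a, h_a, ec50_b, h_b):
    """Loewe combination index a/A_e + b/B_e for an observed combined
    effect, where A_e (B_e) is the dose of drug A (B) alone producing
    that effect: 1 means dose-additive, <1 synergy, >1 antagonism."""
    A_e = ec50_a * (effect / (1.0 - effect)) ** (1.0 / h_a)  # inverse Hill
    B_e = ec50_b * (effect / (1.0 - effect)) ** (1.0 / h_b)
    return a / A_e + b / B_e
```

    Sanity check of the definitions: combining a drug with itself is exactly Loewe-additive (index 1), while Bliss predicts 0.75 for two independent half-effects.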

  7. Cognitive vulnerability to depression: A comparison of the weakest link, keystone and additive models

    PubMed Central

    Reilly, Laura C.; Ciesla, Jeffrey A.; Felton, Julia W.; Weitlauf, Amy S.; Anderson, Nicholas L.

    2014-01-01

    Multiple theories of cognitive vulnerability to depression have been proposed, each focusing on different aspects of negative cognition and utilising different measures of risk. Various methods of integrating such multiple indices of risk have been examined in the literature, and each demonstrates some promise. Yet little is known about the interrelations among these methods, or their incremental validity in predicting changes in depression. The present study compared three integrative models of cognitive vulnerability: the additive, weakest link, and keystone models. Support was found for each model as predictive of depression over time, but only the weakest link model demonstrated incremental utility in predicting changes in depression over the other models. We also explore the correlation between these models and each model’s unique contribution to predicting onset of depressive symptoms. PMID:21851251

  8. Sparse Additive Ordinary Differential Equations for Dynamic Gene Regulatory Network Modeling.

    PubMed

    Wu, Hulin; Lu, Tao; Xue, Hongqi; Liang, Hua

    2014-04-01

    The gene regulation network (GRN) is a high-dimensional complex system, which can be represented by various mathematical or statistical models. The ordinary differential equation (ODE) model is one of the popular dynamic GRN models. High-dimensional linear ODE models have been proposed to identify GRNs, but with a limitation of the linear regulation effect assumption. In this article, we propose a sparse additive ODE (SA-ODE) model, coupled with ODE estimation methods and adaptive group LASSO techniques, to model dynamic GRNs that could flexibly deal with nonlinear regulation effects. The asymptotic properties of the proposed method are established and simulation studies are performed to validate the proposed approach. An application example for identifying the nonlinear dynamic GRN of T-cell activation is used to illustrate the usefulness of the proposed method. PMID:25061254

  9. Parametrically Guided Generalized Additive Models with Application to Mergers and Acquisitions Data

    PubMed Central

    Fan, Jianqing; Maity, Arnab; Wang, Yihui; Wu, Yichao

    2012-01-01

    Generalized nonparametric additive models present a flexible way to evaluate the effects of several covariates on a general outcome of interest via a link function. In this modeling framework, one assumes that the effect of each of the covariates is nonparametric and additive. However, in practice, often there is prior information available about the shape of the regression functions, possibly from pilot studies or exploratory analysis. In this paper, we consider such situations and propose an estimation procedure where the prior information is used as a parametric guide to fit the additive model. Specifically, we first posit a parametric family for each of the regression functions using the prior information (parametric guides). After removing these parametric trends, we then estimate the remainder of the nonparametric functions using a nonparametric generalized additive model, and form the final estimates by adding back the parametric trend. We investigate the asymptotic properties of the estimates and show that when a good guide is chosen, the asymptotic bias of the estimates can be reduced significantly while keeping the asymptotic variance the same as that of the unguided estimator. We observe the performance of our method via a simulation study and demonstrate our method by applying it to a real data set on mergers and acquisitions. PMID:23645976
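
    The guide-then-smooth idea can be sketched for a single covariate: fit a parametric guide, kernel-smooth the residuals, and add the trend back. This is a heavily simplified sketch (polynomial guide, Nadaraya-Watson smoother); the paper works with full generalized additive model machinery and establishes the asymptotics.

```python
import numpy as np

def guided_fit(x, y, guide_degree=2, bandwidth=0.1):
    """Parametrically guided nonparametric regression, single covariate:
    (1) fit a polynomial guide, (2) kernel-smooth the residuals with a
    Nadaraya-Watson estimator, (3) add the parametric trend back."""
    coefs = np.polyfit(x, y, guide_degree)   # parametric guide
    trend = np.polyval(coefs, x)
    resid = y - trend
    # Gaussian-kernel Nadaraya-Watson smoother of the residuals
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    smooth_resid = (w @ resid) / w.sum(axis=1)
    return trend + smooth_resid
```

    When the guide captures the gross shape, the residual function is small and smooth, so the nonparametric step has less work to do, mirroring the paper's bias-reduction argument.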

  10. Formation and reduction of carcinogenic furan in various model systems containing food additives.

    PubMed

    Kim, Jin-Sil; Her, Jae-Young; Lee, Kwang-Geun

    2015-12-15

    The aim of this study was to analyse and reduce furan in various model systems. Furan model systems consisting of monosaccharides (0.5M glucose and ribose), amino acids (0.5M alanine and serine) and/or 1.0M ascorbic acid were heated at 121°C for 25 min. The effects of food additives (each 0.1M) such as metal ions (iron sulphate, magnesium sulphate, zinc sulphate and calcium sulphate), antioxidants (BHT and BHA), and sodium sulphite on the formation of furan were measured. The level of furan formed in the model systems was 6.8-527.3 ng/ml. The level of furan in the glucose/serine and glucose/alanine model systems increased by 7-674% when food additives were added. In contrast, the level of furan decreased by 18-51% in the Maillard reaction model systems that included ribose and alanine/serine with food additives, except for zinc sulphate. PMID:26190608

  11. Modeling Longitudinal Data with Generalized Additive Models: Applications to Single-Case Designs

    ERIC Educational Resources Information Center

    Sullivan, Kristynn J.; Shadish, William R.

    2013-01-01

    Single case designs (SCDs) are short time series that assess intervention effects by measuring units repeatedly over time both in the presence and absence of treatment. For a variety of reasons, interest in the statistical analysis and meta-analysis of these designs has been growing in recent years. This paper proposes modeling SCD data with…

  12. NB-PLC channel modelling with cyclostationary noise addition & OFDM implementation for smart grid

    NASA Astrophysics Data System (ADS)

    Thomas, Togis; Gupta, K. K.

    2016-03-01

    Power line communication (PLC) technology can be a viable solution for future ubiquitous networks because it provides a cheaper alternative to other wired technologies currently used for communication. In the smart grid, Power Line Communication (PLC) is used to support low-rate communication on the low voltage (LV) distribution network. In this paper, we propose a channel model for narrowband (NB) PLC in the frequency range 5 kHz to 500 kHz using ABCD parameters with cyclostationary noise addition. The behaviour of the channel was studied by adding an 11 kV/230 V transformer and by varying the load and its location. Bit error rate (BER) versus signal-to-noise ratio (SNR) was plotted for the proposed model by employing OFDM. Our simulation results based on the proposed channel model show acceptable performance in terms of bit error rate versus signal-to-noise ratio, which enables the communication required for smart grid applications.

  13. Poisson Ratio of Epitaxial Germanium Films Grown on Silicon

    NASA Astrophysics Data System (ADS)

    Bharathan, Jayesh; Narayan, Jagdish; Rozgonyi, George; Bulman, Gary E.

    2013-01-01

    An accurate knowledge of the elastic constants of thin films is important in understanding the effect of strain on material properties. We have used residual thermal strain to measure the Poisson ratio of Ge films grown on Si ⟨001⟩ substrates, using the sin²ψ method and high-resolution x-ray diffraction. The Poisson ratio of the Ge films was measured to be 0.25, compared with the bulk value of 0.27. Our study indicates that use of the Poisson ratio instead of bulk compliance values yields a more accurate description of the state of in-plane strain present in the film.
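    How ν follows from a sin²ψ plot can be sketched with the standard textbook relation for an equi-biaxially stressed film (synthetic numbers; this is not the authors' full analysis):

```python
import numpy as np

def poisson_from_sin2psi(sin2psi, strain):
    """For an equi-biaxially stressed film, eps(psi) = (sigma/E) *
    ((1 + nu) * sin^2(psi) - 2 * nu), so the intercept/slope ratio r of
    the linear sin^2(psi) plot gives nu = -r / (2 + r)."""
    slope, intercept = np.polyfit(sin2psi, strain, 1)
    r = intercept / slope
    return -r / (2.0 + r)

# Synthetic check built with nu = 0.25, the value reported for the Ge films.
s2p = np.linspace(0.0, 0.7, 8)
eps = 1e-3 * ((1.0 + 0.25) * s2p - 2.0 * 0.25)
nu_est = poisson_from_sin2psi(s2p, eps)    # recovers 0.25
```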

  14. Dynamics of a prey-predator system under Poisson white noise excitation

    NASA Astrophysics Data System (ADS)

    Pan, Shan-Shan; Zhu, Wei-Qiu

    2014-10-01

    The classical Lotka-Volterra (LV) model is a well-known mathematical model for prey-predator ecosystems. In the present paper, the pulse-type version of the stochastic LV model, in which the effect of a random natural environment is modeled as Poisson white noise, is investigated by using the stochastic averaging method. The averaged generalized Itô stochastic differential equation and Fokker-Planck-Kolmogorov (FPK) equation are derived for the prey-predator ecosystem driven by Poisson white noise. An approximate stationary solution of the averaged generalized FPK equation is obtained by using the perturbation method. The effect of the prey self-competition parameter ε²s on ecosystem behavior is evaluated. The analytical result is confirmed by a corresponding Monte Carlo (MC) simulation.
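    A minimal Monte Carlo sketch of such a pulse-forced LV system, using a naive Euler jump scheme with invented parameters (far simpler than the stochastic averaging treatment in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented parameters: prey growth a, predation b, predator death c,
# conversion d, prey self-competition eps; noise pulses arrive at rate lam
# and carry small Gaussian marks of scale sigma.
a, b, c, d, eps = 1.0, 0.5, 1.0, 0.5, 0.05
lam, sigma = 5.0, 0.05
dt, steps = 1e-3, 20000

x, y = 1.0, 1.0                    # prey and predator densities
traj = np.empty((steps, 2))
for k in range(steps):
    n_pulses = rng.poisson(lam * dt)
    dC = rng.normal(0.0, sigma, n_pulses).sum()   # compound Poisson increment
    x += x * (a - b * y - eps * x) * dt + x * dC  # prey forced by the noise
    y += y * (d * x - c) * dt
    traj[k] = x, y
```

    Because the forcing enters multiplicatively and the self-competition term damps the LV cycles, the trajectory stays positive and fluctuates around the deterministic equilibrium.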

  15. Optimal dispersion with minimized Poisson equations for non-hydrostatic free surface flows

    NASA Astrophysics Data System (ADS)

    Cui, Haiyang; Pietrzak, J. D.; Stelling, G. S.

    2014-09-01

    A non-hydrostatic shallow-water model is proposed to simulate wave propagation in situations where the ratio of the wave length to the water depth is small. It exploits a reduced-size stencil in the Poisson pressure solver to make the model less expensive in terms of memory and CPU time. We refer to this new technique as the minimized Poisson equations formulation. In the simplest case, when the method is applied to a two-layer model, the new model requires the same computational effort as depth-integrated non-hydrostatic models, but can provide a much better description of dispersive waves. To allow an easy implementation of the new method in depth-integrated models, the governing equations are transformed into a depth-integrated system, in which the velocity difference serves as an extra variable. The non-hydrostatic shallow-water model with the minimized Poisson equations formulation produces good results in a series of numerical experiments, including a standing wave in a basin, a non-linear wave test, solitary wave propagation in a channel and wave propagation over a submerged bar.

  16. Existence of Rotating Planet Solutions to the Euler-Poisson Equations with an Inner Hard Core

    NASA Astrophysics Data System (ADS)

    Wu, Yilun

    2016-01-01

    The Euler-Poisson equations model rotating gaseous stars. Numerous efforts have been made to establish the existence and properties of the rotating star solutions. Recent interests in extrasolar planet structures require extension of the model to include an inner rocky core together with its own gravitational potential. In this paper, we discuss various extensions of the classical rotating star results to incorporate a solid core.

  17. Microscopic dynamics perspective on the relationship between Poisson's ratio and ductility of metallic glasses

    NASA Astrophysics Data System (ADS)

    Ngai, K. L.; Wang, Li-Min; Liu, Riping; Wang, W. H.

    2014-01-01

    In metallic glasses a clear correlation had been established between plasticity or ductility and the Poisson's ratio νPoisson, or alternatively the ratio of the elastic bulk modulus to the shear modulus, K/G. Such a correlation between these two macroscopic mechanical properties is intriguing and is challenging to explain from the dynamics on a microscopic level. A recent experimental study has found a connection of ductility to the secondary β-relaxation in metallic glasses. The strain rate and temperature dependencies of the ductile-brittle transition are similar to those of the reciprocal of the secondary β-relaxation time, τβ. Moreover, a metallic glass is more ductile if the relaxation strength of the β-relaxation is larger and τβ is shorter. The findings indicate the β-relaxation is related to and instrumental for ductility. On the other hand, K/G or νPoisson is related to the effective Debye-Waller factor (i.e., the non-ergodicity parameter), f0, characterizing the dynamics of a structural unit inside a cage formed by other units, and manifested as the nearly constant loss shown in the frequency dependent susceptibility. We make the connection of f0 to the non-exponentiality parameter n in the Kohlrausch stretched exponential form of the structural α-relaxation function, φ(t) = exp[−(t/τα)^(1−n)]. This connection follows from the fact that both f0 and n are determined by the inter-particle potential, and 1/f0 or (1 − f0) and n both increase with anharmonicity of the potential. A well tested result from the Coupling Model is used to show that τβ is completely determined by τα and n. From the string of relations, (i) K/G or νPoisson with 1/f0 or (1 − f0), (ii) 1/f0 or (1 − f0) with n, and (iii) τα and n with τβ, we arrive at the desired relation between K/G or νPoisson and τβ. On combining this relation with that between ductility and τβ, we finally have an explanation of the empirical correlation between ductility and K/G or νPoisson.
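    The Coupling Model step in this chain, relating τβ (approximated by the primitive relaxation time τ0) to τα and n, is compact enough to state directly; t_c ≈ 2 ps is the usual crossover time, and the numbers below are purely illustrative:

```python
def primitive_relaxation_time(tau_alpha, n, t_c=2e-12):
    """Coupling Model relation tau_0 = t_c**n * tau_alpha**(1 - n); tau_0
    is known to track the Johari-Goldstein beta-relaxation time tau_beta."""
    return t_c ** n * tau_alpha ** (1.0 - n)

# With n = 0 (exponential relaxation) the two times coincide; a larger n
# pushes tau_0 ever further below tau_alpha.
tau0 = primitive_relaxation_time(1.0, 0.5)   # tau_alpha = 1 s, n = 0.5
```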

  18. Negative Poisson's ratios for extreme states of matter

    PubMed

    Baughman; Dantas; Stafstrom; Zakhidov; Mitchell; Dubin

    2000-06-16

    Negative Poisson's ratios are predicted for body-centered-cubic phases that likely exist in white dwarf cores and neutron star outer crusts, as well as those found for vacuumlike ion crystals, plasma dust crystals, and colloidal crystals (including certain virus crystals). The existence of this counterintuitive property, which means that a material laterally expands when stretched, is experimentally demonstrated for very low density crystals of trapped ions. At very high densities, the large predicted negative and positive Poisson's ratios might be important for understanding the asteroseismology of neutron stars and white dwarfs and the effect of stellar stresses on nuclear reaction rates. Giant Poisson's ratios are both predicted and observed for highly strained coulombic photonic crystals, suggesting possible applications of large, tunable Poisson's ratios for photonic crystal devices. PMID:10856209

  19. Generalized Additive Mixed-Models for Pharmacology Using Integrated Discrete Multiple Organ Co-Culture.

    PubMed

    Ingersoll, Thomas; Cole, Stephanie; Madren-Whalley, Janna; Booker, Lamont; Dorsey, Russell; Li, Albert; Salem, Harry

    2016-01-01

    Integrated Discrete Multiple Organ Co-culture (IDMOC) is emerging as an in-vitro alternative to in-vivo animal models for pharmacology studies. IDMOC allows dose-response relationships to be investigated at the tissue and organoid levels, yet, these relationships often exhibit responses that are far more complex than the binary responses often measured in whole animals. To accommodate departure from binary endpoints, IDMOC requires an expansion of analytic techniques beyond simple linear probit and logistic models familiar in toxicology. IDMOC dose-responses may be measured at continuous scales, exhibit significant non-linearity such as local maxima or minima, and may include non-independent measures. Generalized additive mixed-modeling (GAMM) provides an alternative description of dose-response that relaxes assumptions of independence and linearity. We compared GAMMs to traditional linear models for describing dose-response in IDMOC pharmacology studies. PMID:27110941

  1. Use of additive technologies for practical working with complex models for foundry technologies

    NASA Astrophysics Data System (ADS)

    Olkhovik, E.; Butsanets, A. A.; Ageeva, A. A.

    2016-07-01

    The article presents the results of research on the application of additive technology (3D printing) for developing geometrically complex casting models. Investment casting is a well known and widely used technology for the production of complex parts. The work proposes the use of 3D printing technology for manufacturing model parts, which are subsequently removed by thermal destruction. Traditional methods of producing equipment for investment casting involve manual labor, which has problems with dimensional accuracy, or CNC machining, which is less commonly used. Such a scheme has low productivity and demands considerable time. We have offered an alternative method which consists in printing the main components using a 3D printer (PLA and ABS) with subsequent production of casting models from them. In this article, the main technological methods are considered and their problems are discussed. The dimensional accuracy of the models in comparison with investment casting technology is considered as the main aspect.

  2. State to State and Charged Particle Kinetic Modeling of Time Filtering and Cs Addition

    SciTech Connect

    Capitelli, M.; Gorse, C.; Longo, S.; Diomede, P.; Pagano, D.

    2007-08-10

    We present here an account of the progress of kinetic simulation of non-equilibrium plasmas in conditions of interest for negative ion production, using the 1D Bari code for hydrogen plasma simulation. The model includes the state-to-state kinetics of the vibrational level population of hydrogen molecules, plus a PIC/MCC module for the multispecies dynamics of charged particles. In particular we present new results for the modeling of two issues of great interest: time filtering and Cs addition via surface coverage.

  3. Additional interfacial force in lattice Boltzmann models for incompressible multiphase flows.

    PubMed

    Li, Q; Luo, K H; Gao, Y J; He, Y L

    2012-02-01

    The existing lattice Boltzmann models for incompressible multiphase flows are mostly constructed with two distribution functions: one is the order parameter distribution function, which is used to track the interface between different phases, and the other is the pressure distribution function for solving the velocity field. In this paper, it is shown that in these models the recovered momentum equation is inconsistent with the target one: an additional force is included in the recovered momentum equation. The additional force has the following features. First, it is proportional to the macroscopic velocity. Second, it is zero in every single-phase region but is nonzero in the interface. Therefore it can be interpreted as an interfacial force. To investigate the effects of the additional interfacial force, numerical simulations are carried out for the problem of Rayleigh-Taylor instability, droplet splashing on a thin liquid film, and the evolution of a falling droplet under gravity. Numerical results demonstrate that, with the increase of the velocity or the Reynolds number, the additional interfacial force will gradually have an important influence on the interface and affect the numerical accuracy. PMID:22463354

  4. Model for Assembly Line Re-Balancing Considering Additional Capacity and Outsourcing to Face Demand Fluctuations

    NASA Astrophysics Data System (ADS)

    Samadhi, TMAA; Sumihartati, Atin

    2016-02-01

    The most critical stage in a garment industry is the sewing process, because it generally consists of a number of operations and a large number of sewing machines for each operation. Therefore, it requires a balancing method that can assign tasks to workstations with balanced workloads. Many studies on assembly line balancing assume a new assembly line, but in reality, re-balancing is needed due to demand fluctuations and increases. To cope with such demand changes, additional capacity can be obtained by investing in spare sewing machines and by paying for sewing services through outsourcing. This study develops an assembly line balancing (ALB) model on an existing line to cope with fluctuating demand. Capacity redesign is decided if the fluctuating demand exceeds the available capacity, through a combination of investment in new machines and outsourcing, while minimizing the cost of idle capacity in the future. The objective of the model is to minimize the total cost of the assembly line, which consists of operating costs, machine costs, capacity-addition costs, losses due to idle capacity, and outsourcing costs. The model developed is based on an integer programming formulation. The model is tested on a set of data covering one year of demand with an existing stock of 41 sewing machines. The result shows that a maximum additional capacity of up to 76 machines is required when there is an increase of 60% over the average demand, at equal cost parameters.
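    The cost trade-off at the heart of the model can be illustrated with a deliberately tiny search (all costs and production rates below are invented, and the paper's actual integer program also optimises task assignment, which is omitted here):

```python
# Illustrative capacity-vs-outsourcing trade-off (hypothetical numbers).
demand = 5200              # units of demand in the planning period
rate_per_machine = 100     # units one sewing machine can produce
existing = 41              # machines already on the line (as in the paper)
machine_cost = 50.0        # amortised cost of adding one machine
outsource_cost = 0.9       # cost per unit sewn by the outsourcing partner
idle_cost = 0.2            # penalty per unit of unused in-house capacity

def total_cost(extra_machines):
    capacity = (existing + extra_machines) * rate_per_machine
    shortfall = max(demand - capacity, 0)      # covered by outsourcing
    idle = max(capacity - demand, 0)           # idle-capacity penalty
    return (extra_machines * machine_cost
            + shortfall * outsource_cost
            + idle * idle_cost)

best = min(range(40), key=total_cost)          # -> 11 extra machines here
```

    With these numbers the optimum buys exactly enough machines to meet demand; cheaper outsourcing or a higher idle penalty would shift the balance toward outsourcing instead.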

  5. Bicrossed products induced by Poisson vector fields and their integrability

    NASA Astrophysics Data System (ADS)

    Djiba, Samson Apourewagne; Wade, Aïssa

    2016-01-01

    First we show that, associated to any Poisson vector field E on a Poisson manifold (M,π), there is a canonical Lie algebroid structure on the first jet bundle J¹M which depends only on the cohomology class of E. We then introduce the notion of a cosymplectic groupoid and we discuss the integrability of the first jet bundle into a cosymplectic groupoid. Finally, we give applications to Atiyah classes and L∞-algebras.

  6. Classification of linearly compact simple Nambu-Poisson algebras

    NASA Astrophysics Data System (ADS)

    Cantarini, Nicoletta; Kac, Victor G.

    2016-05-01

    We introduce the notion of a universal odd generalized Poisson superalgebra associated with an associative algebra A, by generalizing a construction made in the work of De Sole and Kac [Jpn. J. Math. 8, 1-145 (2013)]. By making use of this notion we give a complete classification of simple linearly compact (generalized) n-Nambu-Poisson algebras over an algebraically closed field of characteristic zero.

  7. Estimation of adjusted rate differences using additive negative binomial regression.

    PubMed

    Donoghoe, Mark W; Marschner, Ian C

    2016-08-15

    Rate differences are an important effect measure in biostatistics and provide an alternative perspective to rate ratios. When the data are event counts observed during an exposure period, adjusted rate differences may be estimated using an identity-link Poisson generalised linear model, also known as additive Poisson regression. A problem with this approach is that the assumption of equality of mean and variance rarely holds in real data, which often show overdispersion. An additive negative binomial model is the natural alternative to account for this; however, standard model-fitting methods are often unable to cope with the constrained parameter space arising from the non-negativity restrictions of the additive model. In this paper, we propose a novel solution to this problem using a variant of the expectation-conditional maximisation-either algorithm. Our method provides a reliable way to fit an additive negative binomial regression model and also permits flexible generalisations using semi-parametric regression functions. We illustrate the method using a placebo-controlled clinical trial of fenofibrate treatment in patients with type II diabetes, where the outcome is the number of laser therapy courses administered to treat diabetic retinopathy. An R package is available that implements the proposed method. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27073156
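    An identity-link Poisson fit of a rate difference can be sketched directly by maximum likelihood on synthetic data (this is the simple additive Poisson model the abstract starts from, not the authors' negative binomial ECME algorithm):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Synthetic cohort: follow-up time t and a binary treatment indicator z.
n = 2000
t = rng.uniform(0.5, 2.0, n)
z = rng.integers(0, 2, n).astype(float)
beta_true = np.array([1.0, 0.5])           # baseline rate and rate difference

y = rng.poisson(t * (beta_true[0] + beta_true[1] * z))

X = np.column_stack([np.ones(n), z])

def negloglik(beta):
    rate = X @ beta                        # identity link: rates add
    if np.any(rate <= 0):                  # the additive model's
        return np.inf                      # non-negativity constraint
    mu = t * rate
    return float(np.sum(mu - y * np.log(mu)))

fit = minimize(negloglik, x0=np.array([0.8, 0.1]), method="Nelder-Mead")
rate_difference = fit.x[1]                 # estimate of the 0.5 difference
```

    The hard-constrained likelihood above shows why standard GLM fitters struggle: the parameter space is restricted to keep every fitted rate non-negative, which is exactly the difficulty the paper's ECME variant is designed to handle.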

  8. Generalized linear and generalized additive models in studies of species distributions: Setting the scene

    USGS Publications Warehouse

    Guisan, A.; Edwards, T.C., Jr.; Hastie, T.

    2002-01-01

    An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled: Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, as well as provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression, an alternative to stepwise selection of predictors, and methods for the identification of interactions by a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance our understanding of their application to ecological modeling. © 2002 Elsevier Science B.V. All rights reserved.

  9. A patch-based cross masking model for natural images with detail loss and additive defects

    NASA Astrophysics Data System (ADS)

    Liu, Yucheng; Allebach, Jan P.

    2015-03-01

    Visual masking is an effect whereby the content of an image reduces the detectability of a given target signal hidden in the image. The effect of visual masking has found application in numerous image processing and vision tasks. In the past few decades, much research on visual masking has been based on models optimized for artificial targets placed upon unnatural masks. Over the years, there has been a tendency to apply masking models to predict natural image quality and the detection threshold of distortion presented in natural images. However, to our knowledge few studies have been conducted to understand the generalizability of masking models to different types of distortion presented in natural images. In this work, we measure the ability of natural image patches to mask three different types of distortion, and analyse the performance of the conventional gain control model in predicting the distortion detection threshold. We then propose a new masking model, where detail loss and additive defects are modeled in two parallel vision channels and interact with each other via a cross masking mechanism. We show that the proposed cross masking model has better adaptability to various image structures and distortions in natural scenes.
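    The basic shape of a masking function can be sketched with a classic power-law approximation, in which the detection threshold is flat below the masker's own threshold and rises as a power of masker contrast above it (a generic textbook form with made-up parameters, not the gain-control or cross-masking model fitted in the paper):

```python
import numpy as np

def threshold_elevation(c_mask, t0=0.01, c0=0.02, exponent=0.7):
    """Power-law masking approximation: the detection threshold stays at
    t0 for weak maskers and grows as (c_mask / c0)**exponent beyond c0.
    All parameter values here are illustrative placeholders."""
    c = np.asarray(c_mask, dtype=float)
    return t0 * np.maximum(1.0, (c / c0) ** exponent)
```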

  10. Mixed-effects Poisson regression analysis of adverse event reports

    PubMed Central

    Gibbons, Robert D.; Segawa, Eisuke; Karabatsos, George; Amatya, Anup K.; Bhaumik, Dulal K.; Brown, C. Hendricks; Kapur, Kush; Marcus, Sue M.; Hur, Kwan; Mann, J. John

    2008-01-01

    A new statistical methodology is developed for the analysis of spontaneous adverse event (AE) reports from post-marketing drug surveillance data. The method involves both empirical Bayes (EB) and fully Bayes estimation of rate multipliers for each drug within a class of drugs, for a particular AE, based on a mixed-effects Poisson regression model. Both parametric and semiparametric models for the random-effect distribution are examined. The method is applied to data from Food and Drug Administration (FDA)’s Adverse Event Reporting System (AERS) on the relationship between antidepressants and suicide. We obtain point estimates and 95 per cent confidence (posterior) intervals for the rate multiplier for each drug (e.g. antidepressants), which can be used to determine whether a particular drug has an increased risk of association with a particular AE (e.g. suicide). Confidence (posterior) intervals that do not include 1.0 provide evidence for either significant protective or harmful associations of the drug and the adverse effect. We also examine EB, parametric Bayes, and semiparametric Bayes estimators of the rate multipliers and associated confidence (posterior) intervals. Results of our analysis of the FDA AERS data revealed that newer antidepressants are associated with lower rates of suicide adverse event reports compared with older antidepressants. We recommend improvements to the existing AERS system, which are likely to improve its public health value as an early warning system. PMID:18404622
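    The conjugate core of such an empirical Bayes analysis can be sketched with a gamma-Poisson model (hypothetical counts and hyperparameters; the paper estimates the prior from the full database and also fits semiparametric alternatives):

```python
import numpy as np
from scipy.stats import gamma

# Hypothetical AE counts y_i and expected counts E_i for five drugs in a
# class; lambda_i = rate multiplier relative to the class-wide AE rate.
y = np.array([3, 10, 1, 25, 7])
E = np.array([5.0, 8.0, 4.0, 12.0, 9.0])
alpha, beta = 2.0, 2.0                   # assumed Gamma prior on multipliers

# Gamma prior + Poisson likelihood => Gamma(alpha + y_i, beta + E_i)
# posterior, so the EB point estimate is the posterior mean.
multiplier = (alpha + y) / (beta + E)
lo = gamma.ppf(0.025, alpha + y, scale=1.0 / (beta + E))
hi = gamma.ppf(0.975, alpha + y, scale=1.0 / (beta + E))

flagged = lo > 1.0                       # interval excludes 1 => signal
```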

  11. Performance of Nonlinear Finite-Difference Poisson-Boltzmann Solvers.

    PubMed

    Cai, Qin; Hsieh, Meng-Juei; Wang, Jun; Luo, Ray

    2010-01-12

    We implemented and optimized seven finite-difference solvers for the full nonlinear Poisson-Boltzmann equation in biomolecular applications, including four relaxation methods, one conjugate gradient method, and two inexact Newton methods. The performance of the seven solvers was extensively evaluated with a large number of nucleic acids and proteins. Worth noting is the inexact Newton method in our analysis. We investigated the role of linear solvers in its performance by incorporating the incomplete Cholesky conjugate gradient and the geometric multigrid into its inner linear loop. We tailored and optimized both linear solvers for faster convergence rate. In addition, we explored strategies to optimize the successive over-relaxation method to reduce its convergence failures without too much sacrifice in its convergence rate. Specifically we attempted to adaptively change the relaxation parameter and to utilize the damping strategy from the inexact Newton method to improve the successive over-relaxation method. Our analysis shows that the nonlinear methods accompanied with a functional-assisted strategy, such as the conjugate gradient method and the inexact Newton method, can guarantee convergence in the tested molecules. Especially the inexact Newton method exhibits impressive performance when it is combined with highly efficient linear solvers that are tailored for its special requirement. PMID:24723843
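    The successive over-relaxation building block that the authors tune can be sketched for the plain (linear) Poisson equation on a small 2D grid; the fixed ω below is a typical hand-picked value, not their adaptive scheme:

```python
import numpy as np

def sor_poisson(f, h, omega=1.8, tol=1e-8, max_iter=20000):
    """Successive over-relaxation for the 2D Poisson problem
    laplacian(u) = f with homogeneous Dirichlet boundary values."""
    u = np.zeros_like(f)
    n, m = f.shape
    for _ in range(max_iter):
        biggest = 0.0
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                gs = 0.25 * (u[i - 1, j] + u[i + 1, j]
                             + u[i, j - 1] + u[i, j + 1] - h * h * f[i, j])
                new = (1.0 - omega) * u[i, j] + omega * gs
                biggest = max(biggest, abs(new - u[i, j]))
                u[i, j] = new
        if biggest < tol:              # stop once updates stagnate
            break
    return u

# Manufactured solution u = sin(pi x) sin(pi y) on a 33 x 33 grid.
n = 33
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
u = sor_poisson(-2.0 * np.pi**2 * exact, h=x[1] - x[0])
```

    For the nonlinear Poisson-Boltzmann equation the sinh term makes each sweep more delicate, which is why the paper explores adaptive relaxation parameters and Newton-style damping around this same iteration.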

  12. Evaporation model for beam based additive manufacturing using free surface lattice Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Klassen, Alexander; Scharowsky, Thorsten; Körner, Carolin

    2014-07-01

    Evaporation plays an important role in many technical applications including beam-based additive manufacturing processes, such as selective electron beam or selective laser melting (SEBM/SLM). In this paper, we describe an evaporation model which we employ within the framework of a two-dimensional free surface lattice Boltzmann method. With this method, we solve the hydrodynamics as well as thermodynamics of the molten material taking into account the mass and energy losses due to evaporation and the recoil pressure acting on the melt pool. Validation of the numerical model is performed by measuring maximum melt depths and evaporative losses in samples of pure titanium and Ti-6Al-4V molten by an electron beam. Finally, the model is applied to create processing maps for an SEBM process. The results predict that the penetration depth of the electron beam, which is a function of the acceleration voltage, has a significant influence on evaporation effects.
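    The evaporative mass-loss term in such models is commonly closed with a Hertz-Knudsen-type flux; the generic form below (with an illustrative evaporation coefficient and titanium-like numbers) is an assumption, not necessarily the paper's exact closure:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def evaporative_mass_flux(p_sat, T, m_atom, alpha=0.82):
    """Hertz-Knudsen mass flux in kg m^-2 s^-1:
    j = alpha * p_sat(T) * sqrt(m / (2 pi k_B T)),
    where alpha is the (assumed) evaporation coefficient."""
    return alpha * p_sat * math.sqrt(m_atom / (2.0 * math.pi * K_B * T))

# Illustrative case: Ti atoms (47.867 u) at a 2500 K melt surface with an
# assumed saturation pressure of 10 Pa.
m_ti = 47.867 * 1.66053906660e-27
flux = evaporative_mass_flux(p_sat=10.0, T=2500.0, m_atom=m_ti)
```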

  13. Parity Symmetry and Parity Breaking in the Quantum Rabi Model with Addition of Ising Interaction

    NASA Astrophysics Data System (ADS)

    Wang, Qiong; He, Zhi; Yao, Chun-Mei

    2015-04-01

    We explore the possibility to generate new parity symmetry in the quantum Rabi model after a bias is introduced. In contrast to a mathematical treatment in a previous publication [J. Phys. A 46 (2013) 265302], we consider a physically realistic method by involving an additional spin into the quantum Rabi model to couple with the original spin by an Ising interaction, and then the parity symmetry is broken as well as the scaling behavior of the ground state by introducing a bias. The rule can be found that the parity symmetry is broken by introducing a bias and then restored by adding new degrees of freedom. Experimental feasibility of realizing the models under discussion is investigated. Supported by the National Natural Science Foundation of China under Grant Nos. 61475045 and 11347142, the Natural Science Foundation of Hunan Province, China under Grant No. 2015JJ3092

  14. Poisson Green's function method for increased computational efficiency in numerical calculations of Coulomb coupling elements

    NASA Astrophysics Data System (ADS)

    Zimmermann, Anke; Kuhn, Sandra; Richter, Marten

    2016-01-01

    Often, the calculation of Coulomb coupling elements for quantum dynamical treatments, e.g., in cluster or correlation expansion schemes, requires the evaluation of a six dimensional spatial integral. Therefore, it represents a significant limiting factor in quantum mechanical calculations. If the size or the complexity of the investigated system increases, many coupling elements need to be determined. The resulting computational constraints require an efficient method for a fast numerical calculation of the Coulomb coupling. We present a computational method to reduce the numerical complexity by decreasing the number of spatial integrals for arbitrary geometries. We use a Green's function formulation of the Coulomb coupling and introduce a generalized scalar potential as solution of a generalized Poisson equation with a generalized charge density as the inhomogeneity. That enables a fast calculation of Coulomb coupling elements and, additionally, a straightforward inclusion of boundary conditions and arbitrarily spatially dependent dielectrics through the Coulomb Green's function. Particularly, if many coupling elements are included, the presented method, which is not restricted to specific symmetries of the model, presents a promising approach for increasing the efficiency of numerical calculations of the Coulomb interaction. To demonstrate the wide range of applications, we calculate internanostructure couplings, such as the Förster coupling, and illustrate the inclusion of symmetry considerations in the method for the Coulomb coupling between bound quantum dot states and unbound continuum states.
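    The computational payoff of the Green's function formulation can be seen in a 1D periodic toy (invented Gaussian densities; the paper works in 3D with boundary conditions and spatially varying dielectrics): solving one Poisson equation for the scalar potential turns the coupling element into a single overlap integral.

```python
import numpy as np

# Periodic 1D grid and Fourier wavenumbers.
n, L = 256, 10.0
x = np.linspace(0.0, L, n, endpoint=False)
dx = L / n
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)

def potential(rho):
    """Solve -phi'' = rho (units absorbed) with periodic boundaries;
    the k = 0 mode is fixed by charge neutralisation."""
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    nz = k != 0
    phi_k[nz] = rho_k[nz] / k[nz] ** 2
    return np.fft.ifft(phi_k).real

rho1 = np.exp(-((x - 4.0) ** 2))
rho2 = np.exp(-((x - 6.0) ** 2))

# One Poisson solve replaces the double spatial integral.
J12 = np.sum(rho2 * potential(rho1)) * dx
J21 = np.sum(rho1 * potential(rho2)) * dx   # equal, by self-adjointness
```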

  15. Coagulation kinetics beyond mean field theory using an optimised Poisson representation

    NASA Astrophysics Data System (ADS)

    Burnett, James; Ford, Ian J.

    2015-05-01

    Binary particle coagulation can be modelled as the repeated random process of the combination of two particles to form a third. The kinetics may be represented by population rate equations based on a mean field assumption, according to which the rate of aggregation is taken to be proportional to the product of the mean populations of the two participants, but this can be a poor approximation when the mean populations are small. However, using the Poisson representation, it is possible to derive a set of rate equations that go beyond mean field theory, describing pseudo-populations that are continuous, noisy, and complex, but where averaging over the noise and initial conditions gives the mean of the physical population. Such an approach is explored for the simple case of a size-independent rate of coagulation between particles. Analytical results are compared with numerical computations and with results derived by other means. In the numerical work, we encounter instabilities that can be eliminated using a suitable "gauge" transformation of the problem [P. D. Drummond, Eur. Phys. J. B 38, 617 (2004)] which we show to be equivalent to the application of the Cameron-Martin-Girsanov formula describing a shift in a probability measure. The cost of such a procedure is to introduce additional statistical noise into the numerical results, but we identify an optimised gauge transformation where this difficulty is minimal for the main properties of interest. For more complicated systems, such an approach is likely to be computationally cheaper than Monte Carlo simulation.

  16. AFMPB: An adaptive fast multipole Poisson-Boltzmann solver for calculating electrostatics in biomolecular systems

    NASA Astrophysics Data System (ADS)

    Lu, Benzhuo; Cheng, Xiaolin; Huang, Jingfang; McCammon, J. Andrew

    2013-11-01

    A Fortran program package is introduced for rapid evaluation of the electrostatic potentials and forces in biomolecular systems modeled by the linearized Poisson-Boltzmann equation. The numerical solver utilizes a well-conditioned boundary integral equation (BIE) formulation, a node-patch discretization scheme, a Krylov subspace iterative solver package with reverse communication protocols, and an adaptive new version of the fast multipole method in which the exponential expansions are used to diagonalize the multipole-to-local translations. The program and its full description, as well as several closely related libraries and utility tools are available at http://lsec.cc.ac.cn/~lubz/afmpb.html and a mirror site at http://mccammon.ucsd.edu/. This paper is a brief summary of the program: the algorithms, the implementation and the usage. Restrictions: Only three or six significant digits options are provided in this version. Unusual features: Most of the codes are in Fortran77 style. Memory allocation functions from Fortran90 and above are used in a few subroutines. Additional comments: The current version of the codes is designed and written for single core/processor desktop machines. Check http://lsec.cc.ac.cn/lubz/afmpb.html for updates and changes. Running time: The running time varies with the number of discretized elements (N) in the system and their distributions. In most cases, it scales linearly as a function of N.
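
    In the textbook limiting case of a point charge in a uniform dielectric with mobile ions, the linearized Poisson-Boltzmann equation yields a screened (Yukawa) potential. The sketch below evaluates only that limiting kernel; it is not the boundary-integral solver the package implements, and units and parameters are illustrative:

```python
import math

def screened_coulomb_potential(q, r, kappa, eps):
    """Potential at distance r from a point charge q in a uniform dielectric
    eps with inverse Debye screening length kappa, i.e. the fundamental
    solution of the linearized Poisson-Boltzmann equation."""
    return q * math.exp(-kappa * r) / (4.0 * math.pi * eps * r)
```

    Setting kappa = 0 recovers the unscreened Coulomb potential, and screening strictly reduces the potential at any finite distance.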

  17. Hydrophobic interactions in model enclosures from small to large length scales: non-additivity in explicit and implicit solvent models

    PubMed Central

    Wang, Lingle; Friesner, Richard A.; Berne, B. J.

    2011-01-01

    The binding affinities between a united-atom methane and various model hydrophobic enclosures were studied through high accuracy free energy perturbation methods (FEP). We investigated the non-additivity of the hydrophobic interaction in these systems, measured by the deviation of its binding affinity from that predicted by the pairwise additivity approximation. While only small non-additivity effects were previously reported in the interactions in methane trimers, we found large cooperative effects (as large as −1.14 kcal mol−1 or approximately a 25% increase in the binding affinity) and anti-cooperative effects (as large as 0.45 kcal mol−1) for these model enclosed systems. Decomposition of the total potential of mean force (PMF) into increasing orders of multi-body interactions indicates that the contributions of the higher order multi-body interactions can be either positive or negative in different systems, and increasing the order of multi-body interactions considered did not necessarily improve the accuracy. A general correlation between the sign of the non-additivity effect and the curvature of the solute molecular surface was observed. We found that implicit solvent models based on the molecular surface area (MSA) performed much better, not only in predicting binding affinities, but also in predicting the non-additivity effects, compared with models based on the solvent accessible surface area (SASA), suggesting that MSA is a better descriptor of the curvature of the solutes. We also show how the non-additivity contribution changes as the hydrophobicity of the plate is decreased from the dewetting regime to the wetting regime. PMID:21043426
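
    The non-additivity measure used above is simply the difference between the total binding free energy and the pairwise-additive prediction. A trivial sketch; the numbers below are illustrative placeholders chosen to echo the -1.14 kcal/mol cooperative case, not data from the study:

```python
def nonadditivity(total_binding, pairwise_terms):
    """Deviation of the total binding free energy from the sum of pairwise
    terms; negative values indicate cooperativity (stronger binding than
    pairwise additivity predicts), positive values anti-cooperativity."""
    return total_binding - sum(pairwise_terms)

# Hypothetical example: three pairwise contributions in kcal/mol.
delta = nonadditivity(-5.0, [-2.0, -1.5, -0.36])
```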

  18. Testing Departure from Additivity in Tukey’s Model using Shrinkage: Application to a Longitudinal Setting

    PubMed Central

    Ko, Yi-An; Mukherjee, Bhramar; Smith, Jennifer A.; Park, Sung Kyun; Kardia, Sharon L.R.; Allison, Matthew A.; Vokonas, Pantel S.; Chen, Jinbo; Diez-Roux, Ana V.

    2014-01-01

    While there has been extensive research developing gene-environment interaction (GEI) methods in case-control studies, little attention has been given to sparse and efficient modeling of GEI in longitudinal studies. In a two-way table for GEI with rows and columns as categorical variables, a conventional saturated interaction model involves estimation of a specific parameter for each cell, with constraints ensuring identifiability. The estimates are unbiased but are potentially inefficient because the number of parameters to be estimated can grow quickly with increasing categories of row/column factors. On the other hand, Tukey’s one degree of freedom (df) model for non-additivity treats the interaction term as a scaled product of row and column main effects. Due to the parsimonious form of interaction, the interaction estimate leads to enhanced efficiency and the corresponding test could lead to increased power. Unfortunately, Tukey’s model gives biased estimates and low power if the model is misspecified. When screening multiple GEIs where each genetic and environmental marker may exhibit a distinct interaction pattern, a robust estimator for interaction is important for GEI detection. We propose a shrinkage estimator for interaction effects that combines estimates from both Tukey’s and saturated interaction models and use the corresponding Wald test for testing interaction in a longitudinal setting. The proposed estimator is robust to misspecification of interaction structure. We illustrate the proposed methods using two longitudinal studies — the Normative Aging Study and the Multi-Ethnic Study of Atherosclerosis. PMID:25112650
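
    Tukey's one-df model treats the interaction as a scaled product of row and column main effects, so for a two-way table with one observation per cell the non-additivity sum of squares has a closed form. A minimal pure-Python sketch (assumes non-degenerate row and column effects; not the longitudinal shrinkage estimator proposed in the paper):

```python
def tukey_nonadditivity_ss(table):
    """One-degree-of-freedom sum of squares for Tukey's non-additivity test
    on a two-way table (list of rows, one observation per cell):
    SS = (sum_ij a_i * b_j * y_ij)^2 / (sum_i a_i^2 * sum_j b_j^2),
    where a_i, b_j are row and column effects about the grand mean."""
    r, c = len(table), len(table[0])
    grand = sum(sum(row) for row in table) / (r * c)
    a = [sum(row) / c - grand for row in table]
    b = [sum(table[i][j] for i in range(r)) / r - grand for j in range(c)]
    num = sum(a[i] * b[j] * table[i][j] for i in range(r) for j in range(c))
    return num * num / (sum(x * x for x in a) * sum(x * x for x in b))
```

    A perfectly additive table gives SS = 0, while a multiplicative (row x column) table gives a positive value.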

  19. Poisson's ratio prediction through dual stimulated fuzzy logic by ACE and GA-PS

    NASA Astrophysics Data System (ADS)

    Bagheripour, Parisa; Asoodeh, Mojtaba

    2014-08-01

    Poisson's ratio is one of the most important rock mechanical parameters, having significance in both planning and post analysis of wellbore operations. Laboratory measurement of this parameter involves considerable cost, including sidewall sampling, preservation, and laboratory tests. This study proposes an improved strategy, called dual stimulated fuzzy logic by ACE and GA-PS, for determining Poisson's ratio from conventional well log data in a rapid, precise, and cost-effective way. Firstly, conventional well log data are transformed to a data space more highly correlated with Poisson's ratio through the use of the alternating conditional expectation (ACE) algorithm. This step simplifies the convoluted space of the problem and makes it easier for fuzzy logic to solve. Subsequently, the transformed well log data are fed to the fuzzy logic model. To ensure that an optimal fuzzy model is constructed, a hybrid genetic algorithm-pattern search (GA-PS) technique is employed for extracting fuzzy clusters (or rules). This step sets fuzzy logic to its optimal performance. The propounded strategy was successfully applied to data from carbonate reservoir rocks of an Iranian oil field. A comparison between the present model and previous models showed the superiority of the current approach.

  20. Wall-models for large eddy simulation based on a generic additive-filter formulation

    NASA Astrophysics Data System (ADS)

    Sanchez Rocha, Martin

    Based on the philosophy of only resolving the large scales of turbulent motion, Large Eddy Simulation (LES) has demonstrated potential to provide high-fidelity turbulence simulations at low computational cost. However, when the scales that control the turbulence in a particular flow are not large, LES must increase its computational cost significantly to provide accurate predictions. This is the case in wall-bounded flows, where the grid resolution required by LES to resolve the near-wall structures is close to the requirements to resolve the smallest dissipative scales in turbulence. Therefore, to reduce this demanding requirement, it has been proposed to model the near-wall region with Reynolds-Averaged Navier-Stokes (RANS) models, in what is known as the hybrid RANS/LES approach. In this work, the mathematical implications of merging two different turbulence modeling approaches are addressed by deriving the exact hybrid RANS/LES Navier-Stokes equations. These equations are derived by introducing an additive filter, which linearly combines the RANS and LES operators with a blending function. The equations derived with the additive filter predict additional hybrid terms, which represent the interactions between the RANS and LES formulations. Theoretically, the prediction of the hybrid terms demonstrates that the hybridization of the two approaches cannot be accomplished by the turbulence model equations alone, as is claimed in current hybrid RANS/LES models. The importance of the exact hybrid RANS/LES equations is demonstrated by conducting numerical calculations on a turbulent flat-plate boundary layer. Results indicate that the hybrid terms help to maintain an equilibrated model transition when the hybrid formulation switches from RANS to LES. Results also indicate that when the hybrid terms are not included, the accuracy of the calculations strongly relies on the blending function implemented in the additive filter. On the other hand, if the exact equations are

  1. Topsoil organic carbon content of Europe, a new map based on a generalised additive model

    NASA Astrophysics Data System (ADS)

    de Brogniez, Delphine; Ballabio, Cristiano; Stevens, Antoine; Jones, Robert J. A.; Montanarella, Luca; van Wesemael, Bas

    2014-05-01

    There is an increasing demand for up-to-date spatially continuous organic carbon (OC) data for global environment and climatic modeling. Whilst the current map of topsoil organic carbon content for Europe (Jones et al., 2005) was produced by applying expert-knowledge-based pedo-transfer rules on large soil mapping units, the aim of this study was to replace it by applying digital soil mapping techniques to the first European harmonised geo-referenced topsoil (0-20 cm) database, which arises from the LUCAS (land use/cover area frame statistical survey) survey. A generalized additive model (GAM) was calibrated on 85% of the dataset (ca. 17 000 soil samples), and a backward stepwise approach selected slope, land cover, temperature, net primary productivity, latitude and longitude as environmental covariates (500 m resolution). The validation of the model (applied to the remaining 15% of the dataset) gave an R2 of 0.27. We observed that most organic soils were under-predicted by the model and that soils of Scandinavia were also poorly predicted. The model showed an RMSE of 42 g kg-1 for mineral soils and of 287 g kg-1 for organic soils. The map of predicted OC content showed the lowest values in Mediterranean countries and in croplands across Europe, whereas the highest OC contents were predicted in wetlands, woodlands and in mountainous areas. The map of standard error of the OC model predictions showed high values in northern latitudes, wetlands, moors and heathlands, whereas low uncertainty was mostly found in croplands. A comparison of our results with the map of Jones et al. (2005) showed a general agreement on the prediction of mineral soils' OC content, most probably because the models use some common covariates, namely land cover and temperature. Our model however failed to predict values of OC content greater than 200 g kg-1, which we explain by the imposed unimodal distribution of our model, whose mean is tilted towards the majority of soils, which are mineral.
Finally, average
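
    The validation skill scores quoted above (R2 and RMSE) are computed from observed/predicted pairs on the held-out split. A minimal sketch (the data arrays are placeholders, not the study's samples):

```python
def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def rmse(obs, pred):
    """Root mean square error of predictions against observations."""
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5
```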

  2. Predicting the Survival Time for Bladder Cancer Using an Additive Hazards Model in Microarray Data

    PubMed Central

    TAPAK, Leili; MAHJUB, Hossein; SADEGHIFAR, Majid; SAIDIJAM, Massoud; POOROLAJAL, Jalal

    2016-01-01

    Background: One substantial part of microarray studies is to predict patients’ survival based on their gene expression profile. Variable selection techniques are powerful tools to handle high dimensionality in analysis of microarray data. However, these techniques have not been investigated in the competing risks setting. This study aimed to investigate the performance of four sparse variable selection methods in estimating the survival time. Methods: The data included 1381 gene expression measurements and clinical information from 301 patients with bladder cancer operated in the years 1987 to 2000 in hospitals in Denmark, Sweden, Spain, France, and England. Four methods, the least absolute shrinkage and selection operator, smoothly clipped absolute deviation, the smooth integration of counting and absolute deviation, and elastic net, were utilized for simultaneous variable selection and estimation under an additive hazards model. The criteria of area under the ROC curve, Brier score and c-index were used to compare the methods. Results: The median follow-up time for all patients was 47 months. The elastic net approach was found to outperform the other methods. The elastic net had the lowest integrated Brier score (0.137±0.07) and the greatest median of the over-time AUC and C-index (0.803±0.06 and 0.779±0.13, respectively). Five out of 19 genes selected by the elastic net were significant (P<0.05) under an additive hazards model. It was indicated that the expression of RTN4, SON, IGF1R and CDC20 decreases the survival time, while the expression of SMARCAD1 increases it. Conclusion: The elastic net had higher capability than the other methods for the prediction of survival time in patients with bladder cancer in the presence of competing risks based on an additive hazards model. PMID:27114989
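
    All four penalties compared above are built on shrinkage; for the elastic net, the core of each coordinate-descent update is soft-thresholding (the lasso part) followed by ridge shrinkage. A sketch of just that operator, in one common parameterization; this is an illustration of the penalty, not the authors' additive-hazards solver:

```python
def soft_threshold(z, lam):
    """Soft-thresholding operator S(z, lam) = sign(z) * max(|z| - lam, 0),
    the building block of coordinate-descent lasso updates."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def elastic_net_update(z, lam, alpha):
    """One elastic-net coordinate update with overall penalty lam and
    l1-ratio alpha: soft-threshold for the l1 part, then shrink by the
    l2 part. alpha = 1 recovers the lasso, alpha = 0 pure ridge."""
    return soft_threshold(z, lam * alpha) / (1.0 + lam * (1.0 - alpha))
```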

  3. Comparison of prosthetic models produced by traditional and additive manufacturing methods

    PubMed Central

    Park, Jin-Young; Kim, Hae-Young; Kim, Ji-Hwan; Kim, Jae-Hong

    2015-01-01

    PURPOSE The purpose of this study was to verify the clinical feasibility of additive manufacturing by comparing the accuracy of four different manufacturing methods for metal coping: the conventional lost wax technique (CLWT); a subtractive method, wax blank milling (WBM); and two additive methods, multi jet modeling (MJM) and micro-stereolithography (Micro-SLA). MATERIALS AND METHODS Thirty study models were created using an acrylic model with the maxillary upper right canine, first premolar, and first molar teeth. Based on the scan files from a non-contact blue light scanner (Identica; Medit Co. Ltd., Seoul, Korea), thirty cores were produced using the WBM, MJM, and Micro-SLA methods, respectively, and another thirty frameworks were produced using the CLWT method. To measure the marginal and internal gap, the silicone replica method was adopted, and the silicone images obtained were evaluated using a digital microscope (KH-7700; Hirox, Tokyo, Japan) at 140X magnification. Analyses were performed using two-way analysis of variance (ANOVA) and the Tukey post hoc test (α=.05). RESULTS The mean marginal gaps and internal gaps showed significant differences according to tooth type (P<.001 and P<.001, respectively) and manufacturing method (P<.037 and P<.001, respectively). Unlike the WBM and MJM methods, Micro-SLA did not show any significant difference in mean marginal gap from CLWT. CONCLUSION The mean values of gaps resulting from the four different manufacturing methods were within a clinically allowable range, and, thus, the clinical use of additive manufacturing methods is acceptable as an alternative to the traditional lost wax technique and subtractive manufacturing. PMID:26330976

  4. Understanding the changes in ductility and Poisson's ratio of metallic glasses during annealing from microscopic dynamics

    SciTech Connect

    Wang, Z.; Ngai, K. L.; Wang, W. H.

    2015-07-21

    In the paper by K. L. Ngai et al. [J. Chem. Phys. 140, 044511 (2014)], the empirical correlation of ductility with the Poisson's ratio, ν{sub Poisson}, found in metallic glasses was theoretically explained by microscopic dynamic processes which link on the one hand ductility, and on the other hand the Poisson's ratio. Specifically, the dynamic processes are the primitive relaxation in the Coupling Model, which is the precursor of the Johari–Goldstein β-relaxation, and the caged atoms dynamics characterized by the effective Debye–Waller factor f{sub 0} or equivalently the nearly constant loss (NCL) in susceptibility. All these processes and the parameters characterizing them are accessible experimentally except f{sub 0} or the NCL of caged atoms; thus, so far, the experimental verification of the explanation of the correlation between ductility and Poisson's ratio is incomplete. In the experimental part of this paper, we report dynamic mechanical measurement of the NCL of the metallic glass La{sub 60}Ni{sub 15}Al{sub 25} as-cast, and the changes by annealing at temperature below T{sub g}. The observed monotonic decrease of the NCL with aging time, reflecting the corresponding increase of f{sub 0}, correlates with the decrease of ν{sub Poisson}. This is an important observation because such measurements, not made before, provide the missing link in confirming by experiment the explanation of the correlation of ductility with ν{sub Poisson}. On aging the metallic glass, also observed in the isochronal loss spectra is the shift of the β-relaxation to higher temperatures and the reduction of the relaxation strength. These concomitant changes of the β-relaxation and NCL are the root cause of embrittlement by aging the metallic glass. The NCL of caged atoms is terminated by the onset of the primitive relaxation in the Coupling Model, which is generally supported by experiments. From this relation, the monotonic decrease of the NCL with aging time is caused by the slowing down

  5. Thermodynamic network model for predicting effects of substrate addition and other perturbations on subsurface microbial communities

    SciTech Connect

    Jack Istok; Melora Park; James McKinley; Chongxuan Liu; Lee Krumholz; Anne Spain; Aaron Peacock; Brett Baldwin

    2007-04-19

    The overall goal of this project is to develop and test a thermodynamic network model for predicting the effects of substrate additions and environmental perturbations on microbial growth, community composition and system geochemistry. The hypothesis is that a thermodynamic analysis of the energy-yielding growth reactions performed by defined groups of microorganisms can be used to make quantitative and testable predictions of the change in microbial community composition that will occur when a substrate is added to the subsurface or when environmental conditions change.

  6. Continental crust composition constrained by measurements of crustal Poisson's ratio

    NASA Astrophysics Data System (ADS)

    Zandt, George; Ammon, Charles J.

    1995-03-01

    Deciphering the geological evolution of the Earth's continental crust requires knowledge of its bulk composition and global variability. The main uncertainties are associated with the composition of the lower crust. Seismic measurements probe the elastic properties of the crust at depth, from which composition can be inferred. Of particular note is Poisson's ratio, Σ; this elastic parameter can be determined uniquely from the ratio of P- to S-wave seismic velocity, and provides a better diagnostic of crustal composition than either P- or S-wave velocity alone [1]. Previous attempts to measure Σ have been limited by difficulties in obtaining coincident P- and S-wave data sampling the entire crust [2]. Here we report 76 new estimates of crustal Σ spanning all of the continents except Antarctica. We find that, on average, Σ increases with the age of the crust. Our results strongly support the presence of a mafic lower crust beneath cratons, and suggest either a uniformitarian craton formation process involving delamination of the lower crust during continental collisions, followed by magmatic underplating, or a model in which crust formation processes have changed since the Precambrian era.
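
    The elastic relation exploited above is standard: for an isotropic solid, Poisson's ratio follows uniquely from the Vp/Vs ratio as Σ = ((Vp/Vs)² − 2) / (2((Vp/Vs)² − 1)). A short sketch:

```python
def poissons_ratio(vp, vs):
    """Poisson's ratio from P- and S-wave velocities, assuming isotropic
    elasticity: ((vp/vs)^2 - 2) / (2 * ((vp/vs)^2 - 1))."""
    r2 = (vp / vs) ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))
```

    For a Poisson solid (Vp/Vs = √3) this gives the familiar value 0.25; higher Vp/Vs, typical of more mafic compositions, gives a larger ratio.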

  7. Error propagation in PIV-based Poisson pressure calculations

    NASA Astrophysics Data System (ADS)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2015-11-01

    After more than 20 years of development, PIV has become a standard non-invasive velocity field measurement technique, and promises to make PIV-based pressure calculations possible. However, the errors inherent in PIV velocity fields propagate through integration and contaminate the calculated pressure field. We propose an analysis that shows how the uncertainties in the velocity field propagate to the pressure field through the Poisson equation. First, we model the dynamics of error propagation using boundary value problems (BVPs). Next, the L2-norm and/or L∞-norm is utilized as the measure of error in the velocity and pressure fields. Finally, using analysis techniques including the maximum principle and the Poincaré inequality, and by considering the well-posedness of the BVPs, the error in the pressure field can be bounded by the error level of the data. Specifically, we examine if and how the error in the pressure field depends continuously on the BVP data. Factors such as flow field geometry, boundary conditions, and velocity field noise levels will be discussed analytically.

  8. Poisson process approximation for sequence repeats, and sequencing by hybridization.

    PubMed

    Arratia, R; Martin, D; Reinert, G; Waterman, M S

    1996-01-01

    Sequencing by hybridization is a tool to determine a DNA sequence from the unordered list of all l-tuples contained in this sequence; typical numbers for l are l = 8, 10, 12. For theoretical purposes we assume that the multiset of all l-tuples is known. This multiset determines the DNA sequence uniquely if none of the so-called Ukkonen transformations are possible. These transformations require repeats of (l-1)-tuples in the sequence, with these repeats occurring in certain spatial patterns. We model DNA as an i.i.d. sequence. We first prove Poisson process approximations for the process of indicators of all leftmost long repeats allowing self-overlap and for the process of indicators of all leftmost long repeats without self-overlap. Using the Chen-Stein method, we get bounds on the error of these approximations. As a corollary, we approximate the distribution of longest repeats. In the second step we analyze the spatial patterns of the repeats. Finally we combine these two steps to prove an approximation for the probability that a random sequence is uniquely recoverable from its list of l-tuples. For all our results we give some numerical examples including error bounds. PMID:8891959
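
    A crude version of the Poisson approximation at work here: for a uniform i.i.d. DNA sequence, the number of position pairs sharing an l-tuple is roughly Poisson with mean λ = C(n−l+1, 2) · (1/4)^l, so the probability of no repeated l-tuple is about e^(−λ). The sketch below ignores the self-overlap corrections and error bounds that the Chen-Stein analysis treats rigorously:

```python
from math import comb, exp

def poisson_no_repeat_prob(n, l, alphabet=4):
    """Crude Poisson approximation for the probability that an i.i.d.
    uniform sequence of length n over `alphabet` letters contains no
    repeated l-tuple: exp(-lambda) with lambda = C(n-l+1, 2) * p^l."""
    positions = n - l + 1
    p_match = (1.0 / alphabet) ** l
    lam = comb(positions, 2) * p_match
    return exp(-lam)
```

    As expected, the approximation is 1 when only a single l-tuple fits (n = l), and decays as the sequence length grows.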

  9. Relaxation-time limit in the multi-dimensional bipolar nonisentropic Euler-Poisson systems

    NASA Astrophysics Data System (ADS)

    Li, Yeping; Zhou, Zhiming

    2015-05-01

    In this paper, we consider the multi-dimensional bipolar nonisentropic Euler-Poisson systems, which model various physical phenomena in semiconductor devices, plasmas and channel proteins. We mainly study the relaxation-time limit of the initial value problem for the bipolar full Euler-Poisson equations with well-prepared initial data. Inspired by the Maxwell iteration, we construct the different approximation states for the case τσ = 1 and σ = 1, respectively, and show that periodic initial-value problems of the certain scaled bipolar nonisentropic Euler-Poisson systems in the case τσ = 1 and σ = 1 have unique smooth solutions in the time interval where the classical energy-transport equation and the drift-diffusive equation have smooth solutions. Moreover, it is also obtained that the smooth solutions converge to those of the energy-transport models at the rate of τ2 and to those of the drift-diffusive models at the rate of τ, respectively. The proof of these results is based on the continuation principle and the error estimates.

  10. Semiclassical Limits of Ore Extensions and a Poisson Generalized Weyl Algebra

    NASA Astrophysics Data System (ADS)

    Cho, Eun-Hee; Oh, Sei-Qwon

    2016-07-01

    We observe [Launois and Lecoutre, Trans. Am. Math. Soc. 368:755-785, 2016, Proposition 4.1] that Poisson polynomial extensions appear as semiclassical limits of a class of Ore extensions. As an application, a Poisson generalized Weyl algebra A 1, considered as a Poisson version of the quantum generalized Weyl algebra, is constructed and its Poisson structures are studied. In particular, a necessary and sufficient condition for A 1 to be Poisson simple is obtained, and it is established that the Poisson endomorphisms of A 1 are Poisson analogues of the endomorphisms of the quantum generalized Weyl algebra.

  12. Analysis and Modeling of soil hydrology under different soil additives in artificial runoff plots

    NASA Astrophysics Data System (ADS)

    Ruidisch, M.; Arnhold, S.; Kettering, J.; Huwe, B.; Kuzyakov, Y.; Ok, Y.; Tenhunen, J. D.

    2009-12-01

    The impact of monsoon events during June and July in the Korean project region Haean Basin, which is located in the northeastern part of South Korea, plays a key role for erosion, leaching and groundwater pollution risk by agrochemicals. Therefore, the project investigates the main hydrological processes in agricultural soils under field and laboratory conditions on different scales (plot, hillslope and catchment). Soil hydrological parameters were analysed depending on different soil additives, which are known for prevention of soil erosion and nutrient loss as well as increasing of water infiltration, aggregate stability and soil fertility. Hence, synthetic water-soluble polyacrylamide (PAM), biochar (black carbon mixed with organic fertilizer), and a combination of both PAM and biochar were applied in runoff plots at three agricultural field sites. Additionally, a control subplot was set up without any additives. The field sites were selected in areas with similar hillslope gradients and with emphasis on the dominant land management form of dryland farming in Haean, which is characterised by row planting and row covering by foil. Hydrological parameters such as saturated hydraulic conductivity, matrix potential and water content were analysed by infiltration experiments, continuous tensiometer measurements, and time domain reflectometry, as well as pressure plates to identify characteristic water retention curves of each horizon. Weather data were recorded by three weather stations next to the runoff plots. Measured data also provide the input data for modeling water transport in the unsaturated zone in runoff plots with HYDRUS 1D/2D/3D and SWAT (Soil & Water Assessment Tool).
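
    The characteristic water retention curves mentioned above are commonly parameterized with the van Genuchten model, the standard closure used by HYDRUS. A sketch with illustrative parameter values (roughly sand-like; not fitted to the study's soils):

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at pressure head h (negative when
    unsaturated) under the van Genuchten retention model:
    theta(h) = theta_r + (theta_s - theta_r) / (1 + (alpha*|h|)^n)^m,
    with m = 1 - 1/n; returns theta_s at or above saturation (h >= 0)."""
    if h >= 0.0:
        return theta_s
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * abs(h)) ** n) ** m
```

    Water content decreases monotonically as the soil dries (more negative pressure head), from theta_s toward the residual content theta_r.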

  13. Guarana provides additional stimulation over caffeine alone in the planarian model.

    PubMed

    Moustakas, Dimitrios; Mezzio, Michael; Rodriguez, Branden R; Constable, Mic Andre; Mulligan, Margaret E; Voura, Evelyn B

    2015-01-01

    The stimulant effect of energy drinks is primarily attributed to the caffeine they contain. Many energy drinks also contain other ingredients that might enhance the tonic effects of these caffeinated beverages. One of these additives is guarana. Guarana is a climbing plant native to the Amazon whose seeds contain approximately four times the amount of caffeine found in coffee beans. The mix of other natural chemicals contained in guarana seeds is thought to heighten the stimulant effects of guarana over caffeine alone. Yet, despite the growing use of guarana as an additive in energy drinks, and a burgeoning market for it as a nutritional supplement, the science examining guarana and how it affects other dietary ingredients is lacking. To appreciate the stimulant effects of guarana and other natural products, a straightforward model to investigate their physiological properties is needed. The planarian provides such a system. The locomotor activity and convulsive response of planarians with substance exposure has been shown to provide an excellent system to measure the effects of drug stimulation, addiction and withdrawal. To gauge the stimulant effects of guarana we studied how it altered the locomotor activity of the planarian species Dugesia tigrina. We report evidence that guarana seeds provide additional stimulation over caffeine alone, and document the changes to this stimulation in the context of both caffeine and glucose. PMID:25880065

  15. Nonparametric Independence Screening in Sparse Ultra-High Dimensional Additive Models.

    PubMed

    Fan, Jianqing; Feng, Yang; Song, Rui

    2011-06-01

    A variable screening procedure via correlation learning was proposed in Fan and Lv (2008) to reduce dimensionality in sparse ultra-high dimensional models. Even when the true model is linear, the marginal regression can be highly nonlinear. To address this issue, we further extend the correlation learning to marginal nonparametric learning. Our nonparametric independence screening, called NIS, is a specific member of the sure independence screening family. Several closely related variable screening procedures are proposed. Under general nonparametric models and some mild technical conditions, the proposed independence screening methods are shown to enjoy a sure screening property. The extent to which the dimensionality can be reduced by independence screening is also explicitly quantified. As a methodological extension, data-driven thresholding and an iterative nonparametric independence screening (INIS) procedure are also proposed to enhance the finite-sample performance for fitting sparse additive models. The simulation results and a real data analysis demonstrate that the proposed procedure works well with moderate sample size and large dimension and performs better than competing methods. PMID:22279246
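
    The screening idea can be sketched in a few lines: rank predictors by a marginal statistic and keep the top few. The sketch below uses the absolute Pearson correlation (the original SIS of Fan and Lv); NIS would replace it with the magnitude of a nonparametric marginal fit. Assumes non-constant columns and response:

```python
def marginal_screen(X, y, keep):
    """Rank the columns of X (list of rows) by absolute marginal Pearson
    correlation with y and return the indices of the top `keep` columns."""
    n, p = len(X), len(X[0])
    ybar = sum(y) / n
    sy = sum((v - ybar) ** 2 for v in y) ** 0.5
    scores = []
    for j in range(p):
        col = [row[j] for row in X]
        xbar = sum(col) / n
        sx = sum((v - xbar) ** 2 for v in col) ** 0.5
        cov = sum((a - xbar) * (b - ybar) for a, b in zip(col, y))
        scores.append(abs(cov / (sx * sy)))
    order = sorted(range(p), key=lambda j: scores[j], reverse=True)
    return sorted(order[:keep])
```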

  16. Creep damage in a localized load sharing fibre bundle model with additional ageing

    NASA Astrophysics Data System (ADS)

    Lennartz-Sassinek, Sabine; Danku, Zsuzsa; Main, Ian; Kun, Ferenc

    2013-04-01

    Many fields of science are interested in damage growth in earth materials. Often the damage does not propagate in large avalanches like the crack growth measured by acoustic emissions. "Silent" damage may also occur, whose emissions are either too small to be detected or mix with background noise. These silent emissions may carry the majority of the overall damage in a system until failure. One famous model for damage growth is the fibre bundle model. Here we consider an extended version of a localized load sharing fibre bundle model which incorporates additional time-dependent ageing of each fibre, motivated by a chemically active environment. We present the non-trivial time-dependent damage growth in this model in the low load limit, representing creep damage far away from failure. We show both numerical simulations and analytical equations describing the damage rate of silent events and the corresponding amount of triggered "acoustic" damage. The analytical description is in agreement with the numerical results.

  17. Model Scramjet Inlet Unstart Induced by Mass Addition and Heat Release

    NASA Astrophysics Data System (ADS)

    Im, Seong-Kyun; Baccarella, Damiano; McGann, Brendan; Liu, Qili; Wermer, Lydiy; Do, Hyungrok

    2015-11-01

    The inlet unstart phenomena in a model scramjet are investigated in an arc-heated hypersonic wind tunnel. Unstart induced by nitrogen or ethylene jets is compared at low- and high-enthalpy Mach 4.5 freestream flow conditions. The jet injection pressurizes the downstream flow by mass addition and flow blockage. In the case of ethylene jet injection, heat release from combustion increases the backpressure further. Time-resolved schlieren imaging is performed at the jet and at the lip of the model inlet to visualize the flow features during unstart. High-frequency pressure measurements provide information on pressure fluctuations at the scramjet wall. In both the mass-driven and heat-release-driven unstart cases, similar transient and quasi-steady behaviors of the unstart shockwave system are observed during the unstart process. Combustion-driven unstart induces severe oscillatory motions of the jet and the unstart shock at the lip of the scramjet inlet after the completion of the unstart process, while the flow unstarted by mass addition alone remains relatively steady. The discrepancies between the mass-driven and heat-release-driven unstart processes are explained by a flow choking mechanism.

  18. Determinants of Low Birth Weight in Malawi: Bayesian Geo-Additive Modelling.

    PubMed

    Ngwira, Alfred; Stanley, Christopher C

    2015-01-01

    Studies on factors of low birth weight in Malawi have neglected the flexible approach of using smooth functions for some covariates in models. Such a flexible approach reveals the detailed relationship of covariates with the response. The study aimed at investigating risk factors of low birth weight in Malawi by assuming a flexible approach for continuous covariates and a geographical random effect. A Bayesian geo-additive model for birth weight in kilograms and size of the child at birth (less than average, or average and higher), with district as a spatial effect, using the 2010 Malawi demographic and health survey data, was adopted. A Gaussian model for birth weight in kilograms and a binary logistic model for the binary outcome (size of child at birth) were fitted. Continuous covariates were modelled by penalized (p) splines and spatial effects were smoothed by a two-dimensional p-spline. The study found that child birth order and the mother's weight and height are significant predictors of birth weight. Secondary education for the mother, birth order categories 2-3 and 4-5, the richer wealth index category, and the mother's height were significant predictors of child size at birth. The area associated with low birth weight was Chitipa, and the areas with increased risk of less-than-average size at birth were Chitipa and Mchinji. The study found support for the flexible modelling of some covariates that clearly have nonlinear influences. Nevertheless, there is no strong support for the inclusion of geographical spatial analysis. The spatial patterns, though, point to the influence of omitted variables with some spatial structure, or possibly epidemiological processes that account for this spatial structure, and the maps generated could be used for targeting development efforts at a glance. PMID:26114866

  19. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    SciTech Connect

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the nonlinear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, and is simple to implement, quick, and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Nonlinear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, this criterion, which requires a large number of events, is not easy to satisfy in practice. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides an extensive characterization of these biases in exponential fitting. 
The more appropriate measure based on the maximum likelihood estimator (MLE
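
    The replacement of the least-squares measure by the Poisson MLE can be sketched as follows (an illustration only: a generic bounded quasi-Newton minimizer from SciPy stands in for the authors' modified Levenberg-Marquardt update, and the single-exponential lifetime model and all parameter values are invented):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 100)               # histogram bin centres (e.g. ns)
true_amp, true_tau = 500.0, 2.0
counts = rng.poisson(true_amp * np.exp(-t / true_tau))   # Poisson-distributed bins

def poisson_nll(params):
    """Negative Poisson log-likelihood (up to a constant) of the decay model."""
    amp, tau = params
    mu = np.clip(amp * np.exp(-t / tau), 1e-12, None)    # guard against log(0)
    return np.sum(mu - counts * np.log(mu))

res = minimize(poisson_nll, x0=[100.0, 1.0],
               bounds=[(1e-6, None), (1e-6, None)], method="L-BFGS-B")
amp_hat, tau_hat = res.x
```

    Unlike a least-squares fit, this objective weights low-count bins correctly, which is where the bias of least squares on Poisson data is most severe.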

  20. Inter event times of fluid induced earthquakes suggest their Poisson nature

    NASA Astrophysics Data System (ADS)

    Langenbruch, C.; Dinske, C.; Shapiro, S. A.

    2011-11-01

    We analyze the inter-event time distribution of fluid-injection-induced earthquakes for six catalogs collected at geothermal injection sites at Soultz-sous-Forêts and Basel. We find that the distribution of waiting times during phases of constant seismicity rate coincides with the exponential distribution of the homogeneous Poisson process (HPP). We analyze the waiting times for the complete event catalogs and find that, as for naturally occurring earthquakes, injection-induced earthquakes are distributed according to a non-homogeneous Poisson process in time. Moreover, the process of event occurrence in the injection volume domain is an HPP. These results indicate that fluid-injection-induced earthquakes are directly triggered by the loading induced by the fluid injection. We also consider the spatial distance between events and perform a nearest-neighbor analysis in the time-space-magnitude domain. Our analysis, including a comparison to a synthetic catalog created according to the ETAS model, reveals no signs of causal relationships between events. Therefore, coupling effects between events are very weak. The Poisson model appears to be a very good approximation of fluid-induced seismicity.
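
    The defining property exploited here, that waiting times of a homogeneous Poisson process are exponentially distributed, is easy to check numerically (a minimal sketch; the rate and the Kolmogorov-Smirnov comparison are illustrative and not taken from the study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
rate = 5.0                                   # events per unit time
waits = rng.exponential(1.0 / rate, size=10_000)
event_times = np.cumsum(waits)               # a homogeneous Poisson process

# For an HPP the empirical inter-event times should match Exp(rate):
inter = np.diff(event_times)
ks_stat, p_value = stats.kstest(inter, "expon", args=(0, 1.0 / rate))
mean_wait = inter.mean()                     # should be close to 1 / rate
```

    For a real catalog one would apply the same comparison within phases of constant seismicity rate, since a time-varying rate turns the pooled waiting times non-exponential even when the process is Poisson.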

  1. Shaping the Arago-Poisson spot with incomplete spiral phase modulation.

    PubMed

    Zhang, Yuanying; Zhang, Wuhong; Su, Ming; Chen, Lixiang

    2016-04-01

    The Arago-Poisson spot played an important role in the discovery of the wave nature of light. We demonstrate a novel way to shape the Arago-Poisson spot by partially twisting the phase fronts of the incident light beam. We use a spatial light modulator to generate the holographic gratings both for mimicking the circular opaque objects and for modulating the spiral phase profiles. For incomplete spiral phase of five- and tenfold symmetry, we observe the gradual formation of the on-axis bright spots upon propagation. Our results show that two fundamental but seemingly independent optical phenomena, namely, the Arago-Poisson spot and the orbital angular momentum (OAM) of light, can be well connected by changing the phase height ϑ gradually from 0 to 2π. The experimental results are well interpreted visually by plotting the Poynting vector flows. In addition, based on the decomposed OAM spectra, the observations can also be understood from the controllable mixture of a fundamental Gaussian beam and an OAM beam. Our work is an elegant demonstration that spiral phase modulation can add to the optical tool to effectively shape the diffraction of light and may have potential applications in the field of optical manipulations. PMID:27140766

  2. The charge conserving Poisson-Boltzmann equations: Existence, uniqueness, and maximum principle

    SciTech Connect

    Lee, Chiun-Chang

    2014-05-15

    The present article is concerned with the charge conserving Poisson-Boltzmann (CCPB) equation in high-dimensional bounded smooth domains. The CCPB equation is a Poisson-Boltzmann type of equation with nonlocal coefficients. First, under the Robin boundary condition, we get the existence of weak solutions to this equation. The main approach is variational, based on minimization of a logarithm-type energy functional. To deal with the regularity of weak solutions, we establish a maximum modulus estimate for the standard Poisson-Boltzmann (PB) equation to show that weak solutions of the CCPB equation are essentially bounded. Then the classical solutions follow from the elliptic regularity theorem. Second, a maximum principle for the CCPB equation is established. In particular, we show that in the case of global electroneutrality, the solution achieves both its maximum and minimum values at the boundary. However, in the case of global non-electroneutrality, the solution may attain its maximum value at an interior point. In addition, under certain conditions on the boundary, we show that the global non-electroneutrality implies pointwise non-electroneutrality.

  3. Modeling particulate matter concentrations measured through mobile monitoring in a deletion/substitution/addition approach

    NASA Astrophysics Data System (ADS)

    Su, Jason G.; Hopke, Philip K.; Tian, Yilin; Baldwin, Nichole; Thurston, Sally W.; Evans, Kristin; Rich, David Q.

    2015-12-01

    Land use regression modeling (LUR) through local-scale circular modeling domains has been used to predict traffic-related air pollution such as nitrogen oxides (NOX). LUR modeling for fine particulate matter (PM), which generally has smaller spatial gradients than NOX, has typically been applied in studies involving multiple study regions. To increase the spatial coverage for fine PM and key constituent concentrations, we designed a mobile monitoring network in Monroe County, New York to measure pollutant concentrations of black carbon (BC, wavelength at 880 nm), ultraviolet black carbon (UVBC, wavelength at 370 nm) and Delta-C (the difference between the UVBC and BC concentrations) using the Clarkson University Mobile Air Pollution Monitoring Laboratory (MAPL). A Deletion/Substitution/Addition (D/S/A) algorithm was applied, using circular buffers as a basis for statistics. The algorithm maximizes the prediction accuracy for locations without measurements using the V-fold cross-validation technique, and it reduces overfitting compared to other approaches. We found that the D/S/A LUR modeling approach could achieve good results, with prediction powers of 60%, 63%, and 61%, respectively, for BC, UVBC, and Delta-C. The advantage of mobile monitoring is that it can monitor pollutant concentrations at hundreds of spatial points in a region, rather than the fewer than 100 points typical of a fixed-site saturation monitoring network. This research indicates that a mobile saturation sampling network, when combined with proper modeling techniques, can uncover small-area variations (e.g., 10 m) in particulate matter concentrations.
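
    The search strategy can be caricatured as a greedy subset search over addition, deletion, and substitution moves scored by V-fold cross-validated error (a much-simplified sketch with ordinary least squares and synthetic data; the actual D/S/A algorithm operates on buffer-based spatial predictors):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 300, 8
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + 0.5 * rng.standard_normal(n)

def cv_mse(features, V=5):
    """V-fold cross-validated MSE of an OLS fit on the given feature subset."""
    if not features:
        return float(np.mean((y - y.mean()) ** 2))
    idx = np.arange(n)
    errs = []
    for fold in np.array_split(idx, V):
        train = np.setdiff1d(idx, fold)
        coef, *_ = np.linalg.lstsq(X[np.ix_(train, features)], y[train], rcond=None)
        pred = X[np.ix_(fold, features)] @ coef
        errs.append(np.mean((y[fold] - pred) ** 2))
    return float(np.mean(errs))

# Greedy search: propose addition, deletion, and substitution moves,
# keep the move that lowers the cross-validated error the most.
model, best = [], cv_mse([])
improved = True
while improved:
    improved = False
    candidates = [model + [j] for j in range(p) if j not in model]          # addition
    candidates += [[f for f in model if f != j] for j in model]             # deletion
    candidates += [[f for f in model if f != j] + [k]                       # substitution
                   for j in model for k in range(p) if k not in model]
    for cand in candidates:
        score = cv_mse(cand)
        if score < best - 1e-9:
            model, best, improved = sorted(cand), score, True
```

    Scoring each move by held-out error rather than in-sample fit is what gives the procedure its resistance to overfitting.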

  4. The biobehavioral family model: testing social support as an additional exogenous variable.

    PubMed

    Woods, Sarah B; Priest, Jacob B; Roush, Tara

    2014-12-01

    This study tests the inclusion of social support as a distinct exogenous variable in the Biobehavioral Family Model (BBFM). The BBFM is a biopsychosocial approach to health that proposes that biobehavioral reactivity (anxiety and depression) mediates the relationship between family emotional climate and disease activity. Data for this study included married, English-speaking adult participants (n = 1,321; 55% female; M age = 45.2 years) from the National Comorbidity Survey Replication, a nationally representative epidemiological study of the frequency of mental disorders in the United States. Participants reported their demographics, marital functioning, social support from friends and relatives, anxiety and depression (biobehavioral reactivity), number of chronic health conditions, and number of prescription medications. Confirmatory factor analyses supported the items used in the measures of negative marital interactions, social support, and biobehavioral reactivity, as well as the use of negative marital interactions, friends' social support, and relatives' social support as distinct factors in the model. Structural equation modeling indicated a good fit of the data to the hypothesized model (χ² = 846.04, p = .000, SRMR = .039, CFI = .924, TLI = .914, RMSEA = .043). Negative marital interactions predicted biobehavioral reactivity (β = .38, p < .001), as did relatives' social support, inversely (β = -.16, p < .001). Biobehavioral reactivity predicted disease activity (β = .40, p < .001) and was demonstrated to be a significant mediator through tests of indirect effects. Findings are consistent with previous tests of the BBFM with adult samples, and suggest the important addition of family social support as a predicting factor in the model. PMID:24981970

  5. A habitat suitability model for Chinese sturgeon determined using the generalized additive method

    NASA Astrophysics Data System (ADS)

    Yi, Yujun; Sun, Jie; Zhang, Shanghong

    2016-03-01

    The Chinese sturgeon is a type of large anadromous fish that migrates between the ocean and rivers. Because of the construction of dams, this sturgeon's migration path has been cut off, and this species currently is on the verge of extinction. Simulating suitable environmental conditions for spawning followed by repairing or rebuilding its spawning grounds are effective ways to protect this species. Various habitat suitability models based on expert knowledge have been used to evaluate the suitability of spawning habitat. In this study, a two-dimensional hydraulic simulation is used to inform a habitat suitability model based on the generalized additive method (GAM). The GAM is based on real data. The values of water depth and velocity are calculated first via the hydrodynamic model and later applied in the GAM. The final habitat suitability model is validated using the catch per unit effort (CPUE) data of 1999 and 2003. The model results show that a velocity of 1.06-1.56 m/s and a depth of 13.33-20.33 m are highly suitable ranges for the Chinese sturgeon to spawn. The hydraulic habitat suitability indexes (HHSI) for seven discharges (4000; 9000; 12,000; 16,000; 20,000; 30,000; and 40,000 m3/s) are calculated to evaluate integrated habitat suitability. The results show that the integrated habitat suitability reaches its highest value at a discharge of 16,000 m3/s. This study is the first to apply a GAM to evaluate the suitability of spawning grounds for the Chinese sturgeon. The study provides a reference for the identification of potential spawning grounds in the entire basin.

  6. Generalized additive models used to predict species abundance in the Gulf of Mexico: an ecosystem modeling tool.

    PubMed

    Drexler, Michael; Ainsworth, Cameron H

    2013-01-01

    Spatially explicit ecosystem models of all types require an initial allocation of biomass, often in areas where fisheries independent abundance estimates do not exist. A generalized additive modelling (GAM) approach is used to describe the abundance of 40 species groups (i.e. functional groups) across the Gulf of Mexico (GoM) using a large fisheries independent data set (SEAMAP) and climate scale oceanographic conditions. Predictor variables included in the model are chlorophyll a, sediment type, dissolved oxygen, temperature, and depth. Despite the presence of a large number of zeros in the data, a single GAM using a negative binomial distribution was suitable to make predictions of abundance for multiple functional groups. We present an example case study using pink shrimp (Farfantepenaeus duorarum) and compare the results to known distributions. The model successfully predicts the known areas of high abundance in the GoM, including those areas where no data were input into the model fitting. Overall, the model reliably captures areas of high and low abundance for the large majority of functional groups observed in SEAMAP. The result of this method allows for the objective setting of spatial distributions for numerous functional groups across a modeling domain, even where abundance data may not exist. PMID:23691223

  7. Generalized Additive Models Used to Predict Species Abundance in the Gulf of Mexico: An Ecosystem Modeling Tool

    PubMed Central

    Drexler, Michael; Ainsworth, Cameron H.

    2013-01-01

    Spatially explicit ecosystem models of all types require an initial allocation of biomass, often in areas where fisheries independent abundance estimates do not exist. A generalized additive modelling (GAM) approach is used to describe the abundance of 40 species groups (i.e. functional groups) across the Gulf of Mexico (GoM) using a large fisheries independent data set (SEAMAP) and climate scale oceanographic conditions. Predictor variables included in the model are chlorophyll a, sediment type, dissolved oxygen, temperature, and depth. Despite the presence of a large number of zeros in the data, a single GAM using a negative binomial distribution was suitable to make predictions of abundance for multiple functional groups. We present an example case study using pink shrimp (Farfantepenaeus duroarum) and compare the results to known distributions. The model successfully predicts the known areas of high abundance in the GoM, including those areas where no data was inputted into the model fitting. Overall, the model reliably captures areas of high and low abundance for the large majority of functional groups observed in SEAMAP. The result of this method allows for the objective setting of spatial distributions for numerous functional groups across a modeling domain, even where abundance data may not exist. PMID:23691223

  8. Nonlinear feedback in a six-dimensional Lorenz Model: impact of an additional heating term

    NASA Astrophysics Data System (ADS)

    Shen, B.-W.

    2015-03-01

    In this study, a six-dimensional Lorenz model (6DLM) is derived, based on a recent study using a five-dimensional (5-D) Lorenz model (LM), in order to examine the impact of an additional mode and its accompanying heating term on solution stability. The new mode added to improve the representation of the streamfunction is referred to as a secondary streamfunction mode, while the two additional modes, which appear in both the 6DLM and 5DLM but not in the original LM, are referred to as secondary temperature modes. Two energy conservation relationships of the 6DLM are first derived in the dissipationless limit. The impact of three additional modes on solution stability is examined by comparing numerical solutions and ensemble Lyapunov exponents of the 6DLM and 5DLM as well as the original LM. For the onset of chaos, the critical value of the normalized Rayleigh number (rc) is determined to be 41.1. The critical value is larger than that in the 3DLM (rc ~ 24.74), but slightly smaller than the one in the 5DLM (rc ~ 42.9). A stability analysis and numerical experiments obtained using generalized LMs, with or without simplifications, suggest the following: (1) negative nonlinear feedback in association with the secondary temperature modes, as first identified using the 5DLM, plays a dominant role in providing feedback for improving the solution's stability of the 6DLM, (2) the additional heating term in association with the secondary streamfunction mode may destabilize the solution, and (3) overall feedback due to the secondary streamfunction mode is much smaller than the feedback due to the secondary temperature modes; therefore, the critical Rayleigh number of the 6DLM is comparable to that of the 5DLM. The 5DLM and 6DLM collectively suggest different roles for small-scale processes (i.e., stabilization vs. destabilization), consistent with the following statement by Lorenz (1972): If the flap of a butterfly's wings can be instrumental in generating a tornado, it can

  9. Nonlinear feedback in a six-dimensional Lorenz model: impact of an additional heating term

    NASA Astrophysics Data System (ADS)

    Shen, B.-W.

    2015-12-01

    In this study, a six-dimensional Lorenz model (6DLM) is derived, based on a recent study using a five-dimensional (5-D) Lorenz model (LM), in order to examine the impact of an additional mode and its accompanying heating term on solution stability. The new mode added to improve the representation of the streamfunction is referred to as a secondary streamfunction mode, while the two additional modes, which appear in both the 6DLM and 5DLM but not in the original LM, are referred to as secondary temperature modes. Two energy conservation relationships of the 6DLM are first derived in the dissipationless limit. The impact of three additional modes on solution stability is examined by comparing numerical solutions and ensemble Lyapunov exponents of the 6DLM and 5DLM as well as the original LM. For the onset of chaos, the critical value of the normalized Rayleigh number (rc) is determined to be 41.1. The critical value is larger than that in the 3DLM (rc ~ 24.74), but slightly smaller than the one in the 5DLM (rc ~ 42.9). A stability analysis and numerical experiments obtained using generalized LMs, with or without simplifications, suggest the following: (1) negative nonlinear feedback in association with the secondary temperature modes, as first identified using the 5DLM, plays a dominant role in providing feedback for improving the solution's stability of the 6DLM, (2) the additional heating term in association with the secondary streamfunction mode may destabilize the solution, and (3) overall feedback due to the secondary streamfunction mode is much smaller than the feedback due to the secondary temperature modes; therefore, the critical Rayleigh number of the 6DLM is comparable to that of the 5DLM. The 5DLM and 6DLM collectively suggest different roles for small-scale processes (i.e., stabilization vs. 
destabilization), consistent with the following statement by Lorenz (1972): "If the flap of a butterfly's wings can be instrumental in generating a tornado, it can
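
    The stability diagnostic used in both papers, a Lyapunov exponent, can be illustrated on the original three-dimensional Lorenz model (a hedged sketch using the classical parameters and the two-trajectory renormalization method; the 5DLM/6DLM equations themselves are not reproduced here):

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, r=28.0, b=8.0 / 3.0):
    """Right-hand side of the classical 3-D Lorenz model."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (r - z) - y, x * y - b * z])

def rk4_step(s, dt):
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(s + 0.5 * dt * k1)
    k3 = lorenz_rhs(s + 0.5 * dt * k2)
    k4 = lorenz_rhs(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, d0 = 0.01, 1e-8
a = np.array([1.0, 1.0, 1.0])
for _ in range(2000):                     # settle onto the attractor first
    a = rk4_step(a, dt)
b_ = a + np.array([d0, 0.0, 0.0])         # perturbed companion trajectory

log_sum, steps = 0.0, 20000
for _ in range(steps):
    a, b_ = rk4_step(a, dt), rk4_step(b_, dt)
    d = np.linalg.norm(b_ - a)
    log_sum += np.log(d / d0)
    b_ = a + (b_ - a) * (d0 / d)          # renormalise the separation

lyap = log_sum / (steps * dt)             # ~0.9 for the classical parameters
```

    A positive largest exponent signals chaos; in the generalized models the same diagnostic, averaged over an ensemble of initial conditions, is what locates the critical Rayleigh number.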

  10. Impact of an additional chronic BDNF reduction on learning performance in an Alzheimer mouse model

    PubMed Central

    Psotta, Laura; Rockahr, Carolin; Gruss, Michael; Kirches, Elmar; Braun, Katharina; Lessmann, Volkmar; Bock, Jörg; Endres, Thomas

    2015-01-01

    There is increasing evidence that brain-derived neurotrophic factor (BDNF) plays a crucial role in Alzheimer’s disease (AD) pathology. A number of studies demonstrated that AD patients exhibit reduced BDNF levels in the brain and the blood serum, and in addition, several animal-based studies indicated a potential protective effect of BDNF against Aβ-induced neurotoxicity. In order to further investigate the role of BDNF in the etiology of AD, we created a novel mouse model by crossing a well-established AD mouse model (APP/PS1) with a mouse exhibiting a chronic BDNF deficiency (BDNF+/−). This new triple transgenic mouse model enabled us to further analyze the role of BDNF in AD in vivo. We reasoned that if BDNF has a protective effect against AD pathology, an AD-like phenotype should occur earlier and/or with greater severity in our new mouse model than in the APP/PS1-mice. Indeed, the behavioral analysis revealed that the APP/PS1-BDNF+/−-mice show an earlier onset of learning impairments in a two-way active avoidance task in comparison to APP/PS1- and BDNF+/−-mice. However, in the Morris water maze (MWM) test we did not observe an overall aggravated impairment in spatial learning, and short-term memory in an object recognition task also remained intact in all tested mouse lines. In addition to the behavioral experiments, we analyzed the amyloid plaque pathology in the APP/PS1 and APP/PS1-BDNF+/−-mice and observed a comparable plaque density in the two genotypes. Moreover, our results revealed a higher plaque density in prefrontal cortical regions compared to hippocampal brain regions. Our data reveal that higher cognitive tasks requiring the recruitment of cortical networks appear to be more severely affected in our new mouse model than learning tasks requiring mainly sub-cortical networks. Furthermore, our observations of an accelerated impairment in active avoidance learning in APP/PS1-BDNF+/−-mice further supports the hypothesis that BDNF deficiency

  11. A spectral Poisson solver for kinetic plasma simulation

    NASA Astrophysics Data System (ADS)

    Szeremley, Daniel; Obberath, Jens; Brinkmann, Ralf

    2011-10-01

    Plasma resonance spectroscopy is a well-established plasma diagnostic method, realized in several designs. One of these designs is the multipole resonance probe (MRP). In its idealized - geometrically simplified - version it consists of two dielectrically shielded, hemispherical electrodes to which an RF signal is applied. A numerical tool is under development which is capable of simulating the dynamics of the plasma surrounding the MRP in the electrostatic approximation. In this contribution we concentrate on the specialized Poisson solver for that tool. The plasma is represented by an ensemble of point charges. By expanding both the charge density and the potential into spherical harmonics, a largely analytical solution of the Poisson problem can be employed. For a practical implementation, the expansion must be appropriately truncated. With this spectral solver we are able to efficiently solve the Poisson equation in a kinetic plasma simulation without the need to introduce a spatial discretization.
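
    The principle behind such a spectral solver, that a basis expansion turns the Poisson equation into independent algebraic equations for the expansion coefficients, can be shown in a one-dimensional periodic Fourier analogue (a hedged sketch; the actual tool expands in spherical harmonics around the probe, not in a Fourier basis):

```python
import numpy as np

N = 128
L = 2 * np.pi
x = np.arange(N) * L / N
rho = np.sin(3 * x)                          # charge density (analytic test case)

# Spectral solve of  -u'' = rho  with periodic boundary conditions:
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # angular wavenumbers
rho_hat = np.fft.fft(rho)
u_hat = np.zeros_like(rho_hat)
nonzero = k != 0                             # the k=0 mode is fixed by neutrality
u_hat[nonzero] = rho_hat[nonzero] / k[nonzero] ** 2
u = np.fft.ifft(u_hat).real

exact = np.sin(3 * x) / 9.0                  # -u'' = sin(3x)  =>  u = sin(3x)/9
max_err = np.max(np.abs(u - exact))
```

    In the spherical-harmonic case the angular part becomes algebraic in the same way, while a radial equation per harmonic is solved analytically; truncating the expansion plays the role of the finite N here.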

  12. Blocked Shape Memory Effect in Negative Poisson's Ratio Polymer Metamaterials.

    PubMed

    Boba, Katarzyna; Bianchi, Matteo; McCombe, Greg; Gatt, Ruben; Griffin, Anselm C; Richardson, Robert M; Scarpa, Fabrizio; Hamerton, Ian; Grima, Joseph N

    2016-08-10

    We describe a new class of negative Poisson's ratio (NPR) open cell PU-PE foams produced by blocking the shape memory effect in the polymer. Contrary to classical NPR open cell thermoset and thermoplastic foams that return to their auxetic phase after reheating (and therefore limit their use in technological applications), this new class of cellular solids has a permanent negative Poisson's ratio behavior, generated through multiple shape memory (mSM) treatments that fix the topology of the cell foam. The mSM-NPR foams have Poisson's ratio values similar to those of the auxetic foams prior to their return to the conventional phase, but compressive stress-strain curves similar to those of conventional foams. The results show that by manipulating the shape memory effect in polymer microstructures it is possible to obtain new classes of materials with unusual deformation mechanisms. PMID:27377708

  13. A convergent 2D finite-difference scheme for the Dirac–Poisson system and the simulation of graphene

    SciTech Connect

    Brinkman, D.; Heitzinger, C.; Markowich, P.A.

    2014-01-15

    We present a convergent finite-difference scheme of second order in both space and time for the 2D electromagnetic Dirac equation. We apply this method in the self-consistent Dirac–Poisson system to the simulation of graphene. The model is justified for low energies, where the particles have wave vectors sufficiently close to the Dirac points. In particular, we demonstrate that our method can be used to calculate solutions of the Dirac–Poisson system where potentials act as beam splitters or Veselago lenses.
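
    For the Poisson half of such a self-consistent coupling, a standard second-order finite-difference solve on a manufactured problem looks like the following (a generic sketch, not the authors' scheme, which also discretizes the Dirac equation in space and time):

```python
import numpy as np

n = 30                                   # interior grid points per dimension
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")

# Manufactured problem:  -Δu = f  with  u = sin(pi x) sin(pi y)  on the unit
# square and homogeneous Dirichlet boundaries.
f = 2 * np.pi ** 2 * np.sin(np.pi * X) * np.sin(np.pi * Y)

# Second-order 5-point Laplacian assembled with Kronecker products.
T = (np.diag(2 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1)) / h ** 2
I = np.eye(n)
A = np.kron(I, T) + np.kron(T, I)

u = np.linalg.solve(A, f.ravel()).reshape(n, n)
exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
max_err = np.abs(u - exact).max()        # O(h^2) discretization error
```

    In a self-consistent loop, the charge density built from the Dirac spinor would replace the manufactured right-hand side, and the resulting potential would feed back into the next Dirac time step.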

  14. Thermal Stability of Nanocrystalline Alloys by Solute Additions and A Thermodynamic Modeling

    NASA Astrophysics Data System (ADS)

    Saber, Mostafa

    and alpha → gamma phase transformation in Fe-Ni-Zr alloys. In addition to the experimental study of thermal stabilization of nanocrystalline Fe-Cr-Zr or Fe-Ni-Zr alloys, the thesis presented here developed a new predictive model, applicable to strongly segregating solutes, for the thermodynamic stabilization of binary alloys. This model can serve as a benchmark for selecting solutes and evaluating their possible contribution to stabilization. Following a regular solution model, the chemical and elastic strain energy contributions are combined to obtain the mixing enthalpy. The total Gibbs free energy of mixing is then minimized with respect to simultaneous variations in the grain boundary volume fraction and the solute concentrations in the grain boundary and the grain interior. The Lagrange multiplier method was used to obtain numerical solutions. Applications are given for the temperature dependence of the grain size and the grain boundary solute excess for selected binary systems where experimental results imply that thermodynamic stabilization could be operative. This thesis also extends the binary model to a new model for the thermodynamic stabilization of ternary nanocrystalline alloys. It is applicable to strongly segregating size-misfit solutes and uses input data available in the literature. In the same manner as the binary model, this model is based on a regular solution approach such that the chemical and elastic strain energy contributions are incorporated into the mixing enthalpy ΔHmix, and the mixing entropy ΔSmix is obtained using the ideal solution approximation. The Gibbs free energy of mixing ΔGmix is then minimized with respect to simultaneous variations in the grain growth and solute segregation parameters. The Lagrange multiplier method is similarly used to obtain numerical solutions for the minimum ΔGmix. 
The temperature dependence of the nanocrystalline grain size and interfacial solute excess can be obtained for selected ternary systems. As
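
    The regular-solution ingredients described above can be illustrated in a single composition variable (a toy sketch: the interaction parameter and temperature are invented, and the thesis minimizes over the grain boundary volume fraction and the grain-boundary and interior solute concentrations simultaneously, not over one variable):

```python
import numpy as np
from scipy.optimize import minimize_scalar

R = 8.314            # gas constant, J/(mol K)
T = 800.0            # temperature, K
omega = 20_000.0     # regular-solution interaction parameter, J/mol (illustrative)

def delta_g_mix(x):
    """Regular-solution mixing free energy per mole at solute fraction x."""
    enthalpy = omega * x * (1 - x)                       # chemical mixing enthalpy
    entropy = -R * (x * np.log(x) + (1 - x) * np.log(1 - x))  # ideal mixing entropy
    return enthalpy - T * entropy                        # ΔGmix = ΔHmix - T ΔSmix

res = minimize_scalar(delta_g_mix, bounds=(1e-6, 0.5), method="bounded")
x_min = res.x        # equilibrium solute fraction in this toy picture
```

    Since omega/(2RT) > 1 here, the free energy has an interior minimum at a dilute composition; the full model adds the elastic misfit term to the enthalpy and enforces mass-balance constraints via Lagrange multipliers.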

  15. Do bacterial cell numbers follow a theoretical Poisson distribution? Comparison of experimentally obtained numbers of single cells with random number generation via computer simulation.

    PubMed

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu

    2016-12-01

    We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts follow a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype sufficiently followed a Poisson distribution. Furthermore, to examine the validity of the Poisson distribution parameters obtained from the experimental bacterial cell counts, we compared them with the parameters of a Poisson distribution estimated using random number generation via computer simulation. The Poisson distribution parameters obtained experimentally from bacterial cell counts were within the range of the parameters estimated by computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers can be applied to single-cell studies involving small numbers of bacterial cells. In particular, the procedure presented in this study enables the development of an inactivation model at the single-cell level that can estimate the variability of surviving bacterial numbers during the bacterial death process. PMID:27554145
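    The dilution-and-count check can be mimicked with a small simulation. The sketch below (standard library only) draws counts from a Poisson distribution via Knuth's algorithm, fits the parameter by MLE (the sample mean), and forms the likelihood-ratio (G) statistic against the fitted Poisson; the sample size and parameter value are illustrative, not the study's data.

    ```python
    import math
    import random

    def poisson_sample(lam, rng):
        """Knuth's algorithm: multiply uniforms until the product drops below e^-lam."""
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    def poisson_pmf(k, lam):
        return math.exp(-lam) * lam ** k / math.factorial(k)

    def poisson_lr_statistic(counts):
        """G statistic 2 * sum O_k * ln(O_k / E_k) comparing the observed count
        frequencies with a Poisson distribution whose parameter is the MLE
        (the sample mean)."""
        n = len(counts)
        lam = sum(counts) / n
        freq = {}
        for c in counts:
            freq[c] = freq.get(c, 0) + 1
        g = 0.0
        for k, obs in freq.items():
            g += 2.0 * obs * math.log(obs / (n * poisson_pmf(k, lam)))
        return lam, g

    rng = random.Random(1)
    counts = [poisson_sample(1.0, rng) for _ in range(500)]
    lam_hat, g_stat = poisson_lr_statistic(counts)
    ```

    A small G statistic (compared with the appropriate chi-squared quantile) is consistent with the counts following a Poisson distribution.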

  16. Distributional properties of the three-dimensional Poisson Delaunay cell

    SciTech Connect

    Muche, L.

    1996-07-01

    This paper gives distributional properties of geometrical characteristics of the Delaunay tessellation generated by a stationary Poisson point process in ℝ³. The considerations are based on a well-known formula given by Miles which describes the size and shape of the "typical" three-dimensional Poisson Delaunay cell. The results are the probability density functions for its volume, the area, and the perimeter of one of its faces, the angle spanned in a face by two of its edges, and the length of an edge. These probability density functions are given in integral form. Formulas for higher moments of these characteristics are given explicitly.

  17. A Study of Poisson's Ratio in the Yield Region

    NASA Technical Reports Server (NTRS)

    Gerard, George; Wildhorn, Sorrel

    1952-01-01

    In the yield region of the stress-strain curve the variation in Poisson's ratio from the elastic to the plastic value is most pronounced. This variation was studied experimentally by a systematic series of tests on several aluminum alloys. The tests were conducted under simple tensile and compressive loading along three orthogonal axes. A theoretical variation of Poisson's ratio for an orthotropic solid was obtained from dilatational considerations. The assumptions used in deriving the theory were examined by use of the test data and were found to be in reasonable agreement with experimental evidence.

  19. Design and tuning of standard additive model based fuzzy PID controllers for multivariable process systems.

    PubMed

    Harinath, Eranda; Mann, George K I

    2008-06-01

    This paper describes a design and two-level tuning method for fuzzy proportional-integral-derivative (FPID) controllers for a multivariable process, where the fuzzy inference uses the standard additive model. The proposed method can be used for any n × n multi-input-multi-output process and guarantees closed-loop stability. In the two-level tuning scheme, the tuning follows two steps: low-level tuning followed by high-level tuning. The low-level tuning adjusts apparent linear gains, whereas the high-level tuning changes the nonlinearity in the normalized fuzzy output. In this paper, two types of FPID configurations are considered, and their performances are evaluated by using a real-time multizone temperature control problem having a 3 × 3 process system. PMID:18558531

  20. Use of generalised additive models to categorise continuous variables in clinical prediction

    PubMed Central

    2013-01-01

    Background In medical practice many, essentially continuous, clinical parameters tend to be categorised by physicians for ease of decision-making. Indeed, categorisation is a common practice both in medical research and in the development of clinical prediction rules, particularly where the ensuing models are to be applied in daily clinical practice to support clinicians in the decision-making process. Since the number of categories into which a continuous predictor must be categorised depends partly on the relationship between the predictor and the outcome, the need for more than two categories must be borne in mind. Methods We propose a categorisation methodology for clinical-prediction models, using Generalised Additive Models (GAMs) with P-spline smoothers to determine the relationship between the continuous predictor and the outcome. The proposed method consists of creating at least one average-risk category along with high- and low-risk categories based on the GAM smooth function. We applied this methodology to a prospective cohort of patients with exacerbated chronic obstructive pulmonary disease. The predictors selected were respiratory rate and partial pressure of carbon dioxide in the blood (PCO2), and the response variable was poor evolution. An additive logistic regression model was used to show the relationship between the covariates and the dichotomous response variable. The proposed categorisation was compared to the continuous predictor as the best option, using the AIC and AUC evaluation parameters. The sample was divided into derivation (60%) and validation (40%) samples. The first was used to obtain the cut points, while the second was used to validate the proposed methodology. Results The three-category proposal for the respiratory rate was ≤20; (20,24]; >24, for which the following values were obtained: AIC=314.5 and AUC=0.638. The respective values for the continuous predictor were AIC=317.1 and AUC=0.634, with no statistically

  1. Estimation of the lag time in a subsequent monomer addition model for fibril elongation.

    PubMed

    Shoffner, Suzanne K; Schnell, Santiago

    2016-08-01

    Fibrillogenesis, the production or development of protein fibers, has been linked to protein folding diseases. The progress curve of fibrils or aggregates typically takes on a sigmoidal shape with a lag phase, a rapid growth phase, and a final plateau regime. The study of the lag phase and the estimation of its critical timescale provide insight into the factors regulating the fibrillation process. However, methods to estimate a quantitative expression for the lag time rely on empirical expressions, which cannot connect the lag time to kinetic parameters associated with the reaction mechanisms of protein fibrillation. Here we introduce an approach for the estimation of the lag time using the governing rate equations of the elementary reactions of a subsequent monomer addition model for protein fibrillation as a case study. We show that the lag time is given by the sum of the critical timescales for each fibril intermediate in the subsequent monomer addition mechanism and therefore reveals causal connectivity between intermediate species. Furthermore, we find that single-molecule assays of protein fibrillation can exhibit a lag phase without a nucleation process, while dye and extrinsic fluorescent probe bulk assays of protein fibrillation do not exhibit an observable lag phase during template-dependent elongation. Our approach could be valuable for investigating the effects of intrinsic and extrinsic factors on the protein fibrillation reaction mechanism and provides physicochemical insights into the parameters regulating the lag phase. PMID:27250246
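    The "sum of critical timescales" idea can be sketched with a minimal irreversible chain F0 → F1 → … → Fn in which every monomer-addition step is assigned the same hypothetical pseudo-first-order rate constant k (the paper's scheme is more general). The lag estimate n/k is then compared with the half-rise time of the final species.

    ```python
    def chain_progress(n_steps, k, t_end, dt):
        """Explicit Euler integration of F0 -> F1 -> ... -> Fn with the same
        pseudo-first-order rate constant k for every monomer-addition step."""
        x = [1.0] + [0.0] * n_steps
        t, out = 0.0, []
        while t < t_end:
            flux = [k * x[i] for i in range(n_steps)]   # flux out of each pool
            x[0] -= dt * flux[0]
            for i in range(1, n_steps):
                x[i] += dt * (flux[i - 1] - flux[i])
            x[n_steps] += dt * flux[n_steps - 1]
            t += dt
            out.append((t, x[n_steps]))
        return out

    n_steps, k = 5, 1.0
    traj = chain_progress(n_steps, k, t_end=15.0, dt=0.005)
    lag_estimate = n_steps / k                       # sum of critical timescales 1/k
    t_half = next(t for t, xn in traj if xn >= 0.5)  # half-rise time of the product
    ```

    For this chain the final species follows an Erlang-type rise, so the half-rise time sits close to the summed timescales n/k, illustrating the lag-time decomposition.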

  2. Supra-additive effects of tramadol and acetaminophen in a human pain model.

    PubMed

    Filitz, Jörg; Ihmsen, Harald; Günther, Werner; Tröster, Andreas; Schwilden, Helmut; Schüttler, Jürgen; Koppert, Wolfgang

    2008-06-01

    The combination of analgesic drugs with different pharmacological properties may show better efficacy with fewer side effects. The aim of this study was to examine the analgesic and antihyperalgesic properties of the weak opioid tramadol and the non-opioid acetaminophen, alone as well as in combination, in an experimental pain model in humans. After approval of the local Ethics Committee, 17 healthy volunteers were enrolled in this double-blind, placebo-controlled, cross-over study. Transcutaneous electrical stimulation at high current densities (29.6 ± 16.2 mA) induced spontaneous acute pain (NRS = 6 of 10) and distinct areas of hyperalgesia for painful mechanical stimuli (pinprick hyperalgesia). Pain intensities as well as the extent of the areas of hyperalgesia were assessed before, during and 150 min after a 15-min intravenous infusion of acetaminophen (650 mg), tramadol (75 mg), a combination of both (325 mg acetaminophen and 37.5 mg tramadol), or saline 0.9%. Tramadol led to a maximum pain reduction of 11.7 ± 4.2% with negligible antihyperalgesic properties. In contrast, acetaminophen led to a similar pain reduction (9.8 ± 4.4%) but a sustained antihyperalgesic effect (34.5 ± 14.0% reduction of the hyperalgesic area). The combination of both analgesics at half doses led to a supra-additive pain reduction of 15.2 ± 5.7% and an enhanced antihyperalgesic effect (41.1 ± 14.3% reduction of the hyperalgesic areas) as compared with single administration of acetaminophen. Our study provides the first results on interactions of tramadol and acetaminophen on experimental pain and hyperalgesia in humans. Pharmacodynamic modeling combined with the isobolographic technique showed supra-additive effects of the combination of acetaminophen and tramadol for both analgesia and antihyperalgesia. The results may serve as a rationale for combining both analgesics. PMID:17709207
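    Supra-additivity is commonly quantified on an isobologram via the interaction index. A minimal sketch: the single-agent reference doses (650 mg, 75 mg) are taken from the study, but the equi-effective combination doses below are invented purely to illustrate an index below 1.

    ```python
    def interaction_index(a, b, ed_a, ed_b):
        """Isobolographic interaction index a/A + b/B for a combination (a, b)
        that is equi-effective with dose A of drug A alone and dose B of drug B
        alone. < 1 indicates supra-additivity, = 1 additivity, > 1 sub-additivity."""
        return a / ed_a + b / ed_b

    # Hypothetical equi-effective combination doses (illustration only)
    gamma = interaction_index(260.0, 30.0, 650.0, 75.0)
    ```

    Here gamma = 0.4 + 0.4 = 0.8 < 1, i.e. the combination reaches the reference effect with less total drug than additivity predicts.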

  3. Evaluation of the Performance of Smoothing Functions in Generalized Additive Models for Spatial Variation in Disease

    PubMed Central

    Siangphoe, Umaporn; Wheeler, David C.

    2015-01-01

    Generalized additive models (GAMs) with bivariate smoothing functions have been applied to estimate spatial variation in risk for many types of cancers. Only a handful of studies have evaluated the performance of smoothing functions applied in GAMs with regard to different geographical areas of elevated risk and different risk levels. This study evaluates the ability of different smoothing functions to detect overall spatial variation of risk and elevated risk in diverse geographical areas at various risk levels using a simulation study. We created five scenarios with different true risk area shapes (circle, triangle, linear) in a square study region. We applied four different smoothing functions in the GAMs, including two types of thin plate regression splines (TPRS) and two versions of locally weighted scatterplot smoothing (loess). We tested the null hypothesis of constant risk and detected areas of elevated risk using analysis of deviance with permutation methods and assessed the performance of the smoothing methods based on the spatial detection rate, sensitivity, accuracy, precision, power, and false-positive rate. The results showed that all methods had a higher sensitivity and a consistently moderate-to-high accuracy rate when the true disease risk was higher. The models generally performed better in detecting elevated risk areas than detecting overall spatial variation. One of the loess methods had the highest precision in detecting overall spatial variation across scenarios and outperformed the other methods in detecting a linear elevated risk area. The TPRS methods outperformed loess in detecting elevated risk in two circular areas. PMID:25983545
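    The analysis-of-deviance permutation test can be sketched in miniature. The code below replaces the GAM smoother with a crude fixed-bandwidth local-rate smoother and tests the null hypothesis of constant risk by permuting outcomes; the grid size, bandwidth, and risk levels are invented for illustration and are not the simulation scenarios of the study.

    ```python
    import random

    def local_rates(xs, ys, outcomes, bandwidth):
        """Crude stand-in for a GAM smooth: the event rate within a fixed-radius
        neighbourhood of each point."""
        rates = []
        for xi, yi in zip(xs, ys):
            num = den = 0
            for xj, yj, oj in zip(xs, ys, outcomes):
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= bandwidth ** 2:
                    num += oj
                    den += 1
            rates.append(num / den)
        return rates

    def permutation_pvalue(xs, ys, outcomes, bandwidth, n_perm, rng):
        """Permutation test of constant risk: the statistic is the largest
        deviation of the smoothed local rate from the overall rate."""
        overall = sum(outcomes) / len(outcomes)
        def stat(o):
            return max(abs(r - overall) for r in local_rates(xs, ys, o, bandwidth))
        observed = stat(outcomes)
        perm, hits = list(outcomes), 0
        for _ in range(n_perm):
            rng.shuffle(perm)
            if stat(perm) >= observed:
                hits += 1
        return (hits + 1) / (n_perm + 1)

    # Elevated-risk circle in a unit square (risk levels and geometry invented)
    rng = random.Random(42)
    n_pts = 150
    xs = [rng.random() for _ in range(n_pts)]
    ys = [rng.random() for _ in range(n_pts)]
    outcomes = [1 if rng.random() < (0.6 if (x - 0.5) ** 2 + (y - 0.5) ** 2
                                     <= 0.25 ** 2 else 0.2) else 0
                for x, y in zip(xs, ys)]
    p_value = permutation_pvalue(xs, ys, outcomes, bandwidth=0.2, n_perm=49, rng=rng)
    ```

    A small p-value rejects constant risk, mirroring the paper's deviance-with-permutation logic, though with a far simpler smoother and test statistic.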

  4. Modeling and additive manufacturing of bio-inspired composites with tunable fracture mechanical properties.

    PubMed

    Dimas, Leon S; Buehler, Markus J

    2014-07-01

    Flaws, imperfections and cracks are ubiquitous in material systems and are commonly the catalysts of catastrophic material failure. As stresses and strains tend to concentrate around cracks and imperfections, structures tend to fail far before large regions of material have ever been subjected to significant loading. Therefore, a major challenge in material design is to engineer systems that perform on par with pristine structures despite the presence of imperfections. In this work we integrate knowledge of biological systems with computational modeling and state-of-the-art additive manufacturing to synthesize advanced composites with tunable fracture mechanical properties. Supported by extensive mesoscale computer simulations, we demonstrate the design and manufacturing of composites that exhibit deformation mechanisms characteristic of pristine systems, featuring flaw-tolerant properties. We analyze the results by directly comparing strain fields for the synthesized composites, obtained through digital image correlation (DIC), with those of the computationally tested composites. Moreover, we plot Ashby diagrams for the range of simulated and experimental composites. Our findings show good agreement between simulation and experiment, confirming that the proposed mechanisms have a significant potential for vastly improving the fracture response of composite materials. We elucidate the role of stiffness-ratio variations of the composite constituents as an important feature in determining the composite properties. Moreover, our work validates the predictive ability of our models, presenting them as useful tools for guiding further material design. This work enables the tailored design and manufacturing of composites assembled from inferior building blocks that achieve optimal combinations of stiffness and toughness. PMID:24700202

  5. Effects of Mn addition on dislocation loop formation in A533B and model alloys

    NASA Astrophysics Data System (ADS)

    Watanabe, H.; Masaki, S.; Masubuchi, S.; Yoshida, N.; Dohi, K.

    2013-08-01

    It is well known that the radiation hardening or embrittlement of pressure vessel steels is very sensitive to the contents of minor solutes. To study the effect of dislocation loop formation on radiation hardening in these steels, in situ observation using a high-voltage electron microscope was conducted for the reference pressure vessel steel JRQ and Fe-based model alloys containing Mn, Si, and Ni. In the Fe-based model alloys, the addition of Mn was most effective for increasing dislocation loop density at 290 °C. Assuming that a di-interstitial acts as the nucleus for interstitial loop formation, a binding energy of 0.22 eV was obtained for the interaction of a Mn atom with an interstitial. The formation of Mn clusters detected by three-dimensional atom probe and of interstitial-type loops at room temperature clearly showed that the oversized Mn atoms migrate through an interstitial mechanism. The temperature and flux dependence of loop density in pressure vessel steels was very weak up to 290 °C. This suggests that interstitial atoms are deeply trapped by the radiation-induced solute clusters in pressure vessel steels.

  6. Mechanical properties of additively manufactured octagonal honeycombs.

    PubMed

    Hedayati, R; Sadighi, M; Mohammadi-Aghdam, M; Zadpoor, A A

    2016-12-01

    Honeycomb structures have found numerous applications as structural and biomedical materials due to their favourable properties such as low weight, high stiffness, and porosity. Application of additive manufacturing and 3D printing techniques allows for manufacturing of honeycombs with arbitrary shape and wall thickness, opening the way for optimizing the mechanical and physical properties for specific applications. In this study, the mechanical properties of honeycomb structures with a new geometry, called octagonal honeycomb, were investigated using analytical, numerical, and experimental approaches. An additive manufacturing technique, namely fused deposition modelling, was used to fabricate the honeycombs from polylactic acid (PLA). The honeycomb structures were then mechanically tested under compression and the mechanical properties of the structures were determined. In addition, the Euler-Bernoulli and Timoshenko beam theories were used to derive analytical relationships for the elastic modulus, yield stress, Poisson's ratio, and buckling stress of this new design of honeycomb structure. Finite element models were also created to analyse the mechanical behaviour of the honeycombs computationally. The analytical solutions obtained using Timoshenko beam theory were close to the computational results in terms of elastic modulus, Poisson's ratio, and yield stress, especially for relative densities smaller than 25%. The Timoshenko-based analytical solutions and the computational results were also in good agreement with experimental observations. Finally, the elastic properties of the proposed honeycomb structure were compared to those of other honeycomb structures such as square, triangular, hexagonal, mixed, diamond, and Kagome. The octagonal honeycomb showed yield stress and elastic modulus values very close to those of regular hexagonal honeycombs and lower than those of the other considered honeycombs. PMID:27612831
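    For comparison, the classical Euler-Bernoulli bending result for regular hexagonal honeycombs (the Gibson-Ashby formulas), against which the octagonal design is benchmarked, can be evaluated directly; this is the standard textbook relation, not the paper's derivation for the octagonal cell.

    ```python
    import math

    def hex_honeycomb_in_plane(t_over_l, h_over_l=1.0, theta_deg=30.0):
        """Gibson-Ashby in-plane properties of a hexagonal honeycomb from
        Euler-Bernoulli bending of the cell walls (thickness t, wall length l)."""
        th = math.radians(theta_deg)
        c, s = math.cos(th), math.sin(th)
        e_rel = t_over_l ** 3 * c / ((h_over_l + s) * s * s)  # E* / Es
        nu12 = c * c / ((h_over_l + s) * s)                   # in-plane Poisson's ratio
        return e_rel, nu12

    e_rel, nu12 = hex_honeycomb_in_plane(0.1)  # regular hexagon, t/l = 0.1
    ```

    For the regular hexagon this reduces to E*/Es ≈ 2.3 (t/l)³ and an in-plane Poisson's ratio of 1, the reference values the octagonal results are compared against.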

  7. Some applications of the fractional Poisson probability distribution

    SciTech Connect

    Laskin, Nick

    2009-11-15

    Physical and mathematical applications of the recently invented fractional Poisson probability distribution have been presented. As a physical application, a new family of quantum coherent states has been introduced and studied. As mathematical applications, we have developed the fractional generalization of Bell polynomials, Bell numbers, and Stirling numbers of the second kind. The appearance of fractional Bell polynomials is natural if one evaluates the diagonal matrix element of the evolution operator in the basis of newly introduced quantum coherent states. Fractional Stirling numbers of the second kind have been introduced and applied to evaluate the skewness and kurtosis of the fractional Poisson probability distribution function. A representation of the Bernoulli numbers in terms of fractional Stirling numbers of the second kind has been found. In the limiting case when the fractional Poisson probability distribution becomes the Poisson probability distribution, all of the above listed developments and implementations turn into the well-known results of quantum optics and the theory of combinatorial numbers.
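    A central object here is the one-parameter Mittag-Leffler function E_μ, which gives the zero-count probability of the fractional Poisson distribution, P(0) = E_μ(−νt^μ). A naive truncated-series evaluation (adequate only for moderate |z|; the argument values below are illustrative) might look like:

    ```python
    import math

    def mittag_leffler(mu, z, terms=80):
        """E_mu(z) = sum_{k>=0} z**k / Gamma(mu*k + 1), truncated after `terms`
        terms. For mu = 1 this series is exactly the exponential function."""
        return sum(z ** k / math.gamma(mu * k + 1) for k in range(terms))

    # mu = 1 recovers exp(z), matching the ordinary Poisson limit of P(0)
    ml_exp = mittag_leffler(1.0, -2.0)
    # fractional zero-count probability at an illustrative argument
    p0 = mittag_leffler(0.8, -1.0)
    ```

    The μ = 1 check confirms the series reduces to the ordinary Poisson result, consistent with the limiting case discussed in the abstract.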

  8. On covariant Poisson brackets in classical field theory

    SciTech Connect

    Forger, Michael; Salles, Mário O.

    2015-10-15

    How to give a natural geometric definition of a covariant Poisson bracket in classical field theory has for a long time been an open problem—as testified by the extensive literature on “multisymplectic Poisson brackets,” together with the fact that all these proposals suffer from serious defects. On the other hand, the functional approach does provide a good candidate which has come to be known as the Peierls–De Witt bracket and whose construction in a geometrical setting is now well understood. Here, we show how the basic “multisymplectic Poisson bracket” already proposed in the 1970s can be derived from the Peierls–De Witt bracket, applied to a special class of functionals. This relation allows us to trace back most (if not all) of the problems encountered in the past to ambiguities (the relation between differential forms on multiphase space and the functionals they define is not one-to-one) and also to the fact that this class of functionals does not form a Poisson subalgebra.

  9. Vectorized multigrid Poisson solver for the CDC CYBER 205

    NASA Technical Reports Server (NTRS)

    Barkai, D.; Brandt, M. A.

    1984-01-01

    The full multigrid (FMG) method is applied to the two dimensional Poisson equation with Dirichlet boundary conditions. This has been chosen as a relatively simple test case for examining the efficiency of fully vectorizing of the multigrid method. Data structure and programming considerations and techniques are discussed, accompanied by performance details.
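    The FMG scheme itself is not reproduced here, but the core V-cycle machinery can be sketched compactly in one dimension (scalar Python rather than vectorized code; the grid size, damped-Jacobi smoother, and cycle count are illustrative choices, not the CYBER 205 implementation):

    ```python
    import math

    def residual(u, f, h):
        # r = f - A u for the 1D operator -u'' with Dirichlet ends
        r = [0.0] * len(u)
        for i in range(1, len(u) - 1):
            r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / h ** 2
        return r

    def jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
        # damped Jacobi smoother
        for _ in range(sweeps):
            new = u[:]
            for i in range(1, len(u) - 1):
                new[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1]
                                                     + h ** 2 * f[i])
            u = new
        return u

    def restrict(r):
        # full weighting to the coarse grid
        nc = (len(r) - 1) // 2 + 1
        rc = [0.0] * nc
        for i in range(1, nc - 1):
            rc[i] = 0.25 * (r[2 * i - 1] + 2.0 * r[2 * i] + r[2 * i + 1])
        return rc

    def prolong(ec, n):
        # linear interpolation back to the fine grid
        e = [0.0] * n
        for i, v in enumerate(ec):
            e[2 * i] = v
        for i in range(1, n - 1, 2):
            e[i] = 0.5 * (e[i - 1] + e[i + 1])
        return e

    def v_cycle(u, f, h):
        if len(u) == 3:                       # coarsest grid: solve directly
            u[1] = 0.5 * h ** 2 * f[1]
            return u
        u = jacobi(u, f, h)                   # pre-smooth
        rc = restrict(residual(u, f, h))      # restrict the residual
        ec = v_cycle([0.0] * len(rc), rc, 2 * h)  # coarse-grid correction
        u = [ui + ei for ui, ei in zip(u, prolong(ec, len(u)))]
        return jacobi(u, f, h)                # post-smooth

    # solve -u'' = pi^2 sin(pi x) on (0, 1), u(0) = u(1) = 0; exact u = sin(pi x)
    n, h = 65, 1.0 / 64
    xs = [i * h for i in range(n)]
    f = [math.pi ** 2 * math.sin(math.pi * x) for x in xs]
    u = [0.0] * n
    for _ in range(10):
        u = v_cycle(u, f, h)
    err = max(abs(ui - math.sin(math.pi * x)) for ui, x in zip(u, xs))
    ```

    Each V-cycle cuts the algebraic error by a roughly constant factor independent of grid size, which is what made the method attractive to vectorize; the inner loops here map naturally onto the vector operations the paper discusses.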

  10. Subsonic Flow for the Multidimensional Euler-Poisson System

    NASA Astrophysics Data System (ADS)

    Bae, Myoungjean; Duan, Ben; Xie, Chunjing

    2016-04-01

    We establish the existence and stability of subsonic potential flow for the steady Euler-Poisson system in a multidimensional nozzle of a finite length when prescribing the electric potential difference on a non-insulated boundary from a fixed point at the exit, and prescribing the pressure at the exit of the nozzle. The Euler-Poisson system for subsonic potential flow can be reduced to a nonlinear elliptic system of second order. In this paper, we develop a technique to achieve a priori C^{1,α} estimates of solutions to a quasi-linear second order elliptic system with mixed boundary conditions in a multidimensional domain enclosed by a Lipschitz continuous boundary. In particular, we discovered a special structure of the Euler-Poisson system which enables us to obtain C^{1,α} estimates of the velocity potential and the electric potential functions, and this leads us to establish structural stability of subsonic flows for the Euler-Poisson system under perturbations of various data.

  11. 3D soft metamaterials with negative Poisson's ratio.

    PubMed

    Babaee, Sahab; Shim, Jongmin; Weaver, James C; Chen, Elizabeth R; Patel, Nikita; Bertoldi, Katia

    2013-09-25

    Buckling is exploited to design a new class of three-dimensional metamaterials with negative Poisson's ratio. A library of auxetic building blocks is identified and procedures are defined to guide their selection and assembly. The auxetic properties of these materials are demonstrated both through experiments and finite element simulations and exhibit excellent qualitative and quantitative agreement. PMID:23878067

  12. Negative poisson's ratio in single-layer black phosphorus.

    PubMed

    Jiang, Jin-Wu; Park, Harold S

    2014-01-01

    The Poisson's ratio is a fundamental mechanical property that relates the resulting lateral strain to applied axial strain. Although this value can theoretically be negative, it is positive for nearly all materials, though negative values have been observed in so-called auxetic structures. However, nearly all auxetic materials are bulk materials whose microstructure has been specifically engineered to generate a negative Poisson's ratio. Here we report using first-principles calculations the existence of a negative Poisson's ratio in a single-layer, two-dimensional material, black phosphorus. In contrast to engineered bulk auxetics, this behaviour is intrinsic for single-layer black phosphorus, and originates from its puckered structure, where the pucker can be regarded as a re-entrant structure that is comprised of two coupled orthogonal hinges. As a result of this atomic structure, a negative Poisson's ratio is observed in the out-of-plane direction under uniaxial deformation in the direction parallel to the pucker. PMID:25131569

  13. Void-containing materials with tailored Poisson's ratio

    NASA Astrophysics Data System (ADS)

    Goussev, Olga A.; Richner, Peter; Rozman, Michael G.; Gusev, Andrei A.

    2000-10-01

    Assuming square, hexagonal, and random packed arrays of nonoverlapping identical parallel cylindrical voids dispersed in an aluminum matrix, we have calculated numerically the concentration dependence of the transverse Poisson's ratios. It was shown that the transverse Poisson's ratio of the hexagonal and random packed arrays approached 1 upon increasing the concentration of voids while the ratio of the square packed array along the principal continuation directions approached 0. Experimental measurements were carried out on rectangular aluminum bricks with identical cylindrical holes drilled in square and hexagonal packed arrays. Experimental results were in good agreement with numerical predictions. We then demonstrated, based on the numerical and experimental results, that by varying the spatial arrangement of the holes and their volume fraction, one can design and manufacture voided materials with a tailored Poisson's ratio between 0 and 1. In practice, those with a high Poisson's ratio, i.e., close to 1, can be used to amplify the lateral responses of the structures while those with a low one, i.e., close to 0, can largely attenuate the lateral responses and can therefore be used in situations where stringent lateral stability is needed.

  14. Structured additive regression modeling of age of menarche and menopause in a breast cancer screening program.

    PubMed

    Duarte, Elisa; de Sousa, Bruno; Cadarso-Suarez, Carmen; Rodrigues, Vitor; Kneib, Thomas

    2014-05-01

    Breast cancer risk is believed to be associated with several reproductive factors, such as early menarche and late menopause. This study is based on the registries of the first time a woman enters the screening program, and presents a spatio-temporal analysis of the variables age of menarche and age of menopause along with other reproductive and socioeconomic factors. The database was provided by the Portuguese Cancer League (LPCC), a private nonprofit organization dealing with multiple issues related to oncology of which the Breast Cancer Screening Program is one of its main activities. The registry consists of 259,652 records of women who entered the screening program for the first time between 1990 and 2007 (45-69-year age group). Structured Additive Regression (STAR) models were used to explore spatial and temporal correlations with a wide range of covariates. These models are flexible enough to deal with a variety of complex datasets, allowing us to reveal possible relationships among the variables considered in this study. The analysis shows that early menarche occurs in younger women and in municipalities located in the interior of central Portugal. Women living in inland municipalities register later ages for menopause, and those born in central Portugal after 1933 show a decreasing trend in the age of menopause. Younger ages of menarche and late menopause are observed in municipalities with a higher purchasing power index. The analysis performed in this study portrays the time evolution of the age of menarche and age of menopause and their spatial characterization, adding to the identification of factors that could be of the utmost importance in future breast cancer incidence research. PMID:24615881

  15. Parametric identification of crystals having a cubic lattice with negative Poisson's ratios

    NASA Astrophysics Data System (ADS)

    Erofeev, V. I.; Pavlov, I. S.

    2015-11-01

    A two-dimensional model of an anisotropic crystalline material with cubic symmetry is considered. This model consists of a square lattice of round rigid particles, each possessing two translational and one rotational degree of freedom. Differential equations that describe propagation of elastic and rotational waves in such a medium are derived. A relationship between three groups of parameters is found: second-order elastic constants, acoustic wave velocities, and microstructure parameters. Values of the microstructure parameters of the considered anisotropic material at which its Poisson's ratios become negative are found.

  16. Enhancement of colour stability of anthocyanins in model beverages by gum arabic addition.

    PubMed

    Chung, Cheryl; Rojanasasithara, Thananunt; Mutilangi, William; McClements, David Julian

    2016-06-15

    This study investigated the potential of gum arabic to improve the stability of anthocyanins that are used in commercial beverages as natural colourants. The degradation of purple carrot anthocyanin in model beverage systems (pH 3.0) containing L-ascorbic acid proceeded with a first-order reaction rate during storage (40 °C for 5 days in light). The addition of gum arabic (0.05-5.0%) significantly enhanced the colour stability of anthocyanin, with the most stable systems observed at intermediate levels (1.5%). A further increase in concentration (>1.5%) reduced its efficacy due to a change in the conformation of the gum arabic molecules that hindered their exposure to the anthocyanins. Fluorescence quenching measurements showed that the anthocyanin could have interacted with the glycoprotein fractions of the gum arabic through hydrogen bonding, resulting in enhanced stability. Overall, this study provides valuable information about enhancing the stability of anthocyanins in beverage systems using natural ingredients. PMID:26868542
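    First-order degradation means ln C(t) is linear in t, so the rate constant can be recovered from a least-squares slope. A sketch with synthetic data (the rate constant and concentrations below are invented, not the study's measurements):

    ```python
    import math

    def first_order_rate(times, concs):
        """Least-squares slope of ln(C) versus t; the rate constant is -slope."""
        n = len(times)
        ys = [math.log(c) for c in concs]
        tbar = sum(times) / n
        ybar = sum(ys) / n
        num = sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
        den = sum((t - tbar) ** 2 for t in times)
        return -num / den

    # synthetic decay with k = 0.30 per day (illustration only)
    times = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
    concs = [100.0 * math.exp(-0.30 * t) for t in times]
    k_fit = first_order_rate(times, concs)
    half_life = math.log(2.0) / k_fit
    ```

    In this framework a stabilizer such as gum arabic shows up as a smaller fitted k, i.e. a longer colour half-life.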

  17. Statistical inference for the additive hazards model under outcome-dependent sampling

    PubMed Central

    Yu, Jichang; Liu, Yanyan; Sandler, Dale P.; Zhou, Haibo

    2015-01-01

    Cost-effective study designs and proper inference procedures for data from such designs are of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design, for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters under the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating its relative efficiency against the simple random sampling design, and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to analyze a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the cancer risk associated with radon exposure. PMID:26379363
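    The following is not the authors' weighted pseudo-score ODS estimator, but the additive hazards structure itself is easy to see in simulation: with a binary covariate the model implies Λ(t | Z=1) − Λ(t | Z=0) = βt, so a crude estimate of β falls out of Nelson-Aalen curves (simple random sampling, no censoring; all parameter values invented).

    ```python
    import random

    def nelson_aalen(times, t):
        """Nelson-Aalen estimate of the cumulative hazard at time t
        (no censoring: increment 1 / number-at-risk at each event <= t)."""
        srt = sorted(times)
        n = len(srt)
        h = 0.0
        for i, ti in enumerate(srt):
            if ti > t:
                break
            h += 1.0 / (n - i)
        return h

    # Under lambda(t | Z) = lambda0 + beta * Z the cumulative-hazard
    # difference between the two groups grows linearly with slope beta.
    rng = random.Random(0)
    lam0, beta, n = 1.0, 0.5, 2000
    t_z0 = [rng.expovariate(lam0) for _ in range(n)]          # Z = 0
    t_z1 = [rng.expovariate(lam0 + beta) for _ in range(n)]   # Z = 1
    t_eval = 0.5
    beta_hat = (nelson_aalen(t_z1, t_eval) - nelson_aalen(t_z0, t_eval)) / t_eval
    ```

    The ODS design of the paper replaces this naive comparison with biased subsampling plus weighting, which is what buys the efficiency gains reported in the simulations.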

  18. Influence of the heterogeneous reaction HCL + HOCl on an ozone hole model with hydrocarbon additions

    SciTech Connect

    Elliott, S.; Cicerone, R.J.; Turco, R.P.

    1994-02-20

    Injection of ethane or propane has been suggested as a means for reducing ozone loss within the Antarctic vortex because alkanes can convert active chlorine radicals into hydrochloric acid. In kinetic models of vortex chemistry including as heterogeneous processes only the hydrolysis and HCl reactions of ClONO₂ and N₂O₅, parts per billion by volume levels of the light alkanes counteract ozone depletion by sequestering chlorine atoms. Introduction of the surface reaction of HCl with HOCl causes ethane to deepen baseline ozone holes and generally works to impede any mitigation by hydrocarbons. The increased depletion occurs because HCl + HOCl can be driven by HOₓ radicals released during organic oxidation. Following initial hydrogen abstraction by chlorine, alkane breakdown leads to a net hydrochloric acid activation as the remaining hydrogen atoms enter the photochemical system. Lowering the rate constant for reactions of organic peroxy radicals with ClO to 10⁻¹³ cm³ molecule⁻¹ s⁻¹ does not alter results, and the major conclusions are insensitive to the timing of the ethane additions. Ignoring the organic peroxy radical plus ClO reactions entirely restores remediation capabilities by allowing HOₓ removal independent of HCl. Remediation also returns if early evaporation of polar stratospheric clouds leaves hydrogen atoms trapped in aldehyde intermediates, but real ozone losses are small in such cases. 95 refs., 4 figs., 7 tabs.

  19. In vivo characterization of two additional Leishmania donovani strains using the murine and hamster model.

    PubMed

    Kauffmann, F; Dumetz, F; Hendrickx, S; Muraille, E; Dujardin, J-C; Maes, L; Magez, S; De Trez, C

    2016-05-01

    Leishmania donovani is a protozoan parasite causing the neglected tropical disease visceral leishmaniasis. One difficulty in studying the immunopathology of L. donovani infection is the limited adaptability of the strains to experimental mammalian hosts. Our knowledge about L. donovani infections relies on a restricted number of East African strains (LV9, 1S). Isolated from patients in the 1960s, these strains were described extensively in mice and Syrian hamsters and have consequently become 'reference' laboratory strains. L. donovani strains from the Indian subcontinent display distinct clinical features compared to East African strains, and some reports describing their in vivo immunopathology exist. This study comprises a comprehensive immunopathological characterization of infection with two additional strains, the Ethiopian L. donovani L82 strain and the Nepalese L. donovani BPK282 strain, in both Syrian hamsters and C57BL/6 mice. Parameters including parasitaemia levels, weight loss, hepatosplenomegaly and alterations in the cellular composition of the spleen and liver showed that the L82 strain generated an overall more virulent infection than the BPK282 strain. Altogether, both L. donovani strains are suitable and interesting for subsequent in vivo investigation of visceral leishmaniasis in the Syrian hamster and the C57BL/6 mouse model. PMID:27012562

  20. Exact momentum conservation laws for the gyrokinetic Vlasov-Poisson equations

    SciTech Connect

    Brizard, Alain J.; Tronko, Natalia

    2011-08-15

    The exact momentum conservation laws for the nonlinear gyrokinetic Vlasov-Poisson equations are derived by applying the Noether method on the gyrokinetic variational principle [A. J. Brizard, Phys. Plasmas 7, 4816 (2000)]. From the gyrokinetic Noether canonical-momentum equation derived by the Noether method, the gyrokinetic parallel momentum equation and other gyrokinetic Vlasov-moment equations are obtained. In addition, an exact gyrokinetic toroidal angular-momentum conservation law is derived in axisymmetric tokamak geometry, where the transport of parallel-toroidal momentum is related to the radial gyrocenter polarization, which includes contributions from the guiding-center and gyrocenter transformations.

  1. Two-sample discrimination of Poisson means

    NASA Technical Reports Server (NTRS)

    Lampton, M.

    1994-01-01

    This paper presents a statistical test for detecting significant differences between two random count accumulations. The null hypothesis is that the two samples share a common random arrival process with a mean count proportional to each sample's exposure. The model represents the partition of N total events into two counts, A and B, as a sequence of N independent Bernoulli trials whose partition fraction, f, is determined by the ratio of the exposures of A and B. The detection of a significant difference is claimed when the background (null) hypothesis is rejected, which occurs when the observed sample falls in a critical region of (A, B) space. The critical region depends on f and the desired significance level, alpha. The model correctly takes into account the fluctuations in both the signals and the background data, including the important case of small numbers of counts in the signal, the background, or both. The significance can be exactly determined from the cumulative binomial distribution, which in turn can be inverted to determine the critical A(B) or B(A) contour. This paper gives efficient implementations of these tests, based on lookup tables. Applications include the detection of clustering of astronomical objects, the detection of faint emission or absorption lines in photon-limited spectroscopy, the detection of faint emitters or absorbers in photon-limited imaging, and dosimetry.
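
The conditional binomial test described in this abstract can be sketched in a few lines of Python. This is an illustrative one-sided version under the stated null hypothesis (A ~ Binomial(N, f) given N = A + B), not the paper's lookup-table implementation; the function name is ours.

```python
from math import comb

def poisson_two_sample_pvalue(a, b, exposure_a=1.0, exposure_b=1.0):
    """One-sided p-value that sample A's rate exceeds sample B's.

    Conditions on the total count N = a + b: under the null hypothesis of
    a common arrival rate, A ~ Binomial(N, f) with partition fraction
    f = exposure_a / (exposure_a + exposure_b).
    """
    n = a + b
    f = exposure_a / (exposure_a + exposure_b)
    # Upper tail P(X >= a) of the conditional binomial distribution.
    return sum(comb(n, k) * f**k * (1 - f)**(n - k) for k in range(a, n + 1))
```

For example, counts A = 8 and B = 2 with equal exposures condition on N = 10 and sum the upper tail of Binomial(10, 0.5), giving p = 56/1024 ≈ 0.055.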

  2. Second-order Poisson-Nernst-Planck solver for ion transport

    NASA Astrophysics Data System (ADS)

    Zheng, Qiong; Chen, Duan; Wei, Guo-Wei

    2011-06-01

    The Poisson-Nernst-Planck (PNP) theory is a simplified continuum model for a wide variety of chemical, physical and biological applications. Its ability to provide quantitative explanations and increasingly qualitative predictions of experimental measurements has earned it much recognition in the research community. Numerous computational algorithms have been constructed for the solution of the PNP equations. However, in the realistic ion-channel context, no second-order convergent PNP algorithm has ever been reported in the literature, due to many numerical obstacles, including discontinuous coefficients, singular charges, geometric singularities, and nonlinear couplings. The present work introduces a number of numerical algorithms to overcome these challenges and constructs the first second-order convergent PNP solver in the ion-channel context. First, a Dirichlet-to-Neumann mapping (DNM) algorithm is designed to alleviate the charge singularity due to the protein structure. Additionally, the matched interface and boundary (MIB) method is reformulated for solving the PNP equations; the MIB method systematically enforces the interface jump conditions and achieves second-order accuracy in the presence of complex geometry and geometric singularities of molecular surfaces. Moreover, two iterative schemes are utilized to deal with the coupled nonlinear equations. Furthermore, extensive and rigorous numerical validations are carried out over a number of geometries, including a sphere, two proteins and an ion channel, to examine the numerical accuracy and convergence order of the present algorithms. Finally, application is considered to a real transmembrane protein, the Gramicidin A channel. The performance of the proposed numerical techniques is tested against a number of factors, including mesh sizes, diffusion coefficient profiles, iterative schemes, ion concentrations, and applied voltages. Numerical predictions are

  4. Coagulation kinetics beyond mean field theory using an optimised Poisson representation

    SciTech Connect

    Burnett, James; Ford, Ian J.

    2015-05-21

    Binary particle coagulation can be modelled as the repeated random process of the combination of two particles to form a third. The kinetics may be represented by population rate equations based on a mean field assumption, according to which the rate of aggregation is taken to be proportional to the product of the mean populations of the two participants, but this can be a poor approximation when the mean populations are small. However, using the Poisson representation, it is possible to derive a set of rate equations that go beyond mean field theory, describing pseudo-populations that are continuous, noisy, and complex, but where averaging over the noise and initial conditions gives the mean of the physical population. Such an approach is explored for the simple case of a size-independent rate of coagulation between particles. Analytical results are compared with numerical computations and with results derived by other means. In the numerical work, we encounter instabilities that can be eliminated using a suitable “gauge” transformation of the problem [P. D. Drummond, Eur. Phys. J. B 38, 617 (2004)] which we show to be equivalent to the application of the Cameron-Martin-Girsanov formula describing a shift in a probability measure. The cost of such a procedure is to introduce additional statistical noise into the numerical results, but we identify an optimised gauge transformation where this difficulty is minimal for the main properties of interest. For more complicated systems, such an approach is likely to be computationally cheaper than Monte Carlo simulation.

  6. Poisson structures for lifts and periodic reductions of integrable lattice equations

    NASA Astrophysics Data System (ADS)

    Kouloukas, Theodoros E.; Tran, Dinh T.

    2015-02-01

    We introduce and study suitable Poisson structures for four-dimensional maps derived as lifts and specific periodic reductions of integrable lattice equations. These maps are Poisson with respect to these structures and the corresponding integrals are in involution.

  7. A Legendre-Fourier spectral method with exact conservation laws for the Vlasov-Poisson system

    NASA Astrophysics Data System (ADS)

    Manzini, G.; Delzanno, G. L.; Vencels, J.; Markidis, S.

    2016-07-01

    We present the design and implementation of an L2-stable spectral method for the discretization of the Vlasov-Poisson model of a collisionless plasma in one space and one velocity dimension. The velocity and space dependence of the Vlasov equation are resolved through a truncated spectral expansion based on Legendre and Fourier basis functions, respectively. The Poisson equation, which is coupled to the Vlasov equation, is also resolved through a Fourier expansion. The resulting system of ordinary differential equations is discretized in time by the implicit, second-order accurate Crank-Nicolson method. The nonlinear coupling between the Vlasov and Poisson equations is iteratively resolved at each time cycle by a Jacobian-free Newton-Krylov method. In this work we analyze the structure of the main conservation laws of the resulting Legendre-Fourier model, e.g., mass, momentum, and energy, and prove that they are exactly satisfied in the semi-discrete and discrete settings. The L2-stability of the method is ensured by discretizing the boundary conditions of the distribution function at the boundaries of the velocity domain with a suitable penalty term. The impact of the penalty term on the conservation properties is investigated theoretically and numerically. An implementation of the penalty term that does not affect the conservation of mass, momentum and energy is also proposed and studied. A collisional term is introduced in the discrete model to control the filamentation effect, without affecting the conservation properties of the system. Numerical results on a set of standard test problems illustrate the performance of the method.
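
The Poisson half of such a Fourier-resolved scheme is straightforward to illustrate: on a periodic domain, each Fourier mode of the potential is obtained by dividing the corresponding charge-density mode by k². A minimal 1-D sketch (illustrative only, not the authors' Legendre-Fourier solver; the function name and the zero-mean convention are ours):

```python
import numpy as np

def poisson_fourier_1d(rho, length):
    """Solve -phi'' = rho on a periodic domain via a Fourier expansion.

    In Fourier space the Poisson equation becomes k**2 * phi_hat = rho_hat;
    the k = 0 (mean) mode is set to zero, as for a neutralising background.
    """
    n = len(rho)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)  # wavenumbers
    rho_hat = np.fft.fft(rho)
    phi_hat = np.zeros_like(rho_hat)
    nonzero = k != 0
    phi_hat[nonzero] = rho_hat[nonzero] / k[nonzero] ** 2
    return np.real(np.fft.ifft(phi_hat))
```

With rho(x) = cos(x) on [0, 2π) the returned potential is cos(x), since that mode has k = 1.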

  8. Reference manual for the POISSON/SUPERFISH Group of Codes

    SciTech Connect

    Not Available

    1987-01-01

    The POISSON/SUPERFISH Group codes were set up to solve two separate problems: the design of magnets and the design of rf cavities in a two-dimensional geometry. The first stage of either problem is to describe the layout of the magnet or cavity in a way that can be used as input to solve the generalized Poisson equation for magnets or the Helmholtz equations for cavities. The computer codes require that the problems be discretized by replacing the differentials (dx, dy) by finite differences (ΔX, ΔY). Instead of defining the function everywhere in a plane, the function is defined only at a finite number of points on a mesh in the plane.
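
The discretization the manual describes, replacing derivatives by differences on a mesh, can be illustrated with a toy relaxation solver for the Poisson equation. This is a plain Jacobi iteration for the sake of example; the actual POISSON/SUPERFISH codes use far more sophisticated mesh and solution machinery.

```python
import numpy as np

def solve_poisson_dirichlet(rho, h, iterations=5000):
    """Jacobi relaxation for the 2-D Poisson equation -laplacian(phi) = rho.

    The differentials are replaced by finite differences on a uniform mesh
    of spacing h, with phi = 0 held on the boundary.
    """
    phi = np.zeros_like(rho)
    for _ in range(iterations):
        # Each interior point is updated from the average of its four
        # neighbours plus the local source term.
        phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                                  + phi[1:-1, 2:] + phi[1:-1, :-2]
                                  + h * h * rho[1:-1, 1:-1])
    return phi
```

For a point source at the centre of the mesh, the computed potential is positive there and symmetric about the diagonal, as expected.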

  9. Correlation between supercooled liquid relaxation and glass Poisson's ratio.

    PubMed

    Sun, Qijing; Hu, Lina; Zhou, Chao; Zheng, Haijiao; Yue, Yuanzheng

    2015-10-28

    We report on a correlation between the supercooled liquid (SL) relaxation and glass Poisson's ratio (v) by comparing the activation energy ratio (r) of the α and the slow β relaxations and the v values for both metallic and nonmetallic glasses. Poisson's ratio v generally increases with an increase in the ratio r and this relation can be described by the empirical function v = 0.5 - A*exp(-B*r), where A and B are constants. This correlation might imply that glass plasticity is associated with the competition between the α and the slow β relaxations in SLs. The underlying physics of this correlation lies in the heredity of the structural heterogeneity from liquid to glass. This work gives insights into both the microscopic mechanism of glass deformation through the SL dynamics and the complex structural evolution during liquid-glass transition. PMID:26520524

  10. Image deconvolution under Poisson noise using SURE-LET approach

    NASA Astrophysics Data System (ADS)

    Xue, Feng; Liu, Jiaqi; Meng, Gang; Yan, Jing; Zhao, Min

    2015-10-01

    We propose an image deconvolution algorithm for data contaminated by Poisson noise. The SURE-LET method, which minimizes Stein's unbiased risk estimate (SURE), was first proposed to deal with Gaussian noise corruption. Our key contribution is to demonstrate that the SURE-LET algorithm is also applicable to Poisson-noisy images and to propose an efficient algorithm. The formulation of SURE requires knowledge of the Gaussian noise variance. We experimentally found a simple and direct link between the noise variance estimated by the median absolute difference (MAD) method and the optimal one that leads to the best deconvolution performance in terms of mean squared error (MSE). Extensive experiments show that this optimal noise variance works satisfactorily for a wide range of natural images.
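
A MAD-style noise estimate of the kind mentioned above can be sketched as follows. This illustrative version takes the median absolute deviation of horizontal first differences as a crude stand-in for a finest-scale wavelet band; it is not the authors' exact estimator, and the function name is ours.

```python
import numpy as np

def mad_sigma(image):
    """Estimate the Gaussian noise standard deviation of an image by the
    median absolute deviation (MAD).

    MAD is applied to horizontal first differences; dividing the
    differences by sqrt(2) keeps the noise variance unchanged, and the
    0.6745 factor makes the estimator consistent for Gaussian noise.
    """
    d = np.diff(image, axis=1).ravel() / np.sqrt(2.0)
    return np.median(np.abs(d - np.median(d))) / 0.6745
```

On a flat image with added Gaussian noise of standard deviation 2, the estimate comes out close to 2.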

  11. Quantized Nambu-Poisson manifolds and n-Lie algebras

    SciTech Connect

    DeBellis, Joshua; Saemann, Christian; Szabo, Richard J.

    2010-12-15

    We investigate the geometric interpretation of quantized Nambu-Poisson structures in terms of noncommutative geometries. We describe an extension of the usual axioms of quantization in which classical Nambu-Poisson structures are translated to n-Lie algebras at quantum level. We demonstrate that this generalized procedure matches an extension of Berezin-Toeplitz quantization yielding quantized spheres, hyperboloids, and superspheres. The extended Berezin quantization of spheres is closely related to a deformation quantization of n-Lie algebras as well as the approach based on harmonic analysis. We find an interpretation of Nambu-Heisenberg n-Lie algebras in terms of foliations of R{sup n} by fuzzy spheres, fuzzy hyperboloids, and noncommutative hyperplanes. Some applications to the quantum geometry of branes in M-theory are also briefly discussed.

  13. Invariants and labels for Lie-Poisson Systems

    SciTech Connect

    Thiffeault, J.L.; Morrison, P.J.

    1998-04-01

    Reduction is a process that uses symmetry to lower the order of a Hamiltonian system. The new variables in the reduced picture are often not canonical: there are no clear variables representing positions and momenta, and the Poisson bracket obtained is not of the canonical type. Specifically, we give two examples that give rise to brackets of the noncanonical Lie-Poisson form: the rigid body and the two-dimensional ideal fluid. From these simple cases, we then use the semidirect product extension of algebras to describe more complex physical systems. The Casimir invariants in these systems are examined, and some are shown to be linked to the recovery of information about the configuration of the system. We discuss a case in which the extension is not a semidirect product, namely compressible reduced MHD, and find for this case that the Casimir invariants lend partial information about the configuration of the system.

  14. Improved Poisson solver for cfa/magnetron simulation

    SciTech Connect

    Dombrowski, G.E.

    1996-12-31

    E_dc, the static field of a device having vane-shaped anodes, has been determined by application of Hockney's method, which in turn uses Buneman's cyclic reduction. This result can be used for both the cfa and the magnetron, but does not solve for the general space-charge fields. As pointed out by Hockney, the matrix of coupling capacitive factors between the vane-defining mesh points can also be used to solve the Poisson equation over the entire cathode-anode domain. Space-charge fields of electrons between anode electrodes can now be determined. This technique also computes the Ramo function for the entire region. The method has been applied to the magnetron; extension to the cfa, with its many different space-charge bunches, does not appear to be practicable. Calculations for the type 4J50 magnetron at various degrees of accuracy in solving the Poisson equation are compared with experimental measurements.

  16. Intrinsic Negative Poisson's Ratio for Single-Layer Graphene.

    PubMed

    Jiang, Jin-Wu; Chang, Tienchong; Guo, Xingming; Park, Harold S

    2016-08-10

    Negative Poisson's ratio (NPR) materials have drawn significant interest because the enhanced toughness, shear resistance, and vibration absorption typically seen in auxetic materials may enable a range of novel applications. In this work, we report that single-layer graphene exhibits an intrinsic NPR, which is robust and independent of its size and temperature. The NPR arises from the interplay between two intrinsic deformation pathways (one with positive Poisson's ratio, the other with NPR), which correspond to the bond-stretching and angle-bending interactions in graphene. We propose an energy-based deformation pathway criterion, which predicts that the pathway with NPR has lower energy and thus becomes the dominant deformation mode when graphene is stretched by a strain above 6%, resulting in the NPR phenomenon. PMID:27408994

  17. A comparison between simulation and Poisson-Boltzmann fields

    NASA Astrophysics Data System (ADS)

    Pettitt, B. Montgomery; Valdeavella, C. V.

    1999-11-01

    The electrostatic potentials from molecular dynamics (MD) trajectories and Poisson-Boltzmann calculations on a tetrapeptide are compared to assess the validity of the resulting free energy surface. The Tuftsin peptide, with sequence Thr-Lys-Pro-Arg, in water is used for the comparison. The results obtained from the analysis of the MD trajectories for the total electrostatic potential at points on a grid using the Ewald technique are compared with the solution to the Poisson-Boltzmann (PB) equation averaged over the same set of configurations. The latter was solved using an optimal set of dielectric constant parameters. Structural averaging of the field over the MD simulation was examined in the context of the PB results. The detailed spatial variation of the electrostatic potential on the molecular surface is not qualitatively reproducible from MD to PB. Implications of using such field calculations and the implied free energies are discussed.

  18. New method for blowup of the Euler-Poisson system

    NASA Astrophysics Data System (ADS)

    Kwong, Man Kam; Yuen, Manwai

    2016-08-01

    In this paper, we provide a new method for establishing the blowup of C² solutions of the pressureless Euler-Poisson system with attractive forces in Rᴺ (N ≥ 2), with ρ(0, x₀) > 0 and Ω₀^ij(x₀) = (1/2)[∂_i u_j(0, x₀) − ∂_j u_i(0, x₀)] = 0 at some point x₀ ∈ Rᴺ. By applying the generalized Hubble transformation div u(t, x₀(t)) = N ȧ(t)/a(t) to a reduced Riccati differential inequality derived from the system, we simplify the inequality into the Emden equation ä(t) = −λ/a(t)^(N−1), a(0) = 1, ȧ(0) = div u(0, x₀)/N. Known results on its blowup set allow us to easily obtain the blowup conditions of the Euler-Poisson system.
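
The Emden equation ä(t) = −λ/a(t)^(N−1) can be integrated numerically to locate the finite time at which a(t) reaches zero, i.e. the blowup. A sketch using a simple symplectic Euler step (the function name, step size, and stopping rule are ours, not the paper's):

```python
def emden_blowup_time(lam, n_dim, a0=1.0, adot0=0.0, dt=1e-5, t_max=10.0):
    """Integrate the Emden equation a''(t) = -lam / a(t)**(n_dim - 1)
    with a(0) = a0, a'(0) = adot0, using a symplectic Euler step, and
    return the first time at which a(t) reaches zero (finite-time blowup
    of the underlying density), or None if a stays positive up to t_max."""
    a, adot, t = a0, adot0, 0.0
    while t < t_max:
        adot += dt * (-lam / a ** (n_dim - 1))  # kick: update velocity
        a += dt * adot                          # drift: update position
        t += dt
        if a <= 0.0:
            return t
    return None
```

For λ = 1, N = 2 and the stated initial data a(0) = 1, ȧ(0) = 0, the computed time comes out close to the exact value √(π/2) ≈ 1.2533 obtained from the energy integral.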

  19. Tensorial Basis Spline Collocation Method for Poisson's Equation

    NASA Astrophysics Data System (ADS)

    Plagne, Laurent; Berthou, Jean-Yves

    2000-01-01

    This paper describes the tensorial basis spline collocation method (TBSCM) applied to Poisson's equation. In the case of a localized 3D charge distribution in vacuum, this direct method based on a tensorial decomposition of the differential operator is shown to be competitive with both iterative BSCM and FFT-based methods. We emphasize the O(h4) and O(h6) convergence of TBSCM for cubic and quintic splines, respectively. We describe the implementation of this method on a distributed-memory parallel machine. Performance measurements on a Cray T3E are reported. Our code exhibits high performance and good scalability: as an example, a 27 Gflops performance is obtained when solving Poisson's equation on a 256³ non-uniform 3D Cartesian mesh using 128 T3E-750 processors. This represents 215 Mflops per processor.

  20. Filling of a Poisson trap by a population of random intermittent searchers.

    PubMed

    Bressloff, Paul C; Newby, Jay M

    2012-03-01

    We extend the continuum theory of random intermittent search processes to the case of N independent searchers looking to deliver cargo to a single hidden target located somewhere on a semi-infinite track. Each searcher randomly switches between a stationary state and either a leftward or rightward constant-velocity state. We assume that all of the particles start at one end of the track and realize sample trajectories independently generated from the same underlying stochastic process. The hidden target is treated as a partially absorbing trap in which a particle can only detect the target and deliver its cargo if it is stationary and within range of the target; the particle is removed from the system after delivering its cargo. As a further generalization of previous models, we assume that up to n successive particles can find the target and deliver their cargo. Assuming that the rate of target detection scales as 1/N, we show that there exists a well-defined mean-field limit N → ∞, in which the stochastic model reduces to a deterministic system of linear reaction-hyperbolic equations for the concentrations of particles in each of the internal states. These equations decouple from the stochastic process associated with filling the target with cargo. The latter can be modeled as a Poisson process in which the time-dependent rate of filling λ(t) depends on the concentration of stationary particles within the target domain. Hence, we refer to the target as a Poisson trap. We analyze the efficiency of filling the Poisson trap with n particles in terms of the waiting time density f_n(t). The latter is determined by the integrated Poisson rate μ(t) = ∫₀ᵗ λ(s) ds, which in turn depends on the solution to the reaction-hyperbolic equations. We obtain an approximate solution for the particle concentrations by reducing the system of reaction-hyperbolic equations to a scalar advection-diffusion equation using a quasisteady-state analysis. We compare our analytical
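
For an inhomogeneous Poisson process with rate λ(t) and integrated rate μ(t), the waiting-time density of the n-th event has the standard closed form f_n(t) = λ(t) μ(t)^(n−1) e^(−μ(t)) / (n−1)!, which matches the construction above. A minimal sketch (function names are ours; in the paper λ(t) comes from the reaction-hyperbolic equations, whereas here any callables can be passed):

```python
import math

def nth_event_waiting_density(t, n, rate, integrated_rate):
    """Waiting-time density f_n(t) of the n-th event of an inhomogeneous
    Poisson process: f_n(t) = rate(t) * mu**(n-1) * exp(-mu) / (n-1)!
    with mu = integrated_rate(t)."""
    mu = integrated_rate(t)
    return rate(t) * mu ** (n - 1) * math.exp(-mu) / math.factorial(n - 1)
```

For a constant rate λ = 2 the first-event density is 2e^(−2t), recovering the familiar exponential waiting time.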

  1. Superposition of many independent spike trains is generally not a Poisson process

    NASA Astrophysics Data System (ADS)

    Lindner, Benjamin

    2006-02-01

    We study the sum of many independent spike trains and ask whether the resulting spike train has Poisson statistics or not. It is shown that even for non-Poissonian statistics of the single spike train, the pooled train has an exponential interspike-interval (ISI) distribution and vanishing ISI correlations at any finite lag, yet exhibits exactly the same power spectrum as the original spike train does. This paradox is resolved by considering what happens to ISI correlations in the limit of an infinite number of superposed trains. Implications of our findings for stochastic models in the neurosciences are briefly discussed.
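
The superposition result is easy to probe numerically: pooling many independent renewal trains with markedly non-exponential (gamma-distributed) ISIs produces a pooled train whose ISI statistics look Poisson-like. An illustrative simulation (not the paper's analysis; the function name, the gamma choice, and the truncation rule are ours):

```python
import random

def pooled_isis(n_trains, n_spikes, shape=4.0):
    """Superpose independent renewal spike trains with gamma-distributed
    ISIs (mean 1, so each train is far from Poissonian for shape > 1) and
    return the interspike intervals of the pooled train, truncated at the
    earliest train ending so that all trains contribute throughout."""
    trains = []
    for _ in range(n_trains):
        t, spikes = 0.0, []
        for _ in range(n_spikes):
            t += random.gammavariate(shape, 1.0 / shape)  # mean ISI = 1
            spikes.append(t)
        trains.append(spikes)
    t_end = min(train[-1] for train in trains)
    merged = sorted(s for train in trains for s in train if s <= t_end)
    return [b - a for a, b in zip(merged, merged[1:])]
```

With 50 pooled trains the ISI coefficient of variation comes out close to 1, as for a Poisson process, even though each individual gamma(4) train has CV = 0.5.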

  2. Factor analysis models for structuring covariance matrices of additive genetic effects: a Bayesian implementation

    PubMed Central

    de los Campos, Gustavo; Gianola, Daniel

    2007-01-01

    Multivariate linear models are increasingly important in quantitative genetics. In high-dimensional specifications, factor analysis (FA) may provide an avenue for structuring (co)variance matrices, thus reducing the number of parameters needed for describing (co)dispersion. We describe how FA can be used to model genetic effects in the context of a multivariate linear mixed model. An orthogonal common factor structure is used to model genetic effects under a Gaussian assumption, so that the marginal likelihood is multivariate normal with a structured genetic (co)variance matrix. Under standard prior assumptions, all fully conditional distributions have closed form, and samples from the joint posterior distribution can be obtained via Gibbs sampling. The model and the algorithm developed for its Bayesian implementation were used to describe five repeated records of milk yield in dairy cattle, and a model with one common factor was compared with a standard multiple-trait model. The Bayesian Information Criterion favored the FA model. PMID:17897592

  3. Binomial and Poisson Mixtures, Maximum Likelihood, and Maple Code

    SciTech Connect

    Bowman, Kimiko o; Shenton, LR

    2006-01-01

    The bias, variance, and skewness of maximum likelihood estimators are considered for binomial and Poisson mixture distributions. The moments considered are asymptotic, and they are assessed using Maple code. Questions of the existence of solutions and Karl Pearson's study are mentioned, along with problems of the valid sample space. Large samples to reduce variances are not unusual; this also applies to the size of the asymptotic skewness.
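
As a concrete companion to the mixture estimation discussed above, here is a small EM sketch for fitting a two-component Poisson mixture by maximum likelihood (hypothetical data and starting values; the abstract's asymptotic moment analysis in Maple is a different computation):

```python
import math

def em_two_poisson(xs, lam1=1.0, lam2=5.0, w=0.5, iters=300):
    """EM algorithm for a two-component Poisson mixture
    w * Poisson(lam1) + (1 - w) * Poisson(lam2), returning the fitted
    (w, lam1, lam2).  Starting values are deliberately separated."""
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation.
        resp = []
        for x in xs:
            p1 = w * math.exp(-lam1) * lam1 ** x / math.factorial(x)
            p2 = (1.0 - w) * math.exp(-lam2) * lam2 ** x / math.factorial(x)
            resp.append(p1 / (p1 + p2))
        # M-step: update the mixing weight and the component means.
        s = sum(resp)
        w = s / len(xs)
        lam1 = sum(r * x for r, x in zip(resp, xs)) / s
        lam2 = sum((1.0 - r) * x for r, x in zip(resp, xs)) / (len(xs) - s)
    return w, lam1, lam2
```

Applied to counts clustered near 2 and near 10 in equal proportion, the fitted component means land close to those values and the weight close to one half.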

  4. Poisson reduction for nonholonomic mechanical systems with symmetry

    NASA Astrophysics Data System (ADS)

    Wang Sang Koon; Marsden, Jerrold E.

    1998-10-01

    This paper continues the work of Koon and Marsden [10] that began the comparison of the Hamiltonian and Lagrangian formulations of nonholonomic systems. Because of the necessary replacement of conservation laws with the momentum equation, it is natural to let the value of momentum be a variable and for this reason it is natural to take a Poisson viewpoint. Some of this theory has been started in van der Schaft and Maschke [24]. We build on their work, further develop the theory of nonholonomic Poisson reduction, and tie this theory to other work in the area. We use this reduction procedure to organize nonholonomic dynamics into a reconstruction equation, a nonholonomic momentum equation and the reduced Lagrange-d'Alembert equations in Hamiltonian form. We also show that these equations are equivalent to those given by the Lagrangian reduction methods of Bloch, Krishnaprasad, Marsden and Murray [4]. Because of the results of Koon and Marsden [10], this is also equivalent to the results of Bates and Śniatycki [2], obtained by nonholonomic symplectic reduction. Two complications make this effort especially interesting. First of all, as we have mentioned, symmetry need not lead to conservation laws but rather to a momentum equation. Second, the natural Poisson bracket fails to satisfy the Jacobi identity. In fact, the so-called Jacobiizer (the cyclic sum that vanishes when the Jacobi identity holds), or equivalently, the Schouten bracket, is an interesting expression involving the curvature of the underlying distribution describing the nonholonomic constraints. The Poisson reduction results in this paper are important for the future development of the stability theory for nonholonomic mechanical systems with symmetry, as begun by Zenkov, Bloch and Marsden [25]. In particular, they should be useful for the development of the powerful block diagonalization properties of the energy-momentum method developed by Simo, Lewis and Marsden [23].

  5. Poisson Parameters of Antimicrobial Activity: A Quantitative Structure-Activity Approach

    PubMed Central

    Sestraş, Radu E.; Jäntschi, Lorentz; Bolboacă, Sorana D.

    2012-01-01

A contingency of observed antimicrobial activities measured for several compounds vs. a series of bacteria was analyzed. A factor analysis revealed the existence of a certain probability distribution function of the antimicrobial activity. A quantitative structure-activity relationship analysis for the overall antimicrobial ability was conducted using the population statistics associated with the identified probability distribution function. The antimicrobial activity proved to follow the Poisson distribution if just one factor varies (such as the chemical compound or the bacterium). The Poisson parameter estimating antimicrobial effect, giving both the mean and the variance of the antimicrobial activity, was used to develop structure-activity models describing the effect of compounds on bacteria and fungi species. Two approaches were employed to obtain the models, and for every approach, a model was selected, further investigated and found to be statistically significant. The best predictive model for antimicrobial effect on bacteria and fungi species was identified using graphical representation of observed vs. calculated values as well as several predictive power parameters. PMID:22606039
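The Poisson property the abstract relies on, that a single parameter fixes both the mean and the variance of the activity measurements, can be illustrated on synthetic data (the rate value below is purely illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
counts = rng.poisson(3.2, size=5000)    # synthetic stand-in for activity counts
lam = counts.mean()                     # maximum-likelihood estimate of the Poisson parameter
# For Poisson data the sample mean and sample variance should nearly coincide
print(round(lam, 2), round(counts.var(), 2))
```

Marked over- or under-dispersion (variance far from the mean) would indicate that a single-parameter Poisson description is inadequate.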

  6. Implementation of the Euler-Lagrange and poisson equations to extract one connected region

    NASA Astrophysics Data System (ADS)

    Bowden, A.; Todorov, M. D.; Sirakov, N. M.

    2014-11-01

This paper presents a numerical method that evolves an active contour toward the boundary of one connected image region. The numerical method implements a model which uses the solution of the Euler-Lagrange Differential Equation in order to minimize a "snake" functional. This functional is constructed as an integral of the so-called internal and external energies. The internal energy is related to the moving contour. The external energy represents the image. The minimum of the functional falls on the boundaries of objects placed in the image. A half step numerical scheme implements the concepts. The contributions of the new model come from the use of the solution of the Poisson equation and development of a new penalty function to halt the contour on the object's boundary. In order to make the contour move across homogeneous regions and enlarge the capture range, we solve the Poisson equation with Dirichlet boundary conditions and generate a gradient vector field of the image. The numerical method is implemented with MatLab and uses a stop condition based on the gradient. The advantages of the model are that it has a large capture range, is accurate in detecting the boundaries of image objects, and is capable of overcoming noise. A disadvantage is that the user has to select the right values of three parameters. Several experiments with synthetic, weapon, and medical images have been conducted to validate the model. Our work continues with the goal of cutting the curve in the event that multiple objects have been enveloped.
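The Poisson step can be sketched numerically. This is not the authors' MatLab implementation; the grid size, source term, and iteration count are illustrative: solve the Dirichlet problem by Jacobi iteration and take the gradient of the solution as the vector field that enlarges the capture range.

```python
import numpy as np

def solve_poisson_dirichlet(f, boundary, n_iter=5000):
    """Jacobi iteration for u_xx + u_yy = f on a unit-spacing grid,
    with Dirichlet values held fixed on the outer boundary."""
    u = boundary.copy()
    for _ in range(n_iter):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:]
                                - f[1:-1, 1:-1])
    return u

# Toy "edge map": a bright square acts as the source term.
f = np.zeros((33, 33))
f[12:21, 12:21] = -1.0
u = solve_poisson_dirichlet(f, np.zeros_like(f))
gy, gx = np.gradient(u)   # a smooth vector field that can drive a contour
```

Because the Poisson solution is smooth everywhere, its gradient is nonzero even in homogeneous regions, which is the mechanism behind the enlarged capture range.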

  7. Blind beam-hardening correction from Poisson measurements

    NASA Astrophysics Data System (ADS)

    Gu, Renliang; Dogandžić, Aleksandar

    2016-02-01

We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.
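A heavily simplified, monochromatic sketch of the core ingredient, a proximal-gradient step on the Poisson NLL with a nonnegativity constraint, is given below. The system matrix, step size, and iteration count are illustrative assumptions; the paper's full scheme (B-spline spectrum model, TV penalty, adaptive step sizes, L-BFGS-B alternation) is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(200, 20))    # toy system matrix (assumption)
x_true = rng.uniform(0.0, 2.0, size=20)
y = rng.poisson(A @ x_true).astype(float)    # Poisson-distributed measurements

def poisson_nll(x):
    mu = A @ x + 1e-12
    return np.sum(mu - y * np.log(mu))

x = np.ones(20)
step = 1e-3     # fixed step for this sketch; the paper adapts it to local Lipschitz constants
for _ in range(2000):
    mu = A @ x + 1e-12
    grad = A.T @ (1.0 - y / mu)              # gradient of the Poisson NLL
    x = np.maximum(x - step * grad, 0.0)     # proximal step for the nonnegativity constraint
```

The projection onto the nonnegative orthant is the proximal operator of the indicator constraint; adding a TV proximal step in its place would move the sketch closer to the paper's NPG iteration.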

  8. Assessment of Linear Finite-Difference Poisson-Boltzmann Solvers

    PubMed Central

    Wang, Jun; Luo, Ray

    2009-01-01

CPU time and memory usage are two vital issues that any numerical solvers for the Poisson-Boltzmann equation have to face in biomolecular applications. In this study we systematically analyzed the CPU time and memory usage of five commonly used finite-difference solvers with a large and diversified set of biomolecular structures. Our comparative analysis shows that modified incomplete Cholesky conjugate gradient and geometric multigrid are the most efficient in the diversified test set. For the two efficient solvers, our test shows that their CPU times increase approximately linearly with the number of grid points. Their CPU times also increase almost linearly with the negative logarithm of the convergence criterion at a very similar rate. Our comparison further shows that geometric multigrid performs better in the large set of tested biomolecules. However, modified incomplete Cholesky conjugate gradient is superior to geometric multigrid in molecular dynamics simulations of tested molecules. We also investigated other significant components in numerical solutions of the Poisson-Boltzmann equation. It turns out that the time-limiting step is the free boundary condition setup for the linear systems for the selected proteins if the electrostatic focusing is not used. Thus, development of future numerical solvers for the Poisson-Boltzmann equation should balance all aspects of the numerical procedures in realistic biomolecular applications. PMID:20063271

  9. Assessment of linear finite-difference Poisson-Boltzmann solvers.

    PubMed

    Wang, Jun; Luo, Ray

    2010-06-01

CPU time and memory usage are two vital issues that any numerical solvers for the Poisson-Boltzmann equation have to face in biomolecular applications. In this study, we systematically analyzed the CPU time and memory usage of five commonly used finite-difference solvers with a large and diversified set of biomolecular structures. Our comparative analysis shows that modified incomplete Cholesky conjugate gradient and geometric multigrid are the most efficient in the diversified test set. For the two efficient solvers, our test shows that their CPU times increase approximately linearly with the number of grid points. Their CPU times also increase almost linearly with the negative logarithm of the convergence criterion at a very similar rate. Our comparison further shows that geometric multigrid performs better in the large set of tested biomolecules. However, modified incomplete Cholesky conjugate gradient is superior to geometric multigrid in molecular dynamics simulations of tested molecules. We also investigated other significant components in numerical solutions of the Poisson-Boltzmann equation. It turns out that the time-limiting step is the free boundary condition setup for the linear systems for the selected proteins if the electrostatic focusing is not used. Thus, development of future numerical solvers for the Poisson-Boltzmann equation should balance all aspects of the numerical procedures in realistic biomolecular applications. PMID:20063271

  10. Poisson-like spiking in circuits with probabilistic synapses.

    PubMed

    Moreno-Bote, Rubén

    2014-07-01

Neuronal activity in cortex is variable both spontaneously and during stimulation, and it has the remarkable property that it is Poisson-like over broad ranges of firing rates ranging from virtually zero to hundreds of spikes per second. The mechanisms underlying cortical-like spiking variability over such a broad continuum of rates are currently unknown. We show that neuronal networks endowed with probabilistic synaptic transmission, a well-documented source of variability in cortex, robustly generate Poisson-like variability over several orders of magnitude in their firing rate without fine-tuning of the network parameters. Other sources of variability, such as random synaptic delays or spike generation jittering, do not lead to Poisson-like variability at high rates because they cannot be sufficiently amplified by recurrent neuronal networks. We also show that probabilistic synapses predict Fano factor constancy of synaptic conductances. Our results suggest that synaptic noise is a robust and sufficient mechanism for the type of variability found in cortex. PMID:25032705
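"Poisson-like variability" here means a Fano factor (count variance over count mean) near 1 at every firing rate. That benchmark, though not the network mechanism itself, can be checked with a direct simulation (bin width, window length, and rates are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-4, 200.0                  # 0.1 ms bins, 200 s of simulated activity per rate
fano = {}
for rate in [1.0, 10.0, 100.0]:      # spikes per second
    spikes = rng.random(int(T / dt)) < rate * dt   # Bernoulli approximation of a Poisson process
    counts = spikes.reshape(-1, 1000).sum(axis=1)  # spike counts in 100 ms windows
    fano[rate] = counts.var() / counts.mean()
print(fano)    # Fano factor stays near 1 from low to high rates
```

Mechanisms that regularize spiking at high rates (e.g. refractoriness) push the Fano factor well below 1, which is why reproducing unit Fano factors across the whole rate continuum is nontrivial for network models.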

  11. Improved central confidence intervals for the ratio of Poisson means

    NASA Astrophysics Data System (ADS)

    Cousins, R. D.

    The problem of confidence intervals for the ratio of two unknown Poisson means was "solved" decades ago, but a closer examination reveals that the standard solution is far from optimal from the frequentist point of view. We construct a more powerful set of central confidence intervals, each of which is a (typically proper) subinterval of the corresponding standard interval. They also provide upper and lower confidence limits which are more restrictive than the standard limits. The construction follows Neyman's original prescription, though discreteness of the Poisson distribution and the presence of a nuisance parameter (one of the unknown means) lead to slightly conservative intervals. Philosophically, the issue of the appropriateness of the construction method is similar to the issue of conditioning on the margins in 2×2 contingency tables. From a frequentist point of view, the new set maintains (over) coverage of the unknown true value of the ratio of means at each stated confidence level, even though the new intervals are shorter than the old intervals by any measure (except for two cases where they are identical). As an example, when the number 2 is drawn from each Poisson population, the 90% CL central confidence interval on the ratio of means is (0.169, 5.196), rather than (0.108, 9.245). In the cited literature, such confidence intervals have applications in numerous branches of pure and applied science, including agriculture, wildlife studies, manufacturing, medicine, reliability theory, and elementary particle physics.
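The standard central interval quoted in the abstract comes from conditioning on the total count: given n1 + n2, the count n1 is binomial with success probability p = λ1/(λ1 + λ2), so Clopper-Pearson limits on p map to limits on the ratio via p/(1 - p). A self-contained sketch of that standard construction follows (the paper's improved intervals require the full Neyman construction with a nuisance parameter and are not reproduced here):

```python
import math

def beta_cdf(p, a, b):
    """Regularized incomplete beta I_p(a, b) for integer a, b >= 1,
    via the binomial-sum identity."""
    n = a + b - 1
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(a, n + 1))

def beta_ppf(q, a, b):
    """Quantile of Beta(a, b) by bisection (integer a, b)."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if beta_cdf(mid, a, b) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ratio_ci(n1, n2, cl=0.90):
    """Standard central CI for lambda1/lambda2 from Poisson counts n1, n2:
    Clopper-Pearson limits on the conditional binomial, mapped to the ratio."""
    alpha = 1 - cl
    n = n1 + n2
    p_lo = beta_ppf(alpha / 2, n1, n - n1 + 1) if n1 > 0 else 0.0
    p_hi = beta_ppf(1 - alpha / 2, n1 + 1, n - n1) if n1 < n else 1.0
    return p_lo / (1 - p_lo), p_hi / (1 - p_hi)

lo, hi = ratio_ci(2, 2, 0.90)
print(lo, hi)   # ≈ (0.108, 9.245), the standard interval quoted in the abstract
```

For the example in the abstract (2 counts from each population), this reproduces the standard 90% CL interval (0.108, 9.245); the improved construction shrinks it to (0.169, 5.196).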

  12. Novel negative Poisson's ratio behavior induced by an elastic instability

    NASA Astrophysics Data System (ADS)

    Bertoldi, Katia; Reis, Pedro; Willshaw, Stephen; Mullin, Tom

    2010-03-01

When materials are compressed along a particular axis they are most commonly observed to expand in directions orthogonal to the applied load. The property that characterizes this behavior is the Poisson's ratio, which is defined as the ratio between the negative transverse and longitudinal strains. Materials with a negative Poisson's ratio will contract in the transverse direction when compressed, and demonstration of practical examples is relatively recent. A significant challenge in the fabrication of auxetic materials is that it usually involves embedding structures with intricate geometries within a host matrix. As such, the manufacturing process has been a bottleneck in the practical development towards applications. Here we exploit elastic instabilities to create novel effects within materials with periodic microstructure, and we show that they may lead to negative Poisson's ratio behavior in 2D periodic structures; the effect occurs only under compression. The uncomplicated manufacturing process of the samples together with the robustness of the observed phenomena suggests that this may form the basis of a practical method for constructing planar auxetic materials over a wide range of length-scales.

  13. Image denoising in mixed Poisson-Gaussian noise.

    PubMed

    Luisier, Florian; Blu, Thierry; Unser, Michael

    2011-03-01

We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy. PMID:20840902
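The mixed Poisson-Gaussian model assumed by PURE-LET has a signal-dependent variance: a scaled Poisson component (shot noise with detector gain) plus additive Gaussian read noise. A minimal simulation of that noise model, with illustrative gain and noise values, shows the mean/variance structure the risk estimate exploits:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, sigma = 0.5, 2.0     # detector gain and read-noise level (illustrative values)
x = 40.0                    # true intensity at one pixel
n = 200000
y = alpha * rng.poisson(x / alpha, size=n) + rng.normal(0.0, sigma, size=n)
print(y.mean(), y.var())    # mean ≈ x, variance ≈ alpha*x + sigma**2
```

The affine variance law var(y) = αx + σ² is what distinguishes this regime from pure Gaussian noise and makes SURE-style unbiased risk estimation require the Poisson-specific PURE correction.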

  14. Multilevel Methods for the Poisson-Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Holst, Michael Jay

We consider the numerical solution of the Poisson-Boltzmann equation (PBE), a three-dimensional second order nonlinear elliptic partial differential equation arising in biophysics. This problem has several interesting features impacting numerical algorithms, including discontinuous coefficients representing material interfaces, rapid nonlinearities, and three spatial dimensions. Similar equations occur in various applications, including nuclear physics, semiconductor physics, population genetics, astrophysics, and combustion. In this thesis, we study the PBE, discretizations, and develop multilevel-based methods for approximating the solutions of these types of equations. We first outline the physical model and derive the PBE, which describes the electrostatic potential of a large complex biomolecule lying in a solvent. We next study the theoretical properties of the linearized and nonlinear PBE using standard function space methods; since this equation has not been previously studied theoretically, we provide existence and uniqueness proofs in both the linearized and nonlinear cases. We also analyze box-method discretizations of the PBE, establishing several properties of the discrete equations which are produced. In particular, we show that the discrete nonlinear problem is well-posed. We study and develop linear multilevel methods for interface problems, based on algebraic enforcement of Galerkin or variational conditions, and on coefficient averaging procedures. Using a stencil calculus, we show that in certain simplified cases the two approaches are equivalent, with different averaging procedures corresponding to different prolongation operators. We also develop methods for nonlinear problems based on a nonlinear multilevel method, and on linear multilevel methods combined with a globally convergent damped-inexact-Newton method. We derive a necessary and sufficient descent condition for the inexact-Newton direction, enabling the development of extremely
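The damped-Newton idea for the nonlinear PBE can be illustrated on a 1-D toy analogue: the dimensionless equation u'' = sinh(u) (symmetric 1:1 electrolyte, no interfaces) with illustrative Dirichlet data, discretized by finite differences and solved by Newton iteration with residual-norm backtracking. This is a sketch of the globally convergent strategy, not the thesis's 3-D multilevel solver:

```python
import numpy as np

# Grid and boundary data for u'' = sinh(u), u(0) = 4, u(1) = 0
n, h = 101, 1.0 / 100
xg = np.linspace(0.0, 1.0, n)
u = 4.0 * (1.0 - xg)          # initial guess satisfying the boundary conditions

def residual(u):
    r = np.zeros_like(u)
    r[1:-1] = (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2 - np.sinh(u[1:-1])
    return r

for _ in range(30):
    r = residual(u)
    # Tridiagonal Jacobian of the interior equations
    J = (np.diag(-2 / h**2 - np.cosh(u[1:-1]))
         + np.diag(np.ones(n - 3) / h**2, 1)
         + np.diag(np.ones(n - 3) / h**2, -1))
    du = np.linalg.solve(J, -r[1:-1])
    lam = 1.0                  # damped Newton: backtrack until the residual norm drops
    while lam > 1e-6:
        u_new = u.copy()
        u_new[1:-1] += lam * du
        if np.linalg.norm(residual(u_new)) < np.linalg.norm(r):
            u = u_new
            break
        lam *= 0.5
print(np.linalg.norm(residual(u)))   # small residual norm if Newton has converged
```

The backtracking guard is what turns plain Newton, which can diverge for the rapidly growing sinh nonlinearity, into a globally convergent iteration.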

  15. Adaptation of the pore diffusion model to describe multi-addition batch uptake high-throughput screening experiments.

    PubMed

    Traylor, Steven J; Xu, Xuankuo; Li, Yi; Jin, Mi; Li, Zheng Jian

    2014-11-14

Equilibrium isotherm and kinetic mass transfer measurements are critical to mechanistic modeling of binding and elution behavior within a chromatographic column. However, traditional methods of measuring these parameters are impractically time- and labor-intensive. While advances in high-throughput robotic liquid handling systems have created time- and labor-saving methods of performing kinetic and equilibrium measurements of proteins on chromatographic resins in a 96-well plate format, these techniques continue to be limited by physical constraints on protein addition, incubation and separation times; the available concentration of protein stocks and process pools; and practical constraints on resin and fluid volumes in the 96-well format. In this study, a novel technique for measuring protein uptake kinetics (multi-addition batch uptake) has been developed to address some of these limitations during high-throughput batch uptake kinetic measurements. This technique uses sequential additions of protein stock to chromatographic resin in a 96-well plate and the subsequent removal of each addition by centrifugation or vacuum separation. The pore diffusion model was adapted here to model multi-addition batch uptake and was tested and compared with traditional batch uptake measurements of uptake of an Fc-fusion protein on an anion exchange resin. Acceptable agreement between the two techniques is achieved for the two solution conditions investigated here. In addition, a sensitivity analysis of the model to the physical inputs is presented and the advantages and limitations of the multi-addition batch uptake technique are explored. PMID:25311484

  16. Bias reduction for low-statistics PET: maximum likelihood reconstruction with a modified Poisson distribution.

    PubMed

    Van Slambrouck, Katrien; Stute, Simon; Comtat, Claude; Sibomana, Merence; van Velden, Floris H P; Boellaard, Ronald; Nuyts, Johan

    2015-01-01

    Positron emission tomography data are typically reconstructed with maximum likelihood expectation maximization (MLEM). However, MLEM suffers from positive bias due to the non-negativity constraint. This is particularly problematic for tracer kinetic modeling. Two reconstruction methods with bias reduction properties that do not use strict Poisson optimization are presented and compared to each other, to filtered backprojection (FBP), and to MLEM. The first method is an extension of NEGML, where the Poisson distribution is replaced by a Gaussian distribution for low count data points. The transition point between the Gaussian and the Poisson regime is a parameter of the model. The second method is a simplification of ABML. ABML has a lower and upper bound for the reconstructed image whereas AML has the upper bound set to infinity. AML uses a negative lower bound to obtain bias reduction properties. Different choices of the lower bound are studied. The parameter of both algorithms determines the effectiveness of the bias reduction and should be chosen large enough to ensure bias-free images. This means that both algorithms become more similar to least squares algorithms, which turned out to be necessary to obtain bias-free reconstructions. This comes at the cost of increased variance. Nevertheless, NEGML and AML have lower variance than FBP. Furthermore, randoms handling has a large influence on the bias. Reconstruction with smoothed randoms results in lower bias compared to reconstruction with unsmoothed randoms or randoms precorrected data. However, NEGML and AML yield both bias-free images for large values of their parameter. PMID:25137726
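The NEGML-style data term described above can be sketched as a hybrid objective: the Poisson negative log-likelihood where the expected count exceeds a transition point ψ, and a Gaussian of variance ψ below it, matched in value and slope at the transition so the objective stays smooth. This is a sketch of the idea, not the authors' exact implementation:

```python
import numpy as np

def hybrid_nll(y, ybar, psi):
    """NEGML-style data term (sketch): Poisson NLL for ybar > psi, a Gaussian
    branch below. Matching value and slope at ybar = psi keeps the objective
    C1-smooth; the Gaussian branch is defined even for ybar < 0, which is what
    permits the bias-reducing negative image values."""
    y, ybar = np.asarray(y, float), np.asarray(ybar, float)
    poisson = ybar - y * np.log(np.maximum(ybar, 1e-300))
    gauss = ((y - ybar) ** 2 - (y - psi) ** 2) / (2 * psi) + psi - y * np.log(psi)
    return np.where(ybar > psi, poisson, gauss)
```

As ψ grows, more of the data range is handled by the quadratic branch and the objective approaches a least-squares criterion, mirroring the abstract's observation that bias-free reconstruction trades off against increased variance.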

  17. 78 FR 32224 - Availability of Version 3.1.2 of the Connect America Fund Phase II Cost Model; Additional...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-29

    ... as part of the Model Design PN, 77 FR 38804, June 29, 2012, of the possible significant economic...). See Electronic Filing of Documents in Rulemaking Proceedings, 63 FR 24121, May 1, 1998. Electronic...; Additional Discussion Topics in Connect America Cost Model Virtual Workshop AGENCY: Federal...

  18. Inclusion of Additional Plant Species and Trait Information in Dynamic Vegetation Modeling of Arctic Tundra and Boreal Forest Ecosystem

    NASA Astrophysics Data System (ADS)

    Euskirchen, E. S.; Patil, V.; Roach, J.; Griffith, B.; McGuire, A. D.

    2015-12-01

Dynamic vegetation models (DVMs) have been developed to model the ecophysiological characteristics of plant functional types in terrestrial ecosystems. They have frequently been used to answer questions pertaining to processes such as disturbance, plant succession, and community composition under historical and future climate scenarios. While DVMs have proved useful in these types of applications, it has often been questioned whether additional detail, such as plant dynamics at the species level and/or species-specific traits, would make these models more accurate and/or broadly applicable. A sub-question associated with this issue is, 'How many species, or what degree of functional diversity, should we incorporate to sustain ecosystem function in modeled ecosystems?' Here, we focus on how the inclusion of additional plant species and trait information may strengthen dynamic vegetation modeling in applications pertaining to: (1) forage for caribou in northern Alaska, (2) above- and belowground carbon storage in the boreal forest and lake margin wetlands of interior Alaska, and (3) arctic tundra and boreal forest leaf phenology. While the inclusion of additional information generally proved valuable in these three applications, such additional detail depends on field data that may not always be available and may also increase computational complexity. Therefore, it is important to assess these possible limitations against the perceived need for additional plant species and trait information in the development and application of dynamic vegetation models.

  19. Mechanisms and modeling of the effects of additives on the nitrogen oxides emission

    NASA Technical Reports Server (NTRS)

    Kundu, Krishna P.; Nguyen, Hung Lee; Kang, M. Paul

    1991-01-01

A theoretical study on the emission of the oxides of nitrogen in the combustion of hydrocarbons is presented. The current understanding of the mechanisms and the rate parameters for gas phase reactions was used to calculate the NO(x) emission. The possible effects of different chemical species on thermal NO(x) over long time scales were discussed. The mixing of these additives at various stages of combustion was considered and NO(x) concentrations were calculated; effects of temperature were also considered. Chemicals such as hydrocarbons, H2, CH3OH, NH3, and other nitrogen species were chosen as additives in this discussion. Results of these calculations can be used to evaluate the effects of these additives on the NO(x) emission in industrial combustion systems.

  20. Stochastic Processes as True-Score Models for Highly Speeded Mental Tests.

    ERIC Educational Resources Information Center

    Moore, William E.

    The previous theoretical development of the Poisson process as a strong model for the true-score theory of mental tests is discussed, and additional theoretical properties of the model from the standpoint of individual examinees are developed. The paper introduces the Erlang process as a family of test theory models and shows in the context of…

  1. Applications of MMPBSA to Membrane Proteins I: Efficient Numerical Solutions of Periodic Poisson-Boltzmann Equation

    PubMed Central

    Botello-Smith, Wesley M.; Luo, Ray

    2016-01-01

Continuum solvent models have been widely used in biomolecular modeling applications. Recently, much attention has been given to inclusion of implicit membrane into existing continuum Poisson-Boltzmann solvent models to extend their applications to membrane systems. Inclusion of an implicit membrane complicates numerical solutions of the underlying Poisson-Boltzmann equation due to the dielectric inhomogeneity on the boundary surfaces of a computation grid. This can be alleviated by the use of the periodic boundary condition, a common practice in electrostatic computations in particle simulations. The conjugate gradient and successive over-relaxation methods are relatively straightforward to be adapted to periodic calculations, but their convergence rates are quite low, limiting their applications to free energy simulations that require a large number of conformations to be processed. To accelerate convergence, the Incomplete Cholesky preconditioning and the geometric multi-grid methods have been extended to incorporate periodicity for biomolecular applications. Impressive convergence behaviors were found as in the previous applications of these numerical methods to tested biomolecules and MMPBSA calculations. PMID:26389966

  2. Applications of MMPBSA to Membrane Proteins I: Efficient Numerical Solutions of Periodic Poisson-Boltzmann Equation.

    PubMed

    Botello-Smith, Wesley M; Luo, Ray

    2015-10-26

Continuum solvent models have been widely used in biomolecular modeling applications. Recently, much attention has been given to inclusion of implicit membranes into existing continuum Poisson-Boltzmann solvent models to extend their applications to membrane systems. Inclusion of an implicit membrane complicates numerical solutions of the underlying Poisson-Boltzmann equation due to the dielectric inhomogeneity on the boundary surfaces of a computation grid. This can be alleviated by the use of the periodic boundary condition, a common practice in electrostatic computations in particle simulations. The conjugate gradient and successive over-relaxation methods are relatively straightforward to be adapted to periodic calculations, but their convergence rates are quite low, limiting their applications to free energy simulations that require a large number of conformations to be processed. To accelerate convergence, the Incomplete Cholesky preconditioning and the geometric multigrid methods have been extended to incorporate periodicity for biomolecular applications. Impressive convergence behaviors were found as in the previous applications of these numerical methods to tested biomolecules and MMPBSA calculations. PMID:26389966
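Periodicity changes a grid solver only at the stencil edges: neighbor lookups wrap around instead of hitting a boundary surface. A minimal Jacobi sketch of a periodic Poisson solve shows that wraparound (it is not the preconditioned conjugate gradient or multigrid method of the paper, and grid size, source, and iteration count are illustrative):

```python
import numpy as np

def jacobi_periodic(f, n_iter=2000):
    """Jacobi iteration for the Poisson equation on a periodic grid
    (unit spacing); f must sum to zero for a periodic solution to exist."""
    u = np.zeros_like(f)
    for _ in range(n_iter):
        u = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                    + np.roll(u, 1, 1) + np.roll(u, -1, 1) - f)
        u -= u.mean()      # fix the free additive constant of the periodic problem
    return u

f = np.zeros((32, 32))
f[8, 8], f[24, 24] = 1.0, -1.0     # dipole source with zero net charge
u = jacobi_periodic(f)
```

The slow convergence of such simple iterations on large grids is exactly the motivation for the preconditioned and multigrid periodic solvers developed in the paper.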

  3. Creating a Climate for Linguistically Responsive Instruction: The Case for Additive Models

    ERIC Educational Resources Information Center

    Rao, Arthi B.; Morales, P. Zitlali

    2015-01-01

    As a state with a longstanding tradition of offering bilingual education, Illinois has a legislative requirement for native language instruction in earlier grades through a model called Transitional Bilingual Education (TBE). This model does not truly develop bilingualism, however, but rather offers native language instruction to English learners…

  4. 78 FR 12271 - Wireline Competition Bureau Seeks Additional Comment In Connect America Cost Model Virtual Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-22

    ... Design PN, 77 FR 38804, June 29, 2012, of the possible significant economic impact on a substantial... Documents in Rulemaking Proceedings, 63 FR 24121, May 1, 1998. Electronic Filers: Comments may be filed... document, the Wireline Competition Bureau seeks public input on additional questions relating to...

  5. Linear-Nonlinear-Poisson Models of Primate Choice Dynamics

    ERIC Educational Resources Information Center

    Corrado, Greg S.; Sugrue, Leo P.; Seung, H. Sebastian; Newsome, William T.

    2005-01-01

The equilibrium phenomenon of matching behavior traditionally has been studied in stationary environments. Here we attempt to uncover the local mechanism of choice that gives rise to matching by studying behavior in a highly dynamic foraging environment. In our experiments, 2 rhesus monkeys ("Macaca mulatta") foraged for juice rewards by making…
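The abstract is truncated, but the title names the linear-nonlinear-Poisson (LNP) cascade: a linear filter of recent history, a static nonlinearity, and Poisson event generation. A generic sketch of that cascade follows; the kernel, sigmoid parameters, and reward stream are all hypothetical illustrations, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(3)
# Linear stage: leaky integration of a recent-history signal (hypothetical kernel)
kernel = np.exp(-np.arange(20) / 5.0)
kernel /= kernel.sum()
rewards = rng.random(500)                   # toy input stream
drive = np.convolve(rewards, kernel)[:500]  # linear filter output
# Nonlinear stage: static sigmoid mapping drive to an event rate (per second)
rate = 10.0 / (1.0 + np.exp(-4.0 * (drive - 0.5)))
# Poisson stage: stochastic event counts in 100 ms bins
events = rng.poisson(rate * 0.1)
```

Fitting the three stages to behavioral data is what lets an LNP model expose the local choice mechanism underlying matching.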

  6. Mapping The Variations Of Moho Depth And Poisson's Ratio In China With Receiver Function Analyses

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Niu, F.; Liu, R.; Huang, Z.; Chan, W.; Sun, L.

    2007-12-01

the Tibetan plateau, the Moho could extend to over 80 km. The average Poisson's ratios are slightly higher than in the sub-regions to the east, but rather lower than those inferred from Pn tomography. Azimuthal variations, as well as complicated crustal and upper mantle structures such as sediments and low-velocity zones beneath some stations, have also been observed in the receiver functions. These results will help to construct initial models for linear and non-linear inversions to obtain a much finer 1-D crustal and upper mantle velocity model beneath each station.

  7. Assessment of Chinese sturgeon habitat suitability in the Yangtze River (China): Comparison of generalized additive model, data-driven fuzzy logic model, and preference curve model

    NASA Astrophysics Data System (ADS)

    Yi, Yujun; Sun, Jie; Zhang, Shanghong; Yang, Zhifeng

    2016-05-01

    To date, a wide range of models have been applied to evaluate aquatic habitat suitability. In this study, three models, including the expert knowledge-based preference curve model (PCM), data-driven fuzzy logic model (DDFL), and generalized additive model (GAM), are used on a common data set to compare their effectiveness and accuracy. The true skill statistic (TSS) and the area under the receiver operating characteristics curve (AUC) are used to evaluate the accuracy of the three models. The results indicate that the two data-based methods (DDFL and GAM) yield better accuracy than the expert knowledge-based PCM, and the GAM yields the best accuracy. There are minor differences in the suitable ranges of the physical habitat variables obtained from the three models. The hydraulic habitat suitability index (HHSI) calculated by the PCM is the largest, followed by the DDFL and then the GAM. The results illustrate that data-based models can describe habitat suitability more objectively and accurately when there are sufficient data. When field data are lacking, combining expertise with data-based models is recommended. When field data are difficult to obtain, an expert knowledge-based model can be used as a replacement for the data-based methods.
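The two accuracy measures used above have simple definitions: TSS is sensitivity plus specificity minus one, and AUC can be computed from pairwise rank comparisons (the Mann-Whitney formulation). A self-contained sketch with toy presence/absence data (the sample labels and scores are illustrative):

```python
import numpy as np

def tss(y_true, y_pred):
    """True skill statistic = sensitivity + specificity - 1."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    sens = np.mean(y_pred[y_true == 1] == 1)
    spec = np.mean(y_pred[y_true == 0] == 0)
    return sens + spec - 1

def auc(y_true, score):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation."""
    y_true, score = np.asarray(y_true), np.asarray(score)
    pos, neg = score[y_true == 1], score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

y = [1, 1, 1, 0, 0, 0]                  # toy presence (1) / absence (0) labels
s = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]      # model suitability scores
print(tss(y, [1, 1, 0, 1, 0, 0]), auc(y, s))   # ≈ 0.333 and 0.889
```

TSS requires a classification threshold, whereas AUC is threshold-free, which is why the two statistics are commonly reported together in habitat suitability studies.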

  8. Optimal Decay Rate of the Compressible Navier-Stokes-Poisson System in $\mathbb{R}^3$

    NASA Astrophysics Data System (ADS)

    Li, Hai-Liang; Matsumura, Akitaka; Zhang, Guojing

    2010-05-01

The compressible Navier-Stokes-Poisson (NSP) system is considered in $\mathbb{R}^3$ in the present paper, and the influence of the electric field of the internal electrostatic potential force governed by the self-consistent Poisson equation on the qualitative behaviors of solutions is analyzed. It is observed that the rotating effect of the electric field affects the dispersion of fluids and reduces the time decay rate of solutions. Indeed, we show that the density of the NSP system converges to its equilibrium state at the same $L^2$-rate $(1+t)^{-3/4}$ or $L^\infty$-rate $(1+t)^{-3/2}$ respectively as the compressible Navier-Stokes system, but the momentum of the NSP system decays at the $L^2$-rate $(1+t)^{-1/4}$ or $L^\infty$-rate $(1+t)^{-1}$ respectively, which is slower than the $L^2$-rate $(1+t)^{-3/4}$ or $L^\infty$-rate $(1+t)^{-3/2}$ for the compressible Navier-Stokes system [Duan et al., in Math Models Methods Appl Sci 17:737-758, 2007; Liu and Wang, in Comm Math Phys 196:145-173, 1998; Matsumura and Nishida, in J Math Kyoto Univ 20:67-104, 1980] and the $L^\infty$-rate $(1+t)^{-p}$ with $p \in (1, 3/2)$ for the irrotational Euler-Poisson system [Guo, in Comm Math Phys 195:249-265, 1998]. These convergence rates are shown to be optimal for the compressible NSP system.

  9. Possibilities of Preoperative Medical Models Made by 3D Printing or Additive Manufacturing

    PubMed Central

    2016-01-01

Most of the 3D printing applications of preoperative models have been focused on the dental and craniomaxillofacial area. The purpose of this paper is to demonstrate the possibilities in other application areas and give examples of the current possibilities. The approach was to communicate with surgeons from different fields about their needs related to preoperative models and try to produce preoperative models that satisfy those needs. Ten examples of these possibilities were selected for this paper, and aspects related to imaging, 3D model reconstruction, 3D modeling, and 3D printing were presented. Examples were heart, ankle, backbone, knee, and pelvis with different processes and materials. Software types required were Osirix, 3Data Expert, and Rhinoceros. Different 3D printing processes were binder jetting and material extrusion. This paper presents a wide range of possibilities related to 3D printing of preoperative models. Surgeons should be aware of the new possibilities, and in most cases help from the mechanical engineering side is needed. PMID:27433470

  10. Possibilities of Preoperative Medical Models Made by 3D Printing or Additive Manufacturing.

    PubMed

    Salmi, Mika

    2016-01-01

    Most of the 3D printing applications of preoperative models have focused on the dental and craniomaxillofacial areas. The purpose of this paper is to demonstrate the possibilities in other application areas and give examples of the current possibilities. The approach was to communicate with surgeons from different fields about their needs related to preoperative models and to try to produce preoperative models that satisfy those needs. Ten different examples of possibilities were selected to be shown in this paper, and aspects related to imaging, 3D model reconstruction, 3D modeling, and 3D printing were presented. Examples were heart, ankle, backbone, knee, and pelvis with different processes and materials. Software types required were Osirix, 3Data Expert, and Rhinoceros. Different 3D printing processes were binder jetting and material extrusion. This paper presents a wide range of possibilities related to 3D printing of preoperative models. Surgeons should be aware of the new possibilities, and in most cases help from the mechanical engineering side is needed. PMID:27433470

  11. Ten-year-old children strategies in mental addition: A counting model account.

    PubMed

    Thevenot, Catherine; Barrouillet, Pierre; Castel, Caroline; Uittenhove, Kim

    2016-01-01

    For more than 30 years, it has been accepted that individuals from the age of 10 mainly retrieve the answers to simple additions from long-term memory, at least when the sum does not exceed 10. Nevertheless, recent studies challenge this assumption and suggest that expert adults use fast, compacted and unconscious procedures to solve very simple problems such as 3+2. If this is true, automated procedures should be rooted in earlier strategies and therefore observable in their non-compacted form in children. Thus, contrary to the dominant theoretical position, children's behaviors should not reflect retrieval. This is precisely what we observed in analyzing the response times of a sample of 42 10-year-old children who solved additions with operands from 1 to 9. Our results converge towards the conclusion that 10-year-old children still use counting procedures to solve non-tie problems involving operands from 2 to 4. Moreover, these counting procedures are revealed whatever the expertise of the children, who differ only in their speed of execution. Therefore, and contrary to the dominant position in the literature according to which children's strategies evolve from counting to retrieval, the key change in the development of mental addition solving appears to be a shift from slow to quick counting procedures. PMID:26402647

  12. Strengthen forensic entomology in court--the need for data exploration and the validation of a generalised additive mixed model.

    PubMed

    Baqué, Michèle; Amendt, Jens

    2013-01-01

    Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets don't take into account that immature blow flies grow in a non-linear fashion. Linear models do not supply sufficient reliability for age estimates and may even lead to an erroneous determination of the PMI(min). In line with the Daubert standard and the need for improvements in forensic science, new statistical tools like smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provided regression parameters that reflect the data adequately. We focus explicitly on both the exploration of the data--to assure their quality and to show the importance of checking it carefully prior to conducting the statistical tests--and the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using, for the first time, generalised additive mixed models. PMID:22370995

  13. Experiments to Populate and Validate a Processing Model for Polyurethane Foam: Additional Data for Structural Foams.

    SciTech Connect

    Rao, Rekha R.; Celina, Mathias C.; Giron, Nicholas Henry; Long, Kevin Nicholas; Russick, Edward M.

    2015-01-01

    We are developing computational models to help understand manufacturing processes, final properties and aging of structural foam, polyurethane PMDI. The resulting model predictions of density and cure gradients from the manufacturing process will be used as input to foam heat transfer and mechanical models. BKC 44306 PMDI-10 and BKC 44307 PMDI-18 are the most prevalent foams used in structural parts. Experiments needed to parameterize models of the reaction kinetics and the equations of motion during the foam blowing stages were described for BKC 44306 PMDI-10 in the first of this report series (Mondy et al. 2014). BKC 44307 PMDI-18 is a new foam that will be used to make relatively dense structural supports via overpacking. It uses a different catalyst than those in the BKC 44306 family of foams; hence, we expect that the reaction kinetics models must be modified. Here we detail the experiments needed to characterize the reaction kinetics of BKC 44307 PMDI-18 and suggest parameters for the model based on these experiments. In addition, the second part of this report describes data taken to provide input to the preliminary nonlinear viscoelastic structural response model developed for BKC 44306 PMDI-10 foam. We show that the standard cure schedule used by KCP does not fully cure the material, and, upon temperature elevation above 150 °C, oxidation or decomposition reactions occur that alter the composition of the foam. These findings suggest that achieving a fully cured foam part with this formulation may not be possible through thermal curing. As such, viscoelastic characterization procedures developed for curing thermosets can provide only approximate material properties, since the state of the material continuously evolves during tests.

  14. A Poisson resampling method for simulating reduced counts in nuclear medicine images

    NASA Astrophysics Data System (ADS)

    White, Duncan; Lawson, Richard S.

    2015-05-01

    Nuclear medicine computers now commonly offer resolution recovery and other software techniques which have been developed to improve image quality for images with low counts. These techniques potentially mean that such images can give clinical information equivalent to that of a full-count image. Reducing the number of counts in nuclear medicine images has the benefit of either allowing reduced activity to be administered or reducing acquisition times. However, because acquisition and processing parameters vary, each user should ideally evaluate the use of images with reduced counts within their own department, and this is best done by simulating reduced-count images from the original data. Reducing the counts in an image by division and rounding off to the nearest integer value, even if additional Poisson noise is added, is inadequate because it gives incorrect counting statistics. This technical note describes how, by applying Poisson resampling to the original raw data, simulated reduced-count images can be obtained while maintaining appropriate counting statistics. The authors have developed manufacturer-independent software that can retrospectively generate simulated data with reduced counts from any acquired nuclear medicine image.
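    The statistical requirement described above can be met by binomial thinning: if a pixel value is Poisson-distributed, independently keeping each recorded count with probability f yields a Poisson variate with mean scaled by f, i.e. the counting statistics of a genuinely shorter acquisition. A minimal sketch in NumPy; the function name is invented here and this is not the authors' software:

```python
import numpy as np

def thin_counts(raw_counts, fraction, rng=None):
    """Simulate a reduced-count acquisition from raw count data.

    If each pixel is Poisson(lam), binomial thinning with keep-probability
    `fraction` yields Poisson(fraction * lam), preserving correct Poisson
    counting statistics (unlike division-and-rounding).
    """
    rng = np.random.default_rng(rng)
    counts = np.asarray(raw_counts, dtype=np.int64)
    return rng.binomial(counts, fraction)

# Illustrative 64x64 "image" with ~100 counts per pixel, thinned to half.
image = np.random.default_rng(0).poisson(100.0, size=(64, 64))
half = thin_counts(image, 0.5, rng=1)
```

    Note that the thinned image is always pixel-wise below the original, and its total counts scale by the chosen fraction on average.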

  15. ColDICE: A parallel Vlasov-Poisson solver using moving adaptive simplicial tessellation

    NASA Astrophysics Data System (ADS)

    Sousbie, Thierry; Colombi, Stéphane

    2016-09-01

    Resolving numerically the Vlasov-Poisson equations for initially cold systems can be reduced to following the evolution of a three-dimensional sheet evolving in six-dimensional phase-space. We describe a public parallel numerical algorithm that represents the phase-space sheet with a conforming, self-adaptive simplicial tessellation whose vertices follow the Lagrangian equations of motion. The algorithm is implemented in both six- and four-dimensional phase-space. Refinement of the tessellation mesh is performed using the bisection method and a local representation of the phase-space sheet at second order relying on additional tracers created when needed at runtime. In order to best preserve the Hamiltonian nature of the system, refinement is anisotropic and constrained by measurements of local Poincaré invariants. Resolution of the Poisson equation is performed using the fast Fourier method on a regular rectangular grid, similarly to particle-in-cell codes. To compute the density projected onto this grid, the intersection of the tessellation and the grid is calculated using the method of Franklin and Kankanhalli [65-67] generalised to linear order. As preliminary tests of the code, we study in four-dimensional phase-space the evolution of an initially small patch in a chaotic potential and the cosmological collapse of a fluctuation composed of two sinusoidal waves. We also perform a "warm" dark matter simulation in six-dimensional phase-space that we use to check the parallel scaling of the code.
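    The Poisson solve mentioned in the abstract, a fast Fourier method on a regular grid as in particle-in-cell codes, can be sketched for a periodic cubic box. This is a generic illustration of the spectral technique (φ_k = −ρ_k/|k|², with the k = 0 mode zeroed), not ColDICE's actual implementation:

```python
import numpy as np

def poisson_fft(rho, box_size=1.0):
    """Solve laplacian(phi) = rho on a periodic cubic grid via FFT.

    In Fourier space the Laplacian is -|k|^2, so phi_k = -rho_k / |k|^2.
    The k = 0 (mean) mode is set to zero, which requires rho to have
    zero mean over the box (true for a density contrast).
    """
    n = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    rho_k = np.fft.fftn(rho)
    phi_k = np.zeros_like(rho_k)
    np.divide(-rho_k, k2, out=phi_k, where=k2 > 0)
    return np.fft.ifftn(phi_k).real
```

    A single Fourier mode makes a convenient check: with ρ = −(2π/L)² sin(2πx/L), the recovered potential should be sin(2πx/L) to machine precision.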

  16. Reducing model uncertainty effects in flexible manipulators through the addition of passive damping

    NASA Technical Reports Server (NTRS)

    Alberts, T. E.

    1987-01-01

    An important issue in the control of practical systems is the effect of model uncertainty on closed loop performance. This is of particular concern when flexible structures are to be controlled, due to the fact that states associated with higher frequency vibration modes are truncated in order to make the control problem tractable. Digital simulations of a single-link manipulator system are employed to demonstrate that passive damping added to the flexible member reduces adverse effects associated with model uncertainty. A controller was designed based on a model including only one flexible mode. This controller was applied to larger order systems to evaluate the effects of modal truncation. Simulations using a Linear Quadratic Regulator (LQR) design assuming full state feedback illustrate the effect of control spillover. Simulations of a system using output feedback illustrate the destabilizing effect of observation spillover. The simulations reveal that the system with passive damping is less susceptible to these effects than the untreated case.

  17. Short-range correlations control the G/K and Poisson ratios of amorphous solids and metallic glasses

    SciTech Connect

    Zaccone, Alessio; Terentjev, Eugene M.

    2014-01-21

    The bulk modulus of many amorphous materials, such as metallic glasses, behaves nearly in agreement with the assumption of affine deformation, namely that the atoms are displaced just by the amount prescribed by the applied strain. In contrast, the shear modulus behaves as for nonaffine deformations, with additional displacements due to the structural disorder which induce a marked material softening to shear. The consequence is an anomalously large ratio of the bulk modulus to the shear modulus for disordered materials characterized by dense atomic packing, but not for random networks with point atoms. We explain this phenomenon with a microscopic derivation of the elastic moduli of amorphous solids accounting for the interplay of nonaffinity and short-range particle correlations due to excluded volume. Short-range order is responsible for a reduction of the nonaffinity which is much stronger under compression, where the geometric coupling between nonaffinity and the deformation field is strong, whilst under shear this coupling is weak. Predictions of the Poisson ratio based on this model allow us to rationalize the trends as a function of coordination and atomic packing observed with many amorphous materials.
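    For an isotropic solid, the bulk-to-shear modulus ratio discussed above maps onto Poisson's ratio through the standard elasticity relation ν = (3K − 2G) / (2(3K + G)). A quick numerical check; the moduli below are round illustrative values, not data from the paper:

```python
def poisson_ratio(K, G):
    """Poisson's ratio of an isotropic solid from bulk (K) and shear (G) moduli."""
    return (3.0 * K - 2.0 * G) / (2.0 * (3.0 * K + G))

# A K/G ratio near 1 (open networks like amorphous silica) gives a low nu,
# while an anomalously large K/G (dense atomic packing) pushes nu toward 0.5.
nu_low = poisson_ratio(36.7, 31.2)   # ~0.17, silica-like illustrative values
nu_high = poisson_ratio(130.0, 35.0) # ~0.38, metallic-glass-like illustrative values
```

    In the K/G → ∞ limit the formula gives ν → 0.5 (incompressible), consistent with the softening-to-shear picture described above.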

  18. Function-Space-Based Solution Scheme for the Size-Modified Poisson-Boltzmann Equation in Full-Potential DFT.

    PubMed

    Ringe, Stefan; Oberhofer, Harald; Hille, Christoph; Matera, Sebastian; Reuter, Karsten

    2016-08-01

    The size-modified Poisson-Boltzmann (MPB) equation is an efficient implicit solvation model which also captures electrolytic solvent effects. It combines an account of the dielectric solvent response with a mean-field description of solvated finite-sized ions. We present a general solution scheme for the MPB equation based on a fast function-space-oriented Newton method and a Green's function preconditioned iterative linear solver. In contrast to popular multigrid solvers, this approach allows us to fully exploit specialized integration grids and optimized integration schemes. We describe a corresponding numerically efficient implementation for the full-potential density-functional theory (DFT) code FHI-aims. We show that together with an additional Stern layer correction the DFT+MPB approach can describe the mean activity coefficient of a KCl aqueous solution over a wide range of concentrations. The high sensitivity of the calculated activity coefficient to the employed ionic parameters suggests using extensively tabulated experimental activity coefficients of salt solutions for a systematic parametrization protocol. PMID:27323006

  19. Applying additive modeling and gradient boosting to assess the effects of watershed and reach characteristics on riverine assemblages

    USGS Publications Warehouse

    Maloney, Kelly O.; Schmid, Matthias; Weller, Donald E.

    2012-01-01

    Issues with ecological data (e.g. non-normality of errors, nonlinear relationships and autocorrelation of variables) and modelling (e.g. overfitting, variable selection and prediction) complicate regression analyses in ecology. Flexible models, such as generalized additive models (GAMs), can address data issues, and machine learning techniques (e.g. gradient boosting) can help resolve modelling issues. Gradient boosted GAMs do both. Here, we illustrate the advantages of this technique using data on benthic macroinvertebrates and fish from 1573 small streams in Maryland, USA.
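    In practice, boosted GAMs are usually fitted with dedicated packages (e.g. mboost in R). The component-wise boosting idea itself is compact enough to sketch by hand: each iteration fits one univariate smoother to the current residuals and adds a shrunken copy of the best-fitting one, which performs variable selection and guards against overfitting. The version below uses polynomial base learners standing in for penalized splines, and every name in it is invented for illustration:

```python
import numpy as np

def fit_boosted_gam(X, y, n_iter=200, nu=0.1, degree=3):
    """Component-wise L2 gradient boosting with polynomial base learners.

    A bare-bones stand-in for the boosted-GAM idea: greedily pick the
    single feature whose univariate smoother best fits the residuals,
    add it with shrinkage factor nu, and repeat.
    """
    n, p = X.shape
    offset = y.mean()
    resid = y - offset
    learners = []  # list of (feature index, shrunken poly coefficients)
    for _ in range(n_iter):
        best = None
        for j in range(p):
            coef = np.polyfit(X[:, j], resid, degree)
            pred = np.polyval(coef, X[:, j])
            sse = np.sum((resid - pred) ** 2)
            if best is None or sse < best[0]:
                best = (sse, j, coef, pred)
        _, j, coef, pred = best
        learners.append((j, nu * coef))  # scaling coefs scales the prediction
        resid = resid - nu * pred
    return offset, learners

def predict_boosted_gam(model, X):
    offset, learners = model
    out = np.full(X.shape[0], offset)
    for j, coef in learners:
        out += np.polyval(coef, X[:, j])
    return out
```

    On data with an additive structure (say y = x₀² + 2x₁ + noise), the fitted ensemble recovers most of the variance while ignoring irrelevant features.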

  20. Parameter Estimation in Astronomy with Poisson-Distributed Data. I. The χ²_γ Statistic

    NASA Technical Reports Server (NTRS)

    Mighell, Kenneth J.

    1999-01-01

    Applying the standard weighted mean formula, [Σᵢ nᵢ σᵢ⁻²] / [Σᵢ σᵢ⁻²], to determine the weighted mean of data, nᵢ, drawn from a Poisson distribution, will, on average, underestimate the true mean by ≈1 for all true mean values larger than ≈3 when the common assumption is made that the error of the ith observation is σᵢ = max(√nᵢ, 1). This small but statistically significant offset explains the long-known observation that chi-square minimization techniques which use the modified Neyman's χ² statistic, χ²_N ≡ Σᵢ (nᵢ − yᵢ)² / max(nᵢ, 1), to compare Poisson-distributed data with model values, yᵢ, will typically predict a total number of counts that underestimates the true total by about 1 count per bin. Based on my finding that the weighted mean of data drawn from a Poisson distribution can be determined using the formula [Σᵢ (nᵢ + min(nᵢ, 1))(nᵢ + 1)⁻¹] / [Σᵢ (nᵢ + 1)⁻¹], I propose that a new χ² statistic, χ²_γ ≡ Σᵢ (nᵢ + min(nᵢ, 1) − yᵢ)² / (nᵢ + 1), should always be used to analyze Poisson-distributed data in preference to the modified Neyman's χ² statistic. I demonstrated the power and usefulness of χ²_γ minimization by using two statistical fitting techniques and five χ² statistics to analyze simulated X-ray power-law 15-channel spectra with large and small counts per bin. I show that χ²_γ minimization with the Levenberg-Marquardt or Powell's method can produce excellent results (mean slope errors ≲ 3%) with spectra having as few as 25 total counts.
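    For a constant-mean model, both statistics have closed-form minimizers (the two weighted-mean formulas above), so the ≈1-count bias of the modified Neyman's χ² and the near-unbiasedness of χ²_γ can be checked directly by simulation. A sketch assuming the formulas quoted in the abstract:

```python
import numpy as np

def neyman_mean(n):
    """Mean minimizing modified Neyman's chi^2: weights 1/max(n_i, 1)."""
    n = np.asarray(n, float)
    w = 1.0 / np.maximum(n, 1.0)
    return np.sum(w * n, axis=-1) / np.sum(w, axis=-1)

def gamma_mean(n):
    """Mean minimizing chi^2_gamma: weights 1/(n_i + 1), counts n_i + min(n_i, 1)."""
    n = np.asarray(n, float)
    w = 1.0 / (n + 1.0)
    return np.sum(w * (n + np.minimum(n, 1.0)), axis=-1) / np.sum(w, axis=-1)

rng = np.random.default_rng(42)
true_mean = 10.0
trials = rng.poisson(true_mean, size=(2000, 100))  # 2000 spectra of 100 bins
bias_N = neyman_mean(trials).mean() - true_mean    # close to -1
bias_g = gamma_mean(trials).mean() - true_mean     # close to 0
```

    The simulation reproduces the abstract's claim: the Neyman-style weighted mean sits about one count low, while the χ²_γ-based mean is nearly unbiased.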

  1. Multiparameter linear least-squares fitting to Poisson data one count at a time

    NASA Technical Reports Server (NTRS)

    Wheaton, Wm. A.; Dunklee, Alfred L.; Jacobsen, Allan S.; Ling, James C.; Mahoney, William A.; Radocinski, Robert G.

    1995-01-01

    A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multicomponent linear model, with underlying physical count rates or fluxes which are to be estimated from the data. Despite its conceptual simplicity, the linear least-squares (LLSQ) method for solving this problem has generally been limited to situations in which the number nᵢ of counts in each bin i is not too small, conventionally more than 5-30. It seems to be widely believed that the failure of the LLSQ method for small counts is due to the failure of the Poisson distribution to be even approximately normal for small numbers. The cause is more accurately the strong anticorrelation between the data and the weights wᵢ in the weighted LLSQ method when √nᵢ instead of √n̄ᵢ is used to approximate the uncertainties, σᵢ, in the data, where n̄ᵢ = E(nᵢ), the expected value of nᵢ. We show in an appendix that, avoiding this approximation, the correct equations for the Poisson LLSQ (PLLSQ) problem are actually identical to those for the maximum likelihood estimate using the exact Poisson distribution. We apply the method to solve a problem in high-resolution gamma-ray spectroscopy for the JPL High-Resolution Gamma-Ray Spectrometer flown on HEAO 3. Systematic error in subtracting the strong, highly variable background encountered in the low-energy gamma-ray region can be significantly reduced by closely pairing source and background data in short segments. Significant results can be built up by weighted averaging of the net fluxes obtained from the subtraction of many individual source/background pairs. Extension of the approach to complex situations, with multiple cosmic sources and realistic background parameterizations, requires a means of efficiently fitting to data from single scans in the narrow (approximately = 1.2 ke
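    The abstract's key point is that the correct PLLSQ equations coincide with exact-Poisson maximum likelihood. One well-known way to compute that ML solution for a nonnegative multicomponent linear model is EM-style multiplicative updating (the Richardson-Lucy iteration); the sketch below is that generic algorithm, not the authors' solver, and the line-plus-background model in the usage is invented for illustration:

```python
import numpy as np

def poisson_ml_fit(A, n, n_iter=2000):
    """Maximum-likelihood rates x >= 0 for counts n ~ Poisson(A @ x).

    Classic EM (Richardson-Lucy-type) multiplicative updates; each step
    uses the exact Poisson likelihood, with no Gaussian approximation of
    the counting errors and no sqrt(n)-based weighting.
    """
    A = np.asarray(A, float)
    n = np.asarray(n, float)
    x = np.full(A.shape[1], n.mean() / max(A.sum(axis=1).mean(), 1e-12))
    col = np.maximum(A.sum(axis=0), 1e-12)
    for _ in range(n_iter):
        mu = np.maximum(A @ x, 1e-12)   # current model prediction per bin
        x = x * (A.T @ (n / mu)) / col  # multiplicative ML update
    return x

# Invented toy spectrum: a Gaussian line on a flat background, 200 channels.
i = np.arange(200)
A = np.column_stack([np.exp(-0.5 * ((i - 100.0) / 10.0) ** 2),
                     np.ones(i.size)])
n = np.random.default_rng(7).poisson(A @ np.array([50.0, 5.0]))
x_hat = poisson_ml_fit(A, n)  # estimated [line amplitude, background level]
```

    Because the update is multiplicative, the estimates stay nonnegative throughout, which suits count rates; with low counts per bin this still converges to the exact-Poisson ML solution, where sqrt(n)-weighted LLSQ would be biased.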

  2. Three-dimensionally bonded spongy graphene material with super compressive elasticity and near-zero Poisson's ratio.

    PubMed

    Wu, Yingpeng; Yi, Ningbo; Huang, Lu; Zhang, Tengfei; Fang, Shaoli; Chang, Huicong; Li, Na; Oh, Jiyoung; Lee, Jae Ah; Kozlov, Mikhail; Chipara, Alin C; Terrones, Humberto; Xiao, Peishuang; Long, Guankui; Huang, Yi; Zhang, Fan; Zhang, Long; Lepró, Xavier; Haines, Carter; Lima, Márcio Dias; Lopez, Nestor Perea; Rajukumar, Lakshmy P; Elias, Ana L; Feng, Simin; Kim, Seon Jeong; Narayanan, N T; Ajayan, Pulickel M; Terrones, Mauricio; Aliev, Ali; Chu, Pengfei; Zhang, Zhong; Baughman, Ray H; Chen, Yongsheng

    2015-01-01

    It is a challenge to fabricate graphene bulk materials with properties arising from the nature of individual graphene sheets, and which assemble into monolithic three-dimensional structures. Here we report the scalable self-assembly of randomly oriented graphene sheets into additive-free, essentially homogenous graphene sponge materials that provide a combination of both cork-like and rubber-like properties. These graphene sponges, with densities similar to air, display Poisson's ratios in all directions that are near-zero and largely strain-independent during reversible compression to giant strains. And at the same time, they function as enthalpic rubbers, which can recover up to 98% compression in air and 90% in liquids, and operate between -196 and 900 °C. Furthermore, these sponges provide reversible liquid absorption for hundreds of cycles and then discharge it within seconds, while still providing an effective near-zero Poisson's ratio. PMID:25601131

  3. Additional Evidence Supporting a Model of Shallow, High-Speed Supergranulation

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.; Hanasoge, S. M.; Chakraborty, S.

    2014-01-01

    Recently, Duvall and Hanasoge (Solar Phys. 287, 71, 2013) found that large-distance separation [Δ] travel-time differences from a center to an annulus [δt_oi] implied a model of the average supergranular cell that has a peak upflow of 240 m s⁻¹ at a depth of 2.3 Mm and a corresponding peak outward horizontal flow of 700 m s⁻¹ at a depth of 1.6 Mm. In the present work, this effect is further studied by measuring and modeling center-to-quadrant travel-time differences [δt_qu], which roughly agree with this model. Simulations are analyzed that show that such a model flow would lead to the expected travel-time differences. As a check for possible systematic errors, the center-to-annulus travel-time differences [δt_oi] are found not to vary with heliocentric angle. A consistency check finds an increase of δt_oi with the temporal frequency [ν] by a factor of two, which is not predicted by the ray theory.

  4. Additional Evidence Supporting a Model of Shallow, High-Speed Supergranulation

    NASA Astrophysics Data System (ADS)

    Duvall, T. L.; Hanasoge, S. M.; Chakraborty, S.

    2014-09-01

    Recently, Duvall and Hanasoge ( Solar Phys. 287, 71, 2013) found that large-distance separation [Δ] travel-time differences from a center to an annulus [ δt oi] implied a model of the average supergranular cell that has a peak upflow of 240 m s-1 at a depth of 2.3 Mm and a corresponding peak outward horizontal flow of 700 m s-1 at a depth of 1.6 Mm. In the present work, this effect is further studied by measuring and modeling center-to-quadrant travel-time differences [ δt qu], which roughly agree with this model. Simulations are analyzed that show that such a model flow would lead to the expected travel-time differences. As a check for possible systematic errors, the center-to-annulus travel-time differences [ δt oi] are found not to vary with heliocentric angle. A consistency check finds an increase of δt oi with the temporal frequency [ ν] by a factor of two, which is not predicted by the ray theory.

  5. Modeled heating and surface erosion comparing motile (gas borne) and stationary (surface coating) inert particle additives

    SciTech Connect

    Buckingham, A.C.; Siekhaus, W.J.

    1982-09-27

    The unsteady, non-similar, chemically reactive, turbulent boundary layer equations are modified for gas plus dispersed solid particle mixtures, for gas phase turbulent combustion reactions and for heterogeneous gas-solid surface erosive reactions. The exterior (ballistic core) edge boundary conditions for the solutions are modified to include dispersed particle influences on core propellant combustion-generated turbulence levels, combustion reactants and products, and reaction-induced, non-isentropic mixture states. The wall surface (in this study it is always steel) is considered either bare or coated with a fixed particle coating which is conceptually non-reactive, insulative, and non-ablative. Two families of solutions are compared. These correspond to: (1) consideration of gas-borne, free-slip, almost spontaneously mobile (motile) solid particle additives which influence the turbulent heat transfer at the uncoated steel surface and, in contrast, (2) consideration of particle-free, gas phase turbulent heat transfer to the insulated surface coated by stationary particles. Significant differences in erosive heat transfer are found in comparing the two families of solutions over a substantial range of interior ballistic flow conditions. The most effective influences on reducing erosive heat transfer appear to favor mobile, gas-borne particle additives.

  6. Graph model for calculating the properties of saturated monoalcohols based on the additivity of energy terms

    NASA Astrophysics Data System (ADS)

    Grebeshkov, V. V.; Smolyakov, V. M.

    2012-05-01

    A 16-constant additive scheme for calculating the physicochemical properties of the saturated monoalcohols CH4O-C9H20O was derived by decomposing the triangular numbers of the Pascal triangle, based on the similarity of subgraphs in the molecular graphs (MGs) of the homologous series of these alcohols. It was shown, using this scheme for the calculation of properties of saturated monoalcohols as an example, that each coefficient of the scheme (in other words, the number of ways to impose a chain of a definite length i1, i2, … on a molecular graph) is the result of the decomposition of the triangular numbers of the Pascal triangle. A linear dependence was found within the adopted classification of structural elements. Sixteen parameters of the scheme were recorded as linear combinations of 17 parameters. The enthalpies of vaporization L⁰(298 K) of the saturated monoalcohols CH4O-C9H20O, for which there were no experimental data, were calculated. It was shown that the parameters are not chosen randomly when using the given procedure for constructing an additive scheme by decomposing the triangular numbers of the Pascal triangle.

  7. Can an energy balance model provide additional constraints on how to close the energy imbalance?

    PubMed

    Wohlfahrt, Georg; Widmoser, Peter

    2013-02-15

    Elucidating the causes for the energy imbalance, i.e. the phenomenon that eddy covariance latent and sensible heat fluxes fall short of available energy, is an outstanding problem in micrometeorology. This paper tests the hypothesis that the full energy balance, through incorporation of additional independent measurements which determine the driving forces of and resistances to energy transfer, provides further insights into the causes of the energy imbalance and additional constraints on energy balance closure options. Eddy covariance and auxiliary data from three different biomes were used to test five contrasting closure scenarios. The main result of our study is that except for nighttime, when fluxes were low and noisy, the full energy balance generally did not contain enough information to allow further insights into the causes of the imbalance and to constrain energy balance closure options. Up to four out of the five tested closure scenarios performed similarly and in up to 53% of all cases all of the tested closure scenarios resulted in plausible energy balance values. Our approach may though provide a sensible consistency check for eddy covariance energy flux measurements. PMID:24465072

  8. Brain, music, and non-Poisson renewal processes

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Ignaccolo, Massimiliano; Rider, Mark S.; Ross, Mary J.; Winsor, Phil; Grigolini, Paolo

    2007-06-01

    In this paper we show that both music composition and brain function, as revealed by electroencephalogram (EEG) analysis, are renewal non-Poisson processes living in the nonergodic dominion. To reach this important conclusion we process the data with the minimum spanning tree method, so as to detect significant events, thereby building a sequence of times, which is the time series to analyze. Then we show that in both cases, EEG and music composition, these significant events are the signature of a non-Poisson renewal process. This conclusion is reached using a technique of statistical analysis recently developed by our group, the aging experiment (AE). First, we find that in both cases the distances between two consecutive events are described by nonexponential histograms, thereby proving the non-Poisson nature of these processes. The corresponding survival probabilities Ψ(t) are well fitted by stretched exponentials [Ψ(t) ∝ exp(−(γt)^α), with 0.5 < α < 1]. The second step rests on the adoption of the AE, which shows that these are renewal processes. We show that the stretched exponential, due to its renewal character, is the emerging tip of an iceberg whose underwater part has slow tails with an inverse-power-law structure with power index μ = 1 + α. Adopting the AE procedure we find that both EEG and music composition yield μ < 2. On the basis of the recently discovered complexity matching effect, according to which a complex system S with μ_S < 2 responds only to a complex driving signal P with μ_P ≤ μ_S, we conclude that the results of our analysis may explain the influence of music on the human brain.
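    Waiting times with the stretched-exponential survival Ψ(t) = exp(−(γt)^α) can be drawn by inverse-transform sampling, since setting Ψ(t) = U inverts to t = (−ln U)^{1/α}/γ for U uniform on (0, 1). A sketch of building a renewal event sequence this way, with illustrative parameter values inside the 0.5 < α < 1 range reported for EEG and music:

```python
import numpy as np

def stretched_exp_waiting_times(gamma, alpha, size, rng=None):
    """Draw i.i.d. waiting times with survival Psi(t) = exp(-(gamma*t)**alpha).

    Inverse-transform sampling: if U ~ Uniform(0, 1), then
    t = (-ln U)**(1/alpha) / gamma has exactly this survival function.
    """
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=size)
    return (-np.log(u)) ** (1.0 / alpha) / gamma

# Renewal sequence of "significant events": cumulative sums of the
# independent waiting times (independence is what "renewal" means here).
waits = stretched_exp_waiting_times(gamma=1.0, alpha=0.7, size=10000, rng=3)
times = np.cumsum(waits)
```

    Checking the sample against the target survival (e.g. the fraction of waiting times exceeding t = 1/γ should be close to e⁻¹) confirms the transform.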

  9. Poisson's Ratio and the Densification of Glass under High Pressure

    SciTech Connect

    Rouxel, T.; Ji, H.; Hammouda, T.; Moreac, A.

    2008-06-06

    Because of a relatively low atomic packing density (C_g), glasses experience significant densification under high hydrostatic pressure. Poisson's ratio (ν) is correlated to C_g and typically varies from 0.15 for glasses with low C_g, such as amorphous silica, to 0.38 for close-packed atomic networks such as in bulk metallic glasses. Pressure experiments were conducted up to 25 GPa at 293 K on silica, soda-lime-silica, chalcogenide, and bulk metallic glasses. We show from these high-pressure data that there is a direct correlation between ν and the maximum post-decompression density change.

  10. Fission meter and neutron detection using poisson distribution comparison

    SciTech Connect

    Rowland, Mark S; Snyderman, Neal J

    2014-11-18

    A neutron detector system and method for discriminating fissile material from non-fissile material wherein a digital data acquisition unit collects data at high rate, and in real-time processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. Comparison of the observed neutron count distribution with a Poisson distribution is performed to distinguish fissile material from non-fissile material.
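    The patent abstract does not spell out its comparison statistic; a common choice for detecting "excess grouped neutrons" is the variance-to-mean excess (a Feynman-Y-style quantity), which is ≈0 for Poisson counts from uncorrelated single neutrons and positive when fission chains emit correlated multiplets. A toy sketch, with the multiplicity model invented purely for illustration:

```python
import numpy as np

def excess_variance_ratio(counts):
    """Feynman-style statistic: var/mean - 1 of counts per time gate.

    ~0 for a Poisson (non-fissile) source; > 0 when neutrons arrive in
    correlated bursts, as for fission multiplets.
    """
    counts = np.asarray(counts, float)
    return counts.var() / counts.mean() - 1.0

rng = np.random.default_rng(0)

# Non-fissile source: independent neutrons, Poisson counts per gate.
poisson_gates = rng.poisson(4.0, size=20000)

# Toy "fissile" source: Poisson number of fission events per gate, each
# emitting a random multiplet of 0-5 neutrons (illustrative only).
events = rng.poisson(1.6, size=20000)
fissile_gates = np.array([rng.integers(0, 6, size=k).sum() for k in events])
```

    Both toy sources have the same mean count rate here, so a rate meter alone cannot tell them apart; the grouped-neutron excess is what separates them.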

  11. The Poisson equation at second order in relativistic cosmology

    SciTech Connect

    Hidalgo, J.C.; Christopherson, Adam J.; Malik, Karim A. E-mail: Adam.Christopherson@nottingham.ac.uk

    2013-08-01

    We calculate the relativistic constraint equation which relates the curvature perturbation to the matter density contrast at second order in cosmological perturbation theory. This relativistic ''second order Poisson equation'' is presented in a gauge where the hydrodynamical inhomogeneities coincide with their Newtonian counterparts exactly for a perfect fluid with constant equation of state. We use this constraint to introduce primordial non-Gaussianity in the density contrast in the framework of General Relativity. We then derive expressions that can be used as the initial conditions of N-body codes for structure formation which probe the observable signature of primordial non-Gaussianity in the statistics of the evolved matter density field.

  12. Additional comments on the assumption of homogenous survival rates in modern bird banding estimation models

    USGS Publications Warehouse

    Nichols, J.D.; Stokes, S.L.; Hines, J.E.; Conroy, M.J.

    1982-01-01

    We examined the problem of heterogeneous survival and recovery rates in bird banding estimation models. We suggest that positively correlated subgroup survival and recovery probabilities may result from winter banding operations and that this situation will produce positively biased survival rate estimates. The magnitude of the survival estimate bias depends on the proportion of the population in each subgroup. Power of the suggested goodness-of-fit test to reject the inappropriate model for heterogeneous data sets was low for all situations examined and was poorest for positively related subgroup survival and recovery rates. Despite the magnitude of some of the biases reported and the relative inability to detect heterogeneity, we suggest that levels of heterogeneity normally encountered in real data sets will produce relatively small biases of average survival rates.

  13. Dynamic modeling used for the addition of robotic operation to the Advanced Servomanipulator teleoperator

    SciTech Connect

    Corbett, G.K.; Bailey, J.M.

    1989-01-01

    A robotic mode has been added to the Advanced Servomanipulator (ASM), a 6 degree-of-freedom master/slave teleoperator. In order to understand the requirements for implementation of robotics on an arm designed for teleoperation, a dynamic simulation of the ASM slave arm was developed. The ASM model and modifications of the control system for robotic operation are presented. 7 refs., 3 figs.

  14. Additive protection by LDR and FGF21 treatment against diabetic nephropathy in type 2 diabetes model

    PubMed Central

    Shao, Minglong; Yu, Lechu; Zhang, Fangfang; Lu, Xuemian; Li, Xiaokun; Cheng, Peng; Lin, Xiufei; He, Luqing; Jin, Shunzi; Tan, Yi; Yang, Hong; Cai, Lu

    2015-01-01

The onset of diabetic nephropathy (DN) is associated with both systemic and renal changes. Fibroblast growth factor (FGF)-21 prevents diabetic complications mainly by improving systemic metabolism. In addition, low-dose radiation (LDR) protects mice from DN directly by preventing renal oxidative stress and inflammation. In the present study, we examined whether the combination of FGF21 and LDR could further prevent DN by blocking its systemic and renal pathogeneses. To this end, type 2 diabetes was induced by feeding a high-fat diet for 12 wk followed by a single dose injection of streptozotocin. Diabetic mice were exposed to 50 mGy LDR every other day for 4 wk with and without 1.5 mg/kg FGF21 daily for 8 wk. The changes in systemic parameters, including blood glucose levels, lipid profiles, and insulin resistance, as well as renal pathology, were examined. Diabetic mice exhibited renal dysfunction and pathological abnormalities, all of which were prevented significantly by LDR and/or FGF21; the best effects were observed in the group that received the combination treatment. Our studies revealed that the additive renal protection conferred by the combined treatment against diabetes-induced renal fibrosis, inflammation, and oxidative damage was associated with the systemic improvement of hyperglycemia, hyperlipidemia, and insulin resistance. These results suggest that the combination treatment with LDR and FGF21 prevented DN more efficiently than did either treatment alone. The mechanism behind these protective effects could be attributed to the suppression of both systemic and renal pathways. PMID:25968574

  15. Comparing the performance of geostatistical models with additional information from covariates for sewage plume characterization.

    PubMed

    Del Monego, Maurici; Ribeiro, Paulo Justiniano; Ramos, Patrícia

    2015-04-01

In this work, kriging with covariates is used to model and map the spatial distribution of salinity measurements gathered by an autonomous underwater vehicle in a sea outfall monitoring campaign, with the aim of distinguishing the effluent plume from the receiving waters and characterizing its spatial variability in the vicinity of the discharge. Four different geostatistical linear models for salinity were assumed, where the distance to the diffuser, the west-east positioning, and the south-north positioning were used as covariates. Sample variograms were fitted by Matérn models using weighted least squares and maximum likelihood estimation methods as a way to detect eventual discrepancies. Typically, the maximum likelihood method estimated very low ranges, which limited the kriging process. So, at least for these data sets, weighted least squares proved to be the most appropriate estimation method for variogram fitting. The kriged maps clearly show the spatial variation of salinity, and it is possible to identify the effluent plume in the area studied. The results obtained suggest some guidelines for sewage monitoring when a geostatistical analysis of the data is intended. It is important to treat anomalous values properly and to adopt a sampling strategy that includes transects parallel and perpendicular to the effluent dispersion. PMID:25345922
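As a hedged illustration of the variogram-fitting step, the sketch below fits an exponential model (the Matérn family with smoothness 1/2) to a hypothetical sample variogram by weighted least squares, weighting each lag by its pair count; all lag distances, semivariances, and counts are invented for illustration, not taken from the campaign data:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_variogram(h, nugget, sill, rng):
    # Matérn model with smoothness 1/2 reduces to the exponential variogram
    return nugget + sill * (1.0 - np.exp(-h / rng))

# Hypothetical sample variogram: lag distances (m), semivariances, pair counts
lags   = np.array([10., 20., 30., 40., 50., 60., 70., 80.])
gamma  = np.array([0.12, 0.21, 0.27, 0.31, 0.335, 0.345, 0.35, 0.35])
npairs = np.array([200, 400, 600, 800, 900, 850, 700, 500])

# Weighted least squares: weight each lag by its number of pairs
# (curve_fit's sigma is a standard deviation, so pass 1/sqrt(npairs))
popt, _ = curve_fit(exp_variogram, lags, gamma,
                    p0=[0.05, 0.3, 30.0], sigma=1.0 / np.sqrt(npairs))
nugget, sill, rng = popt
```

The fitted nugget, partial sill, and range would then parameterize the kriging system.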

  16. Additional application of the NASCAP code. Volume 2: SEPS, ion thruster neutralization and electrostatic antenna model

    NASA Technical Reports Server (NTRS)

    Katz, I.; Cassidy, J. J.; Mandell, M. J.; Parks, D. E.; Schnuelle, G. W.; Stannard, P. R.; Steen, P. G.

    1981-01-01

    The interactions of spacecraft systems with the surrounding plasma environment were studied analytically for three cases of current interest: calculating the impact of spacecraft generated plasmas on the main power system of a baseline solar electric propulsion stage (SEPS), modeling the physics of the neutralization of an ion thruster beam by a plasma bridge, and examining the physical and electrical effects of orbital ambient plasmas on the operation of an electrostatically controlled membrane mirror. In order to perform these studies, the NASA charging analyzer program (NASCAP) was used as well as several other computer models and analytical estimates. The main result of the SEPS study was to show how charge exchange ion expansion can create a conducting channel between the thrusters and the solar arrays. A fluid-like model was able to predict plasma potentials and temperatures measured near the main beam of an ion thruster and in the vicinity of a hollow cathode neutralizer. Power losses due to plasma currents were shown to be substantial for several proposed electrostatic antenna designs.

  17. Some Poisson structures and Lax equations associated with the Toeplitz lattice and the Schur lattice

    NASA Astrophysics Data System (ADS)

    Lemarie, Caroline

    2016-01-01

The Toeplitz lattice is a Hamiltonian system whose Poisson structure is known. In this paper, we unveil the origins of this Poisson structure and derive from it the associated Lax equations for this lattice. We first construct a Poisson subvariety H_n of GL_n(C), which we view as a real or complex Poisson-Lie group whose Poisson structure comes from a quadratic R-bracket on gl_n(C) for a fixed R-matrix. The existence of Hamiltonians associated to the Toeplitz lattice for the Poisson structure on H_n, combined with the properties of the quadratic R-bracket, allows us to give explicit formulas for the Lax equation. We then derive from it the integrability in the sense of Liouville of the Toeplitz lattice. When we view the lattice as being defined over R, we can construct a Poisson subvariety H_n^τ of U_n which is itself a Poisson-Dirac subvariety of GL_n^R(C). We then construct a Hamiltonian for the Poisson structure induced on H_n^τ, corresponding to another system which derives from the Toeplitz lattice: the modified Schur lattice. Thanks to the properties of Poisson-Dirac subvarieties, we give an explicit Lax equation for the new system and derive from it a Lax equation for the Schur lattice. We also deduce the integrability in the sense of Liouville of the modified Schur lattice.

  18. Addition of Tropospheric Chemistry and Aerosols to the NCAR Community Climate System Model

    SciTech Connect

    Cameron-Smith, P; Lamarque, J; Connell, P; Chuang, C; Rotman, D; Taylor, J

    2005-11-14

Atmospheric chemistry and aerosols have several important roles in climate change. They affect the Earth's radiative balance directly: cooling the Earth by scattering sunlight (aerosols) and warming the Earth by trapping the Earth's thermal radiation (methane, ozone, nitrous oxide, and CFCs are greenhouse gases). Atmospheric chemistry and aerosols also impact many other parts of the climate system: modifying cloud properties (aerosols can be cloud condensation nuclei), fertilizing the biosphere (nitrogen species and soil dust), and damaging the biosphere (acid rain and ozone damage). In order to understand and quantify the effects of atmospheric chemistry and aerosols on the climate and the biosphere in the future, it is necessary to incorporate atmospheric chemistry and aerosols into state-of-the-art climate system models. We have taken several important strides down that path. Working with the latest NCAR Community Climate System Model (CCSM), we have incorporated a state-of-the-art atmospheric chemistry model to simulate tropospheric ozone. Ozone is not just a greenhouse gas; it also damages biological systems including lungs, tires, and crops. Ozone chemistry is also central to the oxidizing power of the atmosphere, which destroys many pollutants in the atmosphere (which is a good thing). We have also implemented a fast chemical mechanism that has high fidelity with the full mechanism, for significantly reduced computational cost (to facilitate millennium-scale simulations). Sulfate aerosols have a strong effect on climate by reflecting sunlight and modifying cloud properties. So in order to simulate the sulfur cycle more fully in CCSM simulations, we have linked the formation of sulfate aerosols to the oxidizing power of the atmosphere calculated by the ozone mechanisms, and to dimethyl sulfide emissions from the ocean ecosystem in the model. Since the impact of sulfate aerosols depends on the relative abundance of other aerosols in the atmosphere, we also

  19. A MIXTURE OF SEVEN ANTIANDROGENIC COMPOUNDS ELICITS ADDITIVE EFFECTS ON THE MALE RAT REPRODUCTIVE TRACT THAT CORRESPOND TO MODELED PREDICTIONS

    EPA Science Inventory

    The main objectives of this study were to: (1) determine whether dissimilar antiandrogenic compounds display additive effects when present in combination and (2) to assess the ability of modelling approaches to accurately predict these mixture effects based on data from single ch...

  20. The Job Demands-Resources Model: An Analysis of Additive and Joint Effects of Demands and Resources

    ERIC Educational Resources Information Center

    Hu, Qiao; Schaufeli, Wilmar B.; Taris, Toon W.

    2011-01-01

    The present study investigated the additive, synergistic, and moderating effects of job demands and job resources on well-being (burnout and work engagement) and organizational outcomes, as specified by the Job Demands-Resources (JD-R) model. A survey was conducted among two Chinese samples: 625 blue collar workers and 761 health professionals. A…

  1. Family, Neighborhood, and Peer Characteristics as Predictors of Child Adjustment: A Longitudinal Analysis of Additive and Mediation Models

    ERIC Educational Resources Information Center

    Criss, Michael M.; Shaw, Daniel S.; Moilanen, Kristin L.; Hitchings, Julia E.; Ingoldsby, Erin M.

    2009-01-01

    The purpose of this study was to test direct, additive, and mediation models involving family, neighborhood, and peer factors in relation to emerging antisocial behavior and social skills. Neighborhood danger, maternal depressive symptoms, and supportive parenting were assessed in early childhood. Peer group acceptance was measured in middle…

  2. Determination of oral mucosal Poisson's ratio and coefficient of friction from in-vivo contact pressure measurements.

    PubMed

    Chen, Junning; Suenaga, Hanako; Hogg, Michael; Li, Wei; Swain, Michael; Li, Qing

    2016-01-01

Despite their considerable importance to biomechanics, there are no existing methods available to directly measure the apparent Poisson's ratio and friction coefficient of oral mucosa. This study aimed to develop an inverse procedure to determine these two biomechanical parameters by combining an in vivo experiment, which measured the contact pressure between a partial denture and the underlying mucosa, with nonlinear finite element (FE) analysis and a surrogate response surface (RS) modelling technique. First, the in vivo denture-mucosa contact pressure was measured by a tactile electronic sensing sheet. Second, a 3D FE model was constructed based on the patient's CT images. Third, a range of apparent Poisson's ratios and coefficients of friction from the literature was considered as the design variables in a series of FE runs for constructing a RS surrogate model. Finally, the discrepancy between computed in silico and measured in vivo results was minimized to identify the best-matching Poisson's ratio and coefficient of friction. The established non-invasive methodology was demonstrated to be effective for identifying such biomechanical parameters of oral mucosa and can potentially be used for determining the biomaterial properties of other soft biological tissues. PMID:26024011
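The inverse procedure can be sketched generically. In the toy below, an invented smooth function of (ν, μ) stands in for the FE contact simulation, a quadratic response surface is fitted to a grid of "FE runs", and the discrepancy against a pretend measurement is minimized; every function, range, and number here is hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def fe_outputs(nu, mu):
    # Hypothetical stand-in for two scalar outputs of the FE contact model
    # (e.g., peak and mean pressure); a real study runs one FE job per point.
    return np.array([120.0 + 300.0 * (nu - 0.3) ** 2 + 80.0 * mu,
                     40.0 + 60.0 * nu + 30.0 * mu ** 2])

# 1) Design of experiments over plausible literature ranges
nus = np.linspace(0.25, 0.49, 7)
mus = np.linspace(0.1, 0.8, 7)
X = np.array([(n, m) for n in nus for m in mus])
Y = np.array([fe_outputs(n, m) for n, m in X])

# 2) One quadratic response surface per output, fitted by least squares
def basis(n, m):
    return [1.0, n, m, n * n, m * m, n * m]

A = np.array([basis(n, m) for n, m in X])
coefs, *_ = np.linalg.lstsq(A, Y, rcond=None)   # shape (6, 2)

def rs(p):
    return np.array(basis(p[0], p[1])) @ coefs

# 3) Invert: find (nu, mu) whose RS prediction matches the "measurement"
measured = fe_outputs(0.40, 0.35)   # pretend in vivo values
res = minimize(lambda p: np.sum((rs(p) - measured) ** 2),
               x0=[0.30, 0.40], bounds=[(0.25, 0.49), (0.1, 0.8)])
nu_id, mu_id = res.x
```

Because the surrogate is cheap, the minimization can afford many evaluations that a direct FE-in-the-loop search could not.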

  3. A Combined MPI-CUDA Parallel Solution of Linear and Nonlinear Poisson-Boltzmann Equation

    PubMed Central

    Colmenares, José; Galizia, Antonella; Ortiz, Jesús; Clematis, Andrea; Rocchia, Walter

    2014-01-01

    The Poisson-Boltzmann equation models the electrostatic potential generated by fixed charges on a polarizable solute immersed in an ionic solution. This approach is often used in computational structural biology to estimate the electrostatic energetic component of the assembly of molecular biological systems. In the last decades, the amount of data concerning proteins and other biological macromolecules has remarkably increased. To fruitfully exploit these data, a huge computational power is needed as well as software tools capable of exploiting it. It is therefore necessary to move towards high performance computing and to develop proper parallel implementations of already existing and of novel algorithms. Nowadays, workstations can provide an amazing computational power: up to 10 TFLOPS on a single machine equipped with multiple CPUs and accelerators such as Intel Xeon Phi or GPU devices. The actual obstacle to the full exploitation of modern heterogeneous resources is efficient parallel coding and porting of software on such architectures. In this paper, we propose the implementation of a full Poisson-Boltzmann solver based on a finite-difference scheme using different and combined parallel schemes and in particular a mixed MPI-CUDA implementation. Results show great speedups when using the two schemes, achieving an 18.9x speedup using three GPUs. PMID:25013789
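A minimal serial sketch of the finite-difference idea (no MPI or CUDA, and only the *linearized* Debye-Hückel form of the equation rather than the full nonlinear solver of the paper), with purely illustrative units and parameters:

```python
import numpy as np

# 2D finite-difference Jacobi relaxation for the linearized Poisson-Boltzmann
# (Debye-Hueckel) equation: laplacian(phi) - kappa^2 * phi = -rho/eps.
N, h = 65, 1.0          # grid points per side, grid spacing (illustrative)
kappa2 = 0.04           # squared inverse Debye length (illustrative)
rho = np.zeros((N, N))
rho[N // 2, N // 2] = 1.0 / h**2   # unit point charge (eps folded in)

phi = np.zeros((N, N))             # Dirichlet phi = 0 on the outer boundary
for _ in range(2000):
    # Discrete stencil: phi = (neighbors + rho*h^2) / (4 + kappa^2*h^2)
    interior = (rho[1:-1, 1:-1] * h**2
                + phi[2:, 1:-1] + phi[:-2, 1:-1]
                + phi[1:-1, 2:] + phi[1:-1, :-2]) / (4.0 + kappa2 * h**2)
    phi[1:-1, 1:-1] = interior
```

The parallel versions in the paper partition exactly this stencil update across MPI ranks and CUDA threads; the screened potential decays away from the charge.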

  4. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient

    PubMed Central

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-01-01

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample’s high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use. PMID:27283980
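A much simpler cousin of the Poisson maximum-likelihood optimization described here is real-valued deblurring with the classic Richardson-Lucy multiplicative update, which minimizes the same Poisson negative log-likelihood (the paper's truncated Wirtinger gradient additionally handles complex fields and phase); the toy signal and blur below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D deblurring under a Poisson likelihood via Richardson-Lucy updates.
n = 64
x_true = np.zeros(n)
x_true[20] = 400.0                 # bright spike
x_true[40:44] = 150.0              # broader block
kernel = np.array([0.05, 0.25, 0.4, 0.25, 0.05])   # sums to 1
# Circulant blur matrix: row i holds the kernel centered at i
A = np.array([np.roll(np.pad(kernel, (0, n - 5)), i - 2) for i in range(n)])
y = rng.poisson(A @ x_true).astype(float)          # Poisson measurements

x = np.full(n, y.mean())           # positive initial guess
At1 = A.T @ np.ones(n)             # column sums (all 1 here)
for _ in range(300):
    # Multiplicative ML update: x <- x * A^T(y / Ax) / A^T 1
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / At1
```

With a normalized kernel the update conserves total counts while sharpening the spike that the blur smeared out.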

  5. Generalized Poisson-Kac Processes: Basic Properties and Implications in Extended Thermodynamics and Transport

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano; Brasiello, Antonio; Crescitelli, Silvestro

    2016-04-01

We introduce a new class of stochastic processes in R^n, referred to as generalized Poisson-Kac (GPK) processes, that generalizes the Poisson-Kac telegrapher's random motion to higher dimensions. These stochastic processes possess finite propagation velocity, almost everywhere smooth trajectories, and converge in the Kac limit to Brownian motion. GPK processes are defined by coupling the selection of a bounded velocity vector from a family of N distinct ones with a Markovian dynamics controlling probabilistically this selection. This model can be used as a probabilistic tool for a stochastically consistent formulation of extended thermodynamic theories far from equilibrium.
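A Monte Carlo sketch of the one-dimensional Poisson-Kac telegraph process that GPK processes generalize; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1D telegraph process: dx/dt = c * sigma(t), sigma = +-1 flipping at Poisson
# rate lam.  In the Kac limit (c, lam -> inf, c^2/lam fixed) it approaches
# Brownian motion with long-time variance Var x(t) ~ (c^2/lam) * t.
c, lam = 5.0, 12.5          # c^2/lam = 2.0
T, dt, paths = 4.0, 1e-3, 20000
steps = int(T / dt)

sigma = rng.choice([-1.0, 1.0], size=paths)   # random initial direction
x = np.zeros(paths)
for _ in range(steps):
    x += c * sigma * dt                       # bounded-speed motion
    flips = rng.random(paths) < lam * dt      # Poisson switching events
    sigma[flips] *= -1.0

var = x.var()     # should be close to (c^2/lam)*T = 8 at T = 4
```

Unlike Brownian paths, every trajectory here is bounded by |x| ≤ cT, illustrating the finite propagation velocity of the class.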

  6. Optimal inversion of the generalized Anscombe transformation for Poisson-Gaussian noise.

    PubMed

    Mäkitalo, Markku; Foi, Alessandro

    2013-01-01

    Many digital imaging devices operate by successive photon-to-electron, electron-to-voltage, and voltage-to-digit conversions. These processes are subject to various signal-dependent errors, which are typically modeled as Poisson-Gaussian noise. The removal of such noise can be effected indirectly by applying a variance-stabilizing transformation (VST) to the noisy data, denoising the stabilized data with a Gaussian denoising algorithm, and finally applying an inverse VST to the denoised data. The generalized Anscombe transformation (GAT) is often used for variance stabilization, but its unbiased inverse transformation has not been rigorously studied in the past. We introduce the exact unbiased inverse of the GAT and show that it plays an integral part in ensuring accurate denoising results. We demonstrate that this exact inverse leads to state-of-the-art results without any notable increase in the computational complexity compared to the other inverses. We also show that this inverse is optimal in the sense that it can be interpreted as a maximum likelihood inverse. Moreover, we thoroughly analyze the behavior of the proposed inverse, which also enables us to derive a closed-form approximation for it. This paper generalizes our work on the exact unbiased inverse of the Anscombe transformation, which we have presented earlier for the removal of pure Poisson noise. PMID:22692910
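For the pure-Poisson special case, the effect of the inverse choice is visible in a few lines. The sketch below contrasts the naive algebraic inverse of the Anscombe transformation with the classical asymptotically unbiased one (the exact unbiased inverse studied in the paper is a further refinement not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Variance stabilization of pure Poisson noise with the Anscombe transform
# f(z) = 2*sqrt(z + 3/8), then two textbook inverses applied to the mean.
mu = 20.0                        # true Poisson intensity
z = rng.poisson(mu, size=200000)
f = 2.0 * np.sqrt(z + 3.0 / 8.0)

# Suppose an ideal denoiser returned E[f(z)]; invert that mean value.
fbar = f.mean()
algebraic  = (fbar / 2.0) ** 2 - 3.0 / 8.0   # biased low by ~1/4
asymptotic = (fbar / 2.0) ** 2 - 1.0 / 8.0   # asymptotically unbiased
```

The stabilized data have variance close to 1, and the asymptotic inverse recovers the intensity while the algebraic one is systematically low, which is exactly the bias problem the exact unbiased inverse addresses.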

  7. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient

    NASA Astrophysics Data System (ADS)

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-06-01

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample’s high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use.

  8. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient.

    PubMed

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-01-01

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample's high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use. PMID:27283980

  9. A combined MPI-CUDA parallel solution of linear and nonlinear Poisson-Boltzmann equation.

    PubMed

    Colmenares, José; Galizia, Antonella; Ortiz, Jesús; Clematis, Andrea; Rocchia, Walter

    2014-01-01

    The Poisson-Boltzmann equation models the electrostatic potential generated by fixed charges on a polarizable solute immersed in an ionic solution. This approach is often used in computational structural biology to estimate the electrostatic energetic component of the assembly of molecular biological systems. In the last decades, the amount of data concerning proteins and other biological macromolecules has remarkably increased. To fruitfully exploit these data, a huge computational power is needed as well as software tools capable of exploiting it. It is therefore necessary to move towards high performance computing and to develop proper parallel implementations of already existing and of novel algorithms. Nowadays, workstations can provide an amazing computational power: up to 10 TFLOPS on a single machine equipped with multiple CPUs and accelerators such as Intel Xeon Phi or GPU devices. The actual obstacle to the full exploitation of modern heterogeneous resources is efficient parallel coding and porting of software on such architectures. In this paper, we propose the implementation of a full Poisson-Boltzmann solver based on a finite-difference scheme using different and combined parallel schemes and in particular a mixed MPI-CUDA implementation. Results show great speedups when using the two schemes, achieving an 18.9x speedup using three GPUs. PMID:25013789

  10. Rapid computation of directional wellbore drawdown in a confined aquifer via Poisson resummation

    NASA Astrophysics Data System (ADS)

    Blumenthal, Benjamin J.; Zhan, Hongbin

    2016-08-01

We have derived a rapidly computed analytical solution for drawdown caused by a partially or fully penetrating directional wellbore (vertical, horizontal, or slant) via the Green's function method. The mathematical model assumes an anisotropic, homogeneous, confined, box-shaped aquifer. Any dimension of the box can have one of six possible boundary conditions: 1) both sides no-flux; 2) one side no-flux, one side constant-head; 3) both sides constant-head; 4) one side no-flux; 5) one side constant-head; 6) free boundary conditions. The solution has been optimized for rapid computation via Poisson resummation, derivation of convergence rates, and numerical optimization of integration techniques. Upon application of the Poisson resummation method, we were able to derive two sets of solutions with inverse convergence rates, namely an early-time rapidly convergent series (solution-A) and a late-time rapidly convergent series (solution-B). From this work we were able to link the Green's function method (solution-B) back to image well theory (solution-A). We then derived an equation defining when the convergence rate between solution-A and solution-B is the same, which we termed the switch time. Utilizing the more rapidly convergent solution at the appropriate time, we obtained rapid convergence at all times. We have also shown that one may simplify each of the three infinite series for the three-dimensional solution to 11 terms and still maintain a maximum relative error of less than 10^-14.
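The series-switching idea behind Poisson resummation can be illustrated in one dimension with the Jacobi theta sum, whose Poisson-summation dual converges rapidly exactly where the direct sum does not (an illustration of the principle, not the paper's aquifer solution):

```python
import math

# Poisson summation for the Gaussian sum S(t) = sum_n exp(-pi n^2 t),
# which satisfies the duality S(t) = t**-0.5 * S(1/t).  The direct series
# converges fast at large t ("late time"); the dual (image-sum) series
# converges fast at small t ("early time").
def theta(t, terms):
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * t)
                           for n in range(1, terms + 1))

t = 0.05
direct = theta(t, 200)                  # needs many terms at small t
dual = t ** -0.5 * theta(1.0 / t, 3)    # 3 terms suffice after resummation
```

Picking whichever series converges faster at the evaluation time, with a switch time where the two rates coincide, is the same trick the drawdown solution uses.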

  11. Poisson regression analysis of mortality among male workers at a thorium-processing plant

    SciTech Connect

    Liu, Zhiyuan; Lee, Tze-San; Kotek, T.J.

    1991-12-31

Analyses of mortality among a cohort of 3119 male workers employed between 1915 and 1973 at a thorium-processing plant were updated to the end of 1982. Of the whole group, 761 men were deceased and 2161 men were still alive, while 197 men were lost to follow-up. A total of 250 deaths was added to the 511 deaths observed in the previous study. The standardized mortality ratio (SMR) for all causes of death was 1.12 with 95% confidence interval (CI) of 1.05-1.21. The SMRs were also significantly increased for all malignant neoplasms (SMR = 1.23, 95% CI = 1.04-1.43) and lung cancer (SMR = 1.36, 95% CI = 1.02-1.78). Poisson regression analysis was employed to evaluate the joint effects of job classification, duration of employment, time since first employment, age and year at first employment on mortality from all malignant neoplasms and lung cancer. A comparison of internal and external analyses with the Poisson regression model was also conducted and showed no obvious difference in fitting the data on lung cancer mortality of the thorium workers. The results of the multivariate analysis showed that there was no significant effect of any of the study factors on mortality due to all malignant neoplasms and lung cancer. Therefore, further study of the former thorium workers is needed.
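For reference, an SMR and its exact Poisson confidence interval can be computed with the standard chi-square (Garwood) formulas. The sketch below uses the cohort's all-cause count of 761 deaths and back-solves the expected count from the reported SMR of 1.12; it is a generic calculation, not a reproduction of the study's exact figures:

```python
from scipy.stats import chi2

def smr_ci(observed, expected, alpha=0.05):
    # Exact Poisson (Garwood) confidence interval for SMR = O/E
    smr = observed / expected
    lo = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected)
    hi = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
    return smr, lo, hi

obs = 761                     # all-cause deaths in the cohort
exp_deaths = obs / 1.12       # expected count implied by SMR = 1.12
smr, lo, hi = smr_ci(obs, exp_deaths)
```

The resulting interval is close to the reported 1.05-1.21; small differences reflect rounding in the published expected counts.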

  12. Addition of missing loops and domains to protein models by x-ray solution scattering.

    PubMed Central

    Petoukhov, Maxim V; Eady, Nigel A J; Brown, Katherine A; Svergun, Dmitri I

    2002-01-01

    Inherent flexibility and conformational heterogeneity in proteins can often result in the absence of loops and even entire domains in structures determined by x-ray crystallographic or NMR methods. X-ray solution scattering offers the possibility of obtaining complementary information regarding the structures of these disordered protein regions. Methods are presented for adding missing loops or domains by fixing a known structure and building the unknown regions to fit the experimental scattering data obtained from the entire particle. Simulated annealing was used to minimize a scoring function containing the discrepancy between the experimental and calculated patterns and the relevant penalty terms. In low-resolution models where interface location between known and unknown parts is not available, a gas of dummy residues represents the missing domain. In high-resolution models where the interface is known, loops or domains are represented as interconnected chains (or ensembles of residues with spring forces between the C(alpha) atoms), attached to known position(s) in the available structure. Native-like folds of missing fragments can be obtained by imposing residue-specific constraints. After validation in simulated examples, the methods have been applied to add missing loops or domains to several proteins where partial structures were available. PMID:12496082
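The simulated-annealing minimization of a discrepancy-plus-penalty score can be sketched generically. The toy "score" below, with a known minimum, stands in for the fit-to-scattering term and is entirely hypothetical:

```python
import math
import random

random.seed(3)

# Bare-bones simulated annealing: minimize score = discrepancy + penalty.
# A smooth 2D bowl with minimum at (1, -2) plus a small rugged penalty
# stands in for the scattering-fit score of the paper.
def score(p):
    x, y = p
    discrepancy = (x - 1.0) ** 2 + (y + 2.0) ** 2
    penalty = 0.1 * abs(math.sin(5 * x) * math.sin(5 * y))
    return discrepancy + penalty

p = [0.0, 0.0]
best, best_score = list(p), score(p)
T = 1.0
for _ in range(20000):
    q = [p[0] + random.gauss(0, 0.2), p[1] + random.gauss(0, 0.2)]
    dE = score(q) - score(p)
    if dE < 0 or random.random() < math.exp(-dE / T):   # Metropolis rule
        p = q
        if score(p) < best_score:
            best, best_score = list(p), score(p)
    T *= 0.9997        # geometric cooling schedule
```

In the actual method the move set perturbs dummy-residue or loop coordinates and the penalty terms enforce chain connectivity and excluded volume.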

  13. Additive SMILES-Based Carcinogenicity Models: Probabilistic Principles in the Search for Robust Predictions

    PubMed Central

    Toropov, Andrey A.; Toropova, Alla P.; Benfenati, Emilio

    2009-01-01

Optimal descriptors calculated with the simplified molecular input line entry system (SMILES) have been utilized in modeling of carcinogenicity as continuous values (logTD50). These descriptors can be calculated using correlation weights of SMILES attributes computed by the Monte Carlo method. A considerable subset of these attributes consists of rare attributes, whose use can lead to overtraining. One can avoid the influence of the rare attributes if their correlation weights are fixed to zero. A function, limS, has been defined to identify rare attributes: limS is the minimum number of occurrences in the structures of the training (subtraining) set required to accept an attribute as usable. If an attribute is present fewer than limS times, it is considered "rare" and thus not used. Two systems of building up models were examined: 1. the classic training-test system; 2. balance of correlations for the subtraining and calibration sets (together, they form the original training set; the function of the calibration set is imitation of a preliminary test set). Three random splits into subtraining, calibration, and test sets were analysed. Comparison of the above-mentioned systems has shown that balance of correlations gives more robust prediction of the carcinogenicity for all three splits (split 1: r^2_test = 0.7514, s_test = 0.684; split 2: r^2_test = 0.7998, s_test = 0.600; split 3: r^2_test = 0.7192, s_test = 0.728). PMID:19742127
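The limS rule itself is easy to sketch. The toy below counts single-character "attributes" (the attributes in the paper are richer SMILES fragments) over a tiny hypothetical training set and zeroes the weight of anything seen fewer than limS times:

```python
from collections import Counter

# Hypothetical subtraining set and threshold; real SMILES attributes are
# multi-character fragments with Monte Carlo-optimized correlation weights.
train_smiles = ["CCO", "CCN", "CCCl", "c1ccccc1", "CC(=O)O", "CCBr"]
limS = 2

# Count attribute occurrences across the subtraining structures
counts = Counter(ch for s in train_smiles for ch in s)

# Attributes below the limS threshold get weight 0 (placeholder weight 1.0
# stands in for a Monte Carlo-optimized correlation weight)
weights = {attr: (1.0 if n >= limS else 0.0) for attr, n in counts.items()}
```

Zeroing rare attributes this way keeps the descriptor from memorizing fragments that occur in only one or two training structures.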

  14. Kinetic modeling of the oxidative degradation of additive free PE in bleach disinfected water

    NASA Astrophysics Data System (ADS)

    Mikdam, Aïcha; Colin, Xavier; Billon, Noëlle; Minard, Gaëlle

    2016-05-01

The chemical interactions between PE and bleach were studied at 60°C by immersion in bleach solutions kept at a free chlorine concentration of 100 ppm and a pH of 5 or 7.2. It was found that the polymer undergoes severe oxidation from the earliest weeks of exposure, in a superficial layer whose thickness (about 50-70 µm) is almost independent of the pH value, although the superficial oxidation rate is faster in acidic than in neutral medium. Oxidation leads to the formation and accumulation of a large variety of carbonyl products (mostly ketones and carboxylic acids) and, after a few weeks, to a decrease in the average molar mass due to the large predominance of chain scissions over crosslinking. A scenario was proposed to explain these unexpected results: the non-ionic molecules (Cl2 and ClOH) formed from the disinfectant in the water phase would migrate deeply into PE and dissociate into highly reactive radicals (Cl• and HO•), initiating a radical chain oxidation. A kinetic model was derived from this scenario for predicting the general trends of the oxidation kinetics and its dependence on environmental factors such as temperature, free chlorine concentration, and pH. The validity of this model was successfully checked by comparing the numerical simulations with experimental data.

  15. Synthesis, Characterization, Molecular Modeling, and DNA Interaction Studies of Copper Complex Containing Food Additive Carmoisine Dye.

    PubMed

    Shahabadi, Nahid; Akbari, Alireza; Jamshidbeigi, Mina; Khodarahmi, Reza

    2016-06-01

A copper complex of carmoisine dye, [Cu(carmoisine)2(H2O)2], was synthesized and characterized using physico-chemical and spectroscopic methods. The binding of this complex with calf thymus (ct) DNA was investigated by circular dichroism, absorption studies, emission spectroscopy, and viscosity measurements. UV-vis results confirmed that the Cu complex interacted with DNA to form a ground-state complex, and the observed binding constant (2 × 10^4 M^-1) is more in keeping with groove binding to DNA. Furthermore, the viscosity measurements showed that addition of the complex causes no significant change in DNA viscosity, indicating that an intercalation mode is ruled out. The thermodynamic parameters were calculated by the van't Hoff equation, which demonstrated that hydrogen bonds and van der Waals interactions played major roles in the reaction. The circular dichroism (CD) results suggested that the complex can change the conformation of DNA from a B-like form toward an A-like conformation. The cytotoxicity studies of the carmoisine dye and its copper complex indicated that both had anticancer effects on the HT-29 (colon cancer) cell line and may be new candidates for the treatment of colon cancer. PMID:27152751

  16. Effect of Hydrogen Addition on Methane HCCI Engine Ignition Timing and Emissions Using a Multi-zone Model

    NASA Astrophysics Data System (ADS)

    Wang, Zi-han; Wang, Chun-mei; Tang, Hua-xin; Zuo, Cheng-ji; Xu, Hong-ming

    2009-06-01

    Ignition timing control is of great importance in homogeneous charge compression ignition (HCCI) engines. The effect of hydrogen addition on methane combustion was investigated using a CHEMKIN multi-zone model. Results show that hydrogen addition advances ignition timing and enhances peak pressure and temperature. A brief chemical-kinetic analysis of hydrogen-blended methane is also performed to investigate the scope of its application; it suggests that the OH radical plays an important role in the oxidation. Hydrogen addition increases NOx while decreasing HC and CO emissions. Exhaust gas recirculation (EGR) also advances ignition timing; however, its effects on emissions are generally the opposite. By adjusting the hydrogen addition and EGR rates, the ignition timing can be regulated while maintaining a low emission level. Investigation into the zones suggests that NOx is mostly formed in the core zones, while HC and CO mostly originate in the crevice and the quench layer.

  17. A Tubular Biomaterial Construct Exhibiting a Negative Poisson's Ratio.

    PubMed

    Lee, Jin Woo; Soman, Pranav; Park, Jeong Hun; Chen, Shaochen; Cho, Dong-Woo

    2016-01-01

    Developing functional small-diameter vascular grafts is an important objective in tissue engineering research. In this study, we address the problem of compliance mismatch by designing and developing a 3D tubular construct that has a negative Poisson's ratio νxy (NPR). NPR constructs have the unique ability to expand transversely when pulled axially, resulting in a highly compliant tubular construct. In this work, we used projection stereolithography to 3D-print a planar NPR sheet composed of photosensitive poly(ethylene glycol) diacrylate biomaterial. We used a step-lithography exposure and a stitch process to scale up the projection printing process, and used the cut-missing rib unit design to develop a centimeter-scale NPR sheet, which was rolled up to form a tubular construct. The constructs had Poisson's ratios of -0.6 ≤ νxy ≤ -0.1. The NPR construct also supports higher cellular adhesion than does a construct with positive νxy. Our NPR design offers a significant advance in the development of highly compliant vascular grafts. PMID:27232181
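As a minimal numerical illustration (not code from the study), the Poisson's ratio reported for these constructs is simply the negative ratio of transverse to axial strain; a construct with νxy < 0 widens when stretched:

```python
def poisson_ratio(eps_transverse: float, eps_axial: float) -> float:
    """Poisson's ratio nu_xy = -eps_transverse / eps_axial."""
    if eps_axial == 0:
        raise ValueError("axial strain must be nonzero")
    return -eps_transverse / eps_axial

# Conventional material: pulled axially (+5% strain), it narrows (-1.5%).
conventional = poisson_ratio(-0.015, 0.05)   # +0.3

# NPR (auxetic) construct: pulled axially (+5%), it *expands* (+3%),
# giving nu_xy = -0.6, the lower bound reported above.
auxetic = poisson_ratio(0.03, 0.05)

print(conventional, auxetic)
```

The strain values are hypothetical and chosen only to land inside the -0.6 ≤ νxy ≤ -0.1 range quoted in the abstract.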

  18. Assessment of elliptic solvers for the pressure Poisson equation

    NASA Astrophysics Data System (ADS)

    Strodtbeck, J. P.; Polly, J. B.; McDonough, J. M.

    2008-11-01

    It is well known that as much as 80% of the total arithmetic needed for a solution of the incompressible Navier-Stokes equations can be expended for solving the pressure Poisson equation, and this has long been one of the prime motivations for study of elliptic solvers. In recent years various Krylov-subspace methods have begun to receive wide use because of their rapid convergence rates and automatic generation of iteration parameters. However, it is actually total floating-point arithmetic operations that must be of concern when selecting a solver for CFD, and not simply required number of iterations. In the present study we recast speed of convergence for typical CFD pressure Poisson problems in terms of CPU time spent on floating-point arithmetic and demonstrate that in many cases simple successive-overrelaxation (SOR) methods are as effective as some of the popular Krylov-subspace techniques such as BiCGStab(l) provided optimal SOR iteration parameters are employed; furthermore, SOR procedures require significantly less memory. We then describe some techniques for automatically predicting optimal SOR parameters.
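As a toy sketch of the SOR approach discussed (not the authors' code), the following solves a 2D Poisson problem with homogeneous Dirichlet boundaries; for the unit square the classical optimal relaxation parameter is ω = 2/(1 + sin(πh)), and using it cuts the sweep count sharply relative to Gauss-Seidel (ω = 1):

```python
import numpy as np

def sor_poisson(f, h, omega, tol=1e-8, max_sweeps=10000):
    """Solve the 5-point discrete Poisson equation lap(u) = f on the unit
    square with homogeneous Dirichlet boundaries, by SOR sweeps.
    Returns (u, number_of_sweeps_to_converge)."""
    n = f.shape[0]
    u = np.zeros_like(f)
    for sweep in range(1, max_sweeps + 1):
        max_update = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                gs = 0.25 * (u[i + 1, j] + u[i - 1, j]
                             + u[i, j + 1] + u[i, j - 1]
                             - h * h * f[i, j])          # Gauss-Seidel value
                new = (1.0 - omega) * u[i, j] + omega * gs  # over-relax
                max_update = max(max_update, abs(new - u[i, j]))
                u[i, j] = new
        if max_update < tol:
            return u, sweep
    return u, max_sweeps

# Manufactured solution: u = sin(pi x) sin(pi y), so lap(u) = -2 pi^2 u.
n = 17
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = -2.0 * np.pi ** 2 * exact

omega_opt = 2.0 / (1.0 + np.sin(np.pi * h))      # classical optimal omega
_, sweeps_gs = sor_poisson(f, h, omega=1.0)      # plain Gauss-Seidel
u, sweeps_opt = sor_poisson(f, h, omega=omega_opt)

print(sweeps_gs, sweeps_opt)                     # optimal SOR needs far fewer sweeps
print(np.max(np.abs(u - exact)))                 # O(h^2) discretization error
```

The sweep counts illustrate the abstract's point: the per-sweep cost is identical, so the optimal-ω run also wins in total floating-point work, and the storage is a single array.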

  19. The multisensor PHD filter: II. Erroneous solution via Poisson magic

    NASA Astrophysics Data System (ADS)

    Mahler, Ronald

    2009-05-01

    The theoretical foundation for the probability hypothesis density (PHD) filter is the FISST multitarget differential and integral calculus. The "core" PHD filter presumes a single sensor. Theoretically rigorous formulas for the multisensor PHD filter can be derived using the FISST calculus, but are computationally intractable. A less theoretically desirable solution, the iterated-corrector approximation, must be used instead. Recently, it has been argued that an "elementary" methodology, the "Poisson-intensity approach," renders FISST obsolete. It has further been claimed that the iterated-corrector approximation is suspect, and in its place an allegedly superior "general multisensor intensity filter" has been proposed. In this and a companion paper I demonstrate that it is these claims which are erroneous. The companion paper introduces formulas for the actual "general multisensor intensity filter." In this paper I demonstrate that (1) the "general multisensor intensity filter" fails in important special cases; (2) it will perform badly in even the easiest multitarget tracking problems; and (3) these rather serious missteps suggest that the "Poisson-intensity approach" is inherently faulty.

  20. Poisson's equation solution of Coulomb integrals in atoms and molecules

    NASA Astrophysics Data System (ADS)

    Weatherford, Charles A.; Red, Eddie; Joseph, Dwayne; Hoggan, Philip

    The integral bottleneck in evaluating molecular energies arises from the two-electron contributions. These are difficult and time-consuming to evaluate, especially over exponential type orbitals, used here to ensure the correct behaviour of atomic orbitals. In this work, it is shown that the two-centre Coulomb integrals involved can be expressed as one-electron kinetic-energy-like integrals. This is accomplished using the fact that the Coulomb operator is a Green's function of the Laplacian. The ensuing integrals may be further simplified by defining Coulomb forms for the one-electron potential satisfying Poisson's equation therein. A sum of overlap integrals with the atomic orbital energy eigenvalue as a factor is then obtained to give the Coulomb energy. The remaining questions of translating orbitals involved in three- and four-centre integrals and the evaluation of exchange energy are also briefly discussed. The summation coefficients in Coulomb forms are evaluated using LU decomposition. This algorithm is highly parallel. The Poisson method may be used to calculate Coulomb energy integrals efficiently. For a single processor, gains of CPU time for a given chemical accuracy exceed a factor of 40. This method lends itself to evaluation on a parallel computer.
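The central identity exploited by this approach can be written out explicitly (standard potential theory, consistent with the abstract's description; the notation here is illustrative):

```latex
\nabla^2 \frac{1}{\lvert \mathbf{r} - \mathbf{r}' \rvert}
  = -4\pi\,\delta(\mathbf{r} - \mathbf{r}')
\quad\Longrightarrow\quad
J = \iint \frac{\rho_1(\mathbf{r})\,\rho_2(\mathbf{r}')}
               {\lvert \mathbf{r} - \mathbf{r}' \rvert}\,
    d\mathbf{r}\, d\mathbf{r}'
  = \int \rho_1(\mathbf{r})\, V_2(\mathbf{r})\, d\mathbf{r},
\qquad
\nabla^2 V_2 = -4\pi \rho_2 .
```

Solving Poisson's equation once for the potential V₂ thus reduces the two-electron Coulomb integral J to a one-electron integral; integrating by parts once more (with vanishing boundary terms) gives J = (1/4π) ∫ ∇V₁ · ∇V₂ d**r**, the kinetic-energy-like form referred to above.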