Sample records for zero-inflated Poisson models

  1. A test of inflated zeros for Poisson regression models.

    PubMed

    He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan

    2017-01-01

    Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models, but methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice, yet its type I error often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require fitting a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach not only controls the type I error rate better but also yields more power.
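    As a quick companion to this abstract, the zero-inflated Poisson mass function simply adds a point mass at zero to an ordinary Poisson distribution, which is what makes the observed zero fraction exceed the Poisson prediction. A minimal sketch (the parameter values are arbitrary, not from the paper):

```python
import math

def zip_pmf(k, lam, pi):
    """P(Y = k) under a zero-inflated Poisson with Poisson mean `lam`
    and zero-inflation probability `pi`."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1 - pi) * poisson
    return (1 - pi) * poisson

# A plain Poisson(2.0) yields zeros with probability e^-2 (about 0.135);
# mixing in 30% structural zeros raises that to about 0.395.
plain_zero = zip_pmf(0, 2.0, 0.0)
inflated_zero = zip_pmf(0, 2.0, 0.3)
```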

  2. Application of zero-inflated Poisson mixed models in prognostic factors of hepatitis C.

    PubMed

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

    In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the solutions that help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors. The results showed that the mixed zero-inflated Poisson model provided the best fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.

  3. Marginalized zero-inflated Poisson models with missing covariates.

    PubMed

    Benecha, Habtamu K; Preisser, John S; Divaris, Kimon; Herring, Amy H; Das, Kalyan

    2018-05-11

    Unlike zero-inflated Poisson regression, marginalized zero-inflated Poisson (MZIP) models for counts with excess zeros provide estimates with direct interpretations for the overall effects of covariates on the marginal mean. In the presence of missing covariates, MZIP and many other count data models are ordinarily fitted using complete case analysis methods due to lack of appropriate statistical methods and software. This article presents an estimation method for MZIP models with missing covariates. The method, which is applicable to other missing data problems, is illustrated and compared with complete case analysis by using simulations and dental data on the caries preventive effects of a school-based fluoride mouthrinse program. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of these are "true zeros," indicating that the drug-adverse event pair cannot occur; they are distinguished from the remaining zero counts, which simply indicate that the pair has not yet occurred or has not yet been reported. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, also called signals. The maximum likelihood estimates of the model parameters are obtained using the expectation-maximization algorithm. The test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test performs similarly to the Poisson model based likelihood ratio test when the estimated percentage of true zeros in the database is small. Both methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
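    The expectation-maximization fitting mentioned in this abstract is easy to sketch in the intercept-only case: the E-step computes the probability that each observed zero is structural, and the M-step updates the mixing weight and the Poisson mean. A minimal sketch with invented data (none of the paper's likelihood ratio test or stratification machinery is included):

```python
import math

def zip_em(counts, tol=1e-10, max_iter=10000):
    """Fit an intercept-only zero-inflated Poisson by EM.

    Returns (pi, lam): the estimated zero-inflation probability
    and Poisson mean."""
    n = len(counts)
    n0 = sum(1 for y in counts if y == 0)
    total = sum(counts)
    pi, lam = 0.5 * n0 / n, total / n  # crude starting values
    for _ in range(max_iter):
        # E-step: probability that an observed zero is structural
        z = pi / (pi + (1 - pi) * math.exp(-lam))
        # M-step: update the mixing weight and the Poisson mean
        pi_new = n0 * z / n
        lam_new = total / (n - n0 * z)
        if abs(pi_new - pi) + abs(lam_new - lam) < tol:
            pi, lam = pi_new, lam_new
            break
        pi, lam = pi_new, lam_new
    return pi, lam

# Invented counts with a clear excess of zeros (60% zeros, mean 0.9)
counts = [0] * 60 + [1] * 10 + [2] * 15 + [3] * 10 + [4] * 5
pi_hat, lam_hat = zip_em(counts)
```

    At the EM fixed point, the fitted mean (1 - pi) * lam matches the sample mean and the fitted zero probability matches the observed zero fraction, which is a handy convergence check.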

  5. Zero-inflated Conway-Maxwell Poisson Distribution to Analyze Discrete Data.

    PubMed

    Sim, Shin Zhu; Gupta, Ramesh C; Ong, Seng Huat

    2018-01-09

    In this paper, we study the zero-inflated Conway-Maxwell Poisson (ZICMP) distribution and develop a regression model. Score and likelihood ratio tests are also implemented for testing the inflation/deflation parameter. Simulation studies are carried out to examine the performance of these tests. A data example is presented to illustrate the concepts. In this example, the proposed model is compared to the well-known zero-inflated Poisson (ZIP) and zero-inflated generalized Poisson (ZIGP) regression models. It is shown that the fit of the ZICMP model is comparable to or better than that of these models.
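    For reference, the Conway-Maxwell-Poisson family adds a dispersion parameter ν to the Poisson by raising the factorial in the denominator to the power ν, and ν = 1 recovers the ordinary Poisson; zero inflation then adds a point mass at zero exactly as in ZIP. A minimal sketch with a truncated normalizing constant (the truncation length and parameter values are arbitrary choices, not from the paper):

```python
import math

def cmp_pmf(k, lam, nu, terms=200):
    """Conway-Maxwell-Poisson pmf. The normalizing constant
    Z(lam, nu) = sum_j lam**j / (j!)**nu is truncated at `terms` terms
    and accumulated via term ratios to avoid factorial overflow
    (requires k < terms)."""
    z, term, pk = 0.0, 1.0, 0.0
    for j in range(terms):
        if j == k:
            pk = term  # lam**k / (k!)**nu, built up incrementally
        z += term
        term *= lam / (j + 1) ** nu
    return pk / z

def zicmp_pmf(k, lam, nu, pi):
    """Zero-inflated CMP: an extra point mass `pi` at zero."""
    p = (1 - pi) * cmp_pmf(k, lam, nu)
    return p + pi if k == 0 else p
```

    With nu = 1 this collapses to the ordinary ZIP; nu > 1 gives underdispersion and nu < 1 overdispersion relative to the Poisson.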

  6. Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach.

    PubMed

    Mohammadi, Tayeb; Kheiri, Soleiman; Sedehi, Morteza

    2016-01-01

    Recognizing the factors affecting the numbers of blood donations and blood deferrals has a major impact on blood transfusion. There is a positive correlation between the variables "number of blood donations" and "number of blood deferrals": as the number of returns for donation increases, so does the number of blood deferrals. On the other hand, because many donors never return to donate, there is an excess zero frequency for both of the above-mentioned variables. In this study, in order to account for the correlation and to explain the excess zero frequency, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donations and the number of blood deferrals. The data were analyzed using the Bayesian approach, applying noninformative priors, in the presence and absence of covariates. The parameters of the model, that is, the correlation, the zero-inflation parameter, and the regression coefficients, were estimated through MCMC simulation. Finally, the double-Poisson, bivariate Poisson, and bivariate zero-inflated Poisson models were fitted to the data and compared using the deviance information criterion (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models.

  8. Zero-inflated Poisson regression models for QTL mapping applied to tick-resistance in a Gyr × Holstein F2 population

    PubMed Central

    Silva, Fabyano Fonseca; Tunin, Karen P.; Rosa, Guilherme J.M.; da Silva, Marcos V.B.; Azevedo, Ana Luisa Souza; da Silva Verneque, Rui; Machado, Marco Antonio; Packer, Irineu Umberto

    2011-01-01

    Nowadays, an important and interesting alternative in the control of tick infestation in cattle is to select resistant animals and identify the respective quantitative trait loci (QTLs) and DNA markers for later use in breeding programs. The number of ticks per animal is a discrete count trait, which could potentially follow a Poisson distribution. However, in the case of an excess of zeros, due to the occurrence of several noninfected animals, the zero-inflated Poisson (ZIP) and generalized zero-inflated Poisson (GZIP) distributions may provide a better description of the data. Thus, the objective here was to compare, through simulation, Poisson and ZIP models (simple and generalized) with classical approaches for QTL mapping with count phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr × Holstein) population. It was concluded that, when working with zero-inflated data, it is advisable to use the simple or generalized ZIP model for analysis. On the other hand, when working with data containing zeros but not zero-inflated, the Poisson model or a data-transformation approach, such as a square-root or Box-Cox transformation, is applicable. PMID:22215960

  9. IRT-ZIP Modeling for Multivariate Zero-Inflated Count Data

    ERIC Educational Resources Information Center

    Wang, Lijuan

    2010-01-01

    This study introduces an item response theory-zero-inflated Poisson (IRT-ZIP) model to investigate psychometric properties of multiple items and predict individuals' latent trait scores for multivariate zero-inflated count data. In the model, two link functions are used to capture two processes of the zero-inflated count data. Item parameters are…

  10. Structural zeroes and zero-inflated models.

    PubMed

    He, Hua; Tang, Wan; Wang, Wenjuan; Crits-Christoph, Paul

    2014-08-01

    In psychosocial and behavioral studies, count outcomes recording the frequencies of the occurrence of some health or behavior outcome (such as the number of unprotected sexual behaviors during a period of time) often contain a preponderance of zeros because of the presence of 'structural zeros', which occur when some subjects are not at risk for the behavior of interest. Unlike random zeros (responses that can be greater than zero, but are zero due to sampling variability), structural zeros are usually very different, both statistically and clinically. False interpretations of results and study findings may result if differences between the two types of zeros are ignored. However, in practice, the status of the structural zeros is often not observed, and this latent nature complicates the data analysis. In this article, we focus on one model, the zero-inflated Poisson (ZIP) regression model, that is commonly used to address zero-inflated data. We first give a brief overview of the issues of structural zeros and the ZIP model. We then give an illustration of ZIP with data from a study on HIV-risk sexual behaviors among adolescent girls. Sample code in SAS and Stata is also included to help perform and explain ZIP analyses.
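    In the spirit of the SAS/Stata sample code the article mentions (the Python sketch below is illustrative, not the article's code), Bayes' rule under a fitted ZIP gives the probability that a particular observed zero is structural rather than random:

```python
import math

def prob_structural_zero(lam, pi):
    """P(zero is structural | Y = 0) = pi / (pi + (1 - pi) * exp(-lam))
    under a ZIP with Poisson mean `lam` and mixing weight `pi`."""
    return pi / (pi + (1 - pi) * math.exp(-lam))

# With 25% structural zeros and a Poisson mean of 3, an observed zero
# is structural roughly 87% of the time.
p_example = prob_structural_zero(3.0, 0.25)
```

    The larger the Poisson mean, the less plausible a random zero becomes, so the same observed zero is more likely to be structural.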

  11. Classifying next-generation sequencing data using a zero-inflated Poisson model.

    PubMed

    Zhou, Yan; Wan, Xiang; Zhang, Baoxue; Tong, Tiejun

    2018-04-15

    With the development of high-throughput techniques, RNA-sequencing (RNA-seq) is becoming increasingly popular as an alternative for gene expression analysis, such as RNA profiling and classification. Identifying which type of disease a new patient has from RNA-seq data has been recognized as a vital problem in medical research. As RNA-seq data are discrete, statistical methods developed for classifying microarray data cannot be readily applied to RNA-seq data classification. In 2011, Witten proposed a Poisson linear discriminant analysis (PLDA) to classify RNA-seq data. Note, however, that count datasets are frequently characterized by excess zeros in real RNA-seq or microRNA sequence data (e.g. when the sequencing depth is insufficient, or for small RNAs 18-30 nucleotides in length). Therefore, it is desirable to develop a new model to analyze RNA-seq data with an excess of zeros. In this paper, we propose a Zero-Inflated Poisson Logistic Discriminant Analysis (ZIPLDA) for RNA-seq data with an excess of zeros. The new method assumes that the data are from a mixture of two distributions: one is a point mass at zero, and the other follows a Poisson distribution. We then consider a logistic relation between the probability of observing zeros and the mean of the genes and the sequencing depth in the model. Simulation studies show that the proposed method performs better than, or at least as well as, the existing methods in a wide range of settings. Two real datasets, a breast cancer RNA-seq dataset and a microRNA-seq dataset, are also analyzed, and the results agree with the simulations in showing that our proposed method outperforms the existing competitors. The software is available at http://www.math.hkbu.edu.hk/∼tongt. xwan@comp.hkbu.edu.hk or tongt@hkbu.edu.hk. Supplementary data are available at Bioinformatics online.
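    The classification idea in this abstract reduces, in its simplest form, to scoring a new sample under each class's ZIP parameters and choosing the class with the highest likelihood. A toy sketch under strong simplifying assumptions (independent genes, known parameters, equal class priors, no sequencing-depth adjustment, unlike the full ZIPLDA method; the class labels and parameter values are invented):

```python
import math

def zip_loglik(y, lam, pi):
    """Log-likelihood of a single count y under ZIP(lam, pi)."""
    pois = math.exp(-lam) * lam ** y / math.factorial(y)
    p = pi + (1 - pi) * pois if y == 0 else (1 - pi) * pois
    return math.log(p)

def classify(sample, class_params):
    """Assign a vector of counts (one per gene) to the class with the
    highest total ZIP log-likelihood, assuming equal class priors and
    independent genes. `class_params` maps label -> [(lam, pi), ...]."""
    def score(label):
        return sum(zip_loglik(y, lam, pi)
                   for y, (lam, pi) in zip(sample, class_params[label]))
    return max(class_params, key=score)

# Two hypothetical classes whose two genes swap expression profiles
params = {"tumor": [(0.5, 0.3), (5.0, 0.1)],
          "normal": [(5.0, 0.1), (0.5, 0.3)]}
```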

  12. Semiparametric bivariate zero-inflated Poisson models with application to studies of abundance for multiple species

    USGS Publications Warehouse

    Arab, Ali; Holan, Scott H.; Wikle, Christopher K.; Wildhaber, Mark L.

    2012-01-01

    Ecological studies involving counts of abundance, presence–absence or occupancy rates often produce data having a substantial proportion of zeros. Furthermore, these types of processes are typically multivariate and only adequately described by complex nonlinear relationships involving externally measured covariates. Ignoring these aspects of the data and implementing standard approaches can lead to models that fail to provide adequate scientific understanding of the underlying ecological processes, possibly resulting in a loss of inferential power. One method of dealing with data having excess zeros is to consider the class of univariate zero-inflated generalized linear models. However, this class of models fails to address the multivariate and nonlinear aspects associated with the data usually encountered in practice. Therefore, we propose a semiparametric bivariate zero-inflated Poisson model that takes into account both of these data attributes. The general modeling framework is hierarchical Bayes and is suitable for a broad range of applications. We demonstrate the effectiveness of our model through a motivating example on modeling catch per unit area for multiple species using data from the Missouri River Benthic Fishes Study, implemented by the United States Geological Survey.

  13. A comparison between Poisson and zero-inflated Poisson regression models with an application to number of black spots in Corriedale sheep

    PubMed Central

    Naya, Hugo; Urioste, Jorge I; Chang, Yu-Mei; Rodrigues-Motta, Mariana; Kremer, Roberto; Gianola, Daniel

    2008-01-01

    Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP) models under four simulation scenarios. All models performed reasonably well under the same scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with a residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to a small number of rams in the dataset, the median of this variance suggested a scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age-related processes seem to drive the number of dark spots in this breed of sheep. PMID:18558072

  14. Poisson, Poisson-gamma and zero-inflated regression models of motor vehicle crashes: balancing statistical fit and theory.

    PubMed

    Lord, Dominique; Washington, Simon P; Ivan, John N

    2005-01-01

    There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions that accompany each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows Bernoulli trials with unequal probabilities of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data.
A simulation experiment is then conducted to demonstrate
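    The "Poisson trials" argument in this abstract rests on a classical fact: the number of successes in many independent Bernoulli trials with small, unequal probabilities is approximately Poisson, with the total-variation error bounded by the sum of squared probabilities (Le Cam's inequality). A minimal numerical check with invented probabilities:

```python
import math

# Hypothetical per-entity crash probabilities: small and unequal
ps = [0.001 * (i % 10 + 1) for i in range(1000)]

# Exact probability of observing zero crashes across all trials
exact_zero = 1.0
for p in ps:
    exact_zero *= 1.0 - p

# Poisson approximation with matching mean
lam = sum(ps)
approx_zero = math.exp(-lam)

# Le Cam's inequality bounds the approximation error
le_cam_bound = sum(p * p for p in ps)
```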

  15. Modeling animal-vehicle collisions using diagonal inflated bivariate Poisson regression.

    PubMed

    Lao, Yunteng; Wu, Yao-Jan; Corey, Jonathan; Wang, Yinhai

    2011-01-01

    Two types of animal-vehicle collision (AVC) data are commonly adopted for AVC-related risk analysis research: reported AVC data and carcass removal data. One issue with these two data sets is that previous studies found significant discrepancies between them. In order to model these two types of data together and provide a better understanding of highway AVCs, this study adopts a diagonal inflated bivariate Poisson regression method, an inflated version of the bivariate Poisson regression model, to fit the reported AVC and carcass removal data sets collected in Washington State during 2002-2006. The diagonal inflated bivariate Poisson model can not only model paired data with correlation, but also handle under- or overdispersed data sets. Compared with three other types of models (double Poisson, bivariate Poisson, and zero-inflated double Poisson), the diagonal inflated bivariate Poisson model demonstrates its capability of fitting two data sets with remarkable overlapping portions resulting from the same stochastic process. Therefore, the diagonal inflated bivariate Poisson model provides researchers with a new approach to investigating AVCs from a different perspective involving the three distribution parameters (λ1, λ2, and λ3). The modeling results show the impacts of traffic elements, geometric design, and geographic characteristics on the occurrences of both reported AVCs and carcass removals. It is found that increases in some associated factors, such as speed limit, annual average daily traffic, and shoulder width, increase the numbers of reported AVCs and carcass removals. Conversely, the presence of some geometric factors, such as rolling and mountainous terrain, decreases the number of reported AVCs. Published by Elsevier Ltd.
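    The three parameters λ1, λ2, and λ3 come from the trivariate-reduction construction of the bivariate Poisson, and diagonal inflation then mixes in extra mass on the cells where both counts agree. A minimal sketch (the parameter values are arbitrary, and the regression layer of the actual method is omitted):

```python
import math

def bivariate_poisson_pmf(x, y, lam1, lam2, lam3):
    """Trivariate-reduction bivariate Poisson: X = U1 + U3, Y = U2 + U3
    with independent U_i ~ Poisson(lam_i), so cov(X, Y) = lam3."""
    c = math.exp(-(lam1 + lam2 + lam3))
    s = 0.0
    for k in range(min(x, y) + 1):
        s += (lam1 ** (x - k) / math.factorial(x - k)
              * lam2 ** (y - k) / math.factorial(y - k)
              * lam3 ** k / math.factorial(k))
    return c * s

def diagonal_inflated_pmf(x, y, lam1, lam2, lam3, p, diag):
    """Mix extra probability mass onto the diagonal: with weight `p`,
    draw from the discrete distribution `diag` over cells (j, j)."""
    base = (1 - p) * bivariate_poisson_pmf(x, y, lam1, lam2, lam3)
    if x == y and x < len(diag):
        base += p * diag[x]
    return base
```

    Setting lam3 = 0 recovers two independent Poisson counts, which is a convenient sanity check.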

  16. Mapping species abundance by a spatial zero-inflated Poisson model: a case study in the Wadden Sea, the Netherlands.

    PubMed

    Lyashevska, Olga; Brus, Dick J; van der Meer, Jaap

    2016-01-01

    The objective of the study was to provide a general procedure for mapping species abundance when data are zero-inflated and spatially correlated counts. The bivalve species Macoma balthica was observed on a 500×500 m grid in the Dutch part of the Wadden Sea. In total, 66% of the 3451 counts were zeros. A zero-inflated Poisson mixture model was used to relate counts to environmental covariates. Two models were considered, one with fewer covariates (model "small") than the other (model "large"). The models contained two processes: a Bernoulli (species prevalence) and a Poisson (species intensity, when the Bernoulli process predicts presence). The model was used to make predictions for sites where only environmental data are available. Predicted prevalences and intensities show that the model "small" predicts lower mean prevalence and higher mean intensity than the model "large". Yet, the product of prevalence and intensity, which might be called the unconditional intensity, is very similar. Cross-validation showed that the model "small" performed slightly better, but the difference was small. The proposed methodology might be generally applicable, but is computer intensive.
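    The prevalence-times-intensity decomposition described here can be made concrete with a toy calculation; the numbers below are invented, chosen only to mirror the reported pattern in which lower prevalence paired with higher intensity yields nearly the same product:

```python
# Hypothetical fitted values mimicking the abstract's two models:
# "small" has lower prevalence but higher conditional intensity.
small = {"prevalence": 0.40, "intensity": 30.0}
large = {"prevalence": 0.48, "intensity": 25.0}

def unconditional_intensity(m):
    """Expected count per site: P(presence) times the conditional Poisson mean."""
    return m["prevalence"] * m["intensity"]

# The two models disagree on each factor but agree on the product
ui_small = unconditional_intensity(small)
ui_large = unconditional_intensity(large)
```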

  17. Full Bayes Poisson gamma, Poisson lognormal, and zero inflated random effects models: Comparing the precision of crash frequency estimates.

    PubMed

    Aguero-Valverde, Jonathan

    2013-01-01

    In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero inflated models. This research compares random effects, zero inflated, and zero inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of the precision of posterior crash frequency estimates, since the precision of these estimates is vital for ranking sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset analyzed, it was found that once random effects are included in the zero inflated models, the probability of being in the zero state is drastically reduced, and the zero inflated models degenerate to their non-zero-inflated counterparts. Also, fixing the random effects over time significantly increases the fit of the models and the precision of the crash frequency estimates. It was found that the rankings of the fixed-over-time random effects models are very consistent with one another. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments at the top of the ranking. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Zero-inflated count models for longitudinal measurements with heterogeneous random effects.

    PubMed

    Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M

    2017-08-01

    Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention- and covariate-specific heterogeneity can produce biased covariate and random effect estimates. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States, with 1383 individuals.

  19. Modeling Zero-Inflated and Overdispersed Count Data: An Empirical Study of School Suspensions

    ERIC Educational Resources Information Center

    Desjardins, Christopher David

    2016-01-01

    The purpose of this article is to develop a statistical model that best explains variability in the number of school days suspended. Number of school days suspended is a count variable that may be zero-inflated and overdispersed relative to a Poisson model. Four models were examined: Poisson, negative binomial, Poisson hurdle, and negative…

  20. Conditional modeling of antibody titers using a zero-inflated poisson random effects model: application to Fabrazyme.

    PubMed

    Bonate, Peter L; Sung, Crystal; Welch, Karen; Richards, Susan

    2009-10-01

    Patients who are exposed to biotechnology-derived therapeutics often develop antibodies to the therapeutic, the magnitude of which is assessed by measuring antibody titers. A statistical approach for analyzing antibody titer data conditional on seroconversion is presented. The proposed method is to first transform the antibody titer data based on a geometric series using a common ratio of 2 and a scale factor of 50, and then analyze the exponent using a zero-inflated or hurdle model assuming a Poisson or negative binomial distribution with random effects to account for patient heterogeneity. Patient-specific covariates can be used to model the probability of developing an antibody response, i.e., seroconversion, as well as the magnitude of the antibody titer itself. The method was illustrated using antibody titer data from 87 male seroconverted Fabry patients receiving Fabrazyme. Titers from five clinical trials were collected over 276 weeks of therapy, with anti-Fabrazyme IgG titers ranging from 100 to 409,600 after exclusion of seronegative patients. The best model to explain seroconversion was a zero-inflated Poisson (ZIP) model in which cumulative dose (under a constant dose regimen of dosing every 2 weeks) influenced the probability of seroconversion. There was an 80% chance of seroconversion when the cumulative dose reached 210 mg (90% confidence interval: 194-226 mg). No difference in antibody titers was noted between Japanese and Western patients. Once seroconverted, antibody titers did not remain constant but decreased in an exponential manner from an initial magnitude to a new, lower steady-state value. The expected titer after the new steady-state titer had been achieved was 870 (90% CI: 630-1109). The half-life to the new steady-state value after seroconversion was 44 weeks (90% CI: 17-70 weeks). Time to seroconversion did not appear to be correlated with titer at the time of seroconversion. The method can be used to adequately model antibody titer data.
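    The titer transformation described here is simple to state in code: reported titers lie on the geometric series 50·2^k, so the analyzed exponent is recovered with a base-2 logarithm. A minimal sketch (the function names are ours, not from the paper):

```python
import math

def titer_to_exponent(titer, scale=50, ratio=2):
    """Recover the exponent k from a titer on the series scale * ratio**k."""
    return round(math.log(titer / scale, ratio))

def exponent_to_titer(k, scale=50, ratio=2):
    """Inverse transform: map the analyzed exponent back to a titer."""
    return scale * ratio ** k

# The reported titer range 100 to 409,600 corresponds to exponents 1 to 13.
lo, hi = titer_to_exponent(100), titer_to_exponent(409600)
```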

  1. Marginal regression models for clustered count data based on zero-inflated Conway-Maxwell-Poisson distribution with applications.

    PubMed

    Choo-Wosoba, Hyoyoung; Levy, Steven M; Datta, Somnath

    2016-06-01

    Community water fluoridation is an important public health measure to prevent dental caries, but it continues to be somewhat controversial. The Iowa Fluoride Study (IFS) is a longitudinal study of a cohort of Iowa children that began in 1991. The main purposes of this study (http://www.dentistry.uiowa.edu/preventive-fluoride-study) were to quantify fluoride exposures from both dietary and nondietary sources and to associate longitudinal fluoride exposures with dental fluorosis (spots on teeth) and dental caries (cavities). We analyze a subset of the IFS data using a marginal regression model with a zero-inflated version of the Conway-Maxwell-Poisson (ZICMP) distribution for count data exhibiting excessive zeros and a wide range of dispersion patterns. We introduce two estimation methods for fitting the ZICMP marginal regression model. Finite sample behaviors of the estimators and the resulting confidence intervals are studied using extensive simulation studies. We apply our methodologies to the dental caries data. Our novel modeling, incorporating zero inflation, clustering, and overdispersion, sheds new light on the effect of community water fluoridation and other factors. We also include a second application of our methodology to a genomic (next-generation sequencing) dataset that exhibits underdispersion. © 2015, The International Biometric Society.

  2. Zero-Inflated Poisson Modeling of Fall Risk Factors in Community-Dwelling Older Adults.

    PubMed

    Jung, Dukyoo; Kang, Younhee; Kim, Mi Young; Ma, Rye-Won; Bhandari, Pratibha

    2016-02-01

    The aim of this study was to identify risk factors for falls among community-dwelling older adults. The study used a cross-sectional descriptive design. Self-report questionnaires were used to collect data from 658 community-dwelling older adults, which were analyzed using logistic and zero-inflated Poisson (ZIP) regression. Perceived health status was a significant factor in the count model, and fall efficacy emerged as a significant predictor in the logistic model. The findings suggest that fall efficacy is important for predicting not only faller and nonfaller status but also fall counts in older adults who may or may not have experienced a previous fall. The fall predictors identified in this study, perceived health status and fall efficacy, indicate the need for fall-prevention programs tailored to address both the physical and psychological issues unique to older adults. © The Author(s) 2014.

  3. A new multivariate zero-adjusted Poisson model with applications to biomedicine.

    PubMed

    Liu, Yin; Tian, Guo-Liang; Tang, Man-Lai; Yuen, Kam Chuen

    2018-05-25

    Although advances have recently been made in modeling multivariate count data, existing models still have several limitations: (i) the multivariate Poisson log-normal model (Aitchison and Ho, ) cannot be used to fit multivariate count data with excess zero-vectors; (ii) the multivariate zero-inflated Poisson (ZIP) distribution (Li et al., 1999) cannot be used to model zero-truncated/deflated count data and is difficult to apply to high-dimensional cases; (iii) the Type I multivariate zero-adjusted Poisson (ZAP) distribution (Tian et al., 2017) can only model multivariate count data with a special correlation structure in which the component correlations are all positive or all negative. In this paper, we first introduce a new multivariate ZAP distribution, based on a multivariate Poisson distribution, which allows a more flexible dependency structure between components; that is, some of the correlation coefficients can be positive while others are negative. We then develop its important distributional properties and provide efficient statistical inference methods for the multivariate ZAP model with or without covariates. Two real data examples in biomedicine are used to illustrate the proposed methods. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Some findings on zero-inflated and hurdle poisson models for disease mapping.

    PubMed

    Corpas-Burgos, Francisca; García-Donato, Gonzalo; Martinez-Beneito, Miguel A

    2018-05-27

    Zero excess in the study of geographically referenced mortality data sets has been the focus of considerable attention in the literature, with zero-inflation being the most common procedure to handle this lack of fit. Although hurdle models have also been used in disease mapping studies, they are used less often. We show in this paper that models using particular treatments of zero excesses are often required for achieving appropriate fits in regular mortality studies since, otherwise, geographical units with low expected counts are oversmoothed. However, as also shown, an indiscriminate treatment of zero excess may be unnecessary and problematic to implement. In this regard, we find that naive zero-inflation and hurdle models, without an explicit modeling of the probabilities of zeroes, do not fix zero-excess problems well enough and are clearly unsatisfactory. The results strongly suggest the need for an explicit modeling of zero probabilities that vary across areal units. Unfortunately, these more flexible modeling strategies can easily lead to improper posterior distributions, as we prove in several theoretical results. Those procedures have been repeatedly used in the disease mapping literature, and one should bear these issues in mind in order to propose valid models. We finally propose several valid modeling alternatives, in line with these results, that are suitable for fitting zero excesses. We show that these proposals fix zero-excess problems and correct the oversmoothing of risks in sparsely populated units, depicting geographic patterns better suited to the data. Copyright © 2018 John Wiley & Sons, Ltd.

  5. Functional linear models for zero-inflated count data with application to modeling hospitalizations in patients on dialysis.

    PubMed

    Sentürk, Damla; Dalrymple, Lorien S; Nguyen, Danh V

    2014-11-30

    We propose functional linear models for zero-inflated count data with a focus on the functional hurdle and functional zero-inflated Poisson (ZIP) models. Although the hurdle model assumes the counts come from a mixture of a degenerate distribution at zero and a zero-truncated Poisson distribution, the ZIP model considers a mixture of a degenerate distribution at zero and a standard Poisson distribution. We extend the generalized functional linear model framework with a functional predictor and multiple cross-sectional predictors to model counts generated by a mixture distribution. We propose an estimation procedure for functional hurdle and ZIP models, called penalized reconstruction, geared towards error-prone and sparsely observed longitudinal functional predictors. The approach relies on dimension reduction and pooling of information across subjects involving basis expansions and penalized maximum likelihood techniques. The developed functional hurdle model is applied to modeling hospitalizations within the first 2 years from initiation of dialysis, with a high percentage of zeros, in the Comprehensive Dialysis Study participants. Hospitalization counts are modeled as a function of sparse longitudinal measurements of serum albumin concentrations, patient demographics, and comorbidities. Simulation studies are used to study finite sample properties of the proposed method and include comparisons with an adaptation of standard principal components regression. Copyright © 2014 John Wiley & Sons, Ltd.

  6. Hurdle models for multilevel zero-inflated data via h-likelihood.

    PubMed

    Molas, Marek; Lesaffre, Emmanuel

    2010-12-30

    Count data often exhibit overdispersion. One type of overdispersion arises when there is an excess of zeros in comparison with the standard Poisson distribution. Zero-inflated Poisson and hurdle models have been proposed to perform a valid likelihood-based analysis to account for the surplus of zeros. Further, data often arise in clustered, longitudinal or multiple-membership settings. The proper analysis needs to reflect the design of a study. Typically random effects are used to account for dependencies in the data. We examine the h-likelihood estimation and inference framework for hurdle models with random effects for complex designs. We extend the h-likelihood procedures to fit hurdle models, thereby extending h-likelihood to truncated distributions. Two applications of the methodology are presented. Copyright © 2010 John Wiley & Sons, Ltd.
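
    Setting aside the random effects and h-likelihood machinery, the basic two-part hurdle fit is easy to illustrate: one model for clearing the zero hurdle and a zero-truncated Poisson likelihood for the positive counts. A minimal, intercept-only sketch on simulated data (all names and parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

rng = np.random.default_rng(7)
n, p_pos, lam_true = 2000, 0.6, 2.5

# Simulate hurdle data: clear the hurdle with probability 0.6; positives
# come from a zero-truncated Poisson(2.5), drawn here by rejection.
is_pos = rng.uniform(size=n) < p_pos
pos_draws = []
while len(pos_draws) < is_pos.sum():
    d = rng.poisson(lam_true, size=100)
    pos_draws.extend(int(v) for v in d[d > 0])
y = np.zeros(n, dtype=int)
y[is_pos] = pos_draws[:is_pos.sum()]

# Part 1: hurdle probability (intercept-only, so just the sample proportion).
p_hat = (y > 0).mean()

# Part 2: zero-truncated Poisson MLE for the positive counts.
pos = y[y > 0]

def neg_loglik(lam):
    # log pmf of a zero-truncated Poisson:
    # y*log(lam) - lam - log(y!) - log(1 - exp(-lam))
    return -np.sum(pos * np.log(lam) - lam - gammaln(pos + 1)
                   - np.log1p(-np.exp(-lam)))

lam_hat = minimize_scalar(neg_loglik, bounds=(0.01, 20), method='bounded').x
print(round(p_hat, 3), round(lam_hat, 3))
```

    In the paper's setting each part additionally carries covariates and random effects, but the separability of the two likelihood components is the same.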

  7. Applying the zero-inflated Poisson model with random effects to detect abnormal rises in school absenteeism indicating infectious diseases outbreak.

    PubMed

    Song, X X; Zhao, Q; Tao, T; Zhou, C M; Diwan, V K; Xu, B

    2018-05-30

    Records of absenteeism from primary schools are valuable data for infectious disease surveillance. However, analysis of absenteeism data is complicated by clustering at zero, non-independence, and overdispersion. This study aimed to generate an appropriate model for the absenteeism data collected in a European Commission-funded project for infectious disease surveillance in rural China and to evaluate the validity and timeliness of the resulting model for early warnings of infectious disease outbreaks. Four steps were taken: (1) building a 'well-fitting' model with the zero-inflated Poisson model with random effects (ZIP-RE) using the absenteeism data from the first implementation year; (2) applying the resulting model to predict the 'expected' number of absenteeism events in the second implementation year; (3) computing the differences between the observations and the expected values (O-E values) to generate an alternative series of data; (4) evaluating the early warning validity and timeliness of the observational data and the model-based O-E values via the EARS-3C algorithms with regard to the detection of real cluster events. The results indicate that ZIP-RE and its corresponding O-E values can improve the detection of aberrations, reduce false-positive signals, and are applicable to zero-inflated data.
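
    The ZIP-RE fit requires specialized software, but steps (3) and (4) can be illustrated with a simplified alert rule: compute O-E values against model predictions and flag days exceeding a short baseline window's mean by three standard deviations. This is a simplification of the EARS C-family rules, and all numbers below are illustrative stand-ins.

```python
import numpy as np

def c1_alerts(series, baseline=7, threshold=3.0):
    """Flag time points exceeding the baseline-window mean by more than
    `threshold` baseline standard deviations (a simplified EARS-C1-style rule)."""
    series = np.asarray(series, dtype=float)
    alerts = []
    for t in range(baseline, len(series)):
        window = series[t - baseline:t]
        mu, sd = window.mean(), max(window.std(ddof=1), 0.5)  # guard against sd == 0
        if series[t] > mu + threshold * sd:
            alerts.append(t)
    return alerts

observed = np.array([0, 1, 0, 0, 2, 1, 0, 0, 1, 9, 0, 1])  # daily absenteeism counts
expected = np.full(len(observed), 0.8)                      # stand-in model predictions
o_minus_e = observed - expected                             # the paper's O-E series
print(c1_alerts(o_minus_e))
```

    Running the rule on the O-E series rather than the raw counts is what lets the model's expected seasonal and clustering structure be subtracted out before alerting.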

  8. Growth Curve Models for Zero-Inflated Count Data: An Application to Smoking Behavior

    ERIC Educational Resources Information Center

    Liu, Hui; Powers, Daniel A.

    2007-01-01

    This article applies growth curve models to longitudinal count data characterized by an excess of zero counts. We discuss a zero-inflated Poisson regression model for longitudinal data in which the impact of covariates on the initial counts and the rate of change in counts over time is the focus of inference. Basic growth curve models using a…

  9. Variable selection for distribution-free models for longitudinal zero-inflated count responses.

    PubMed

    Chen, Tian; Wu, Pan; Tang, Wan; Zhang, Hui; Feng, Changyong; Kowalski, Jeanne; Tu, Xin M

    2016-07-20

    Zero-inflated count outcomes arise quite often in research and practice. Parametric models such as the zero-inflated Poisson and zero-inflated negative binomial are widely used to model such responses. Like most parametric models, they are quite sensitive to departures from assumed distributions. Recently, new approaches have been proposed to provide distribution-free, or semi-parametric, alternatives. These methods extend the generalized estimating equations to provide robust inference for population mixtures defined by zero-inflated count outcomes. In this paper, we propose methods to extend smoothly clipped absolute deviation (SCAD)-based variable selection methods to these new models. Variable selection has been gaining popularity in modern clinical research studies, as determining differential treatment effects of interventions for different subgroups has become the norm, rather than the exception, in the era of patient-centered outcome research. Such moderation analysis in general creates many explanatory variables in regression analysis, and the advantages of SCAD-based methods over their traditional counterparts render them a great choice for addressing this important and timely issue in clinical research. We illustrate the proposed approach with both simulated and real study data. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Modeling health survey data with excessive zero and K responses.

    PubMed

    Lin, Ting Hsiang; Tsai, Min-Hsiao

    2013-04-30

    Zero-inflated Poisson regression is a popular tool for analyzing data with excessive zeros. Although much work has been done on fitting zero-inflated data, most models depend heavily on special features of the individual data; specifically, there may be a sizable group of respondents who endorse the same answers, creating peaks in the data. In this paper, we propose a new model with the flexibility to model excessive counts other than zero. The model is a mixture of multinomial logistic and Poisson regression, in which the multinomial logistic component models the occurrence of excessive counts, including zeros, K (where K is a positive integer), and all other values, while the Poisson regression component models the counts that are assumed to follow a Poisson distribution. Two examples illustrate our models on data with counts containing many ones and sixes. In both examples, the zero- and K-inflated models exhibit a better fit than zero-inflated Poisson and standard Poisson regressions. Copyright © 2012 John Wiley & Sons, Ltd.
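
    For fixed mixing probabilities the zero-and-K-inflated pmf can be written down directly; the sketch below is an intercept-only simplification of this kind of model, whose mixing probabilities would instead come from a multinomial logistic regression in the paper's formulation.

```python
import math

def zk_inflated_poisson_pmf(y, p_zero, p_k, k, lam):
    """Mixture pmf: extra mass p_zero at 0 and p_k at K, with the remaining
    probability on a Poisson(lam) count. Intercept-only simplification."""
    pois = math.exp(-lam) * lam ** y / math.factorial(y)
    return p_zero * (y == 0) + p_k * (y == k) + (1 - p_zero - p_k) * pois
```

    The same construction extends to several inflated values at once, with one multinomial-logistic category per inflated count.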

  11. Modeling Tetanus Neonatorum case using the regression of negative binomial and zero-inflated negative binomial

    NASA Astrophysics Data System (ADS)

    Amaliana, Luthfatul; Sa'adah, Umu; Wayan Surya Wardhani, Ni

    2017-12-01

    Tetanus Neonatorum is an infectious disease that can be prevented by immunization. The number of Tetanus Neonatorum cases in East Java Province was the highest in Indonesia through 2015. The Tetanus Neonatorum data contain overdispersion and a large proportion of zeros. Negative Binomial (NB) regression is an alternative method when overdispersion occurs in Poisson regression. However, data containing both overdispersion and zero-inflation are more appropriately analyzed using Zero-Inflated Negative Binomial (ZINB) regression. The purposes of this study are: (1) to model Tetanus Neonatorum cases in East Java Province, with a 71.05 percent proportion of zeros, using NB and ZINB regression, and (2) to obtain the best model. The results indicate that ZINB regression is better than NB regression, with a smaller AIC.

  12. Spatiotemporal hurdle models for zero-inflated count data: Exploring trends in emergency department visits.

    PubMed

    Neelon, Brian; Chang, Howard H; Ling, Qiang; Hastings, Nicole S

    2016-12-01

    Motivated by a study exploring spatiotemporal trends in emergency department use, we develop a class of two-part hurdle models for the analysis of zero-inflated areal count data. The models consist of two components-one for the probability of any emergency department use and one for the number of emergency department visits given use. Through a hierarchical structure, the models incorporate both patient- and region-level predictors, as well as spatially and temporally correlated random effects for each model component. The random effects are assigned multivariate conditionally autoregressive priors, which induce dependence between the components and provide spatial and temporal smoothing across adjacent spatial units and time periods, resulting in improved inferences. To accommodate potential overdispersion, we consider a range of parametric specifications for the positive counts, including truncated negative binomial and generalized Poisson distributions. We adopt a Bayesian inferential approach, and posterior computation is handled conveniently within standard Bayesian software. Our results indicate that the negative binomial and generalized Poisson hurdle models vastly outperform the Poisson hurdle model, demonstrating that overdispersed hurdle models provide a useful approach to analyzing zero-inflated spatiotemporal data. © The Author(s) 2014.

  13. Zero-inflated spatio-temporal models for disease mapping.

    PubMed

    Torabi, Mahmoud

    2017-05-01

    In this paper, our aim is to analyze geographical and temporal variability of disease incidence when spatio-temporal count data have excess zeros. To that end, we consider random effects in zero-inflated Poisson models to investigate geographical and temporal patterns of disease incidence. Spatio-temporal models that employ conditionally autoregressive smoothing across the spatial dimension and B-spline smoothing over the temporal dimension are proposed. The analysis of these complex models is computationally difficult from the frequentist perspective. On the other hand, the advent of the Markov chain Monte Carlo algorithm has made the Bayesian analysis of complex models computationally convenient. The recently developed data cloning method provides a frequentist approach to mixed models that is also computationally convenient. We propose to use data cloning, which yields maximum likelihood estimates, to conduct frequentist analysis of zero-inflated spatio-temporal models of disease incidence. One advantage of the data cloning approach is that predictions and corresponding standard errors (or prediction intervals) for smoothed disease incidence over space and time are easily obtained. We illustrate our approach using a real dataset of monthly children's asthma visits to hospital in the province of Manitoba, Canada, during the period April 2006 to March 2010. Performance of our approach is also evaluated through a simulation study. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. On performance of parametric and distribution-free models for zero-inflated and over-dispersed count responses.

    PubMed

    Tang, Wan; Lu, Naiji; Chen, Tian; Wang, Wenjuan; Gunzler, Douglas David; Han, Yu; Tu, Xin M

    2015-10-30

    Zero-inflated Poisson (ZIP) and negative binomial (ZINB) models are widely used to model zero-inflated count responses. These models extend the Poisson and negative binomial (NB) to address excessive zeros in the count response. By adding a degenerate distribution centered at 0 and interpreting it as describing a non-risk group in the population, the ZIP (ZINB) models a two-component population mixture. As in applications of Poisson and NB, the key difference between ZIP and ZINB is the allowance for overdispersion by the ZINB in its NB component in modeling the count response for the at-risk group. Overdispersion arising in practice too often does not follow the NB, and applications of ZINB to such data yield invalid inference. If sources of overdispersion are known, other parametric models may be used to directly model the overdispersion. Such models too are subject to assumed distributions. Further, this approach may not be applicable if information about the sources of overdispersion is unavailable. In this paper, we propose a distribution-free alternative and compare its performance with these popular parametric models as well as a moment-based approach proposed by Yu et al. [Statistics in Medicine 2013; 32: 2390-2405]. Like the generalized estimating equations, the proposed approach requires no elaborate distribution assumptions. Compared with the approach of Yu et al., it is more robust to overdispersed zero-inflated responses. We illustrate our approach with both simulated and real study data. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Marginalized zero-inflated negative binomial regression with application to dental caries

    PubMed Central

    Preisser, John S.; Das, Kalyan; Long, D. Leann; Divaris, Kimon

    2015-01-01

    The zero-inflated negative binomial regression model (ZINB) is often employed in diverse fields such as dentistry, health care utilization, highway safety, and medicine to examine relationships between exposures of interest and overdispersed count outcomes exhibiting many zeros. The regression coefficients of ZINB have latent class interpretations for a susceptible subpopulation at risk for the disease/condition under study with counts generated from a negative binomial distribution and for a non-susceptible subpopulation that provides only zero counts. The ZINB parameters, however, are not well-suited for estimating overall exposure effects, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. In this paper, a marginalized zero-inflated negative binomial regression (MZINB) model for independent responses is proposed to model the population marginal mean count directly, providing straightforward inference for overall exposure effects based on maximum likelihood estimation. Through simulation studies, the finite sample performance of MZINB is compared to marginalized zero-inflated Poisson, Poisson, and negative binomial regression. The MZINB model is applied in the evaluation of a school-based fluoride mouthrinse program on dental caries in 677 children. PMID:26568034

  16. Marginalized multilevel hurdle and zero-inflated models for overdispersed and correlated count data with excess zeros.

    PubMed

    Kassahun, Wondwosen; Neyens, Thomas; Molenberghs, Geert; Faes, Christel; Verbeke, Geert

    2014-11-10

    Count data are collected repeatedly over time in many applications, such as biology, epidemiology, and public health. Such data are often characterized by the following three features. First, correlation due to the repeated measures is usually accounted for using subject-specific random effects, which are assumed to be normally distributed. Second, the sample variance may exceed the mean, and hence, the theoretical mean-variance relationship is violated, leading to overdispersion. This is usually allowed for based on a hierarchical approach, combining a Poisson model with gamma distributed random effects. Third, an excess of zeros beyond what standard count distributions can predict is often handled by either the hurdle or the zero-inflated model. A zero-inflated model assumes two processes as sources of zeros and combines a count distribution with a discrete point mass as a mixture, while the hurdle model separately handles zero observations and positive counts, where then a truncated-at-zero count distribution is used for the non-zero state. In practice, however, all these three features can appear simultaneously. Hence, a modeling framework that incorporates all three is necessary, and this presents challenges for the data analysis. Such models, when conditionally specified, will naturally have a subject-specific interpretation. However, adopting their purposefully modified marginalized versions leads to a direct marginal or population-averaged interpretation for parameter estimates of covariate effects, which is the primary interest in many applications. In this paper, we present a marginalized hurdle model and a marginalized zero-inflated model for correlated and overdispersed count data with excess zero observations and then illustrate these further with two case studies. The first dataset focuses on the Anopheles mosquito density around a hydroelectric dam, while adolescents' involvement in work, to earn money and support their families or themselves, is

  17. EM Adaptive LASSO—A Multilocus Modeling Strategy for Detecting SNPs Associated with Zero-inflated Count Phenotypes

    PubMed Central

    Mallick, Himel; Tiwari, Hemant K.

    2016-01-01

    Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain enormous number of zeroes due to the presence of excessive zero counts in majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely encountered in practice

  19. Review and Recommendations for Zero-inflated Count Regression Modeling of Dental Caries Indices in Epidemiological Studies

    PubMed Central

    Stamm, John W.; Long, D. Leann; Kincade, Megan E.

    2012-01-01

    Over the past five to ten years, zero-inflated count regression models have been increasingly applied to the analysis of dental caries indices (e.g., DMFT, dfms, etc.). The main reason is the broad decline in children's caries experience, such that dmf and DMF indices more frequently generate low or even zero counts. This article specifically reviews the application of zero-inflated Poisson and zero-inflated negative binomial regression models to dental caries, with emphasis on the description of the models and the interpretation of fitted model results given the study goals. The review finds that interpretations provided in the published caries research are often imprecise or inadvertently misleading, particularly with respect to failing to discriminate between inference for the class of susceptible persons defined by such models and inference for the sampled population in terms of overall exposure effects. Recommendations are provided to enhance the use as well as the interpretation and reporting of results of count regression models when applied to epidemiological studies of dental caries. PMID:22710271

  20. A new approach for handling longitudinal count data with zero-inflation and overdispersion: poisson geometric process model.

    PubMed

    Wan, Wai-Yin; Chan, Jennifer S K

    2009-08-01

    For time series of count data, correlated measurements, clustering as well as excessive zeros occur simultaneously in biomedical applications. Ignoring such effects might contribute to misleading treatment outcomes. A generalized mixture Poisson geometric process (GMPGP) model and a zero-altered mixture Poisson geometric process (ZMPGP) model are developed from the geometric process model, which was originally developed for modelling positive continuous data and was extended to handle count data. These models are motivated by evaluating the trend development of new tumour counts for bladder cancer patients as well as by identifying useful covariates which affect the count level. The models are implemented using Bayesian method with Markov chain Monte Carlo (MCMC) algorithms and are assessed using deviance information criterion (DIC).

  1. Zero adjusted models with applications to analysing helminths count data.

    PubMed

    Chipeta, Michael G; Ngwira, Bagrey M; Simoonga, Christopher; Kazembe, Lawrence N

    2014-11-27

    It is common in public health and epidemiology that the outcome of interest is a count of event occurrences. Analysing these data using classical linear models is mostly inappropriate, even after transformation of the outcome variables, due to overdispersion. Zero-adjusted mixture count models, such as zero-inflated and hurdle count models, are applied to count data when overdispersion and excess zeros exist. The main objective of the current paper is to apply such models to analyse risk factors associated with human helminths (S. haematobium), particularly where there is a high proportion of zero counts. The data were collected during a community-based randomised control trial assessing the impact of mass drug administration (MDA) with praziquantel in Malawi, and a school-based cross-sectional epidemiological survey in Zambia. Count data models, including traditional (Poisson and negative binomial) models, zero-modified models (zero-inflated Poisson and zero-inflated negative binomial), and hurdle models (Poisson logit hurdle and negative binomial logit hurdle), were fitted and compared. Using the Akaike information criterion (AIC), the negative binomial logit hurdle (NBLH) and zero-inflated negative binomial (ZINB) models showed the best performance in both datasets, capturing the zero counts better than the other models. This paper showed that the NBLH and ZINB models are more appropriate for the analysis of data with excess zeros. The choice between hurdle and zero-inflated models should be based on the aim and endpoints of the study.

  2. Cumulative sum control charts for monitoring geometrically inflated Poisson processes: An application to infectious disease counts data.

    PubMed

    Rakitzis, Athanasios C; Castagliola, Philippe; Maravelakis, Petros E

    2018-02-01

    In this work, we study upper-sided cumulative sum control charts that are suitable for monitoring geometrically inflated Poisson processes. We assume that a process is properly described by a two-parameter extension of the zero-inflated Poisson distribution, which can be used for modeling count data with an excessive number of zero and non-zero values. Two different upper-sided cumulative sum-type schemes are considered, both suitable for the detection of increasing shifts in the average of the process. Aspects of their statistical design are discussed and their performance is compared under various out-of-control situations. Changes in both parameters of the process are considered. Finally, the monitoring of the monthly cases of poliomyelitis in the USA is given as an illustrative example.
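
    The upper-sided CUSUM recursion at the core of such charts is short; the sketch below is a generic count-data CUSUM with illustrative reference value `k` and decision limit `h`, not the paper's chart design for the geometrically inflated Poisson distribution.

```python
import numpy as np

def upper_cusum(counts, k, h):
    """Upper-sided CUSUM: C_t = max(0, C_{t-1} + x_t - k); signal once C_t > h.
    Returns the CUSUM path and the index of the first signal (or None)."""
    c, path, signal = 0.0, [], None
    for t, x in enumerate(counts):
        c = max(0.0, c + x - k)
        path.append(c)
        if signal is None and c > h:
            signal = t
    return path, signal

rng = np.random.default_rng(11)
# In-control: zero-inflated Poisson counts; from t = 30 the Poisson mean
# shifts upward (the kind of increase the chart should detect).
in_ctrl = np.where(rng.uniform(size=30) < 0.4, 0, rng.poisson(1.0, size=30))
out_ctrl = np.where(rng.uniform(size=20) < 0.4, 0, rng.poisson(4.0, size=20))
series = np.concatenate([in_ctrl, out_ctrl])
path, signal = upper_cusum(series, k=1.5, h=6.0)
print(signal)
```

    In practice the constants k and h are chosen to achieve a target in-control average run length under the fitted two-parameter model, which is the statistical-design question the paper addresses.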

  3. Assessment and Selection of Competing Models for Zero-Inflated Microbiome Data

    PubMed Central

    Xu, Lizhen; Paterson, Andrew D.; Turpin, Williams; Xu, Wei

    2015-01-01

    Typical data in a microbiome study consist of the operational taxonomic unit (OTU) counts that have the characteristic of excess zeros, which are often ignored by investigators. In this paper, we compare the performance of different competing methods to model data with zero inflated features through extensive simulations and application to a microbiome study. These methods include standard parametric and non-parametric models, hurdle models, and zero inflated models. We examine varying degrees of zero inflation, with or without dispersion in the count component, as well as different magnitude and direction of the covariate effect on structural zeros and the count components. We focus on the assessment of type I error, power to detect the overall covariate effect, measures of model fit, and bias and effectiveness of parameter estimations. We also evaluate the abilities of model selection strategies using Akaike information criterion (AIC) or Vuong test to identify the correct model. The simulation studies show that hurdle and zero inflated models have well controlled type I errors, higher power, better goodness of fit measures, and are more accurate and efficient in the parameter estimation. Besides that, the hurdle models have similar goodness of fit and parameter estimation for the count component as their corresponding zero inflated models. However, the estimation and interpretation of the parameters for the zero components differs, and hurdle models are more stable when structural zeros are absent. We then discuss the model selection strategy for zero inflated data and implement it in a gut microbiome study of > 400 independent subjects. PMID:26148172
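
    The Vuong test used in this model-selection strategy reduces to a standardized mean of pointwise log-likelihood differences between two fitted models. A sketch comparing an intercept-only ZIP fit against a plain Poisson fit on simulated data (all parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

def vuong_statistic(ll1, ll2):
    """Vuong non-nested test statistic from pointwise log-likelihoods;
    large positive values favour model 1, large negative values model 2."""
    m = np.asarray(ll1) - np.asarray(ll2)
    return np.sqrt(len(m)) * m.mean() / m.std(ddof=1)

rng = np.random.default_rng(5)
n = 1000
y = np.where(rng.uniform(size=n) < 0.4, 0, rng.poisson(2.0, size=n))

# Model 1: intercept-only ZIP, fitted by maximum likelihood.
def zip_nll(params):
    pi = 1 / (1 + np.exp(-params[0]))   # inflation probability (logit scale)
    lam = np.exp(params[1])             # Poisson mean (log scale)
    p0 = pi + (1 - pi) * np.exp(-lam)
    ll = np.where(y == 0, np.log(p0), np.log(1 - pi) + poisson.logpmf(y, lam))
    return -ll.sum()

fit = minimize(zip_nll, x0=[0.0, 0.0], method='Nelder-Mead')
pi_hat = 1 / (1 + np.exp(-fit.x[0]))
lam_hat = np.exp(fit.x[1])
p0_hat = pi_hat + (1 - pi_hat) * np.exp(-lam_hat)
ll_zip = np.where(y == 0, np.log(p0_hat),
                  np.log(1 - pi_hat) + poisson.logpmf(y, lam_hat))

# Model 2: plain Poisson; the MLE of its mean is the sample mean.
ll_pois = poisson.logpmf(y, y.mean())

print(round(vuong_statistic(ll_zip, ll_pois), 2))
```

    Large positive values favour the ZIP model; as the record above notes, though, the choice between zero-inflated and hurdle variants should also weigh interpretation of the zero component, not just the test or AIC.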

  5. A review on models for count data with extra zeros

    NASA Astrophysics Data System (ADS)

    Zamri, Nik Sarah Nik; Zamzuri, Zamira Hasanah

    2017-04-01

    Zero-inflated models are typically used in modelling count data with excess zeros. The extra zeros may be structural or may occur at random by chance. These types of data are commonly found in various disciplines such as finance, insurance, biomedicine, econometrics, ecology, and the health sciences. As found in the literature, the most popular zero-inflated models are the zero-inflated Poisson and the zero-inflated negative binomial. Recently, more complex models have been developed to account for overdispersion and unobserved heterogeneity. In addition, more extended distributions have also been considered for modelling data with this feature. In this paper, we review the related literature and provide a summary of recent developments in models for count data with extra zeros.

  6. Does attitude matter in computer use in Australian general practice? A zero-inflated Poisson regression analysis.

    PubMed

    Khan, Asaduzzaman; Western, Mark

    The purpose of this study was to explore factors that facilitate or hinder effective use of computers in Australian general medical practice. This study is based on data extracted from a national telephone survey of 480 general practitioners (GPs) across Australia. Clinical functions performed by GPs using computers were examined using a zero-inflated Poisson (ZIP) regression model. About 17% of GPs were not using a computer for any clinical function, while 18% reported using computers for all clinical functions. The ZIP model showed that computer anxiety was negatively associated with effective computer use, while practitioners' belief in the usefulness of computers was positively associated with it. Being a female GP, or working in a partnership or group practice, increased the odds of effectively using computers for clinical functions. To fully capitalise on the benefits of computer technology, GPs need to be convinced that this technology is useful and can make a difference.

  7. Marginalized zero-altered models for longitudinal count data.

    PubMed

    Tabb, Loni Philip; Tchetgen, Eric J Tchetgen; Wellenius, Greg A; Coull, Brent A

    2016-10-01

    Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias.

  9. Statistical Models for the Analysis of Zero-Inflated Pain Intensity Numeric Rating Scale Data.

    PubMed

    Goulet, Joseph L; Buta, Eugenia; Bathulapalli, Harini; Gueorguieva, Ralitza; Brandt, Cynthia A

    2017-03-01

    Pain intensity is often measured in clinical and research settings using the 0 to 10 numeric rating scale (NRS). NRS scores are recorded as discrete values, and in some samples they may display a high proportion of zeroes and a right-skewed distribution. Despite this, statistical methods for normally distributed data are frequently used in the analysis of NRS data. We present results from an observational cross-sectional study examining the association of NRS scores with patient characteristics using data collected from a large cohort of 18,935 veterans in Department of Veterans Affairs care diagnosed with a potentially painful musculoskeletal disorder. The mean (variance) NRS pain was 3.0 (7.5), and 34% of patients reported no pain (NRS = 0). We compared the following statistical models for analyzing NRS scores: linear regression, generalized linear models (Poisson and negative binomial), zero-inflated and hurdle models for data with an excess of zeroes, and a cumulative logit model for ordinal data. We examined model fit, interpretability of results, and whether conclusions about the predictor effects changed across models. In this study, models that accommodate zero inflation provided a better fit than the other models. These models should be considered for the analysis of NRS data with a large proportion of zeroes. We examined and analyzed pain data from a large cohort of veterans with musculoskeletal disorders. We found that many reported no current pain on the NRS on the diagnosis date. We present several alternative statistical methods for the analysis of pain intensity data with a large proportion of zeroes. Published by Elsevier Inc.
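
    A quick diagnostic for the kind of zero excess reported above is to compare the observed zero fraction with the zero probability implied by a matched-mean Poisson; a minimal sketch on simulated stand-in scores (not the study data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for zero-inflated 0-10 pain scores: ~34% structural zeros
y = np.where(rng.random(2000) < 0.34, 0, rng.poisson(4.0, 2000))
y = np.clip(y, 0, 10)  # NRS scores are bounded at 10

observed_zeros = (y == 0).mean()
poisson_zeros = np.exp(-y.mean())  # P(Y = 0) under Poisson with the same mean
excess = observed_zeros - poisson_zeros  # a large gap suggests zero inflation
```

    A large positive gap is informal evidence that a plain Poisson model will underpredict zeros, motivating the zero-inflated and hurdle alternatives compared in the study.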

  10. New variable selection methods for zero-inflated count data with applications to the substance abuse field

    PubMed Central

    Buu, Anne; Johnson, Norman J.; Li, Runze; Tan, Xianming

    2011-01-01

    Zero-inflated count data are very common in health surveys. This study develops new variable selection methods for the zero-inflated Poisson regression model. Our simulations demonstrate the negative consequences which arise from the ignorance of zero-inflation. Among the competing methods, the one-step SCAD method is recommended because it has the highest specificity, sensitivity, exact fit, and lowest estimation error. The design of the simulations is based on the special features of two large national databases commonly used in the alcoholism and substance abuse field so that our findings can be easily generalized to the real settings. Applications of the methodology are demonstrated by empirical analyses on the data from a well-known alcohol study. PMID:21563207

  11. A time-varying effect model for examining group differences in trajectories of zero-inflated count outcomes with applications in substance abuse research.

    PubMed

    Yang, Songshan; Cranford, James A; Jester, Jennifer M; Li, Runze; Zucker, Robert A; Buu, Anne

    2017-02-28

    This study proposes a time-varying effect model for examining group differences in trajectories of zero-inflated count outcomes. The motivating example demonstrates that this zero-inflated Poisson model allows investigators to study group differences in different aspects of substance use (e.g., the probability of abstinence and the quantity of alcohol use) simultaneously. The simulation study shows that the accuracy of estimation of the trajectory functions improves as the sample size increases; the accuracy under equal group sizes is higher only when the sample size is small (n = 100). In terms of the performance of the hypothesis testing, the type I error rates are close to their corresponding significance levels under all settings. Furthermore, the power increases as the alternative hypothesis deviates more from the null hypothesis, and the rate of this increase is higher when the sample size is larger. Moreover, the hypothesis test for the group difference in the zero component tends to be less powerful than the test for the group difference in the Poisson component. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Distribution-free Inference of Zero-inflated Binomial Data for Longitudinal Studies.

    PubMed

    He, H; Wang, W J; Hu, J; Gallop, R; Crits-Christoph, P; Xia, Y L

    2015-10-01

    Count responses with structural zeros are very common in medical and psychosocial research, especially in alcohol and HIV research, and the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models are widely used for modeling such outcomes. However, as alcohol drinking outcomes such as days of drinking are counts within a given period, their distributions are bounded above by an upper limit (the total days in the period) and thus inherently follow a binomial or, in the presence of structural zeros, a zero-inflated binomial (ZIB) distribution, rather than a Poisson or ZIP distribution. In this paper, we develop a new semiparametric approach for modeling ZIB-like count responses for cross-sectional as well as longitudinal data. We illustrate this approach with both simulated and real study data.
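
    The bounded day-count outcome described above can be simulated directly; the period length m = 30 days, the abstention probability, and the per-day drinking probability are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 30          # days in the period: counts can never exceed m
pi_zero = 0.3   # probability of a structural zero (e.g., abstainers)
p_day = 0.4     # per-day drinking probability for non-abstainers

abstainer = rng.random(1000) < pi_zero
days_drinking = np.where(abstainer, 0, rng.binomial(m, p_day, 1000))
```

    A Poisson model would put positive probability on counts above 30, which the ZIB distribution correctly rules out.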

  13. Determinants of The Grade A Embryos in Infertile Women; Zero-Inflated Regression Model.

    PubMed

    Almasi-Hashiani, Amir; Ghaheri, Azadeh; Omani Samani, Reza

    2017-10-01

    In assisted reproductive technology, it is important to choose high-quality embryos for embryo transfer. The aim of the present study was to determine the grade A embryo count and factors related to it in infertile women. This historical cohort study included 996 infertile women. The main outcome was the number of grade A embryos. Zero-inflated Poisson (ZIP) regression and zero-inflated negative binomial (ZINB) regression were used to model the count data, as it contained excessive zeros. Stata software, version 13 (Stata Corp, College Station, TX, USA) was used for all statistical analyses. After adjusting for potential confounders, results from the ZINB model show that each additional 2 pronuclear (2PN) zygote multiplies the expected grade A embryo count by 1.45 (incidence rate ratio; 95% confidence interval (CI): 1.23-1.69, P=0.001), while each one-day increase in cleavage day multiplies it by 0.35 (95% CI: 0.20-0.61, P=0.001). There is a significant association of both the number of 2PN zygotes and the cleavage day with the number of grade A embryos in both the ZINB and ZIP regression models. The estimated coefficients are more plausible than values found in earlier studies using less relevant models. Copyright© by Royan Institute. All rights reserved.

  14. Analyzing Propensity Matched Zero-Inflated Count Outcomes in Observational Studies

    PubMed Central

    DeSantis, Stacia M.; Lazaridis, Christos; Ji, Shuang; Spinale, Francis G.

    2013-01-01

    Determining the effectiveness of different treatments from observational data, which are characterized by imbalance between groups due to lack of randomization, is challenging. Propensity matching is often used to rectify imbalances among prognostic variables. However, there are no guidelines on how to appropriately analyze group-matched data when the outcome is a zero-inflated count. In addition, there is debate over whether to account for the correlation of responses induced by matching, and/or whether to adjust for the variables used in generating the propensity score in the final analysis. The aim of this research is to compare covariate-unadjusted and covariate-adjusted zero-inflated Poisson models that do and do not account for the correlation. A simulation study is conducted, demonstrating that it is necessary to adjust for potential residual confounding, but that accounting for correlation is less important. The methods are applied to a biomedical research data set. PMID:24298197

  15. SEMIPARAMETRIC ZERO-INFLATED MODELING IN MULTI-ETHNIC STUDY OF ATHEROSCLEROSIS (MESA)

    PubMed Central

    Liu, Hai; Ma, Shuangge; Kronmal, Richard; Chan, Kung-Sik

    2013-01-01

    We analyze the Agatston score of coronary artery calcium (CAC) from the Multi-Ethnic Study of Atherosclerosis (MESA) using a semiparametric zero-inflated modeling approach, where the observed CAC scores from this cohort consist of a high frequency of zeroes and continuously distributed positive values. Both partially constrained and unconstrained models are considered to investigate the underlying biological processes of CAC development from zero to positive, and from small amounts to large amounts. Unlike existing studies, a model selection procedure based on likelihood cross-validation is adopted to identify the optimal model, which is justified by comparative Monte Carlo studies. A shrinkage version of the cubic regression spline is used for simultaneous model estimation and variable selection. When applying the proposed methods to the MESA data analysis, we show that the two biological mechanisms influencing the initiation of CAC and the magnitude of CAC when it is positive are better characterized by an unconstrained zero-inflated normal model. Our results differ significantly from those in published studies and may provide further insights into the biological mechanisms underlying CAC development in humans. This highly flexible statistical framework can be applied to zero-inflated data analyses in other areas. PMID:23805172

  16. Poisson Mixture Regression Models for Heart Disease Prediction.

    PubMed

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed here under two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, owing to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks component-wise using a Poisson mixture regression model.

  18. Zero-state Markov switching count-data models: an empirical assessment.

    PubMed

    Malyshkina, Nataliya V; Mannering, Fred L

    2010-01-01

    In this study, a two-state Markov switching count-data model is proposed as an alternative to zero-inflated models to account for the preponderance of zeros sometimes observed in transportation count data, such as the number of accidents occurring on a roadway segment over some period of time. For this accident-frequency case, zero-inflated models assume the existence of two states: one is a zero-accident count state, which has accident probabilities so low that they cannot be statistically distinguished from zero, and the other is a normal-count state, in which counts can be non-negative integers generated by some counting process, for example a Poisson or negative binomial. While zero-inflated models have come under some criticism with regard to accident-frequency applications, one fact is undeniable: in many applications they provide a statistically superior fit to the data. The Markov switching approach we propose seeks to overcome some of the criticism associated with the zero-accident state of the zero-inflated model by allowing individual roadway segments to switch between the zero and normal-count states over time. An important advantage of this Markov switching approach is that it allows for the direct statistical estimation of the specific roadway-segment state (i.e., zero-accident or normal-count), whereas traditional zero-inflated models do not. To demonstrate the applicability of this approach, a two-state Markov switching negative binomial model (estimated with Bayesian inference) and standard zero-inflated negative binomial models are estimated using five-year accident frequencies on Indiana interstate highway segments. It is shown that the Markov switching model is a viable alternative and results in a superior statistical fit relative to the zero-inflated models.

  19. Analyzing hospitalization data: potential limitations of Poisson regression.

    PubMed

    Weaver, Colin G; Ravani, Pietro; Oliver, Matthew J; Austin, Peter C; Quinn, Robert R

    2015-08-01

    Poisson regression is commonly used to analyze hospitalization data when outcomes are expressed as counts (e.g. number of days in hospital). However, data often violate the assumptions on which Poisson regression is based. More appropriate extensions of this model, while available, are rarely used. We compared hospitalization data between 206 patients treated with hemodialysis (HD) and 107 treated with peritoneal dialysis (PD) using Poisson regression and compared results from standard Poisson regression with those obtained using three other approaches for modeling count data: negative binomial (NB) regression, zero-inflated Poisson (ZIP) regression and zero-inflated negative binomial (ZINB) regression. We examined the appropriateness of each model and compared the results obtained with each approach. During a mean 1.9 years of follow-up, 183 of 313 patients (58%) were never hospitalized (indicating an excess of 'zeros'). The data also displayed overdispersion (variance greater than mean), violating another assumption of the Poisson model. Using four criteria, we determined that the NB and ZINB models performed best. According to these two models, patients treated with HD experienced similar hospitalization rates as those receiving PD {NB rate ratio (RR): 1.04 [bootstrapped 95% confidence interval (CI): 0.49-2.20]; ZINB summary RR: 1.21 (bootstrapped 95% CI 0.60-2.46)}. Poisson and ZIP models fit the data poorly and had much larger point estimates than the NB and ZINB models [Poisson RR: 1.93 (bootstrapped 95% CI 0.88-4.23); ZIP summary RR: 1.84 (bootstrapped 95% CI 0.88-3.84)]. We found substantially different results when modeling hospitalization data, depending on the approach used. Our results argue strongly for a sound model selection process and improved reporting around statistical methods used for modeling count data. © The Author 2015. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
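
    The two assumption checks motivating the model comparison above (excess zeros and overdispersion) can be computed directly; the simulated stand-in data below are illustrative, not the dialysis cohort:

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-in data: ~58% never hospitalized, the rest overdispersed (NB-like)
never = rng.random(313) < 0.58
y = np.where(never, 0, rng.negative_binomial(2, 0.3, 313))

dispersion = y.var() / y.mean()  # ratio > 1 indicates overdispersion
zero_frac = (y == 0).mean()      # compare with exp(-mean) under a Poisson fit
```

    When both checks fail for a plain Poisson model, the NB, ZIP, and ZINB extensions compared in the study become the natural candidates.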

  20. Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model

    NASA Technical Reports Server (NTRS)

    Vallejo, Jonathon; Hejduk, Matt; Stamey, James

    2015-01-01

    We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
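
    The zero-inflated Beta idea can be sketched as a two-part simulation; the point mass pi0 and the Beta shape parameters below are arbitrary choices for illustration, not values fitted to the conjunction data:

```python
import numpy as np

rng = np.random.default_rng(3)
pi0 = 0.25        # probability mass at exactly zero (illustrative)
a, b = 2.0, 5.0   # Beta shape parameters for the continuous part (illustrative)

at_zero = rng.random(5000) < pi0
x = np.where(at_zero, 0.0, rng.beta(a, b, 5000))  # draws lie in [0, 1)
```

    The mixture places a discrete spike at zero alongside a continuous Beta density, matching the scaled, bounded, zero-inflated variable described above.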

  1. A Zero- and K-Inflated Mixture Model for Health Questionnaire Data

    PubMed Central

    Finkelman, Matthew D.; Green, Jennifer Greif; Gruber, Michael J.; Zaslavsky, Alan M.

    2011-01-01

    In psychiatric assessment, Item Response Theory (IRT) is a popular tool to formalize the relation between the severity of a disorder and associated responses to questionnaire items. Practitioners of IRT sometimes make the assumption of normally distributed severities within a population; while convenient, this assumption is often violated when measuring psychiatric disorders. Specifically, there may be a sizable group of respondents whose answers place them at an extreme of the latent trait spectrum. In this article, a zero- and K-inflated mixture model is developed to account for the presence of such respondents. The model is fitted using an expectation-maximization (E-M) algorithm to estimate the percentage of the population at each end of the continuum, concurrently analyzing the remaining “graded component” via IRT. A method to perform factor analysis for only the graded component is introduced. In assessments of oppositional defiant disorder and conduct disorder, the zero- and K-inflated model exhibited better fit than the standard IRT model. PMID:21365673

  2. Item Response Modeling of Multivariate Count Data with Zero Inflation, Maximum Inflation, and Heaping

    ERIC Educational Resources Information Center

    Magnus, Brooke E.; Thissen, David

    2017-01-01

    Questionnaires that include items eliciting count responses are becoming increasingly common in psychology. This study proposes methodological techniques to overcome some of the challenges associated with analyzing multivariate item response data that exhibit zero inflation, maximum inflation, and heaping at preferred digits. The modeling…

  3. A Three-dimensional Polymer Scaffolding Material Exhibiting a Zero Poisson's Ratio.

    PubMed

    Soman, Pranav; Fozdar, David Y; Lee, Jin Woo; Phadke, Ameya; Varghese, Shyni; Chen, Shaochen

    2012-05-14

    Poisson's ratio describes the degree to which a material contracts (or expands) transversally when axially strained. A material with a zero Poisson's ratio does not deform transversally in response to an axial strain (stretching). In tissue engineering applications, scaffolding with a zero Poisson's ratio (ZPR) may be more suitable for emulating the behavior of native tissues and for accommodating and transmitting forces to the host tissue site during wound healing (or tissue regrowth). For example, scaffolding with a zero Poisson's ratio may be beneficial in the engineering of cartilage, ligament, corneal, and brain tissues, which are known to possess Poisson's ratios of nearly zero. Here, we report a 3D biomaterial constructed from polyethylene glycol (PEG) exhibiting in-plane Poisson's ratios of zero for large values of axial strain. We use digital micro-mirror device projection printing (DMD-PP) to create single- and double-layer scaffolds composed of semi-re-entrant pores whose arrangement and deformation mechanisms contribute to the zero Poisson's ratio. Strain experiments confirm the zero-Poisson's-ratio behavior of the scaffolds and show that the addition of layers does not change the Poisson's ratio. Human mesenchymal stem cells (hMSCs) cultured on biomaterials with a zero Poisson's ratio demonstrate the feasibility of utilizing these novel materials for biological applications that require little to no transverse deformation resulting from axial strains. The techniques used in this work allow Poisson's ratio to be both scale-independent and independent of the choice of strut material for strains in the elastic regime, and therefore ZPR behavior can be imparted to a variety of photocurable biomaterials.

  4. Zero-Inflated Models for Identifying Relationships Between Body Mass Index and Gastroesophageal Reflux Symptoms: A Nationwide Population-Based Study in China.

    PubMed

    Xu, Qin; Zhang, Wei; Zhang, Tianyi; Zhang, Ruijie; Zhao, Yanfang; Zhang, Yuan; Guo, Yibin; Wang, Rui; Ma, Xiuqiang; He, Jia

    2016-07-01

    That obesity leads to gastroesophageal reflux is a widespread notion. However, scientific evidence for this association is limited, and no rigorous epidemiological approach has been conducted to address this question. This study examined the relationship between body mass index (BMI) and gastroesophageal reflux symptoms in a large population-representative sample from China. We performed a cross-sectional study in an age- and gender-stratified random sample of the population of five central regions in China. Participants aged 18-80 years completed a general information questionnaire and a Chinese version of the Reflux Disease Questionnaire. A zero-inflated Poisson regression model estimated the relationship between BMI and gastroesophageal reflux symptoms. Overall, 16,091 (89.4%) of the 18,000 eligible participants responded; 638 (3.97%) and 1,738 (10.81%) experienced at least weekly heartburn and weekly acid regurgitation, respectively. After adjusting for potential risk factors in the zero-inflated part, the frequency [odds ratio (OR) 0.66, 95% confidence interval (CI) 0.50-0.86, p = 0.002] and severity (OR 0.66, 95% CI 0.50-0.88, p = 0.004) of heartburn in obese participants were statistically significant compared with normal-weight participants. In the Poisson part, for the frequency of acid regurgitation, overweight (OR 1.10, 95% CI 1.01-1.21, p = 0.038) and obesity (OR 1.19, 95% CI 1.04-1.37, p = 0.013) were statistically significant. BMI was strongly and positively related to the frequency and severity of gastroesophageal reflux symptoms. Additionally, gender exerted strong specific effects on the relationship between BMI and gastroesophageal reflux symptoms: the severity and frequency of heartburn were positively correlated with obesity, and this relationship was distinct in male participants only.

  5. Empirical null estimation using zero-inflated discrete mixture distributions and its application to protein domain data.

    PubMed

    Gauran, Iris Ivy M; Park, Junyong; Lim, Johan; Park, DoHwan; Zylstra, John; Peterson, Thomas; Kann, Maricel; Spouge, John L

    2017-09-22

    In recent mutation studies, analyses based on protein domain positions are gaining popularity over gene-centric approaches, since the latter have limitations in considering the functional context that the position of the mutation provides. This presents a large-scale simultaneous inference problem, with hundreds of hypothesis tests to consider at the same time. This article aims to select significant mutation counts while controlling a given level of type I error via False Discovery Rate (FDR) procedures. One main assumption is that the mutation counts follow a zero-inflated model, in order to account for both the true zeros in the count model and the excess zeros. The class of models considered is the zero-inflated generalized Poisson (ZIGP) distribution. Furthermore, we assume that there exists a cut-off value such that counts smaller than this value are generated from the null distribution. We present several data-dependent methods to determine the cut-off value. We also consider a two-stage procedure based on a screening process, so that only mutation counts exceeding a certain value are considered significant. Simulated and protein domain data sets are used to illustrate this procedure for estimating the empirical null using a mixture of discrete distributions. Overall, while maintaining control of the FDR, the proposed two-stage testing procedure has superior empirical power. © 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of the International Biometric Society. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
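
    FDR control as invoked above is commonly implemented with the Benjamini-Hochberg step-up rule; the sketch below is the generic BH procedure, not the paper's zero-inflated empirical-null method, and the p-values are hypothetical.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of rejections under the BH step-up procedure at level alpha."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = int(np.max(np.nonzero(below)[0]) + 1) if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True  # reject the k smallest p-values
    return reject

rejected = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6, 0.9])
```

    The paper's contribution lies upstream of this step, in estimating the empirical null from which such p-values would be computed.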

  6. Observation weights unlock bulk RNA-seq tools for zero inflation and single-cell applications.

    PubMed

    Van den Berge, Koen; Perraudeau, Fanny; Soneson, Charlotte; Love, Michael I; Risso, Davide; Vert, Jean-Philippe; Robinson, Mark D; Dudoit, Sandrine; Clement, Lieven

    2018-02-26

    Dropout events in single-cell RNA sequencing (scRNA-seq) cause many transcripts to go undetected and induce an excess of zero read counts, leading to power issues in differential expression (DE) analysis. This has triggered the development of bespoke scRNA-seq DE methods to cope with zero inflation. Recent evaluations, however, have shown that dedicated scRNA-seq tools provide no advantage compared to traditional bulk RNA-seq tools. We introduce a weighting strategy, based on a zero-inflated negative binomial model, that identifies excess zero counts and generates gene- and cell-specific weights to unlock bulk RNA-seq DE pipelines for zero-inflated data, boosting performance for scRNA-seq.

  7. Multinomial model and zero-inflated gamma model to study time spent on leisure time physical activity: an example of ELSA-Brasil.

    PubMed

    Nobre, Aline Araújo; Carvalho, Marilia Sá; Griep, Rosane Härter; Fonseca, Maria de Jesus Mendes da; Melo, Enirtes Caetano Prates; Santos, Itamar de Souza; Chor, Dora

    2017-08-17

    To compare two methodological approaches: the multinomial model and the zero-inflated gamma model, evaluating the factors associated with the practice and amount of time spent on leisure time physical activity. Data collected from 14,823 baseline participants in the Longitudinal Study of Adult Health (ELSA-Brasil, Estudo Longitudinal de Saúde do Adulto) have been analysed. Regular leisure time physical activity has been measured using the leisure time physical activity module of the International Physical Activity Questionnaire. The explanatory variables considered were gender, age, education level, and annual per capita family income. The main advantage of the zero-inflated gamma model over the multinomial model is that it estimates mean time (minutes per week) spent on leisure time physical activity. For example, on average, men spent 28 minutes/week longer on leisure time physical activity than women did. The most sedentary groups were young women with low education level and income. The zero-inflated gamma model, which is rarely used in epidemiological studies, can give more appropriate answers in several situations. In our case, we have obtained important information on the main determinants of the duration of leisure time physical activity. This information can help guide efforts towards the most vulnerable groups since physical inactivity is associated with different diseases and even premature death.

  8. Zero-inflated modeling of fish catch per unit area resulting from multiple gears: Application to channel catfish and shovelnose sturgeon in the Missouri River

    USGS Publications Warehouse

    Arab, A.; Wildhaber, M.L.; Wikle, C.K.; Gentry, C.N.

    2008-01-01

    Fisheries studies often employ multiple gears that result in large percentages of zero values. We considered a zero-inflated Poisson (ZIP) model with random effects to address these excessive zeros. By employing a Bayesian ZIP model that simultaneously incorporates data from multiple gears to analyze data from the Missouri River, we were able to compare gears and make more year, segment, and macrohabitat comparisons than did the original data analysis. For channel catfish Ictalurus punctatus, our results rank (highest to lowest) the mean catch per unit area (CPUA) for gears (beach seine, benthic trawl, electrofishing, and drifting trammel net); years (1998 and 1997); macrohabitats (tributary mouth, connected secondary channel, nonconnected secondary channel, and bend); and river segment zones (channelized, inter-reservoir, and least-altered). For shovelnose sturgeon Scaphirhynchus platorynchus, the mean CPUA was significantly higher for benthic trawls and drifting trammel nets; 1998 and 1997; tributary mouths, bends, and connected secondary channels; and some channelized or least-altered inter-reservoir segments. One important advantage of our approach is the ability to reliably infer patterns of relative abundance by means of multiple gears without using gear efficiencies. © Copyright by the American Fisheries Society 2008.

  9. A Spatial Poisson Hurdle Model for Exploring Geographic Variation in Emergency Department Visits

    PubMed Central

    Neelon, Brian; Ghosh, Pulak; Loebs, Patrick F.

    2012-01-01

    We develop a spatial Poisson hurdle model to explore geographic variation in emergency department (ED) visits while accounting for zero inflation. The model consists of two components: a Bernoulli component that models the probability of any ED use (i.e., at least one ED visit per year), and a truncated Poisson component that models the number of ED visits given use. Together, these components address both the abundance of zeros and the right-skewed nature of the nonzero counts. The model has a hierarchical structure that incorporates patient- and area-level covariates, as well as spatially correlated random effects for each areal unit. Because regions with high rates of ED use are likely to have high expected counts among users, we model the spatial random effects via a bivariate conditionally autoregressive (CAR) prior, which introduces dependence between the components and provides spatial smoothing and sharing of information across neighboring regions. Using a simulation study, we show that modeling the between-component correlation reduces bias in parameter estimates. We adopt a Bayesian estimation approach, and the model can be fit using standard Bayesian software. We apply the model to a study of patient and neighborhood factors influencing emergency department use in Durham County, North Carolina. PMID:23543242
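
    The two-component structure described here can be sketched directly as a probability mass function. This is a minimal, non-spatial illustration of a Poisson hurdle model with made-up parameter values, not the authors' hierarchical Bayesian implementation:

```python
import math

def hurdle_poisson_pmf(y, p_use, lam):
    """P(Y = y) under a Poisson hurdle model.

    p_use: probability of crossing the hurdle (at least one visit).
    lam:   rate of the zero-truncated Poisson for users.
    """
    if y == 0:
        return 1.0 - p_use
    # zero-truncated Poisson renormalizes the Poisson mass over y >= 1
    poisson = math.exp(-lam) * lam**y / math.factorial(y)
    return p_use * poisson / (1.0 - math.exp(-lam))

# The pmf puts exactly (1 - p_use) at zero and sums to one overall.
total = sum(hurdle_poisson_pmf(y, 0.3, 2.5) for y in range(61))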

  10. Comparing statistical methods for analyzing skewed longitudinal count data with many zeros: an example of smoking cessation.

    PubMed

    Xie, Haiyi; Tao, Jill; McHugo, Gregory J; Drake, Robert E

    2013-07-01

    Count data with skewness and many zeros are common in substance abuse and addiction research. Zero-adjusting models, especially zero-inflated models, have become increasingly popular for analyzing this type of data. This paper reviews and compares five mixed-effects Poisson family models commonly used to analyze count data with a high proportion of zeros by analyzing a longitudinal outcome: the number of smoking quit attempts from the New Hampshire Dual Disorders Study. The findings of our study indicated that count data with many zeros do not necessarily require zero-inflated or other zero-adjusting models. For rare event counts or count data with small means, a simpler model such as the negative binomial model may provide a better fit. Copyright © 2013 Elsevier Inc. All rights reserved.
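
    The paper's point, that strongly overdispersed counts with many zeros can favor a plain negative binomial over more elaborate alternatives, can be illustrated with a fixed-effects AIC comparison. The toy data and the method-of-moments fitting shortcut below are illustrative assumptions, not the study's data or estimation method:

```python
import math

def poisson_loglik(counts, lam):
    return sum(y * math.log(lam) - lam - math.lgamma(y + 1) for y in counts)

def negbin_loglik(counts, mu, r):
    # NB2 parameterization: Var(Y) = mu + mu**2 / r
    ll = 0.0
    for y in counts:
        ll += (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
               + r * math.log(r / (r + mu)) + y * math.log(mu / (r + mu)))
    return ll

# Overdispersed counts with many zeros (variance far above the mean)
counts = [0]*12 + [1, 1, 2, 3, 8, 10]
n = len(counts)
mu = sum(counts) / n
var = sum((y - mu)**2 for y in counts) / n
r = mu**2 / (var - mu)              # method-of-moments size parameter

aic_pois = 2*1 - 2*poisson_loglik(counts, mu)   # one parameter
aic_nb = 2*2 - 2*negbin_loglik(counts, mu, r)   # two parameters
```

Despite the penalty for its extra parameter, the negative binomial wins the AIC comparison here because it accommodates both the zeros and the heavy tail through a single dispersion parameter.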

  11. Zero inflation in ordinal data: Incorporating susceptibility to response through the use of a mixture model

    PubMed Central

    Kelley, Mary E.; Anderson, Stewart J.

    2008-01-01

    The aim of the paper is to produce a methodology that will allow users of ordinal scale data to more accurately model the distribution of ordinal outcomes in which some subjects are susceptible to exhibiting the response and some are not (i.e., the dependent variable exhibits zero inflation). This situation occurs with ordinal scales in which there is an anchor that represents the absence of the symptom or activity, such as “none”, “never” or “normal”, and is particularly common when measuring abnormal behavior, symptoms, and side effects. Due to the unusually large number of zeros, traditional statistical tests of association can be non-informative. We propose a mixture model for ordinal data with a built-in probability of non-response that allows modeling of the range (e.g., severity) of the scale, while simultaneously modeling the presence/absence of the symptom. Simulations show that the model is well behaved and a likelihood ratio test can be used to choose between the zero-inflated and the traditional proportional odds model. The model, however, does have minor restrictions on the nature of the covariates that must be satisfied in order for the model to be identifiable. The method is particularly relevant for public health research such as large epidemiological surveys where more careful documentation of the reasons for response may be difficult. PMID:18351711

  12. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    PubMed

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability, which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, negative binomial and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial model provided a better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions, resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.

  13. Disease Mapping of Zero-excessive Mesothelioma Data in Flanders

    PubMed Central

    Neyens, Thomas; Lawson, Andrew B.; Kirby, Russell S.; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S.; Faes, Christel

    2016-01-01

    Purpose: To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. Methods: The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero-inflation and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in literature. Results: The results indicate that hurdle models with a random effects term accounting for extra-variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra-variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra-variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Conclusions: Models taking into account zero-inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. PMID:27908590

  14. Disease mapping of zero-excessive mesothelioma data in Flanders.

    PubMed

    Neyens, Thomas; Lawson, Andrew B; Kirby, Russell S; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S; Faes, Christel

    2017-01-01

    To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero inflation, and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion, and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in literature. The results indicate that hurdle models with a random effects term accounting for extra variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Models taking into account zero inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Modeling number of claims and prediction of total claim amount

    NASA Astrophysics Data System (ADS)

    Acar, Aslıhan Şentürk; Karabey, Uǧur

    2017-07-01

    In this study we focus on the annual number of claims in a private health insurance data set from a local insurance company in Turkey. In addition to the Poisson and negative binomial models, zero-inflated Poisson and zero-inflated negative binomial models are used to model the number of claims in order to account for excess zeros. To investigate the impact of different distributional assumptions for the number of claims on the prediction of the total claim amount, the predictive performances of the candidate models are compared using root mean square error (RMSE) and mean absolute error (MAE) criteria.
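
    The RMSE and MAE criteria used to compare predictive performance are straightforward to compute. The actual/predicted values below are purely illustrative, not the insurance data:

```python
import math

def rmse(actual, predicted):
    """Root mean square error: penalizes large errors more heavily."""
    return math.sqrt(sum((a - p)**2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    """Mean absolute error: weights all errors linearly."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

actual    = [0, 0, 1, 3, 7]       # observed claim counts
predicted = [0.2, 0.4, 1.5, 2.0, 5.0]  # model predictions
```

Because RMSE squares the residuals, it is never smaller than MAE on the same data; a large gap between the two signals a few large prediction errors, which is common with zero-inflated outcomes.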

  16. Zero-truncated panel Poisson mixture models: Estimating the impact on tourism benefits in Fukushima Prefecture.

    PubMed

    Narukawa, Masaki; Nohara, Katsuhito

    2018-04-01

    This study proposes an estimation approach to panel count data, truncated at zero, in order to apply a contingent behavior travel cost method to revealed and stated preference data collected via a web-based survey. We develop zero-truncated panel Poisson mixture models by focusing on respondents who visited a site. In addition, we introduce an inverse Gaussian distribution to unobserved individual heterogeneity as an alternative to a popular gamma distribution, making it possible to capture effectively the long tail typically observed in trip data. We apply the proposed method to estimate the impact on tourism benefits in Fukushima Prefecture as a result of the Fukushima Nuclear Power Plant No. 1 accident. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Simulation on Poisson and negative binomial models of count road accident modeling

    NASA Astrophysics Data System (ADS)

    Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.

    2016-11-01

    Accident count data often exhibit overdispersion and may also contain excess zeros. A simulation study was conducted to create scenarios in which accidents occur at a T-junction, with the dependent variable generated from either a Poisson or a negative binomial distribution and sample sizes ranging from n=30 to n=500. The study objective was accomplished by fitting Poisson regression, negative binomial regression and hurdle negative binomial models to the simulated data. Model fits were then compared; the simulation results show that, for each sample size, not every model fits the data well even when the data were generated from that model's own distribution, especially at larger sample sizes. Furthermore, larger sample sizes produced more zero accident counts in the dataset.
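
    Simulating counts with excess zeros of the kind described is straightforward. The sketch below, with illustrative parameter values rather than the paper's simulation design, shows how structural zeros push the zero fraction well above what a plain Poisson would produce:

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's algorithm: count uniform draws until their product drops below exp(-lam)
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def sample_zip(lam, pi_zero, rng):
    # structural zero with probability pi_zero, otherwise an ordinary Poisson draw
    return 0 if rng.random() < pi_zero else sample_poisson(lam, rng)

rng = random.Random(42)
data = [sample_zip(3.0, 0.4, rng) for _ in range(5000)]
zero_frac = sum(1 for y in data if y == 0) / len(data)
# Expected zero fraction: 0.4 + 0.6*exp(-3), far above a Poisson's exp(-3)
```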

  18. Extending the Applicability of the Generalized Likelihood Function for Zero-Inflated Data Series

    NASA Astrophysics Data System (ADS)

    Oliveira, Debora Y.; Chaffe, Pedro L. B.; Sá, João H. M.

    2018-03-01

    Proper uncertainty estimation for data series with a high proportion of zero and near zero observations has been a challenge in hydrologic studies. This technical note proposes a modification to the Generalized Likelihood function that accounts for zero inflation of the error distribution (ZI-GL). We compare the performance of the proposed ZI-GL with the original Generalized Likelihood function using the entire data series (GL) and by simply suppressing zero observations (GLy>0). These approaches were applied to two interception modeling examples characterized by data series with a significant number of zeros. The ZI-GL produced better uncertainty ranges than the GL as measured by the precision, reliability and volumetric bias metrics. The comparison between ZI-GL and GLy>0 highlights the need for further improvement in the treatment of residuals from near zero simulations when a linear heteroscedastic error model is considered. Aside from the interception modeling examples illustrated herein, the proposed ZI-GL may be useful for other hydrologic studies, such as for the modeling of the runoff generation in hillslopes and ephemeral catchments.

  19. A tutorial on count regression and zero-altered count models for longitudinal substance use data

    PubMed Central

    Atkins, David C.; Baldwin, Scott A.; Zheng, Cheng; Gallop, Robert J.; Neighbors, Clayton

    2012-01-01

    Critical research questions in the study of addictive behaviors concern how these behaviors change over time, either as the result of intervention or in naturalistic settings. The combination of count outcomes that are often strongly skewed with many zeroes (e.g., days using, number of total drinks, number of drinking consequences) with repeated assessments (e.g., longitudinal follow-up after intervention or daily diary data) presents challenges for data analysis. The current article provides a tutorial on methods for analyzing longitudinal substance use data, focusing on Poisson, zero-inflated, and hurdle mixed models, which are types of hierarchical or multilevel models. Two example datasets are used throughout, focusing on drinking-related consequences following an intervention and daily drinking over the past 30 days, respectively. Both datasets as well as R, SAS, Mplus, Stata, and SPSS code showing how to fit the models are available on a supplemental website. PMID:22905895

  20. Matching the Statistical Model to the Research Question for Dental Caries Indices with Many Zero Counts.

    PubMed

    Preisser, John S; Long, D Leann; Stamm, John W

    2017-01-01

    Marginalized zero-inflated count regression models have recently been introduced for the statistical analysis of dental caries indices and other zero-inflated count data as alternatives to traditional zero-inflated and hurdle models. Unlike the standard approaches, the marginalized models directly estimate overall exposure or treatment effects by relating covariates to the marginal mean count. This article discusses model interpretation and model class choice according to the research question being addressed in caries research. Two data sets, one consisting of fictional dmft counts in 2 groups and the other on DMFS among schoolchildren from a randomized clinical trial comparing 3 toothpaste formulations to prevent incident dental caries, are analyzed with negative binomial hurdle, zero-inflated negative binomial, and marginalized zero-inflated negative binomial models. In the first example, estimates of treatment effects vary according to the type of incidence rate ratio (IRR) estimated by the model. Estimates of IRRs in the analysis of the randomized clinical trial were similar despite their distinctive interpretations. The choice of statistical model class should match the study's purpose, while accounting for the broad decline in children's caries experience, such that dmft and DMFS indices more frequently generate zero counts. Marginalized (marginal mean) models for zero-inflated count data should be considered for direct assessment of exposure effects on the marginal mean dental caries count in the presence of high frequencies of zero counts. © 2017 S. Karger AG, Basel.

  1. Matching the Statistical Model to the Research Question for Dental Caries Indices with Many Zero Counts

    PubMed Central

    Preisser, John S.; Long, D. Leann; Stamm, John W.

    2017-01-01

    Marginalized zero-inflated count regression models have recently been introduced for the statistical analysis of dental caries indices and other zero-inflated count data as alternatives to traditional zero-inflated and hurdle models. Unlike the standard approaches, the marginalized models directly estimate overall exposure or treatment effects by relating covariates to the marginal mean count. This article discusses model interpretation and model class choice according to the research question being addressed in caries research. Two datasets, one consisting of fictional dmft counts in two groups and the other on DMFS among schoolchildren from a randomized clinical trial (RCT) comparing three toothpaste formulations to prevent incident dental caries, are analysed with negative binomial hurdle (NBH), zero-inflated negative binomial (ZINB), and marginalized zero-inflated negative binomial (MZINB) models. In the first example, estimates of treatment effects vary according to the type of incidence rate ratio (IRR) estimated by the model. Estimates of IRRs in the analysis of the RCT were similar despite their distinctive interpretations. Choice of statistical model class should match the study’s purpose, while accounting for the broad decline in children’s caries experience, such that dmft and DMFS indices more frequently generate zero counts. Marginalized (marginal mean) models for zero-inflated count data should be considered for direct assessment of exposure effects on the marginal mean dental caries count in the presence of high frequencies of zero counts. PMID:28291962

  2. Bayesian spatiotemporal analysis of zero-inflated biological population density data by a delta-normal spatiotemporal additive model.

    PubMed

    Arcuti, Simona; Pollice, Alessio; Ribecco, Nunziata; D'Onghia, Gianfranco

    2016-03-01

    We evaluate the spatiotemporal changes in the density of a particular species of crustacean known as deep-water rose shrimp, Parapenaeus longirostris, based on biological sample data collected during trawl surveys carried out from 1995 to 2006 as part of the international project MEDITS (MEDiterranean International Trawl Surveys). As is the case for many biological variables, density data are continuous and characterized by unusually large amounts of zeros, accompanied by a skewed distribution of the remaining values. Here we analyze the normalized density data by a Bayesian delta-normal semiparametric additive model including the effects of covariates, using penalized regression with low-rank thin-plate splines for nonlinear spatial and temporal effects. Modeling the zero and nonzero values by two joint processes, as we propose in this work, allows for great flexibility and easy handling of complex likelihood functions, avoiding inaccurate statistical inferences due to misclassification of the high proportion of exact zeros in the model. Bayesian model estimation is obtained by Markov chain Monte Carlo simulations, suitably specifying the complex likelihood function of the zero-inflated density data. The study highlights relevant nonlinear spatial and temporal effects and the influence of the annual Mediterranean oscillations index and of the sea surface temperature on the distribution of the deep-water rose shrimp density. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
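
    The core idea of a delta (two-part) model, treating the zeros and the positive values as separate processes, can be shown with a minimal estimator of the overall mean. This is a bare-bones sketch with made-up density values, not the authors' Bayesian spatiotemporal additive model:

```python
def delta_model_mean(values):
    """Two-part (delta) estimate of the overall mean:
    E[Y] = P(Y > 0) * E[Y | Y > 0],
    with the positive part summarized by its sample mean."""
    positives = [v for v in values if v > 0]
    p_pos = len(positives) / len(values)      # occurrence part
    cond_mean = sum(positives) / len(positives)  # positive-density part
    return p_pos * cond_mean

# Zero-inflated density data: half the hauls catch nothing.
densities = [0.0, 0.0, 0.0, 1.2, 0.8, 3.5, 0.0, 2.1]
est = delta_model_mean(densities)
```

Splitting the estimate this way lets covariates act separately on the probability of a nonzero catch and on the density given a catch, which is what the joint two-process likelihood in the paper formalizes.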

  3. Mixed model and estimating equation approaches for zero inflation in clustered binary response data with application to a dating violence study

    PubMed Central

    Fulton, Kara A.; Liu, Danping; Haynie, Denise L.; Albert, Paul S.

    2016-01-01

    The NEXT Generation Health study investigates the dating violence of adolescents using a survey questionnaire. Each student is asked to affirm or deny multiple instances of violence in his/her dating relationship. There is, however, evidence suggesting that students not in a relationship responded to the survey, resulting in excessive zeros in the responses. This paper proposes likelihood-based and estimating equation approaches to analyze the zero-inflated clustered binary response data. We adopt a mixed model method to account for the cluster effect, and the model parameters are estimated using a maximum-likelihood (ML) approach that requires a Gauss–Hermite quadrature (GHQ) approximation for implementation. Since an incorrect assumption on the random effects distribution may bias the results, we construct generalized estimating equations (GEE) that do not require the correct specification of within-cluster correlation. In a series of simulation studies, we examine the performance of ML and GEE methods in terms of their bias, efficiency and robustness. We illustrate the importance of properly accounting for this zero inflation by reanalyzing the NEXT data where this issue has previously been ignored. PMID:26937263

  4. Poisson's ratio from polarization of acoustic zero-group velocity Lamb mode.

    PubMed

    Baggens, Oskar; Ryden, Nils

    2015-07-01

    Poisson's ratio of an isotropic and free elastic plate is estimated from the polarization of the first symmetric acoustic zero-group velocity Lamb mode. This polarization is interpreted as the ratio of the absolute amplitudes of the surface normal and surface in-plane components of the acoustic mode. Results from the evaluation of simulated datasets indicate that the presented relation, which links the polarization and Poisson's ratio, can be extended to incorporate plates with material damping. Furthermore, the proposed application of the polarization is demonstrated in a practical field case, where an increased accuracy of estimated nominal thickness is obtained.

  5. Some considerations for excess zeroes in substance abuse research.

    PubMed

    Bandyopadhyay, Dipankar; DeSantis, Stacia M; Korte, Jeffrey E; Brady, Kathleen T

    2011-09-01

    Count data collected in substance abuse research often come with an excess of "zeroes," which are typically handled using zero-inflated regression models. However, there is a need to consider the design aspects of those studies before using such a statistical model to ascertain the sources of zeroes. We sought to illustrate hurdle models as alternatives to zero-inflated models to validate a two-stage decision-making process in situations of "excess zeroes." We use data from a study of 45 cocaine-dependent subjects where the primary scientific question was to evaluate whether study participation influences drug-seeking behavior. The outcome, "the frequency (count) of cocaine use days per week," is bounded (ranging from 0 to 7). We fit and compare binomial, Poisson, negative binomial, and the hurdle versions of these models to study the effect of gender, age, time, and study participation on cocaine use. The hurdle binomial model provides the best fit. Gender and time are not predictive of use. Higher odds of use versus no use are associated with age; however, once use is experienced, the odds of further use decrease with increasing age. Participation was associated with higher odds of no cocaine use; once use occurred, participation reduced the odds of further use. Age and study participation are significantly predictive of cocaine-use behavior. The two-stage decision process as modeled by a hurdle binomial model (appropriate for bounded count data with excess zeroes) provides interesting insights into the study of covariate effects on count responses of substance use, when all enrolled subjects are believed to be "at-risk" of use.
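
    A hurdle binomial for a bounded count such as cocaine use days per week (0 to 7) separates the use/no-use decision from the amount of use given any use. The sketch below, with made-up parameter values, illustrates the distributional structure rather than the fitted model from the study:

```python
import math

def hurdle_binomial_pmf(y, n, p_any, p_day):
    """P(Y = y) for a bounded count y in {0, ..., n} under a hurdle binomial:
    a point mass at zero, then a zero-truncated binomial for positive counts.

    p_any: probability of crossing the hurdle (any use at all).
    p_day: per-day use probability in the truncated binomial part.
    """
    if y == 0:
        return 1.0 - p_any
    binom = math.comb(n, y) * p_day**y * (1 - p_day)**(n - y)
    # renormalize the binomial mass over y >= 1
    return p_any * binom / (1.0 - (1 - p_day)**n)

# The two stages fit together into a proper distribution over 0..7.
total = sum(hurdle_binomial_pmf(y, 7, 0.6, 0.35) for y in range(8))
```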

  6. A review and comparison of Bayesian and likelihood-based inferences in beta regression and zero-or-one-inflated beta regression.

    PubMed

    Liu, Fang; Eugenio, Evercita C

    2018-04-01

    Beta regression is an increasingly popular statistical technique in medical research for modeling of outcomes that assume values in (0, 1), such as proportions and patient reported outcomes. When outcomes take values in the intervals [0,1), (0,1], or [0,1], zero-or-one-inflated beta (zoib) regression can be used. We provide a thorough review of beta regression and zoib regression in the modeling, inferential, and computational aspects via the likelihood-based and Bayesian approaches. We demonstrate via simulation studies the statistical and practical importance of correctly modeling the inflation at zero/one rather than replacing such values ad hoc with values close to zero/one; the latter approach can lead to biased estimates and invalid inferences. We show via simulation studies that the likelihood-based approach is computationally faster in general than MCMC algorithms used in the Bayesian inferences, but runs the risk of non-convergence, large biases, and sensitivity to starting values in the optimization algorithm, especially with clustered/correlated data, data with sparse inflation at zero and one, and data that warrant regularization of the likelihood. The disadvantages of the regular likelihood-based approach make the Bayesian approach an attractive alternative in these cases. Software packages and tools for fitting beta and zoib regressions in both the likelihood-based and Bayesian frameworks are also reviewed.

  7. Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data

    PubMed Central

    Yang, Yan; Simpson, Douglas

    2010-01-01

    Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950

  8. A comparison of different statistical methods analyzing hypoglycemia data using bootstrap simulations.

    PubMed

    Jiang, Honghua; Ni, Xiao; Huster, William; Heilmann, Cory

    2015-01-01

    Hypoglycemia has long been recognized as a major barrier to achieving normoglycemia with intensive diabetic therapies, and it is a common safety concern for diabetes patients. It is therefore important to apply appropriate statistical methods when analyzing hypoglycemia data. Here, we carried out bootstrap simulations to investigate the performance of four commonly used statistical models (Poisson, negative binomial, analysis of covariance [ANCOVA], and rank ANCOVA) based on data from a diabetes clinical trial. Zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models were also evaluated. Simulation results showed that the Poisson model inflated the type I error, while the negative binomial model was overly conservative. After adjusting for dispersion, however, both the Poisson and negative binomial models yielded only slightly inflated type I errors, close to the nominal level, and reasonable power. The ANCOVA model provided reasonable control of the type I error, and the rank ANCOVA model combined the greatest power with reasonable type I error control. Inflated type I errors were observed with the ZIP and ZINB models.
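    The kind of type I error inflation reported for the unadjusted Poisson model can be reproduced in a small Monte Carlo sketch (not the authors' code; sample sizes and dispersion values below are illustrative). Overdispersed counts analyzed with a Poisson-based test reject a true null far more often than the nominal 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, mu, k = 50, 5.0, 1.0            # NB mean 5, size 1 -> variance 30
p = k / (k + mu)                   # scipy/numpy nbinom parameterisation
n_sim, alpha, rejections = 2000, 0.05, 0

for _ in range(n_sim):
    a = rng.negative_binomial(k, p, n)   # two groups, no true difference
    b = rng.negative_binomial(k, p, n)
    if a.sum() == 0 or b.sum() == 0:
        continue
    # Poisson Wald test on the log rate ratio (ignores overdispersion)
    log_rr = np.log(b.sum() / a.sum())
    se = np.sqrt(1 / a.sum() + 1 / b.sum())
    if abs(log_rr / se) > stats.norm.ppf(1 - alpha / 2):
        rejections += 1

type1 = rejections / n_sim    # far above the nominal 0.05
```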

  9. GMPR: A robust normalization method for zero-inflated count data with application to microbiome sequencing data.

    PubMed

    Chen, Li; Reeve, James; Zhang, Lujun; Huang, Shengbing; Wang, Xuefeng; Chen, Jun

    2018-01-01

    Normalization is the first critical step in microbiome sequencing data analysis, used to account for variable library sizes. Current RNA-Seq-based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address zero-inflation remain largely undeveloped. Here we propose the geometric mean of pairwise ratios (GMPR), a simple but effective normalization method for zero-inflated sequencing data such as microbiome data. Simulation studies and analyses of real datasets demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.
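    A simplified sketch of the idea (illustrative only; the published GMPR adds safeguards, e.g. for sample pairs with too few shared non-zero taxa): for each pair of samples take the median ratio over taxa that are non-zero in both, then set each sample's size factor to the geometric mean of its pairwise medians. Restricting to shared non-zero taxa is what makes the method robust to zero inflation.

```python
import numpy as np

def gmpr_size_factors(counts):
    """Simplified GMPR-style size factors for a (n_samples, n_taxa) array.
    Assumes every pair of samples shares at least one non-zero taxon."""
    n = counts.shape[0]
    log_r = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            shared = (counts[i] > 0) & (counts[j] > 0)
            # median of log-ratios over taxa present in both samples
            log_r[i, j] = np.median(np.log(counts[i, shared] / counts[j, shared]))
    # geometric mean of a sample's pairwise median ratios
    return np.exp(log_r.mean(axis=1))

# toy check: sample 1 is sample 0 sequenced at twice the depth
c = np.array([[10, 0, 4, 6],
              [20, 0, 8, 12],
              [10, 5, 4, 0]], dtype=float)
s = gmpr_size_factors(c)   # s[1]/s[0] recovers the depth ratio of 2
```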

  10. Analysis of overdispersed count data: application to the Human Papillomavirus Infection in Men (HIM) Study.

    PubMed

    Lee, J-H; Han, G; Fulp, W J; Giuliano, A R

    2012-06-01

    The Poisson model can be applied to the count of events occurring within a specific time period. The main feature of the Poisson model is the assumption that the mean and variance of the count data are equal. However, this equal mean-variance relationship rarely occurs in observational data. In most cases, the observed variance is larger than the assumed variance, which is called overdispersion. Further, when the observed data involve excessive zero counts, the problem of overdispersion results in underestimating the variance of the estimated parameter, and thus produces a misleading conclusion. We illustrated the use of four models for overdispersed count data that may be attributed to excessive zeros. These are Poisson, negative binomial, zero-inflated Poisson and zero-inflated negative binomial models. The example data in this article deal with the number of incidents involving human papillomavirus infection. The four models resulted in differing statistical inferences. The Poisson model, which is widely used in epidemiology research, underestimated the standard errors and overstated the significance of some covariates.
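    The equal mean-variance assumption and the zero undercount described above are easy to demonstrate by simulation (illustrative parameters, not the HIM data): counts with structural zeros are overdispersed, and a Poisson fit with rate equal to the sample mean badly under-predicts the zero fraction.

```python
import numpy as np

rng = np.random.default_rng(2)
n, pi, lam = 10000, 0.4, 3.0
# zero-inflated Poisson sample: structural zeros mixed with Poisson(3) counts
y = np.where(rng.random(n) < pi, 0, rng.poisson(lam, n))

y_mean, y_var = y.mean(), y.var()      # variance well above the mean

# Poisson MLE of the rate is the sample mean; compare predicted vs observed zeros
pred_zero = np.exp(-y_mean)
obs_zero = np.mean(y == 0)
```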

  11. Three-dimensionally bonded spongy graphene material with super compressive elasticity and near-zero Poisson's ratio.

    PubMed

    Wu, Yingpeng; Yi, Ningbo; Huang, Lu; Zhang, Tengfei; Fang, Shaoli; Chang, Huicong; Li, Na; Oh, Jiyoung; Lee, Jae Ah; Kozlov, Mikhail; Chipara, Alin C; Terrones, Humberto; Xiao, Peishuang; Long, Guankui; Huang, Yi; Zhang, Fan; Zhang, Long; Lepró, Xavier; Haines, Carter; Lima, Márcio Dias; Lopez, Nestor Perea; Rajukumar, Lakshmy P; Elias, Ana L; Feng, Simin; Kim, Seon Jeong; Narayanan, N T; Ajayan, Pulickel M; Terrones, Mauricio; Aliev, Ali; Chu, Pengfei; Zhang, Zhong; Baughman, Ray H; Chen, Yongsheng

    2015-01-20

    It is a challenge to fabricate graphene bulk materials with properties arising from the nature of individual graphene sheets, and which assemble into monolithic three-dimensional structures. Here we report the scalable self-assembly of randomly oriented graphene sheets into additive-free, essentially homogeneous graphene sponge materials that provide a combination of both cork-like and rubber-like properties. These graphene sponges, with densities similar to air, display Poisson's ratios in all directions that are near-zero and largely strain-independent during reversible compression to giant strains. At the same time, they function as enthalpic rubbers, which can recover up to 98% compression in air and 90% in liquids, and operate between -196 and 900 °C. Furthermore, these sponges provide reversible liquid absorption for hundreds of cycles and then discharge it within seconds, while still providing an effective near-zero Poisson's ratio.

  12. Three-part joint modeling methods for complex functional data mixed with zero-and-one-inflated proportions and zero-inflated continuous outcomes with skewness.

    PubMed

    Li, Haocheng; Staudenmayer, John; Wang, Tianying; Keadle, Sarah Kozey; Carroll, Raymond J

    2018-02-20

    We take a functional data approach to longitudinal studies with complex bivariate outcomes. This work is motivated by data from a physical activity study that measured 2 responses over time in 5-minute intervals. One response is the proportion of time active in each interval, a continuous proportion with excess zeros and ones. The other response, energy expenditure rate in the interval, is a continuous variable with excess zeros and skewness. This outcome is complex because there are 3 possible activity patterns in each interval (inactive, partially active, and completely active), and those patterns, which are observed, induce both nonrandom and random associations between the responses. More specifically, the inactive pattern requires a zero value in both the proportion for active behavior and the energy expenditure rate; a partially active pattern means that the proportion of activity is strictly between zero and one and that the energy expenditure rate is greater than zero and likely to be moderate; and the completely active pattern means that the proportion of activity is exactly one and the energy expenditure rate is greater than zero and likely to be higher. To address these challenges, we propose a 3-part functional data joint modeling approach. The first part is a continuation-ratio model for the 3 ordinal activity patterns. The second part models the proportions when they are in the interval (0,1). The last component specifies the skewed continuous energy expenditure rate with Box-Cox transformations when it is greater than zero. In this 3-part model, the regression structures are specified as smooth curves measured at various time points, with random effects that have a correlation structure. The smoothed random curves for each variable are summarized using a few important principal components, and the association of the 3 longitudinal components is modeled through the association of the principal component scores.

  13. Multivariate random-parameters zero-inflated negative binomial regression model: an application to estimate crash frequencies at intersections.

    PubMed

    Dong, Chunjiao; Clarke, David B; Yan, Xuedong; Khattak, Asad; Huang, Baoshan

    2014-09-01

    Crash data are collected through police reports and integrated with road inventory data for further analysis. Integrated police reports and inventory data yield correlated multivariate data for roadway entities (e.g., segments or intersections). Analysis of such data reveals important relationships that can help focus attention on high-risk situations and develop safety countermeasures. To understand the relationships between crash frequencies and associated variables while taking full advantage of the available data, multivariate random-parameters models are appropriate, since they can simultaneously consider the correlation among specific crash types and account for unobserved heterogeneity. However, a key issue with correlated multivariate data is that the number of crash-free samples increases as crash counts are divided into many categories. In this paper, we describe a multivariate random-parameters zero-inflated negative binomial (MRZINB) regression model for jointly modeling crash counts. The full Bayesian method is employed to estimate the model parameters. Crash frequencies at urban signalized intersections in Tennessee are analyzed. The paper investigates the performance of the fixed-parameters multivariate zero-inflated negative binomial (MZINB) and MRZINB regression models in establishing the relationship between crash frequencies, pavement conditions, traffic factors, and geometric design features of roadway intersections. Compared to the MZINB model, the MRZINB model identifies additional statistically significant factors and provides better goodness of fit in developing the relationships. The empirical results show that the MRZINB model possesses most of the desirable statistical properties in terms of its ability to accommodate unobserved heterogeneity and excess zero counts in correlated data. Notably, in the MRZINB model, the estimated parameters vary significantly across intersections for different crash types. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Persistently Auxetic Materials: Engineering the Poisson Ratio of 2D Self-Avoiding Membranes under Conditions of Non-Zero Anisotropic Strain.

    PubMed

    Ulissi, Zachary W; Govind Rajan, Ananth; Strano, Michael S

    2016-08-23

    Entropic surfaces represented by fluctuating two-dimensional (2D) membranes are predicted to have desirable mechanical properties when unstressed, including a negative Poisson's ratio ("auxetic" behavior). Herein, we present calculations of the strain-dependent Poisson ratio of self-avoiding 2D membranes demonstrating desirable auxetic properties over a range of mechanical strain. Finite-size membranes with unclamped boundary conditions have positive Poisson's ratio due to spontaneous non-zero mean curvature, which can be suppressed with an explicit bending rigidity in agreement with prior findings. Applying longitudinal strain along a singular axis to this system suppresses this mean curvature and the entropic out-of-plane fluctuations, resulting in a molecular-scale mechanism for realizing a negative Poisson's ratio above a critical strain, with values significantly more negative than the previously observed zero-strain limit for infinite sheets. We find that auxetic behavior persists over surprisingly high strains of more than 20% for the smallest surfaces, with desirable finite-size scaling producing surfaces with negative Poisson's ratio over a wide range of strains. These results promise the design of surfaces and composite materials with tunable Poisson's ratio by prestressing platelet inclusions or controlling the surface rigidity of a matrix of 2D materials.

  15. A quantile count model of water depth constraints on Cape Sable seaside sparrows

    USGS Publications Warehouse

    Cade, B.S.; Dong, Q.

    2008-01-01

    1. A quantile regression model for counts of breeding Cape Sable seaside sparrows Ammodramus maritimus mirabilis (L.) as a function of water depth and previous year abundance was developed based on extensive surveys, 1992-2005, in the Florida Everglades. The quantile count model extends linear quantile regression methods to discrete response variables, providing a flexible alternative to discrete parametric distributional models, e.g. Poisson, negative binomial and their zero-inflated counterparts. 2. Estimates from our multiplicative model demonstrated that negative effects of increasing water depth in breeding habitat on sparrow numbers were dependent on recent occupation history. Upper 10th percentiles of counts (one to three sparrows) decreased with increasing water depth from 0 to 30 cm when sites were not occupied in previous years. However, upper 40th percentiles of counts (one to six sparrows) decreased with increasing water depth for sites occupied in previous years. 3. Greatest decreases (-50% to -83%) in upper quantiles of sparrow counts occurred as water depths increased from 0 to 15 cm when previous year counts were 1, but a small proportion of sites (5-10%) held at least one sparrow even as water depths increased to 20 or 30 cm. 4. A zero-inflated Poisson regression model provided estimates of conditional means that also decreased with increasing water depth, but rates of change were lower and decreased with increasing previous year counts compared to the quantile count model. Quantiles computed for the zero-inflated Poisson model enhanced interpretation of this model but had greater lack-of-fit for water depths > 0 cm and previous year counts 1, conditions where the negative effects of water depth were readily apparent and better fitted by the quantile count model.

  16. Selecting a distributional assumption for modelling relative densities of benthic macroinvertebrates

    USGS Publications Warehouse

    Gray, B.R.

    2005-01-01

    The selection of a distributional assumption suitable for modelling macroinvertebrate density data is typically challenging. Macroinvertebrate data often exhibit substantially larger variances than expected under a standard count assumption, that of the Poisson distribution. Such overdispersion may derive from multiple sources, including heterogeneity of habitat (historically and spatially), differing life histories for organisms collected within a single collection in space and time, and autocorrelation. Taken to extreme, heterogeneity of habitat may be argued to explain the frequent large proportions of zero observations in macroinvertebrate data. Sampling locations may consist of habitats defined qualitatively as either suitable or unsuitable. The former category may yield random or stochastic zeroes and the latter structural zeroes. Heterogeneity among counts may be accommodated by treating the count mean itself as a random variable, while extra zeroes may be accommodated using zero-modified count assumptions, including zero-inflated and two-stage (or hurdle) approaches. These and linear assumptions (following log- and square root-transformations) were evaluated using 9 years of mayfly density data from a 52 km, ninth-order reach of the Upper Mississippi River (n = 959). The data exhibited substantial overdispersion relative to that expected under a Poisson assumption (i.e. variance:mean ratio = 23 ≫ 1), and 43% of the sampling locations yielded zero mayflies. Based on the Akaike Information Criterion (AIC), count models were improved most by treating the count mean as a random variable (via a Poisson-gamma distributional assumption) and secondarily by zero modification (i.e. improvements in AIC values = 9184 units and 47-48 units, respectively). Zeroes were underestimated by the Poisson, log-transform and square root-transform models, slightly by the standard negative binomial model, but not by the zero-modified models (61%, 24%, 32%, 7%, and 0%, respectively).
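    The model-selection logic above (compare AIC for a Poisson fit against a Poisson-gamma, i.e. negative binomial, fit) can be sketched as follows; the simulated data and parameter values are illustrative, not the mayfly data:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
# overdispersed "density" data: Poisson-gamma (negative binomial) counts
k_true, mu_true = 0.5, 6.0
y = rng.negative_binomial(k_true, k_true / (k_true + mu_true), 500)

# Poisson: MLE of the rate is the sample mean; one free parameter
ll_pois = stats.poisson.logpmf(y, y.mean()).sum()
aic_pois = 2 * 1 - 2 * ll_pois

# Negative binomial: maximise the likelihood over (log mu, log k)
def nb_nll(theta):
    mu, k = np.exp(theta)
    return -stats.nbinom.logpmf(y, k, k / (k + mu)).sum()

res = optimize.minimize(nb_nll, x0=[np.log(y.mean()), 0.0], method="Nelder-Mead")
aic_nb = 2 * 2 + 2 * res.fun     # two free parameters
```

    With strong overdispersion the Poisson-gamma model wins the AIC comparison by a wide margin, mirroring the large AIC improvement reported above.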

  17. Variable selection for zero-inflated and overdispersed data with application to health care demand in Germany

    PubMed Central

    Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun

    2017-01-01

    In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only has more accurate or at least comparable estimation, but also is more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. PMID:26059498

  18. Estimation of inflation parameters for Perturbed Power Law model using recent CMB measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukherjee, Suvodip; Das, Santanu; Souradeep, Tarun

    2015-01-01

    Cosmic Microwave Background (CMB) is an important probe for understanding the inflationary era of the Universe. We consider the Perturbed Power Law (PPL) model of inflation, which is a soft deviation from the Power Law (PL) inflationary model. This model captures the effect of higher order derivatives of the Hubble parameter during inflation, which in turn lead to a non-zero effective mass m_eff for the inflaton field. At leading order, the higher order derivatives of the Hubble parameter source a constant difference between the spectral indices for scalar and tensor perturbations, going beyond the PL model of inflation. The PPL model has two observable independent parameters, namely the spectral index for tensor perturbations ν_t and the change in spectral index for scalar perturbations ν_st, to explain the observed features in the scalar and tensor power spectra of perturbations. From the recent measurements of CMB power spectra by WMAP, Planck and BICEP-2 for temperature and polarization, we estimate the feasibility of the PPL model alongside the standard ΛCDM model. Although BICEP-2 claimed a detection of r=0.2, estimates of dust contamination provided by Planck have left open the possibility that only an upper bound on r will be expected in a joint analysis. As a result, we consider different upper bounds on the value of r and show that the PPL model can explain a lower value of the tensor-to-scalar ratio (r<0.1 or r<0.01) for a scalar spectral index of n_s=0.96 by having a non-zero value of the effective mass of the inflaton field, m_eff²/H². The analysis with the WP + Planck likelihood shows a non-zero detection of m_eff²/H² at 5.7 σ and 8.1 σ for r<0.1 and r<0.01, respectively, whereas with the BICEP-2 likelihood m_eff²/H² = −0.0237 ± 0.0135, which is consistent with zero.

  19. Estimation of inflation parameters for Perturbed Power Law model using recent CMB measurements

    NASA Astrophysics Data System (ADS)

    Mukherjee, Suvodip; Das, Santanu; Joy, Minu; Souradeep, Tarun

    2015-01-01

    Cosmic Microwave Background (CMB) is an important probe for understanding the inflationary era of the Universe. We consider the Perturbed Power Law (PPL) model of inflation, which is a soft deviation from the Power Law (PL) inflationary model. This model captures the effect of higher order derivatives of the Hubble parameter during inflation, which in turn lead to a non-zero effective mass m_eff for the inflaton field. At leading order, the higher order derivatives of the Hubble parameter source a constant difference between the spectral indices for scalar and tensor perturbations, going beyond the PL model of inflation. The PPL model has two observable independent parameters, namely the spectral index for tensor perturbations ν_t and the change in spectral index for scalar perturbations ν_st, to explain the observed features in the scalar and tensor power spectra of perturbations. From the recent measurements of CMB power spectra by WMAP, Planck and BICEP-2 for temperature and polarization, we estimate the feasibility of the PPL model alongside the standard ΛCDM model. Although BICEP-2 claimed a detection of r=0.2, estimates of dust contamination provided by Planck have left open the possibility that only an upper bound on r will be expected in a joint analysis. As a result, we consider different upper bounds on the value of r and show that the PPL model can explain a lower value of the tensor-to-scalar ratio (r<0.1 or r<0.01) for a scalar spectral index of n_s=0.96 by having a non-zero value of the effective mass of the inflaton field, m_eff²/H². The analysis with the WP + Planck likelihood shows a non-zero detection of m_eff²/H² at 5.7 σ and 8.1 σ for r<0.1 and r<0.01, respectively, whereas with the BICEP-2 likelihood m_eff²/H² = -0.0237 ± 0.0135, which is consistent with zero.

  20. On the Singularity of the Vlasov-Poisson System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Jian; Qin, Hong

    2013-04-26

    The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.

  1. On the singularity of the Vlasov-Poisson system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Jian; Qin, Hong; Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08550

    2013-09-15

    The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.

  2. Variable selection for zero-inflated and overdispersed data with application to health care demand in Germany.

    PubMed

    Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun

    2015-09-01

    In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only has more accurate or at least comparable estimation, but also is more robust than the traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Evaluation of the Use of Zero-Augmented Regression Techniques to Model Incidence of Campylobacter Infections in FoodNet.

    PubMed

    Tremblay, Marlène; Crim, Stacy M; Cole, Dana J; Hoekstra, Robert M; Henao, Olga L; Döpfer, Dörte

    2017-10-01

    The Foodborne Diseases Active Surveillance Network (FoodNet) is currently using a negative binomial (NB) regression model to estimate temporal changes in the incidence of Campylobacter infection. FoodNet active surveillance in 483 counties collected data on 40,212 Campylobacter cases between years 2004 and 2011. We explored models that disaggregated these data to allow us to account for demographic, geographic, and seasonal factors when examining changes in incidence of Campylobacter infection. We hypothesized that modeling structural zeros and including demographic variables would increase the fit of FoodNet's Campylobacter incidence regression models. Five different models were compared: NB without demographic covariates, NB with demographic covariates, hurdle NB with covariates in the count component only, hurdle NB with covariates in both zero and count components, and zero-inflated NB with covariates in the count component only. Of the models evaluated, the non-zero-augmented NB model with demographic variables provided the best fit. Results suggest that even though zero inflation was not present at this level of aggregation, individualizing the level of aggregation and using different model structures and predictors per site might be required to correctly distinguish between structural and observational zeros and account for risk factors that vary geographically.
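    The structural-vs-observational zero distinction behind the model comparison above can be made concrete with a small numeric sketch (illustrative parameters, not FoodNet estimates): a hurdle model puts all zero mass in its zero component, whereas a zero-inflated model adds sampling zeros from the count component on top of the structural ones.

```python
from scipy import stats

def zinb_pzero(pi, mu, k):
    """P(Y=0) under a zero-inflated NB: structural zeros (pi) plus
    sampling zeros from the NB(mean mu, size k) count component."""
    return pi + (1 - pi) * stats.nbinom.pmf(0, k, k / (k + mu))

def hurdle_pzero(pi):
    """P(Y=0) under a hurdle NB: all zeros come from the hurdle component,
    since the count component is zero-truncated."""
    return pi

pi, mu, k = 0.2, 3.0, 1.0
# with the same pi, the ZINB places strictly more mass at zero
```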

  4. An application of a zero-inflated lifetime distribution with multiple and incomplete data sources

    DOE PAGES

    Hamada, M. S.; Margevicius, K. J.

    2016-02-11

    In this study, we analyze data sampled from a population of parts in which an associated anomaly can occur at assembly or after assembly. Using a zero-inflated lifetime distribution to fit left-censored and right-censored data, as well as data from a supplementary sample, we make predictions about the proportion of the population with anomalies today and in the future. Goodness-of-fit is also addressed.

  5. A smooth exit from eternal inflation?

    NASA Astrophysics Data System (ADS)

    Hawking, S. W.; Hertog, Thomas

    2018-04-01

    The usual theory of inflation breaks down in eternal inflation. We derive a dual description of eternal inflation in terms of a deformed Euclidean CFT located at the threshold of eternal inflation. The partition function gives the amplitude of different geometries of the threshold surface in the no-boundary state. Its local and global behavior in dual toy models shows that the amplitude is low for surfaces which are not nearly conformal to the round three-sphere and essentially zero for surfaces with negative curvature. Based on this we conjecture that the exit from eternal inflation does not produce an infinite fractal-like multiverse, but is finite and reasonably smooth.

  6. Nonlocal Poisson-Fermi model for ionic solvent.

    PubMed

    Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob

    2016-07-01

    We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution.

  7. Relaxed Poisson cure rate models.

    PubMed

    Rodrigues, Josemar; Cordeiro, Gauss M; Cancho, Vicente G; Balakrishnan, N

    2016-03-01

    The purpose of this article is to make the standard promotion cure rate model (Yakovlev and Tsodikov, ) more flexible by assuming that the number of lesions or altered cells after a treatment follows a fractional Poisson distribution (Laskin, ). It is proved that the well-known Mittag-Leffler relaxation function (Berberan-Santos, ) is a simple way to obtain a new cure rate model that is a compromise between the promotion and geometric cure rate models allowing for superdispersion. So, the relaxed cure rate model developed here can be considered as a natural and less restrictive extension of the popular Poisson cure rate model at the cost of an additional parameter, but a competitor to negative-binomial cure rate models (Rodrigues et al., ). Some mathematical properties of a proper relaxed Poisson density are explored. A simulation study and an illustration of the proposed cure rate model from the Bayesian point of view are finally presented. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Quasiopen inflation

    NASA Astrophysics Data System (ADS)

    García-Bellido, Juan; Garriga, Jaume; Montes, Xavier

    1998-04-01

    We show that a large class of two-field models of single-bubble open inflation does not lead to infinite open universes, as was previously thought, but to an ensemble of very large but finite inflating "islands." The reason is that the quantum tunneling responsible for the nucleation of the bubble does not occur simultaneously along both field directions, and equal-time hypersurfaces in the open universe are not synchronized with equal-density or fixed-field hypersurfaces. The most probable tunneling trajectory corresponds to a zero value of the inflaton field; large values, necessary for the second period of inflation inside the bubble, only arise as localized fluctuations. The interior of each nucleated bubble will contain an infinite number of such inflating regions of comoving size of order γ⁻¹, where γ is the supercurvature eigenvalue, which depends on the parameters of the model. Each one of these islands will be a quasi-open universe. Since the volume of the hyperboloid is infinite, inflating islands with all possible values of the field at their center will be realized inside a single bubble. We may happen to live in one of those patches of comoving size d ≲ γ⁻¹, where the universe appears to be open. In particular, we consider the "supernatural" model proposed by Linde and Mezhlumian, in which an approximate U(1) symmetry is broken by a tunneling field in a first-order phase transition, and slow-roll inflation inside the nucleated bubble is driven by the pseudo Goldstone field. We find that the excitations of the pseudo Goldstone field produced by the nucleation and subsequent expansion of the bubble place severe constraints on this model. We also discuss the coupled and uncoupled two-field models.

  9. The effect of a major cigarette price change on smoking behavior in california: a zero-inflated negative binomial model.

    PubMed

    Sheu, Mei-Ling; Hu, Teh-Wei; Keeler, Theodore E; Ong, Michael; Sung, Hai-Yen

    2004-08-01

    The objective of this paper is to determine the price sensitivity of smokers in their consumption of cigarettes, using evidence from a major increase in California cigarette prices due to Proposition 10 and the Tobacco Settlement. The study sample consists of individual survey data from Behavioral Risk Factor Survey (BRFS) and price data from the Bureau of Labor Statistics between 1996 and 1999. A zero-inflated negative binomial (ZINB) regression model was applied for the statistical analysis. The statistical model showed that price did not have an effect on reducing the estimated prevalence of smoking. However, it indicated that among smokers the price elasticity was at the level of -0.46 and statistically significant. Since smoking prevalence is significantly lower than it was a decade ago, price increases are becoming less effective as an inducement for hard-core smokers to quit, although they may respond by decreasing consumption. For those who only smoke occasionally (many of them being young adults) price increases alone may not be an effective inducement to quit smoking. Additional underlying behavioral factors need to be identified so that more effective anti-smoking strategies can be developed.

  10. Regular exercise and related factors in patients with Parkinson's disease: Applying zero-inflated negative binomial modeling of exercise count data.

    PubMed

    Lee, JuHee; Park, Chang Gi; Choi, Moonki

    2016-05-01

    This study was conducted to identify risk factors that influence regular exercise among patients with Parkinson's disease in Korea. Parkinson's disease is prevalent in the elderly, and may lead to a sedentary lifestyle. Exercise can enhance physical and psychological health. However, patients with Parkinson's disease are less likely to exercise than are other populations due to physical disability. A secondary data analysis and cross-sectional descriptive study were conducted. A convenience sample of 106 patients with Parkinson's disease was recruited at an outpatient neurology clinic of a tertiary hospital in Korea. Demographic characteristics, disease-related characteristics (including disease duration and motor symptoms), self-efficacy for exercise, balance, and exercise level were investigated. Negative binomial regression and zero-inflated negative binomial regression for exercise count data were utilized to determine factors involved in exercise. The mean age of participants was 65.85 ± 8.77 years, and the mean duration of Parkinson's disease was 7.23 ± 6.02 years. Most participants indicated that they engaged in regular exercise (80.19%). Approximately half of participants exercised at least 5 days per week for 30 min, as recommended (51.9%). Motor symptoms were a significant predictor of exercise in the count model, and self-efficacy for exercise was a significant predictor of exercise in the zero model. Severity of motor symptoms was related to frequency of exercise. Self-efficacy contributed to the probability of exercise. Symptom management and improvement of self-efficacy for exercise are important to encourage regular exercise in patients with Parkinson's disease. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Accident prediction model for public highway-rail grade crossings.

    PubMed

    Lu, Pan; Tolliver, Denver

    2016-05-01

    Considerable research has focused on roadway accident frequency analysis, but relatively little research has examined safety evaluation at highway-rail grade crossings. Highway-rail grade crossings are critical spatial locations of utmost importance for transportation safety because traffic crashes at highway-rail grade crossings are often catastrophic with serious consequences. The Poisson regression model has been employed to analyze vehicle accident frequency as a good starting point for many years. The most commonly applied variations of the Poisson model include the negative binomial and the zero-inflated Poisson. These models are used to deal with common crash data issues such as over-dispersion (sample variance is larger than the sample mean) and preponderance of zeros (low sample mean and small sample size). On rare occasions traffic crash data have been shown to be under-dispersed (sample variance is smaller than the sample mean) and traditional distributions such as Poisson or negative binomial cannot handle under-dispersion well. The objective of this study is to investigate and compare various alternate highway-rail grade crossing accident frequency models that can handle the under-dispersion issue. The contributions of the paper are two-fold: (1) the application of probability models to deal with under-dispersion issues and (2) insights regarding vehicle crashes at public highway-rail grade crossings. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Inflation model selection meets dark radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tram, Thomas; Vallance, Robert; Vennin, Vincent, E-mail: thomas.tram@port.ac.uk, E-mail: robert.vallance@student.manchester.ac.uk, E-mail: vincent.vennin@port.ac.uk

    2017-01-01

    We investigate how inflation model selection is affected by the presence of additional free-streaming relativistic degrees of freedom, i.e. dark radiation. We perform a full Bayesian analysis of both inflation parameters and cosmological parameters taking reheating into account self-consistently. We compute the Bayesian evidence for a few representative inflation scenarios in both the standard ΛCDM model and an extension including dark radiation parametrised by its effective number of relativistic species N_eff. Using a minimal dataset (Planck low-ℓ polarisation, temperature power spectrum and lensing reconstruction), we find that the observational status of most inflationary models is unchanged. The exceptions are potentials such as power-law inflation that predict large values for the scalar spectral index that can only be realised when N_eff is allowed to vary. Adding baryon acoustic oscillations data and the B-mode data from BICEP2/Keck makes power-law inflation disfavoured, while adding local measurements of the Hubble constant H_0 makes power-law inflation slightly favoured compared to the best single-field plateau potentials. This illustrates how the dark radiation solution to the H_0 tension would have deep consequences for inflation model selection.

  13. Zero-truncated negative binomial - Erlang distribution

    NASA Astrophysics Data System (ADS)

    Bodhisuwan, Winai; Pudprommarat, Chookait; Bodhisuwan, Rujira; Saothayanun, Luckhana

    2017-11-01

    The zero-truncated negative binomial-Erlang distribution is introduced. It is developed from the negative binomial-Erlang distribution. In this work, the probability mass function is derived and some properties are included. The parameters of the zero-truncated negative binomial-Erlang distribution are estimated by using maximum likelihood estimation. Finally, the proposed distribution is applied to real count data on methamphetamine in Bangkok, Thailand. Based on the results, the zero-truncated negative binomial-Erlang distribution provided a better fit than the zero-truncated Poisson, zero-truncated negative binomial, zero-truncated generalized negative-binomial and zero-truncated Poisson-Lindley distributions for these data.
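    The negative binomial-Erlang pmf itself is not reproduced here, but the mechanics of zero truncation and maximum likelihood fitting can be sketched with the simpler zero-truncated Poisson; the counts below are hypothetical:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import poisson

    def ztp_pmf(y, lam):
        # Zero-truncated Poisson: the Poisson pmf renormalized over y >= 1.
        return poisson.pmf(y, lam) / (1.0 - np.exp(-lam))

    def ztp_mle(sample):
        # Maximize the zero-truncated log-likelihood over the rate lam.
        nll = lambda lam: -np.sum(np.log(ztp_pmf(np.asarray(sample), lam)))
        return minimize_scalar(nll, bounds=(1e-6, 50.0), method="bounded").x

    counts = [1, 1, 2, 1, 3, 2, 1, 4, 2, 1]   # hypothetical observed counts (all >= 1)
    lam_hat = ztp_mle(counts)
    ```

    The MLE satisfies lam_hat / (1 - exp(-lam_hat)) = sample mean, which is the zero-truncated analogue of the Poisson moment equation.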

  14. Modeling zero-modified count and semicontinuous data in health services research Part 1: background and overview.

    PubMed

    Neelon, Brian; O'Malley, A James; Smith, Valerie A

    2016-11-30

    Health services data often contain a high proportion of zeros. In studies examining patient hospitalization rates, for instance, many patients will have no hospitalizations, resulting in a count of zero. When the number of zeros is greater or less than expected under a standard count model, the data are said to be zero modified relative to the standard model. A similar phenomenon arises with semicontinuous data, which are characterized by a spike at zero followed by a continuous distribution with positive support. When analyzing zero-modified count and semicontinuous data, flexible mixture distributions are often needed to accommodate both the excess zeros and the typically skewed distribution of nonzero values. Various models have been introduced over the past three decades to accommodate such data, including hurdle models, zero-inflated models, and two-part semicontinuous models. This tutorial describes recent modeling strategies for zero-modified count and semicontinuous data and highlights their role in health services research studies. Part 1 of the tutorial, presented here, provides a general overview of the topic. Part 2, appearing as a companion piece in this issue of Statistics in Medicine, discusses three case studies illustrating applications of the methods to health services research. Copyright © 2016 John Wiley & Sons, Ltd.
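    The structural difference between the zero-inflated and hurdle models named in this tutorial can be sketched with Poisson count parts; this is an illustrative sketch, not the tutorial's own code:

    ```python
    import numpy as np
    from scipy.stats import poisson

    def zip_pmf(y, pi, lam):
        # Zero-inflated Poisson: structural zeros (prob pi) mixed with Poisson zeros.
        base = (1.0 - pi) * poisson.pmf(y, lam)
        return np.where(np.asarray(y) == 0, pi + base, base)

    def hurdle_pmf(y, p0, lam):
        # Poisson hurdle: P(Y=0) = p0; positives follow a zero-truncated Poisson.
        y = np.asarray(y)
        pos = (1.0 - p0) * poisson.pmf(y, lam) / (1.0 - np.exp(-lam))
        return np.where(y == 0, p0, pos)
    ```

    Both pmfs sum to one, but the ZIP can only push P(Y=0) above the Poisson baseline, while the hurdle's P(Y=0) can sit above or below it, which is why hurdle models also accommodate zero deflation.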

  15. Inflation in the standard cosmological model

    NASA Astrophysics Data System (ADS)

    Uzan, Jean-Philippe

    2015-12-01

    The inflationary paradigm is now part of the standard cosmological model as a description of its primordial phase. While its original motivation was to solve the standard problems of the hot big bang model, it was soon understood that it offers a natural theory for the origin of the large-scale structure of the universe. Most models rely on a slow-rolling scalar field and enjoy very generic predictions. Besides, all the matter of the universe is produced by the decay of the inflaton field at the end of inflation during a phase of reheating. These predictions can be (and are) tested from their imprint of the large-scale structure and in particular the cosmic microwave background. Inflation stands as a window in physics where both general relativity and quantum field theory are at work and which can be observationally studied. It connects cosmology with high-energy physics. Today most models are constructed within extensions of the standard model, such as supersymmetry or string theory. Inflation also disrupts our vision of the universe, in particular with the ideas of chaotic inflation and eternal inflation that tend to promote the image of a very inhomogeneous universe with fractal structure on a large scale. This idea is also at the heart of further speculations, such as the multiverse. This introduction summarizes the connections between inflation and the hot big bang model and details the basics of its dynamics and predictions.

  16. Evaluating the double Poisson generalized linear model.

    PubMed

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

    2013-10-01

    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although the COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similarly to the negative binomial (NB) GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
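    Efron's double Poisson density is known only up to a normalizing constant; the paper proposes its own approximation, which is not reproduced here. A brute-force truncated-sum normalization in log space illustrates the issue (at theta = 1 the DP reduces exactly to the Poisson):

    ```python
    import numpy as np
    from scipy.special import gammaln

    def dp_log_unnorm(y, mu, theta):
        # log of Efron's (1986) double Poisson kernel, before normalization
        y = np.asarray(y, dtype=float)
        with np.errstate(divide="ignore", invalid="ignore"):
            t = np.where(y > 0,
                         -y + y * np.log(y) - gammaln(y + 1)
                         + theta * y * (1.0 + np.log(mu) - np.log(y)),
                         0.0)
        return 0.5 * np.log(theta) - theta * mu + t

    def dp_pmf(y, mu, theta, ymax=500):
        # Approximate the normalizing constant by a truncated sum over 0..ymax.
        grid = np.arange(ymax + 1)
        logz = np.logaddexp.reduce(dp_log_unnorm(grid, mu, theta))
        return np.exp(dp_log_unnorm(y, mu, theta) - logz)
    ```

    This direct summation is accurate but slow inside an iterative GLM fit, which is exactly why a cheap closed-form approximation of the constant matters in practice.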

  17. Homogeneous cosmological models and new inflation

    NASA Technical Reports Server (NTRS)

    Turner, Michael S.; Widrow, Lawrence M.

    1986-01-01

    The promise of the inflationary-universe scenario is to free the present state of the universe from extreme dependence upon initial data. Paradoxically, inflation is usually analyzed in the context of the homogeneous and isotropic Robertson-Walker cosmological models. It is shown that all but a small subset of the homogeneous models undergo inflation. Any initial anisotropy is so strongly damped that if sufficient inflation occurs to solve the flatness and horizon problems, the universe today would still be very isotropic.

  18. Toward inflation models compatible with the no-boundary proposal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, Dong-il; Yeom, Dong-han, E-mail: dongil.j.hwang@gmail.com, E-mail: innocent.yeom@gmail.com

    2014-06-01

    In this paper, we investigate various inflation models in the context of the no-boundary proposal. We propose that a good inflation model should satisfy three conditions: observational constraints, plausible initial conditions, and naturalness of the model. For various inflation models, we assign the probability to each initial condition using the no-boundary proposal and define a quantitative standard, typicality, to check whether the model satisfies the observational constraints with probable initial conditions. There are three possible ways to satisfy the typicality criterion: there was pre-inflation near the high energy scale, the potential is finely tuned or the inflationary field space is unbounded, or there are a sufficient number of fields that contribute to inflation. The no-boundary proposal rejects some naive inflation models, explains some traditional doubts on inflation, and possibly, can have observational consequences.

  19. Modified Regression Correlation Coefficient for Poisson Regression Model

    NASA Astrophysics Data System (ADS)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study focuses on measures of the predictive power of the Generalized Linear Model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model. The dependent variable is distributed as Poisson. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and in the presence of multicollinearity among the independent variables. The result shows that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient based on Bias and the Root Mean Square Error (RMSE).
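    The regression correlation coefficient is corr(Y, E[Y|X]). A minimal simulation of this definition for a Poisson regression with a log link, where the true mean stands in for a fitted one and the coefficients (0.5, 0.8) are arbitrary illustrative values:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=n)
    mu = np.exp(0.5 + 0.8 * x)          # Poisson regression mean under a log link
    y = rng.poisson(mu)                 # Poisson-distributed response

    # Regression correlation coefficient: corr(Y, E[Y|X]).
    r = np.corrcoef(y, mu)[0, 1]
    ```

    In an applied setting mu would be replaced by the fitted values exp(X @ beta_hat) from the estimated model.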

  20. General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.

    PubMed

    Marmarelis, V Z; Berger, T W

    2005-07-01

    This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data-records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their P-W counterparts that depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.

  1. Modeling zero-modified count and semicontinuous data in health services research part 2: case studies.

    PubMed

    Neelon, Brian; O'Malley, A James; Smith, Valerie A

    2016-11-30

    This article is the second installment of a two-part tutorial on the analysis of zero-modified count and semicontinuous data. Part 1, which appears as a companion piece in this issue of Statistics in Medicine, provides a general background and overview of the topic, with particular emphasis on applications to health services research. Here, we present three case studies highlighting various approaches for the analysis of zero-modified data. The first case study describes methods for analyzing zero-inflated longitudinal count data. Case study 2 considers the use of hurdle models for the analysis of spatiotemporal count data. The third case study discusses an application of marginalized two-part models to the analysis of semicontinuous health expenditure data. Copyright © 2016 John Wiley & Sons, Ltd.

  2. Fractional poisson--a simple dose-response model for human norovirus.

    PubMed

    Messner, Michael J; Berger, Philip; Nappier, Sharon P

    2014-10-01

    This study utilizes old and new Norovirus (NoV) human challenge data to model the dose-response relationship for human NoV infection. The combined data set is used to update estimates from a previously published beta-Poisson dose-response model that includes parameters for virus aggregation and for a beta-distribution that describes variable susceptibility among hosts. The quality of the beta-Poisson model is examined and a simpler model is proposed. The new model (fractional Poisson) characterizes hosts as either perfectly susceptible or perfectly immune, requiring a single parameter (the fraction of perfectly susceptible hosts) in place of the two-parameter beta-distribution. A second parameter is included to account for virus aggregation in the same fashion as it is added to the beta-Poisson model. Infection probability is simply the product of the probability of nonzero exposure (at least one virus or aggregate is ingested) and the fraction of susceptible hosts. The model is computationally simple and appears to be well suited to the data from the NoV human challenge studies. The model's deviance is similar to that of the beta-Poisson, but with one parameter, rather than two. As a result, the Akaike information criterion favors the fractional Poisson over the beta-Poisson model. At low, environmentally relevant exposure levels (<100), estimation error is small for the fractional Poisson model; however, caution is advised because no subjects were challenged at such a low dose. New low-dose data would be of great value to further clarify the NoV dose-response relationship and to support improved risk assessment for environmentally relevant exposures. © 2014 Society for Risk Analysis Published 2014. This article is a U.S. Government work and is in the public domain for the U.S.A.
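    The fractional Poisson structure described above, P(infection) = P(nonzero exposure) × fraction susceptible, can be sketched for fully disaggregated virus. The susceptible fraction below is an arbitrary illustrative value, not the paper's estimate:

    ```python
    import numpy as np

    def fractional_poisson(dose, p):
        """P(infection) under the fractional Poisson structure: the probability
        of ingesting at least one virus (Poisson-distributed intake with mean
        `dose`) times the fraction `p` of perfectly susceptible hosts."""
        return p * (1.0 - np.exp(-np.asarray(dose, dtype=float)))

    # At high dose the response saturates at p; at low dose it is ~ p * dose.
    high = fractional_poisson(1000.0, 0.72)
    low = fractional_poisson(1e-4, 0.72)
    ```

    The paper's full model adds a second parameter for virus aggregation, which lowers the probability of nonzero exposure at a given mean dose.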

  3. Modeling laser velocimeter signals as triply stochastic Poisson processes

    NASA Technical Reports Server (NTRS)

    Mayo, W. T., Jr.

    1976-01-01

    Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.

  4. Comments on SUSY Inflation Models on the Brane

    NASA Astrophysics Data System (ADS)

    Lee, Lu-Yun; Cheung, Kingman; Lin, Chia-Min

    In this paper we consider a class of inflation models on the brane where the dominant part of the inflaton scalar potential does not depend on the inflaton field value during inflation. In particular, we consider supernatural inflation, its hilltop version, A-term inflation, and supersymmetric (SUSY) D- and F-term hybrid inflation on the brane. We show that the parameter space can be broadened, the inflation scale generally can be lowered, and it is still possible to have the spectral index ns = 0.96.

  5. Genericness of inflation in isotropic loop quantum cosmology.

    PubMed

    Date, Ghanashyam; Hossain, Golam Mortuza

    2005-01-14

    Nonperturbative corrections from loop quantum cosmology (LQC) to the scalar matter sector are already known to imply inflation. We prove that the LQC modified scalar field generates exponential inflation in the small scale factor regime, for all positive definite potentials, independent of initial conditions and independent of ambiguity parameters. For positive semidefinite potentials it is always possible to choose, without fine-tuning, a value of one of the ambiguity parameters such that exponential inflation results, provided zeros of the potential are approached at most as a power law in the scale factor. In conjunction with the generic occurrence of bounce at small volumes, particle horizon is absent, thus eliminating the horizon problem of the standard big bang model.

  6. Negative Binomial Process Count and Mixture Modeling.

    PubMed

    Zhou, Mingyuan; Carin, Lawrence

    2015-02-01

    The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
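    The gamma-Poisson marginalization underlying the NB process can be checked numerically: mixing a Poisson rate over a gamma distribution yields a negative binomial marginal. A sketch with arbitrary parameters, using scipy's NB parameterization:

    ```python
    import numpy as np
    from scipy import integrate, stats

    r, theta = 2.5, 1.6          # gamma shape and scale for the Poisson rate
    p = 1.0 / (1.0 + theta)      # implied negative binomial success probability

    def marginal_pmf(k):
        # Integrate Poisson(k | lam) against the Gamma(r, theta) mixing density.
        f = lambda lam: stats.poisson.pmf(k, lam) * stats.gamma.pdf(lam, a=r, scale=theta)
        val, _ = integrate.quad(f, 0, np.inf)
        return val

    # marginal_pmf(k) matches stats.nbinom.pmf(k, r, p) for each k.
    ```

    This is the finite-dimensional analogue of the construction in the paper, where a gamma process plays the role of the gamma prior on the Poisson rate measure.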

  7. Inflation in the mixed Higgs-R2 model

    NASA Astrophysics Data System (ADS)

    He, Minxi; Starobinsky, Alexei A.; Yokoyama, Jun'ichi

    2018-05-01

    We analyze a two-field inflationary model consisting of the Ricci scalar squared (R2) term and the standard Higgs field non-minimally coupled to gravity in addition to the Einstein R term. Detailed analysis of the power spectrum of this model with mass hierarchy is presented, and we find that one can describe this model as an effective single-field model in the slow-roll regime with a modified sound speed. The scalar spectral index predicted by this model coincides with those given by the R2 inflation and the Higgs inflation implying that there is a close relation between this model and the R2 inflation already in the original (Jordan) frame. For a typical value of the self-coupling of the standard Higgs field at the high energy scale of inflation, the role of the Higgs field in parameter space involved is to modify the scalaron mass, so that the original mass parameter in the R2 inflation can deviate from its standard value when non-minimal coupling between the Ricci scalar and the Higgs field is large enough.

  8. Identifiability in N-mixture models: a large-scale screening test with bird data.

    PubMed

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
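    The binomial N-mixture likelihood marginalizes the latent abundance N at each site over repeated counts. A minimal sketch, illustrative only, with the latent sum truncated at K:

    ```python
    import numpy as np
    from scipy import stats

    def nmix_site_lik(counts, lam, p, K=200):
        """Marginal likelihood of repeated counts at one site under a
        Poisson binomial N-mixture model: sum over latent abundance N."""
        N = np.arange(K + 1)
        prior = stats.poisson.pmf(N, lam)            # N ~ Poisson(lam)
        obs = np.ones_like(N, dtype=float)
        for y in counts:
            obs *= stats.binom.pmf(y, N, p)          # y_t | N ~ Binomial(N, p)
        return float(np.sum(prior * obs))
    ```

    With a single visit this collapses to Poisson thinning, P(y) = Poisson(y; lam * p), which is one way to see why repeated counts are needed to separate abundance from detection probability.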

  9. Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.

    PubMed

    Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah

    2012-01-01

    Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis.
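    The COM-Poisson pmf is proportional to λ^y / (y!)^ν, with a normalizing constant that has no closed form. A truncated-sum evaluation in log space, an illustration rather than the article's MLE implementation:

    ```python
    import numpy as np
    from scipy.special import gammaln

    def com_poisson_pmf(y, lam, nu, ymax=1000):
        """COM-Poisson pmf: proportional to lam**y / (y!)**nu.
        nu < 1: overdispersed; nu = 1: Poisson; nu > 1: underdispersed."""
        grid = np.arange(ymax + 1)
        logw = grid * np.log(lam) - nu * gammaln(grid + 1)
        logz = np.logaddexp.reduce(logw)     # truncated normalizing constant Z(lam, nu)
        y = np.asarray(y)
        return np.exp(y * np.log(lam) - nu * gammaln(y + 1) - logz)
    ```

    In the GLM setting both lam and nu are linked to covariates, which is what lets the model span under-, equi-, and overdispersion within one regression.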

  10. A simple, approximate model of parachute inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macha, J.M.

    1992-11-01

    A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing line tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.

  12. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    PubMed

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in pixel, wavelet transform, and variance stabilization domains reveals that the tails of Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture of Poisson denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising for real image sensor data is indeed improved by this new technique.

  13. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    PubMed

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.
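    The hyper-Poisson pmf can be written with the confluent hypergeometric function 1F1 as its normalizer, P(Y=y) = λ^y / ((γ)_y · 1F1(1; γ; λ)), where (γ)_y is the Pochhammer symbol. A sketch of evaluating it, not the article's GLM code:

    ```python
    import numpy as np
    from scipy.special import hyp1f1, poch

    def hyper_poisson_pmf(y, lam, gam):
        """Hyper-Poisson pmf: lam**y / ((gam)_y * 1F1(1; gam; lam)).
        gam = 1 recovers the Poisson; gam > 1 gives over- and
        gam < 1 underdispersion."""
        y = np.asarray(y)
        return lam**y / (poch(gam, y) * hyp1f1(1.0, gam, lam))
    ```

    At gam = 1 the Pochhammer symbol becomes y! and 1F1(1; 1; lam) = exp(lam), so the expression collapses to the ordinary Poisson pmf.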

  14. Variable- and Person-Centered Approaches to the Analysis of Early Adolescent Substance Use: Linking Peer, Family, and Intervention Effects with Developmental Trajectories

    ERIC Educational Resources Information Center

    Connell, Arin M.; Dishion, Thomas J.; Deater-Deckard, Kirby

    2006-01-01

    This 4-year study of 698 young adolescents examined the covariates of early onset substance use from Grade 6 through Grade 9. The youth were randomly assigned to a family-centered Adolescent Transitions Program (ATP) condition. Variable-centered (zero-inflated Poisson growth model) and person-centered (latent growth mixture model) approaches were…

  15. Model with two periods of inflation

    NASA Astrophysics Data System (ADS)

    Schettler, Simon; Schaffner-Bielich, Jürgen

    2016-01-01

    A scenario with two subsequent periods of inflationary expansion in the very early Universe is examined. The model is based on a potential motivated by symmetries being found in field theory at high energy. For various parameter sets of the potential, the spectra of scalar and tensor perturbations that are expected to originate from this scenario are calculated. Also the beginning of the reheating epoch connecting the second inflation with thermal equilibrium is studied. Perturbations with wavelengths leaving the horizon around the transition between the two inflations are special: It is demonstrated that the power spectrum at such scales deviates significantly from expectations based on measurements of the cosmic microwave background. This supports the conclusion that parameters for which this part of the spectrum leaves observable traces in the cosmic microwave background must be excluded. Parameters entailing a very efficient second inflation correspond to standard small-field inflation and can meet observational constraints. Particular attention is paid to the case where the second inflation leads solely to a shift of the observable spectrum from the first inflation. A viable scenario requires this shift to be small.

  16. Global embedding of fibre inflation models

    NASA Astrophysics Data System (ADS)

    Cicoli, Michele; Muia, Francesco; Shukla, Pramod

    2016-11-01

    We present concrete embeddings of fibre inflation models in globally consistent type IIB Calabi-Yau orientifolds with closed string moduli stabilisation. After performing a systematic search through the existing list of toric Calabi-Yau manifolds, we find several examples that reproduce the minimal setup to embed fibre inflation models. This involves Calabi-Yau manifolds with h^{1,1} = 3 which are K3 fibrations over a ℙ^1 base with an additional shrinkable rigid divisor. We then provide different consistent choices of the underlying brane set-up which generate a non-perturbative superpotential suitable for moduli stabilisation and string loop corrections with the correct form to drive inflation. For each Calabi-Yau orientifold setting, we also compute the effect of higher derivative contributions and study their influence on the inflationary dynamics.

  17. Bayesian analysis of zero inflated spatiotemporal HIV/TB child mortality data through the INLA and SPDE approaches: Applied to data observed between 1992 and 2010 in rural North East South Africa

    NASA Astrophysics Data System (ADS)

    Musenge, Eustasius; Chirwa, Tobias Freeman; Kahn, Kathleen; Vounatsou, Penelope

    2013-06-01

    Longitudinal mortality data with few deaths usually have problems of zero-inflation. This paper presents and applies two Bayesian models which cater for zero-inflation, spatial and temporal random effects. To reduce the computational burden experienced when a large number of geo-locations are treated as a Gaussian field (GF) we transformed the field to a Gaussian Markov random field (GMRF) by triangulation. We then modelled the spatial random effects using the Stochastic Partial Differential Equations (SPDE) approach. Inference was done using a computationally efficient alternative to Markov chain Monte Carlo (MCMC) called Integrated Nested Laplace Approximation (INLA), which is suited for GMRFs. The models were applied to data from 71,057 children aged 0 to under 10 years from rural north-east South Africa living in 15,703 households over the years 1992-2010. We found protective effects on HIV/TB mortality due to greater birth weight, older age and more antenatal clinic visits during pregnancy (adjusted RR (95% CI)): 0.73(0.53;0.99), 0.18(0.14;0.22) and 0.96(0.94;0.97) respectively. Therefore childhood HIV/TB mortality could be reduced if mothers are better catered for during pregnancy as this can reduce mother-to-child transmissions and contribute to improved birth weights. The INLA and SPDE approaches are computationally good alternatives in modelling large multilevel spatiotemporal GMRF data structures.

  18. The field-space metric in spiral inflation and related models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erlich, Joshua; Olsen, Jackson; Wang, Zhen

    2016-09-22

    Multi-field inflation models include a variety of scenarios for how inflation proceeds and ends. Models with the same potential but different kinetic terms are common in the literature. We compare spiral inflation and Dante’s inferno-type models, which differ only in their field-space metric. We justify a single-field effective description in these models and relate the single-field description to a mass-matrix formalism. We note the effects of the nontrivial field-space metric on inflationary observables, and consequently on the viability of these models. We also note a duality between spiral inflation and Dante’s inferno models with different potentials.

  19. A poisson process model for hip fracture risk.

    PubMed

    Schechner, Zvi; Luo, Gangming; Kaufman, Jonathan J; Siffert, Robert S

    2010-08-01

    Assessment of fracture risk in osteoporosis relies primarily on measurement of bone mass. Fracture risk is most often estimated using logistic or proportional hazards models. Notwithstanding the success of these models, there is still much uncertainty as to who will or will not suffer a fracture. This has led to a search for other components besides mass that affect bone strength. The purpose of this paper is to introduce a new mechanistic stochastic model that characterizes the risk of hip fracture in an individual. A Poisson process is used to model the occurrence of falls, which are assumed to occur at a rate lambda. The load induced by a fall is assumed to be a random variable that has a Weibull probability distribution. The combination of falls together with loads leads to a compound Poisson process. By retaining only those occurrences of the compound Poisson process that result in a hip fracture, a thinned Poisson process is defined that itself is a Poisson process. The fall rate is modeled as an affine function of age, and hip strength is modeled as a power law function of bone mineral density (BMD). The risk of hip fracture can then be computed as a function of age and BMD. By extending the analysis to a Bayesian framework, the conditional densities of BMD given a prior fracture and no prior fracture can be computed and shown to be consistent with clinical observations. In addition, the conditional probabilities of fracture given a prior fracture and no prior fracture can also be computed, and also demonstrate results similar to clinical data. The model elucidates the fact that the hip fracture process is inherently random and improvements in hip strength estimation over and above that provided by BMD operate in a highly "noisy" environment and may therefore have little ability to impact clinical practice.
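
    The thinning construction described above is easy to sketch in code. The following is a minimal simulation, not the authors' implementation, and all numbers (fall rate, Weibull parameters, strength threshold) are hypothetical: falls arrive as a Poisson process, each fall draws a Weibull load, and only loads exceeding hip strength are retained, so fracture counts follow a thinned Poisson process.

```python
import math
import random

def simulate_fracture_count(fall_rate, years, w_shape, w_scale, strength, rng):
    """Falls arrive as a Poisson process; each fall's load is Weibull and only
    loads exceeding hip strength count as fractures (a thinned Poisson process)."""
    lam = fall_rate * years
    # Knuth's multiplicative method for one Poisson(lam) draw
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            break
        k += 1
    # Thin: keep only falls whose Weibull load exceeds the strength threshold
    return sum(1 for _ in range(k)
               if rng.weibullvariate(w_scale, w_shape) > strength)

rng = random.Random(0)
# P(load > strength) for a Weibull(shape, scale) load is exp(-(strength/scale)^shape)
p_exceed = math.exp(-(1.5 / 1.0) ** 2.0)
expected = 2.0 * 10 * p_exceed  # fall rate 2/year over 10 years, thinned
mean_sim = sum(simulate_fracture_count(2.0, 10, 2.0, 1.0, 1.5, rng)
               for _ in range(20000)) / 20000
```

    By the thinning property, the simulated fracture count has mean fall_rate * years * exp(-(strength/scale)^shape), which the Monte Carlo average reproduces.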

  20. Dark energy from gravitoelectromagnetic inflation?

    NASA Astrophysics Data System (ADS)

    Membiela, F. A.; Bellini, M.

    2008-02-01

    Gravitoelectromagnetic inflation (GI) was introduced to describe, in a unified manner, the electromagnetic, gravitational and inflaton fields from a 5D vacuum state. On the other hand, the primordial origin and evolution of dark energy remain unknown today. In this letter we show using GI that the zero modes of some redefined vector fields $B_i=A_i/a$ produced during inflation could be the source of dark energy in the universe.

  1. Modeling inflation rates and exchange rates in Ghana: application of multivariate GARCH models.

    PubMed

    Nortey, Ezekiel Nn; Ngoh, Delali D; Doku-Amponsah, Kwabena; Ofori-Boateng, Kenneth

    2015-01-01

    This paper investigated the volatility of, and conditional relationships among, inflation rates, exchange rates and interest rates, and constructed models using the multivariate GARCH DCC and BEKK specifications with Ghana data from January 1990 to December 2013. The study revealed that the cumulative depreciation of the cedi to the US dollar from 1990 to 2013 is 7,010.2% and the yearly weighted depreciation of the cedi to the US dollar for the period is 20.4%. There was evidence that a stable inflation rate does not imply that exchange rates and interest rates will be stable. Rather, when the cedi performs well on the forex, inflation rates and interest rates react positively and become stable in the long run. The BEKK model is robust to modelling and forecasting volatility of inflation rates, exchange rates and interest rates. The DCC model is robust to model the conditional and unconditional correlation among inflation rates, exchange rates and interest rates. The BEKK model, which forecasted high exchange rate volatility for the year 2014, is very robust for modelling the exchange rates in Ghana. The mean equation of the DCC model is also robust to forecast inflation rates in Ghana.

  2. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
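
    The "explode" step described above can be illustrated with a small helper. This is a hedged sketch of the standard piecewise-exponential data expansion, not the %PCFrailty macro itself; the field names (`piece`, `y`, `offset`) are illustrative.

```python
import math

def explode(time, event, cuts):
    """Split one survival record into piecewise-exponential pseudo-observations.
    Each row carries a 0/1 event indicator (the Poisson response) and a
    log-exposure offset; the piece index becomes a categorical covariate for
    the baseline hazard. `cuts` are the interior interval boundaries, e.g.
    [2.0, 5.0] gives pieces [0,2), [2,5), [5,inf)."""
    rows, start = [], 0.0
    bounds = list(cuts) + [float("inf")]
    for j, end in enumerate(bounds):
        if time <= start:
            break
        exposure = min(time, end) - start
        died_here = 1 if (event == 1 and time <= end) else 0
        rows.append({"piece": j, "y": died_here, "offset": math.log(exposure)})
        start = end
    return rows

# A subject observed for 3.5 time units who dies: 2.0 units of exposure in
# piece 0, then 1.5 units in piece 1, where the event indicator is 1
rows = explode(time=3.5, event=1, cuts=[2.0, 5.0])
```

    Each pseudo-row can then be passed to any Poisson GLMM routine with `y` as the response, `piece` as a categorical baseline-hazard covariate, and `offset` as the log-exposure offset.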

  3. Constraints on texture zero and cofactor zero models for neutrino mass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whisnant, K.; Liao, Jiajun; Marfatia, D.

    2014-06-24

    Imposing a texture or cofactor zero on the neutrino mass matrix reduces the number of independent parameters from nine to seven. Since five parameters have been measured, only two independent parameters would remain in such models. We find the allowed regions for single texture zero and single cofactor zero models. We also find strong similarities between single texture zero models with one mass hierarchy and single cofactor zero models with the opposite mass hierarchy. We show that this correspondence can be generalized to texture-zero and cofactor-zero models with the same homogeneous constraints on the elements and cofactors.

  4. Validation of the Poisson Stochastic Radiative Transfer Model

    NASA Technical Reports Server (NTRS)

    Zhuravleva, Tatiana; Marshak, Alexander

    2004-01-01

    A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure - the cloud aspect ratio - is determined entirely by matching measurements and calculations of the direct solar radiation. If measurements of the direct solar radiation are unavailable, it is shown that there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud-top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.

  5. Poisson mixture model for measurements using counting.

    PubMed

    Miller, Guthrie; Justus, Alan; Vostrotin, Vadim; Dry, Donald; Bertelli, Luiz

    2010-03-01

    Starting with the basic Poisson statistical model of a counting measurement process, 'extra-Poisson' variance or 'overdispersion' are included by assuming that the Poisson parameter representing the mean number of counts itself comes from another distribution. The Poisson parameter is assumed to be given by the quantity of interest in the inference process multiplied by a lognormally distributed normalising coefficient plus an additional lognormal background that might be correlated with the normalising coefficient (shared uncertainty). The example of lognormal environmental background in uranium urine data is discussed. An additional uncorrelated background is also included. The uncorrelated background is estimated from a background count measurement using Bayesian arguments. The rather complex formulas are validated using Monte Carlo. An analytical expression is obtained for the probability distribution of gross counts coming from the uncorrelated background, which allows straightforward calculation of a classical decision level in the form of a gross-count alarm point with a desired false-positive rate. The main purpose of this paper is to derive formulas for exact likelihood calculations in the case of various kinds of backgrounds.
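
    The overdispersion mechanism described here, a Poisson mean that is itself lognormally distributed, can be checked with a short Monte Carlo. This is an illustrative sketch with made-up parameters, not the paper's exact likelihood:

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplicative Poisson sampler
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

rng = random.Random(42)
mu, sigma = math.log(5.0), 0.5  # hypothetical lognormal parameters

# Mixture: the Poisson mean itself is drawn from a lognormal, then the count
counts = [poisson(rng.lognormvariate(mu, sigma), rng) for _ in range(20000)]

mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
# Theory: E[N] = exp(mu + sigma^2/2) and
# Var[N] = E[N] + E[N]^2 * (exp(sigma^2) - 1), so the variance exceeds the mean
```

    The second term in the variance is exactly the 'extra-Poisson' component that the pure Poisson model lacks.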

  6. Leptospirosis disease mapping with standardized morbidity ratio and Poisson-Gamma model: An analysis of Leptospirosis disease in Kelantan, Malaysia

    NASA Astrophysics Data System (ADS)

    Che Awang, Aznida; Azah Samat, Nor

    2017-09-01

    Leptospirosis is a disease caused by infection with pathogenic species from the genus Leptospira. Humans can be infected with leptospirosis through direct or indirect exposure to the urine of infected animals. The excretion of urine from an animal host that carries pathogenic Leptospira contaminates soil or water. Therefore, people can become infected when they are exposed to contaminated soil and water through cuts or open wounds on the skin. The bacteria can also enter the human body through mucous membranes such as the nose, eyes and mouth, for example by splashing contaminated water or urine into the eyes or swallowing contaminated water or food. Currently, there is no vaccine available for the prevention or treatment of leptospirosis, but the disease can be treated if it is diagnosed early, avoiding complications. Disease risk mapping is important for the control and prevention of the disease, and a good choice of statistical model will produce a good disease risk map. Therefore, the aim of this study is to estimate the relative risk for leptospirosis disease based on the most common statistic used in disease mapping, the Standardized Morbidity Ratio (SMR), and on a Poisson-gamma model. This paper begins by providing a review of the SMR method and the Poisson-gamma model, which we then applied to leptospirosis data from Kelantan, Malaysia. Both results are displayed and compared using graphs, tables and maps. The results show that the second method, the Poisson-gamma model, produces better relative risk estimates compared to the SMR method. This is because the Poisson-gamma model can overcome the drawback of the SMR, where the relative risk becomes zero when there is no observed leptospirosis case in certain regions. However, the Poisson-gamma model also faces problems: covariate adjustment for this model is difficult and there is no possibility of allowing spatial correlation between risks in neighbouring areas. The problems of this model have
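
    The zero-count drawback of the SMR and the shrinkage behaviour of the Poisson-gamma model can be shown in a few lines. The counts and the Gamma(1, 1) prior below are hypothetical, chosen only to illustrate the comparison:

```python
# Observed (O) and expected (E) case counts per region (hypothetical numbers)
regions = {"A": (12, 8.0), "B": (0, 3.0), "C": (5, 5.0)}

def smr(O, E):
    """Standardized Morbidity Ratio: observed over expected counts."""
    return O / E

def poisson_gamma_rr(O, E, a=1.0, b=1.0):
    """Posterior mean relative risk under a conjugate Gamma(a, b) prior:
    O ~ Poisson(theta * E) with theta ~ Gamma(a, b) gives
    theta | O ~ Gamma(a + O, b + E), whose mean is (a + O) / (b + E)."""
    return (a + O) / (b + E)

estimates = {r: (smr(O, E), poisson_gamma_rr(O, E)) for r, (O, E) in regions.items()}
```

    Region B illustrates the point made in the abstract: with zero observed cases the SMR is exactly zero, whereas the Poisson-gamma posterior mean stays positive, shrunk toward the prior mean.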

  7. Inflatable Re-Entry Vehicle Experiment (IRVE) Design Overview

    NASA Technical Reports Server (NTRS)

    Hughes, Stephen J.; Dillman, Robert A.; Starr, Brett R.; Stephan, Ryan A.; Lindell, Michael C.; Player, Charles J.; Cheatwood, F. McNeil

    2005-01-01

    integrity when exposed to a relevant dynamic pressure and aerodynamic stability of the inflatable system. Structural integrity and structural response of the inflatable will be verified with photogrammetric measurements of the back side of the aeroshell in flight. Aerodynamic stability as well as drag performance will be verified with on board inertial measurements and radar tracking from multiple ground radar stations. The experiment will yield valuable information about zero-g vacuum deployment dynamics of the flexible inflatable structure with both inertial and photographic measurements. In addition to demonstrating inflatable technology, IRVE will validate structural, aerothermal, and trajectory modeling techniques for the inflatable. Structural response determined from photogrammetrics will validate structural models, skin temperature measurements and additional in-depth temperature measurements will validate material thermal performance models, and on board inertial measurements along with radar tracking from multiple ground radar stations will validate trajectory simulation models.

  8. Conditional Poisson models: a flexible alternative to conditional logistic case cross-over analysis.

    PubMed

    Armstrong, Ben G; Gasparrini, Antonio; Tobias, Aurelio

    2014-11-24

    The time stratified case cross-over approach is a popular alternative to conventional time series regression for analysing associations between time series of environmental exposures (air pollution, weather) and counts of health outcomes. These are almost always analyzed using conditional logistic regression on data expanded to case-control (case crossover) format, but this has some limitations. In particular, adjusting for overdispersion and auto-correlation in the counts is not possible. It has been established that a Poisson model for counts with stratum indicators gives identical estimates to those from conditional logistic regression and does not have these limitations, but it is little used, probably because of the overheads in estimating many stratum parameters. The conditional Poisson model avoids estimating stratum parameters by conditioning on the total event count in each stratum, thus simplifying the computing and increasing the number of strata for which fitting is feasible compared with the standard unconditional Poisson model. Unlike the conditional logistic model, the conditional Poisson model does not require expanding the data, and can adjust for overdispersion and auto-correlation. It is available in Stata, R, and other packages. By applying to some real data and using simulations, we demonstrate that conditional Poisson models were simpler to code and shorter to run than conditional logistic analyses and can be fitted to larger data sets than is possible with standard Poisson models. Allowing for overdispersion or autocorrelation was possible with the conditional Poisson model, but when not required this model gave identical estimates to those from conditional logistic regression. Conditional Poisson regression models provide an alternative to case crossover analysis of stratified time series data with some advantages. The conditional Poisson model can also be used in other contexts in which primary control for confounding is by fine
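
    The key identity behind the conditional Poisson model, that Poisson counts conditioned on their stratum total are multinomial so the stratum intercept cancels, can be verified numerically. This is a sketch of the conditional likelihood only, with arbitrary example numbers, not the Stata/R implementations mentioned above:

```python
import math

def conditional_poisson_loglik(counts, log_mu):
    """Log-likelihood of one stratum's counts given their total: Poisson
    counts conditioned on the stratum sum n are Multinomial(n, p) with
    p_t = mu_t / sum(mu), so any common stratum effect cancels."""
    n = sum(counts)
    mu = [math.exp(lm) for lm in log_mu]
    tot = sum(mu)
    ll = math.lgamma(n + 1)
    for y, m in zip(counts, mu):
        ll += y * math.log(m / tot) - math.lgamma(y + 1)
    return ll

# Adding a constant to every log-rate (a stratum intercept) leaves the
# conditional likelihood unchanged:
counts = [3, 1, 5]
base = [0.2, -0.1, 0.4]
shifted = [b + 7.0 for b in base]
```

    Because a stratum intercept drops out of the conditional likelihood, no per-stratum parameter needs to be estimated, which is exactly why the conditional model scales to many strata.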

  9. Prediction of vehicle crashes by drivers' characteristics and past traffic violations in Korea using a zero-inflated negative binomial model.

    PubMed

    Kim, Dae-Hwan; Ramjan, Lucie M; Mak, Kwok-Kei

    2016-01-01

    Traffic safety is a significant public health challenge, and vehicle crashes account for the majority of injuries. This study aims to identify whether drivers' characteristics and past traffic violations may predict vehicle crashes in Korea. A total of 500,000 drivers were randomly selected from the 11.6 million driver records of the Ministry of Land, Transport and Maritime Affairs in Korea. Records of traffic crashes were obtained from the archives of the Korea Insurance Development Institute. After matching the past violation history for the period 2004-2005 with the number of crashes in year 2006, a total of 488,139 observations were used for the analysis. A zero-inflated negative binomial model was used to determine the incident risk ratio (IRR) of vehicle crashes by past violations of individual drivers. The included covariates were driver's age, gender, district of residence, vehicle choice, and driving experience. Drivers violating (1) a hit-and-run or drunk driving regulation at least once and (2) a signal, central line, or speed regulation more than once had a higher risk of a vehicle crash, with respective IRRs of 1.06 and 1.15. Furthermore, female gender, a younger age, fewer years of driving experience, and middle-sized vehicles were all significantly associated with a higher likelihood of vehicle crashes. Drivers' demographic characteristics and past traffic violations could predict vehicle crashes in Korea. Greater resources should be assigned to the provision of traffic safety education programs for the high-risk driver groups.

  10. Replenishment policy for an inventory model under inflation

    NASA Astrophysics Data System (ADS)

    Singh, Vikramjeet; Saxena, Seema; Singh, Pushpinder; Mishra, Nitin Kumar

    2017-07-01

    The purpose of replenishment is to keep the flow of inventory in the system. Determining an optimal replenishment policy is a great challenge in developing an inventory model. Inflation is defined as the rate at which the prices of goods and services rise over a time period. The cost parameters are affected by the rate of inflation, and a high rate of inflation affects an organization's financial condition. Against this backdrop, the present paper proposes a retailer's replenishment policy for deteriorating items with different cycle lengths under inflation. The shortages are partially backlogged. Finally, numerical examples validate the results.

  11. First-order inflation

    NASA Technical Reports Server (NTRS)

    Kolb, Edward W.

    1991-01-01

    In the original proposal, inflation occurred in the process of a strongly first-order phase transition. This model was soon demonstrated to be fatally flawed. Subsequent models for inflation involved phase transitions that were second-order, or perhaps weakly first-order; some even involved no phase transition at all. Recently the possibility of inflation during a strongly first-order phase transition has been revived. In this talk I will discuss some models for first-order inflation, and emphasize unique signatures that result if inflation is realized in a first-order transition. Before discussing first-order inflation, I will briefly review some of the history of inflation to demonstrate how first-order inflation differs from other models.

  12. Minimal supergravity models of inflation

    NASA Astrophysics Data System (ADS)

    Ferrara, Sergio; Kallosh, Renata; Linde, Andrei; Porrati, Massimo

    2013-10-01

    We present a superconformal master action for a class of supergravity models with one arbitrary function defining the Jordan frame. It leads to a gauge-invariant action for a real vector multiplet, which upon gauge fixing describes a massive vector multiplet, or to a dual formulation with a linear multiplet and a massive tensor field. In both cases the models have one real scalar, the inflaton, naturally suited for single-field inflation. Vectors and tensors required by supersymmetry to complement a single real scalar do not acquire vacuum expectation values during inflation, so there is no need to stabilize the extra scalars that are always present in the theories with chiral matter multiplets. The new class of models can describe any inflaton potential that vanishes at its minimum and grows monotonically away from the minimum. In this class of supergravity models, one can fit any desirable choice of inflationary parameters n_s and r.

  13. Computation of solar perturbations with Poisson series

    NASA Technical Reports Server (NTRS)

    Broucke, R.

    1974-01-01

    Description of a project for computing first-order perturbations of natural or artificial satellites by integrating the equations of motion on a computer with automatic Poisson series expansions. A basic feature of the method of solution is that the classical variation-of-parameters formulation is used rather than rectangular coordinates. However, the variation-of-parameters formulation uses the three rectangular components of the disturbing force rather than the classical disturbing function, so that there is no problem in expanding the disturbing function in series. Another characteristic of the variation-of-parameters formulation employed is that six rather unusual variables are used in order to avoid singularities at zero eccentricity and zero (or 90 deg) inclination. The integration process starts by assuming that all the orbit elements present on the right-hand sides of the equations of motion are constants. These right-hand sides are then simple Poisson series which can be obtained with the use of the Bessel expansions of the two-body problem in conjunction with certain iteration methods. These Poisson series can then be integrated term by term, and a first-order solution is obtained.

  14. A random-censoring Poisson model for underreported data.

    PubMed

    de Oliveira, Guilherme Lopes; Loschi, Rosangela Helena; Assunção, Renato Martins

    2017-12-30

    A major challenge when monitoring risks in socially deprived areas of underdeveloped countries is that economic, epidemiological, and social data are typically underreported. Thus, statistical models that do not take the data quality into account will produce biased estimates. To deal with this problem, counts in suspected regions are usually approached as censored information. The censored Poisson model can be considered, but all censored regions must be precisely known a priori, which is not a reasonable assumption in most practical situations. We introduce the random-censoring Poisson model (RCPM) which accounts for the uncertainty about both the count and the data reporting processes. Consequently, for each region, we will be able to estimate the relative risk for the event of interest as well as the censoring probability. To facilitate the posterior sampling process, we propose a Markov chain Monte Carlo scheme based on the data augmentation technique. We run a simulation study comparing the proposed RCPM with 2 competitive models. Different scenarios are considered. RCPM and the censored Poisson model are applied to account for potential underreporting of early neonatal mortality counts in regions of Minas Gerais State, Brazil, where data quality is known to be poor. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Universality of the Volume Bound in Slow-Roll Eternal Inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubovsky, Sergei; Senatore, Leonardo; Villadoro, Giovanni

    2012-03-28

    It has recently been shown that in single-field slow-roll inflation the total volume cannot grow by a factor larger than e^{S_{dS}/2} without becoming infinite. The bound is saturated exactly at the phase transition to eternal inflation, where the probability to produce infinite volume becomes nonzero. We show that the bound holds sharply also in any number of space-time dimensions, when arbitrary higher-dimensional operators are included, and in the multi-field inflationary case. The relation with the entropy of de Sitter and the universality of the bound strengthen the case for a deeper holographic interpretation. As a spin-off we provide the formalism to compute the probability distribution of the volume after inflation for generic multi-field models, which might help to address questions about the population of vacua of the landscape during slow-roll inflation.

  16. Hierarchical modeling of bycatch rates of sea turtles in the western North Atlantic

    USGS Publications Warehouse

    Gardner, B.; Sullivan, P.J.; Epperly, S.; Morreale, S.J.

    2008-01-01

    Previous studies indicate that the locations of the endangered loggerhead Caretta caretta and critically endangered leatherback Dermochelys coriacea sea turtles are influenced by water temperatures, and that incidental catch rates in the pelagic longline fishery vary by region. We present a Bayesian hierarchical model to examine the effects of environmental variables, including water temperature, on the number of sea turtles captured in the US pelagic longline fishery in the western North Atlantic. The modeling structure is highly flexible, utilizes a Bayesian model selection technique, and is fully implemented in the software program WinBUGS. The number of sea turtles captured is modeled as a zero-inflated Poisson distribution and the model incorporates fixed effects to examine region-specific differences in the parameter estimates. Results indicate that water temperature, region, bottom depth, and target species are all significant predictors of the number of loggerhead sea turtles captured. For leatherback sea turtles, the model with only target species had the most posterior model weight, though a re-parameterization of the model indicates that temperature influences the zero-inflation parameter. The relationship between the number of sea turtles captured and the variables of interest all varied by region. This suggests that management decisions aimed at reducing sea turtle bycatch may be more effective if they are spatially explicit. ?? Inter-Research 2008.
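
    The excess-zeros behaviour that motivates the zero-inflated Poisson distribution used in this model can be demonstrated with a short simulation. The parameters below are hypothetical, not the fitted bycatch values:

```python
import math
import random

def zip_sample(pi, lam, rng):
    """Zero-inflated Poisson draw: with probability pi emit a structural zero,
    otherwise draw Poisson(lam) via Knuth's multiplicative sampler."""
    if rng.random() < pi:
        return 0
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

rng = random.Random(7)
pi, lam = 0.3, 2.0  # hypothetical zero-inflation probability and count rate
data = [zip_sample(pi, lam, rng) for _ in range(20000)]

zero_frac = sum(1 for y in data if y == 0) / len(data)
mean = sum(data) / len(data)
poisson_pred_zeros = math.exp(-mean)  # zeros a plain Poisson fit would expect
# ZIP zero probability is pi + (1 - pi) * exp(-lam), well above what a
# Poisson with the same mean predicts -> excess zeros
```

    The gap between the observed zero fraction and the Poisson prediction is exactly what the zero-inflation parameter absorbs; in the leatherback model above, that parameter was itself influenced by temperature.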

  17. A viable logarithmic f(R) model for inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amin, M.; Khalil, S.; Salah, M.

    2016-08-18

    Inflation in the framework of f(R) modified gravity is revisited. We study the conditions that f(R) should satisfy in order to lead to a viable inflationary model in the original form and in the Einstein frame. Based on these criteria we propose a new logarithmic model as a potential candidate for f(R) theories aiming to describe inflation consistent with observations from Planck satellite (2015). The model predicts scalar spectral index 0.9615

  18. A generalized right truncated bivariate Poisson regression model with applications to health data.

    PubMed

    Islam, M Ataharul; Chowdhury, Rafiqul I

    2017-01-01

    A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and over- or underdispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using a marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on the number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute and it is evident from the results that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using the test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model.

  19. Markov modulated Poisson process models incorporating covariates for rainfall intensity.

    PubMed

    Thayakaran, R; Ramesh, N I

    2013-01-01

    Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
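The MMPP in this record modulates the Poisson arrival rate with a hidden continuous-time Markov chain: arrivals are Poisson at a rate that depends on the current hidden state. A minimal two-state simulation sketch, with hypothetical rates that are not taken from the Bracknell analysis:

```python
import numpy as np

def simulate_mmpp(rates, q, t_max, rng):
    """Simulate event times of a 2-state Markov modulated Poisson process.

    rates : Poisson arrival rate while in each hidden state
    q     : rate of leaving each hidden state (exponential sojourn times)
    """
    t, state, events = 0.0, 0, []
    while t < t_max:
        sojourn = rng.exponential(1.0 / q[state])       # time spent in this state
        t_end = min(t + sojourn, t_max)
        t_ev = t + rng.exponential(1.0 / rates[state])  # candidate arrival time
        while t_ev < t_end:                             # arrivals while state persists
            events.append(t_ev)
            t_ev += rng.exponential(1.0 / rates[state])
        t, state = t_end, 1 - state                     # switch hidden state
    return np.array(events)

rng = np.random.default_rng(1)
# Hypothetical regimes: long quiet spells (rate 0.2) and short bursts (rate 5.0)
ev = simulate_mmpp(rates=[0.2, 5.0], q=[0.1, 0.5], t_max=500.0, rng=rng)
```

The short-lived high-rate state is what lets an MMPP produce more clustered arrivals than a homogeneous Poisson process with the same long-run mean.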

  1. Double-observer line transect surveys with Markov-modulated Poisson process models for animal availability.

    PubMed

    Borchers, D L; Langrock, R

    2015-12-01

    We develop maximum likelihood methods for line transect surveys in which animals go undetected at distance zero, either because they are stochastically unavailable while within view or because they are missed when they are available. These incorporate a Markov-modulated Poisson process model for animal availability, allowing more clustered availability events than is possible with Poisson availability models. They include a mark-recapture component arising from the independent-observer survey, leading to more accurate estimation of detection probability given availability. We develop models for situations in which (a) multiple detections of the same individual are possible and (b) some or all of the availability process parameters are estimated from the line transect survey itself, rather than from independent data. We investigate estimator performance by simulation, and compare the multiple-detection estimators with estimators that use only initial detections of individuals, and with a single-observer estimator. Simultaneous estimation of detection function parameters and availability model parameters is shown to be feasible from the line transect survey alone with multiple detections and double-observer data but not with single-observer data. Recording multiple detections of individuals improves estimator precision substantially when estimating the availability model parameters from survey data, and we recommend that these data be gathered. We apply the methods to estimate detection probability from a double-observer survey of North Atlantic minke whales, and find that double-observer data greatly improve estimator precision here too. © 2015 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.

  2. Modeling environmental noise exceedances using non-homogeneous Poisson processes.

    PubMed

    Guarnaccia, Claudio; Quartieri, Joseph; Barrios, Juan M; Rodrigues, Eliane R

    2014-10-01

    In this work a non-homogeneous Poisson model is considered to study noise exposure. The Poisson process, counting the number of times that a sound level surpasses a threshold, is used to estimate the probability that a population is exposed to high levels of noise a certain number of times in a given time interval. The rate function of the Poisson process is assumed to be of a Weibull type. The presented model is applied to community noise data from Messina, Sicily (Italy). Four sets of data are used to estimate the parameters involved in the model. After the estimation and tuning are made, a way of estimating the probability that an environmental noise threshold is exceeded a certain number of times in a given time interval is presented. This estimation can be very useful in the study of noise exposure of a population and also to predict, given the current behavior of the data, the probability of occurrence of high levels of noise in the near future. One of the most important features of the model is that it implicitly takes into account different noise sources, which need to be treated separately when using usual models.
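For a non-homogeneous Poisson process with a Weibull-type rate function λ(t) = (β/σ)(t/σ)^(β−1), the cumulative rate is Λ(t) = (t/σ)^β, and the number of exceedances in (0, t] is Poisson with mean Λ(t). A minimal sketch of the exceedance-probability calculation, with hypothetical parameters (σ and β are not the Messina estimates):

```python
import math

def weibull_mean(t, sigma, beta):
    """Cumulative rate Lambda(t) of an NHPP with a Weibull-type rate function."""
    return (t / sigma) ** beta

def prob_exceedances(k, t, sigma, beta):
    """P(N(t) = k): probability the noise threshold is exceeded exactly k times in (0, t]."""
    lam = weibull_mean(t, sigma, beta)
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Illustrative values: distribution of exceedance counts over a 30-day window
p = [prob_exceedances(k, t=30.0, sigma=10.0, beta=1.2) for k in range(50)]
```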

  3. Application of Poisson random effect models for highway network screening.

    PubMed

    Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer

    2014-02-01

In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data have become popular in traffic safety research. This study employs random effect Poisson Log-Normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. Potential for Safety Improvement (PSI) was adopted as a measure of crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson Log-Normal model. A series of method examination tests were conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and the modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only a temporal random effect, and both are superior to the conventional Poisson Log-Normal model (PLN) and the EB model in fitting the crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Application of the Conway-Maxwell-Poisson generalized linear model for analyzing motor vehicle crashes.

    PubMed

    Lord, Dominique; Guikema, Seth D; Geedipally, Srinivas Reddy

    2008-05-01

This paper documents the application of the Conway-Maxwell-Poisson (COM-Poisson) generalized linear model (GLM) for modeling motor vehicle crashes. The COM-Poisson distribution, originally developed in 1962, has recently been re-introduced by statisticians for analyzing count data subject to over- and under-dispersion. This innovative distribution is an extension of the Poisson distribution. The objectives of this study were to evaluate the application of the COM-Poisson GLM for analyzing motor vehicle crashes and to compare the results with the traditional negative binomial (NB) model. The comparison analysis was carried out using the most common functional forms employed by transportation safety analysts, which link crashes to the entering flows at intersections or on segments. To accomplish the objectives of the study, several NB and COM-Poisson GLMs were developed and compared using two datasets. The first dataset contained crash data collected at signalized four-legged intersections in Toronto, Ont. The second dataset included data collected for rural four-lane divided and undivided highways in Texas. Several methods were used to assess the statistical fit and predictive performance of the models. The results of this study show that COM-Poisson GLMs perform as well as NB models in terms of GOF statistics and predictive performance. Given that the COM-Poisson distribution can also handle under-dispersed data (while the NB distribution cannot, or has difficulty converging), which have sometimes been observed in crash databases, the COM-Poisson GLM offers a better alternative to the NB model for modeling motor vehicle crashes, especially given the important limitations recently documented in the safety literature about the latter type of model.
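The COM-Poisson distribution adds a shape parameter ν to the Poisson pmf, p(y) ∝ λ^y/(y!)^ν, with a normalizing constant that has no closed form and is usually computed by truncated summation. A minimal sketch with illustrative parameters (not fitted to any crash dataset):

```python
import math

def com_poisson_pmf(y, lam, nu, max_terms=100):
    """COM-Poisson pmf: p(y) = lam**y / (y!)**nu / Z(lam, nu).

    nu < 1 gives over-dispersion, nu > 1 under-dispersion, and nu = 1
    recovers the ordinary Poisson distribution.
    """
    z = sum(lam ** j / math.factorial(j) ** nu for j in range(max_terms))
    return lam ** y / math.factorial(y) ** nu / z

# With nu = 1 the pmf matches Poisson(lam) exactly
p_com = com_poisson_pmf(3, lam=2.0, nu=1.0)
p_pois = math.exp(-2.0) * 2.0 ** 3 / math.factorial(3)
```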

  5. Extensions of Rasch's Multiplicative Poisson Model.

    ERIC Educational Resources Information Center

    Jansen, Margo G. H.; van Duijn, Marijtje A. J.

    1992-01-01

    A model developed by G. Rasch that assumes scores on some attainment tests can be realizations of a Poisson process is explained and expanded by assuming a prior distribution, with fixed but unknown parameters, for the subject parameters. How additional between-subject and within-subject factors can be incorporated is discussed. (SLD)

  6. Fitting statistical distributions to sea duck count data: implications for survey design and abundance estimation

    USGS Publications Warehouse

    Zipkin, Elise F.; Leirness, Jeffery B.; Kinlan, Brian P.; O'Connell, Allan F.; Silverman, Emily D.

    2014-01-01

    Determining appropriate statistical distributions for modeling animal count data is important for accurate estimation of abundance, distribution, and trends. In the case of sea ducks along the U.S. Atlantic coast, managers want to estimate local and regional abundance to detect and track population declines, to define areas of high and low use, and to predict the impact of future habitat change on populations. In this paper, we used a modified marked point process to model survey data that recorded flock sizes of Common eiders, Long-tailed ducks, and Black, Surf, and White-winged scoters. The data come from an experimental aerial survey, conducted by the United States Fish & Wildlife Service (USFWS) Division of Migratory Bird Management, during which east-west transects were flown along the Atlantic Coast from Maine to Florida during the winters of 2009–2011. To model the number of flocks per transect (the points), we compared the fit of four statistical distributions (zero-inflated Poisson, zero-inflated geometric, zero-inflated negative binomial and negative binomial) to data on the number of species-specific sea duck flocks that were recorded for each transect flown. To model the flock sizes (the marks), we compared the fit of flock size data for each species to seven statistical distributions: positive Poisson, positive negative binomial, positive geometric, logarithmic, discretized lognormal, zeta and Yule–Simon. Akaike’s Information Criterion and Vuong’s closeness tests indicated that the negative binomial and discretized lognormal were the best distributions for all species for the points and marks, respectively. These findings have important implications for estimating sea duck abundances as the discretized lognormal is a more skewed distribution than the Poisson and negative binomial, which are frequently used to model avian counts; the lognormal is also less heavy-tailed than the power law distributions (e.g., zeta and Yule–Simon), which are
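The zero-inflated Poisson compared above for the number of flocks per transect mixes a point mass at zero with an ordinary Poisson count. A minimal pmf sketch with a hypothetical mixing weight and rate:

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson pmf: extra probability mass pi at zero,
    otherwise a Poisson(lam) count with weight (1 - pi)."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi * (k == 0) + (1 - pi) * poisson

# With pi = 0.3 and lam = 2, zeros are far more common than Poisson(2) predicts
p0 = zip_pmf(0, lam=2.0, pi=0.3)
```

The mean of the mixture is (1 − π)λ, so excess zeros pull the mean down while leaving the positive-count shape Poisson.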

  7. Pseudosmooth tribrid inflation

    NASA Astrophysics Data System (ADS)

    Antusch, Stefan; Nolde, David; Rehman, Mansoor Ur

    2012-08-01

    We explore a new class of supersymmetric models of inflation where the inflaton is realised as a combination of a Higgs field and (gauge non-singlet) matter fields, using a ``tribrid'' structure of the superpotential. Inflation is associated with a phase transition around GUT scale energies. The inflationary trajectory already preselects the later vacuum after inflation, which has the advantage of automatically avoiding the production of dangerous topological defects at the end of inflation. While at first sight the models look similar to smooth inflation, they feature a waterfall and are therefore only pseudosmooth. The new class of models offers novel possibilities for realising inflation in close contact with particle physics, for instance with supersymmetric GUTs or with supersymmetric flavour models based on family symmetries.

  8. Poisson point process modeling for polyphonic music transcription.

    PubMed

    Peeling, Paul; Li, Chung-fai; Godsill, Simon

    2007-04-01

    Peaks detected in the frequency domain spectrum of a musical chord are modeled as realizations of a nonhomogeneous Poisson point process. When several notes are superimposed to make a chord, the processes for individual notes combine to give another Poisson process, whose likelihood is easily computable. This avoids a data association step linking individual harmonics explicitly with detected peaks in the spectrum. The likelihood function is ideal for Bayesian inference about the unknown note frequencies in a chord. Here, maximum likelihood estimation of fundamental frequencies shows very promising performance on real polyphonic piano music recordings.
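The superposition property this record relies on — independent Poisson processes merge into a single Poisson process whose rate is the sum of the component rates — can be checked directly by simulation (the per-note rates below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
t_max = 1000.0
rates = [0.5, 1.2, 0.8]  # hypothetical peak intensities, one process per note

# Superimpose independent homogeneous Poisson processes: draw each count,
# place the events uniformly, and merge the streams.
events = np.sort(np.concatenate([
    rng.uniform(0.0, t_max, rng.poisson(r * t_max)) for r in rates
]))

# The merged stream is itself Poisson with the summed rate
expected = sum(rates) * t_max
```

This additivity is what makes the chord likelihood computable without associating individual harmonics with detected peaks.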

  9. A Generalized QMRA Beta-Poisson Dose-Response Model.

    PubMed

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2016-10-01

Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting PI(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose-response model, PI(d|α,β,r*), is proposed in which the minimum number of organisms required to cause infection, K_min, is not fixed, but a random variable following a geometric distribution with parameter 0 < r* ≤ 1. The conventional beta-Poisson model, PI(d|α,β), is a special case of the generalized model with K_min = 1 (which implies r* = 1). The generalized beta-Poisson model is based on a conceptual model with greater detail in the dose-response mechanism. Since a maximum likelihood solution is not easily available, a likelihood-free approximate Bayesian computation (ABC) algorithm is employed for parameter estimation. By fitting the generalized model to four experimental data sets from the literature, this study reveals that the posterior median r* estimates produced fall short of meeting the required condition of r* = 1 for the single-hit assumption. However, three out of four data sets fitted by the generalized models could not achieve an improvement in goodness of fit. These combined results imply that, at least in some cases, a single-hit assumption for characterizing the dose-response process may not be appropriate, but that the more complex models may be difficult to support, especially if the sample size is small. The three-parameter generalized model provides a possibility to investigate the mechanism of a dose-response process in greater detail than is possible under a single-hit model. © 2016 Society for Risk Analysis.
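For reference, the conventional two-parameter beta-Poisson curve (the K_min = 1 special case) is commonly written in its approximate form PI(d) ≈ 1 − (1 + d/β)^(−α). A sketch with hypothetical parameter values, not estimates from the four data sets:

```python
def beta_poisson_risk(d, alpha, beta):
    """Approximate beta-Poisson dose-response: P(infection) at mean dose d.

    Parameters alpha and beta here are illustrative placeholders.
    """
    return 1.0 - (1.0 + d / beta) ** (-alpha)

# Risk rises monotonically with mean dose and saturates toward 1
risks = [beta_poisson_risk(d, alpha=0.25, beta=40.0) for d in (1.0, 10.0, 100.0, 1000.0)]
```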

  10. Tobit analysis of vehicle accident rates on interstate highways.

    PubMed

    Anastasopoulos, Panagiotis Ch; Tarko, Andrew P; Mannering, Fred L

    2008-03-01

    There has been an abundance of research that has used Poisson models and its variants (negative binomial and zero-inflated models) to improve our understanding of the factors that affect accident frequencies on roadway segments. This study explores the application of an alternate method, tobit regression, by viewing vehicle accident rates directly (instead of frequencies) as a continuous variable that is left-censored at zero. Using data from vehicle accidents on Indiana interstates, the estimation results show that many factors relating to pavement condition, roadway geometrics and traffic characteristics significantly affect vehicle accident rates.
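A tobit model treats the observed rate as a latent normal variable left-censored at zero: censored observations contribute a normal cdf term to the likelihood, uncensored ones a density term. A minimal log-likelihood sketch with hypothetical coefficients (not the Indiana estimates):

```python
import math

def tobit_loglik(y, x, b0, b1, sigma):
    """Log-likelihood of a tobit model with latent y* = b0 + b1*x + N(0, sigma^2)
    and observed y = max(y*, 0), i.e. left-censored at zero like accident rates."""
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # std normal pdf
    cdf = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # std normal cdf
    ll = 0.0
    for yi, xi in zip(y, x):
        mu = b0 + b1 * xi
        if yi <= 0.0:                 # censored: we only know y* <= 0
            ll += math.log(cdf(-mu / sigma))
        else:                         # uncensored: ordinary normal density
            ll += math.log(phi((yi - mu) / sigma) / sigma)
    return ll

# Toy data: one censored (zero) rate and two positive rates
ll = tobit_loglik([0.0, 0.4, 1.3], [0.0, 1.0, 2.0], b0=-0.2, b1=0.6, sigma=0.5)
```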

  11. Doubly stochastic Poisson process models for precipitation at fine time-scales

    NASA Astrophysics Data System (ADS)

    Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao

    2012-09-01

    This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
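A doubly stochastic (Cox) process makes the Poisson rate itself random, and the mixing inflates the variance of the counts above the mean. A gamma-mixed sketch with illustrative parameters (gamma mixing yields negative-binomial-like counts; none of these values come from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
n_intervals = 10000

# Doubly stochastic: each interval's Poisson rate is itself gamma-distributed
lam = rng.gamma(shape=2.0, scale=1.5, size=n_intervals)  # E[lam] = 3.0, Var[lam] = 4.5
counts = rng.poisson(lam)

mean, var = counts.mean(), counts.var()
# Law of total variance: Var[N] = E[lam] + Var[lam] = 7.5 > E[N] = 3.0
```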

  12. QMRA for Drinking Water: 2. The Effect of Pathogen Clustering in Single-Hit Dose-Response Models.

    PubMed

    Nilsen, Vegard; Wyller, John

    2016-01-01

    Spatial and/or temporal clustering of pathogens will invalidate the commonly used assumption of Poisson-distributed pathogen counts (doses) in quantitative microbial risk assessment. In this work, the theoretically predicted effect of spatial clustering in conventional "single-hit" dose-response models is investigated by employing the stuttering Poisson distribution, a very general family of count distributions that naturally models pathogen clustering and contains the Poisson and negative binomial distributions as special cases. The analysis is facilitated by formulating the dose-response models in terms of probability generating functions. It is shown formally that the theoretical single-hit risk obtained with a stuttering Poisson distribution is lower than that obtained with a Poisson distribution, assuming identical mean doses. A similar result holds for mixed Poisson distributions. Numerical examples indicate that the theoretical single-hit risk is fairly insensitive to moderate clustering, though the effect tends to be more pronounced for low mean doses. Furthermore, using Jensen's inequality, an upper bound on risk is derived that tends to better approximate the exact theoretical single-hit risk for highly overdispersed dose distributions. The bound holds with any dose distribution (characterized by its mean and zero inflation index) and any conditional dose-response model that is concave in the dose variable. Its application is exemplified with published data from Norovirus feeding trials, for which some of the administered doses were prepared from an inoculum of aggregated viruses. The potential implications of clustering for dose-response assessment as well as practical risk characterization are discussed. © 2016 Society for Risk Analysis.
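The probability-generating-function formulation makes the single-hit risk a one-liner: for dose distribution N and per-organism infection probability r, P_inf = 1 − E[(1−r)^N] = 1 − G_N(1−r). A sketch comparing a Poisson dose with an overdispersed negative binomial dose at the same mean, illustrating the paper's formal result that clustering lowers the risk (all numbers hypothetical):

```python
import math

def poisson_pgf(s, m):
    """PGF of Poisson(m): E[s^N] = exp(m (s - 1))."""
    return math.exp(m * (s - 1.0))

def negbin_pgf(s, m, k):
    """PGF of a negative binomial with mean m and dispersion k (overdispersed)."""
    return (1.0 + m * (1.0 - s) / k) ** (-k)

def single_hit_risk(pgf, r, **kw):
    """P(at least one organism initiates infection) = 1 - G_N(1 - r)."""
    return 1.0 - pgf(1.0 - r, **kw)

r, m = 0.1, 5.0
risk_pois = single_hit_risk(poisson_pgf, r, m=m)
risk_nb = single_hit_risk(negbin_pgf, r, m=m, k=0.5)  # heavily clustered dose
```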

  13. Beta-Poisson model for single-cell RNA-seq data analyses.

    PubMed

    Vu, Trung Nghia; Wills, Quin F; Kalari, Krishna R; Niu, Nifang; Wang, Liewei; Rantalainen, Mattias; Pawitan, Yudi

    2016-07-15

Single-cell RNA-sequencing technology allows detection of gene expression at the single-cell level. One typical feature of the data is a bimodality in the cellular distribution even for highly expressed genes, primarily caused by a proportion of non-expressing cells. The standard and the over-dispersed gamma-Poisson models that are commonly used in bulk-cell RNA-sequencing are not able to capture this property. We introduce a beta-Poisson mixture model that can capture the bimodality of the single-cell gene expression distribution. We further integrate the model into the generalized linear model framework in order to perform differential expression analyses. The whole analytical procedure is called BPSC. The results from several real single-cell RNA-seq datasets indicate that ∼90% of the transcripts are well characterized by the beta-Poisson model; the model fit from BPSC is better than the fit of the standard gamma-Poisson model in >80% of the transcripts. Moreover, in differential expression analyses of simulated and real datasets, BPSC performs well against edgeR, a conventional method widely used in bulk-cell RNA-sequencing data, and against scde and MAST, two recent methods specifically designed for single-cell RNA-seq data. An R package BPSC for model fitting and differential expression analyses of single-cell RNA-seq data is available under the GPL-3 license at https://github.com/nghiavtr/BPSC. Contact: yudi.pawitan@ki.se or mattias.rantalainen@ki.se. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
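A beta-Poisson mixture draws a cell-specific expression fraction p ~ Beta(a, b) and then a count from Poisson(λp); a small shape parameter a concentrates mass near p = 0 and reproduces the non-expressing mode. A simulation sketch with hypothetical parameters (these are not BPSC's fitted values):

```python
import numpy as np

rng = np.random.default_rng(42)
n_cells = 20000

# Beta-Poisson mixture: small `a` creates a spike of near-zero expression
a, b, lam = 0.3, 2.0, 50.0
p = rng.beta(a, b, size=n_cells)
counts = rng.poisson(lam * p)

# Bimodality shows up as a large zero fraction alongside a high-count mode
frac_zero = (counts == 0).mean()
```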

  14. Supernatural inflation: inflation from supersymmetry with no (very) small parameters

    NASA Astrophysics Data System (ADS)

    Randall, Lisa; SoljačiĆ, Marin; Guth, Alan H.

    1996-02-01

    Most models of inflation have small parameters, either to guarantee sufficient inflation or the correct magnitude of the density perturbations. In this paper we show that, in supersymmetric theories with weak-scale supersymmetry breaking, one can construct viable inflationary models in which the requisite parameters appear naturally in the form of the ratio of mass scales that are already present in the theory. Successful inflationary models can be constructed from the flat-direction fields of a renormalizable supersymmetric potential, and such models can be realized even in the context of a simple GUT extension of the MSSM. We evade naive ``naturalness'' arguments by allowing for more than one field to be relevant to inflation, as in ``hybrid inflation'' models, and we argue that this is the most natural possibility if inflation fields are to be associated with flat direction fields of a supersymmetric theory. Such models predict a very low Hubble constant during inflation, of order 103-104 GeV, a scalar density perturbation index n which is very close to or greater than unity, and negligible tensor perturbations. In addition, these models lead to a large spike in the density perturbation spectrum at short wavelengths.

  15. Two-field axion-monodromy hybrid inflation model: Dante's Waterfall

    NASA Astrophysics Data System (ADS)

    Carone, Christopher D.; Erlich, Joshua; Sensharma, Anuraag; Wang, Zhen

    2015-02-01

    We describe a hybrid axion-monodromy inflation model motivated by the Dante's Inferno scenario. In Dante's Inferno, a two-field potential features a stable trench along which a linear combination of the two fields slowly rolls, rendering the dynamics essentially identical to that of single-field chaotic inflation. A shift symmetry allows for the Lyth bound to be effectively evaded as in other axion-monodromy models. In our proposal, the potential is concave downward near the origin and the inflaton trajectory is a gradual downward spiral, ending at a point where the trench becomes unstable. There, the fields begin falling rapidly towards the minimum of the potential and inflation terminates as in a hybrid model. We find parameter choices that reproduce observed features of the cosmic microwave background, and discuss our model in light of recent results from the BICEP2 and Planck experiments.

  16. Nambu-Poisson gauge theory

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav; Schupp, Peter; Vysoký, Jan

    2014-06-01

    We generalize noncommutative gauge theory using Nambu-Poisson structures to obtain a new type of gauge theory with higher brackets and gauge fields. The approach is based on covariant coordinates and higher versions of the Seiberg-Witten map. We construct a covariant Nambu-Poisson gauge theory action, give its first order expansion in the Nambu-Poisson tensor and relate it to a Nambu-Poisson matrix model.

  17. Comparison of the Nernst-Planck model and the Poisson-Boltzmann model for electroosmotic flows in microchannels.

    PubMed

    Park, H M; Lee, J S; Kim, T W

    2007-11-15

    In the analysis of electroosmotic flows, the internal electric potential is usually modeled by the Poisson-Boltzmann equation. The Poisson-Boltzmann equation is derived from the assumption of thermodynamic equilibrium where the ionic distributions are not affected by fluid flows. Although this is a reasonable assumption for steady electroosmotic flows through straight microchannels, there are some important cases where convective transport of ions has nontrivial effects. In these cases, it is necessary to adopt the Nernst-Planck equation instead of the Poisson-Boltzmann equation to model the internal electric field. In the present work, the predictions of the Nernst-Planck equation are compared with those of the Poisson-Boltzmann equation for electroosmotic flows in various microchannels where the convective transport of ions is not negligible.

  18. Inflation and dark energy from f(R) gravity

    NASA Astrophysics Data System (ADS)

    Artymowski, Michał; Lalak, Zygmunt

    2014-09-01

The standard Starobinsky inflation has been extended to the R + αR^n − βR^(2−n) model to obtain a stable minimum of the Einstein-frame scalar potential of the auxiliary field. As a result we have obtained a scalar potential with a non-zero value of residual vacuum energy, which may be a source of Dark Energy. Our results can easily be made consistent with PLANCK or BICEP2 data for appropriate choices of the value of n.

  19. Seeded hot dark matter models with inflation

    NASA Technical Reports Server (NTRS)

    Gratsias, John; Scherrer, Robert J.; Steigman, Gary; Villumsen, Jens V.

    1993-01-01

    We examine massive neutrino (hot dark matter) models for large-scale structure in which the density perturbations are produced by randomly distributed relic seeds and by inflation. Power spectra, streaming velocities, and the Sachs-Wolfe quadrupole fluctuation are derived for this model. We find that the pure seeded hot dark matter model without inflation produces Sachs-Wolfe fluctuations far smaller than those seen by COBE. With the addition of inflationary perturbations, fluctuations consistent with COBE can be produced. The COBE results set the normalization of the inflationary component, which determines the large-scale (about 50/h Mpc) streaming velocities. The normalization of the seed power spectrum is a free parameter, which can be adjusted to obtain the desired fluctuations on small scales. The power spectra produced are very similar to those seen in mixed hot and cold dark matter models.

  20. Non-linear properties of metallic cellular materials with a negative Poisson's ratio

    NASA Technical Reports Server (NTRS)

    Choi, J. B.; Lakes, R. S.

    1992-01-01

    Negative Poisson's ratio copper foam was prepared and characterized experimentally. The transformation into re-entrant foam was accomplished by applying sequential permanent compressions above the yield point to achieve a triaxial compression. The Poisson's ratio of the re-entrant foam depended on strain and attained a relative minimum at strains near zero. Poisson's ratio as small as -0.8 was achieved. The strain dependence of properties occurred over a narrower range of strain than in the polymer foams studied earlier. Annealing of the foam resulted in a slightly greater magnitude of negative Poisson's ratio and greater toughness at the expense of a decrease in the Young's modulus.

  1. Hubble induced mass after inflation in spectator field models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujita, Tomohiro; Harigaya, Keisuke, E-mail: tomofuji@stanford.edu, E-mail: keisukeh@icrr.u-tokyo.ac.jp

    2016-12-01

Spectator field models such as the curvaton scenario and the modulated reheating are attractive scenarios for the generation of the cosmic curvature perturbation, as the constraints on inflation models are relaxed. In this paper, we discuss the effect of Hubble induced masses on the dynamics of spectator fields after inflation. We pay particular attention to the Hubble induced mass by the kinetic energy of an oscillating inflaton, which is generically unsuppressed but often overlooked. In the curvaton scenario, the Hubble induced mass relaxes the constraint on the property of the inflaton and the curvaton, such as the reheating temperature and the inflation scale. We comment on the implication of our discussion for baryogenesis in the curvaton scenario. In the modulated reheating, the predictions of models, e.g. the non-gaussianity, can be considerably altered. Furthermore, we propose a new model of the modulated reheating utilizing the Hubble induced mass which realizes a wide range of the local non-gaussianity parameter.

  2. Modelling infant mortality rate in Central Java, Indonesia use generalized poisson regression method

    NASA Astrophysics Data System (ADS)

    Prahutama, Alan; Sudarno

    2018-05-01

The infant mortality rate is the number of deaths under one year of age occurring among the live births in a given geographical area during a given year, per 1,000 live births occurring among the population of the same area during that year. This problem needs to be addressed because it is an important element of a country's economic development: a high infant mortality rate will disrupt the stability of a country, as it relates to the sustainability of the population. One type of regression model that can be used to analyze the relationship between a discrete dependent variable Y and independent variables X is the Poisson regression model. Regression models commonly used for discrete dependent variables include Poisson regression, negative binomial regression, and generalized Poisson regression. In this research, generalized Poisson regression modeling gives a better AIC value than Poisson regression. The most significant variable is the number of health facilities (X1), while the variable with the greatest influence on the infant mortality rate is average breastfeeding (X9).
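The AIC comparison above penalizes each model's maximized log-likelihood by its parameter count, AIC = 2k − 2·log L, and the lower value wins. A minimal intercept-only Poisson sketch on made-up counts (the MLE of the rate is simply the sample mean):

```python
import math

def poisson_loglik(lam, y):
    """Poisson log-likelihood for an intercept-only model with rate lam."""
    return sum(-lam + yi * math.log(lam) - math.log(math.factorial(yi)) for yi in y)

def aic(loglik, n_params):
    """Akaike Information Criterion: 2k - 2*logL, lower is better."""
    return 2 * n_params - 2 * loglik

# Hypothetical district-level death counts (illustrative only)
y = [4, 0, 2, 7, 1, 3, 0, 5]
lam_hat = sum(y) / len(y)                        # MLE for the intercept-only model
aic_pois = aic(poisson_loglik(lam_hat, y), n_params=1)
```

A generalized Poisson model adds a dispersion parameter (k = 2 here), so it must improve the log-likelihood by more than one unit to beat the plain Poisson on AIC.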

  3. Soft inflation

    NASA Technical Reports Server (NTRS)

    Berkin, Andrew L.; Maeda, Kei-Ichi; Yokoyama, Junichi

    1990-01-01

The cosmology resulting from two coupled scalar fields is studied: one field drives either a new-inflation or chaotic-type inflation, and the other has an exponentially decaying potential. Such a potential may appear in the conformally transformed frame of generalized Einstein theories like the Jordan-Brans-Dicke theory. The constraints necessary for successful inflation are examined. Conventional GUT models such as SU(5) were found to be compatible with new inflation, while restrictions on the self-coupling constant are significantly loosened for chaotic inflation.

  4. Structural Modeling of a Five-Meter Thin Film Inflatable Antenna/Concentrator

    NASA Technical Reports Server (NTRS)

    Smalley, Kurt B.; Tinker, Michael L.; Taylor, W. Scott; Brunty, Joseph A. (Technical Monitor)

    2002-01-01

    Inflatable structures have been the subject of renewed interest in recent years for space applications such as communications antennas, solar thermal propulsion, and space solar power. A major advantage of using inflatable structures in space is their extremely light weight. An obvious second advantage is on-orbit deployability and related space savings in the launch configuration. A recent technology demonstrator flight for inflatable structures was the Inflatable Antenna Experiment (IAE) that was deployed on orbit from the Shuttle Orbiter. Although difficulty was encountered in the inflation/deployment phase, the flight was successful overall and provided valuable experience in the use of such structures. Several papers on static structural analysis of inflated cylinders have been written, describing different techniques such as linear shell theory, and nonlinear and variational methods, but very little work had been done in dynamics of inflatable structures until recent years. In 1988 Leonard indicated that elastic beam bending modes could be utilized in approximating lower-order frequencies of inflatable beams. Main, et al. wrote a very significant 1995 paper describing results of modal tests of inflated cantilever beams and the determination of effective material properties. Changes in material properties for different pressures were also discussed, and the beam model was used in a more complex structure. The paper demonstrated that conventional finite element analysis packages could be very useful in the analysis of complex inflatable structures. The purposes of this paper are to discuss the methodology for dynamically characterizing a large 5-meter thin film inflatable reflector, and to discuss the test arrangement and results. Nonlinear finite element modal results are compared to modal test data. The work is significant and of considerable interest to researchers because of 1) the large size of the structure, making it useful for scaling studies, and 2

  5. First-order inflation. [in cosmology]

    NASA Technical Reports Server (NTRS)

    Kolb, Edward W.

    1991-01-01

    In the original proposal, inflation occurred in the process of a strongly first-order phase transition. This model was soon demonstrated to be fatally flawed. Subsequent models for inflation involved phase transitions that were second-order, or perhaps weakly first-order; some even involved no phase transition at all. Recently the possibility of inflation during a strongly first-order phase transition has been revived. In this paper, some models for first-order inflation are discussed, and unique signatures that result if inflation is realized in a first-order transition are emphasized. Some of the history of inflation is reviewed to demonstrate how first-order inflation differs from other models.

  6. Efficiency optimization of a fast Poisson solver in beam dynamics simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula

    2016-01-01

    Calculating the solution of Poisson's equation for the space charge force is still the major time consumer in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver used in beam dynamics simulations: the integrated Green's function method. We introduce three optimization steps for the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of the discrete Fourier transform for the Green's function; and using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy. This provides a fast routine for high-performance calculation of the space charge effect in accelerators.
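
    The zero-padded convolution that the optimizations above improve on can be sketched in a few lines. Below is a minimal Hockney-style free-space solve for a 2D point charge, using a plain (not integrated or reduced) Green's function on an illustrative grid with arbitrary units:

    ```python
    import numpy as np

    N, h = 64, 1.0 / 64
    rho = np.zeros((N, N))
    rho[N // 2, N // 2] = 1.0 / h**2          # unit point charge as a grid delta

    # 2D free-space Green's function G(r) = -ln(r) / (2*pi), tabulated on a
    # doubled grid with mirrored distances so the FFT product yields an
    # aperiodic (zero-padded) convolution rather than a wrap-around one.
    idx = np.arange(2 * N)
    idx = np.minimum(idx, 2 * N - idx)
    X, Y = np.meshgrid(idx * h, idx * h, indexing="ij")
    r = np.hypot(X, Y)
    r[0, 0] = h / 2                            # regularize the singular cell
    G = -np.log(r) / (2.0 * np.pi)

    rho_pad = np.zeros((2 * N, 2 * N))
    rho_pad[:N, :N] = rho
    phi_full = np.fft.irfft2(np.fft.rfft2(G) * np.fft.rfft2(rho_pad), s=(2 * N, 2 * N))
    phi = phi_full[:N, :N] * h**2              # potential on the physical grid
    ```

    Doubling the grid is what makes the convolution aperiodic; the paper's "novel fast convolution routine" is precisely an attempt to avoid paying the full cost of this explicit padding.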

  7. Observing the inflation potential. [in models of cosmological inflation]

    NASA Technical Reports Server (NTRS)

    Copeland, Edmund J.; Kolb, Edward W.; Liddle, Andrew R.; Lidsey, James E.

    1993-01-01

    We show how observations of the density perturbation (scalar) spectrum and the gravitational wave (tensor) spectrum allow a reconstruction of the potential responsible for cosmological inflation. A complete functional reconstruction or a perturbative approximation about a single scale are possible; the suitability of each approach depends on the data available. Consistency equations between the scalar and tensor spectra are derived, which provide a powerful signal of inflation.

  8. Modeling forest fire occurrences using count-data mixed models in Qiannan autonomous prefecture of Guizhou province in China.

    PubMed

    Xiao, Yundan; Zhang, Xiongqing; Ji, Ping

    2015-01-01

    Forest fires can cause catastrophic damage to natural resources and can also bring serious economic and social impacts. Meteorological factors play a critical role in establishing conditions favorable for a forest fire. Effective prediction of forest fire occurrences could prevent or minimize losses. This paper uses count data models to analyze fire occurrence data, which are likely to be dispersed and frequently contain an excess of zero counts (no fire occurrence). Such data have commonly been analyzed using count data models such as the Poisson model, the negative binomial (NB) model, zero-inflated models, and hurdle models. The data used in this paper were collected from Qiannan autonomous prefecture of Guizhou province in China. Using the fire occurrence data from January to April (spring fire season) for the years 1996 through 2007, we introduced random effects to the count data models. In this study, the results indicated that the prediction achieved through the NB model provided a more compelling and credible inferential basis for fitting actual forest fire occurrence, and the mixed-effects model performed better than the corresponding fixed-effects model in forest fire forecasting. Besides, among all meteorological factors, we found that relative humidity and wind speed are highly correlated with fire occurrence.

  9. Modeling Forest Fire Occurrences Using Count-Data Mixed Models in Qiannan Autonomous Prefecture of Guizhou Province in China

    PubMed Central

    Ji, Ping

    2015-01-01

    Forest fires can cause catastrophic damage to natural resources and can also bring serious economic and social impacts. Meteorological factors play a critical role in establishing conditions favorable for a forest fire. Effective prediction of forest fire occurrences could prevent or minimize losses. This paper uses count data models to analyze fire occurrence data, which are likely to be dispersed and frequently contain an excess of zero counts (no fire occurrence). Such data have commonly been analyzed using count data models such as the Poisson model, the negative binomial (NB) model, zero-inflated models, and hurdle models. The data used in this paper were collected from Qiannan autonomous prefecture of Guizhou province in China. Using the fire occurrence data from January to April (spring fire season) for the years 1996 through 2007, we introduced random effects to the count data models. In this study, the results indicated that the prediction achieved through the NB model provided a more compelling and credible inferential basis for fitting actual forest fire occurrence, and the mixed-effects model performed better than the corresponding fixed-effects model in forest fire forecasting. Besides, among all meteorological factors, we found that relative humidity and wind speed are highly correlated with fire occurrence. PMID:25790309
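
    A quick numerical illustration of why excess zeros matter for the count models mentioned above: simulate zero-inflated counts, fit a plain Poisson by its mean, and compare the observed zero fraction with the Poisson-implied one. All parameters are made up, and the z statistic uses a naive binomial variance that ignores estimation of the rate (published tests for inflated zeros use corrected variances):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, lam, pi0 = 2000, 1.8, 0.3                      # hypothetical ZIP parameters
    y = rng.poisson(lam, n) * (rng.random(n) > pi0)   # pi0 = structural-zero share

    lam_hat = y.mean()                                # plain-Poisson fit
    p0_obs = np.mean(y == 0)                          # observed zero fraction
    p0_fit = np.exp(-lam_hat)                         # zero probability under Poisson

    # naive score-type statistic for "more zeros than the Poisson fit predicts"
    z = (n * p0_obs - n * p0_fit) / np.sqrt(n * p0_fit * (1 - p0_fit))
    ```

    With these settings the observed zero fraction is well above the fitted Poisson value, the kind of discrepancy that motivates zero-inflated or hurdle models.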

  10. Nonlinear Poisson Equation for Heterogeneous Media

    PubMed Central

    Hu, Langhua; Wei, Guo-Wei

    2012-01-01

    The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration hyperpolarization effects due to intensive charges and possibly nonlinear, anisotropic, and heterogeneous media. The variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation to solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and by the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical predictions, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. PMID:22947937

  11. Nonlinear Poisson equation for heterogeneous media.

    PubMed

    Hu, Langhua; Wei, Guo-Wei

    2012-08-22

    The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration hyperpolarization effects due to intensive charges and possibly nonlinear, anisotropic, and heterogeneous media. The variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation to solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and by the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical predictions, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
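
    As a concrete, much simpler instance of a nonlinear Poisson problem, the sketch below solves the 1D model equation phi'' = sinh(phi) with Dirichlet data by Newton iteration on finite differences. This is a generic Poisson-Boltzmann-type nonlinearity chosen for illustration, not the paper's hyperpolarization functional:

    ```python
    import numpy as np

    n = 199
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    phi = 1.0 - x                                       # guess interpolating the BCs

    def residual(phi):
        p = np.concatenate(([1.0], phi, [0.0]))         # Dirichlet: phi(0)=1, phi(1)=0
        lap = (p[:-2] - 2.0 * p[1:-1] + p[2:]) / h**2   # second difference
        return lap - np.sinh(phi)

    for _ in range(20):                                 # Newton: solve J * dphi = -F
        J = (np.diag(-2.0 / h**2 - np.cosh(phi))
             + np.diag(np.full(n - 1, 1.0 / h**2), 1)
             + np.diag(np.full(n - 1, 1.0 / h**2), -1))
        phi = phi - np.linalg.solve(J, residual(phi))
    ```

    The Jacobian simply augments the discrete Laplacian's diagonal with -cosh(phi), the derivative of the nonlinearity; the same Newton structure carries over to more elaborate nonlinear Poisson functionals.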

  12. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    PubMed

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.

  13. Applying the compound Poisson process model to the reporting of injury-related mortality rates.

    PubMed

    Kegler, Scott R

    2007-02-16

    Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
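
    The variance inflation that motivates the compound Poisson adjustment is easy to reproduce: if incidents arrive as a Poisson process and each incident produces a random cluster of S cases, the count variance is lambda*E[S^2] rather than the Poisson value lambda*E[S]. The cluster-size distribution below is hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_periods, incident_rate = 50_000, 2.0
    sizes = np.array([1, 2, 3, 4])                 # fatalities per incident (illustrative)
    probs = np.array([0.85, 0.10, 0.03, 0.02])

    deaths = np.empty(n_periods)
    for i in range(n_periods):
        k = rng.poisson(incident_rate)             # incidents in this period
        deaths[i] = rng.choice(sizes, size=k, p=probs).sum()  # total fatalities
    ```

    With these numbers E[S] = 1.22 and E[S^2] = 1.84, so the variance-to-mean ratio is about 1.5; a simple Poisson interval estimator applied to these counts would be too narrow.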

  14. Understanding poisson regression.

    PubMed

    Hayat, Matthew J; Higgins, Melinda

    2014-04-01

    Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is well suited to analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including the addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included. Copyright 2014, SLACK Incorporated.
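
    Poisson regression as described here is a log-linear model fit by Fisher scoring (iteratively reweighted least squares); a minimal NumPy sketch on simulated data, with all coefficients and sample sizes illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 5000
    X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one covariate
    beta_true = np.array([0.5, 0.3])
    y = rng.poisson(np.exp(X @ beta_true))                 # counts with log link

    beta = np.zeros(2)
    for _ in range(25):                 # Fisher scoring / IRLS
        mu = np.exp(X @ beta)           # fitted means
        grad = X.T @ (y - mu)           # score vector
        H = X.T @ (X * mu[:, None])     # Fisher information (Var = mean for Poisson)
        beta = beta + np.linalg.solve(H, grad)
    ```

    The update exploits the Poisson property that the variance equals the mean; when that assumption fails, the overdispersion-parameter and negative binomial alternatives mentioned in the abstract adjust exactly this weighting.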

  15. The Poisson model limits in NBA basketball: Complexity in team sports

    NASA Astrophysics Data System (ADS)

    Martín-González, Juan Manuel; de Saá Guerra, Yves; García-Manso, Juan Manuel; Arriaza, Enrique; Valverde-Estévez, Teresa

    2016-12-01

    Team sports are frequently studied by researchers. There is a presumption that scoring in basketball is a random process that can be described using the Poisson model. Basketball is a collaboration-opposition sport, where the non-linear local interactions among players are reflected in the evolution of the score that ultimately determines the winner. In the NBA, the outcomes of close games are often decided in the last minute, where fouls play a main role. We examined 6130 NBA games in order to analyze the time intervals between baskets and the scoring dynamics. Most numbers of baskets (n) over a time interval (ΔT) follow a Poisson distribution, but some (e.g., ΔT = 10 s, n > 3) behave as a power law. The Poisson distribution includes most baskets in any game, in most game situations, but in close games in the last minute, the numbers of events are distributed following a power law. The number of events can be adjusted by a mixture of two distributions. In close games, both teams try to maintain their advantage solely in order to reach the last minute: a completely different game. For this reason, we propose to use the Poisson model as a reference. The complex dynamics will emerge from the limits of this model.
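
    The size of the tail discrepancy is easy to quantify: under a Poisson reference model with an illustrative scoring rate of about 0.03 baskets per second (not a figure from the paper), more than 3 baskets in a 10 s window is vanishingly rare, so any appreciable number of such windows signals non-Poisson dynamics:

    ```python
    import numpy as np

    lam = 0.03 * 10                          # illustrative mean baskets per 10 s window
    factorials = np.array([1, 1, 2, 6])
    k = np.arange(4)
    p_le3 = np.exp(-lam) * np.sum(lam**k / factorials)   # P(n <= 3) under Poisson
    p_tail = 1.0 - p_le3                                 # P(n > 3): on the order of 1e-4
    ```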

  16. Poisson-Boltzmann-Nernst-Planck model

    NASA Astrophysics Data System (ADS)

    Zheng, Qiong; Wei, Guo-Wei

    2011-05-01

    The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce the number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary, and a relaxation-based iterative procedure, to ensure efficient solution of the proposed PBNP equations. 
Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external

  17. Fast or Slow Rescue Ventilations: A Predictive Model of Gastric Inflation.

    PubMed

    Fitz-Clarke, John R

    2018-05-01

    Rescue ventilations are given during respiratory and cardiac arrest. Tidal volume must assure oxygen delivery; however, excessive pressure applied to an unprotected airway can cause gastric inflation, regurgitation, and pulmonary aspiration. The optimal technique provides mouth pressure and breath duration that minimize gastric inflation. It remains unclear if breath delivery should be fast or slow, and how inflation time affects the division of gas flow between the lungs and esophagus. A physiological model was used to predict and compare rates of gastric inflation and to determine ideal ventilation duration. Gas flow equations were based on standard pulmonary physiology. Gastric inflation was assumed to occur whenever mouth pressure exceeded lower esophageal sphincter pressure. Mouth pressure profiles that approximated mouth-to-mouth ventilation and bag-valve-mask ventilation were investigated. Target tidal volumes were set to 0.6 and 1.0 L. Compliance and airway resistance were varied. Rapid breaths shorter than 1 s required high mouth pressures, up to 25 cm H2O, to achieve the target lung volume, thus promoting gastric inflation. Slow breaths longer than 1 s permitted lower mouth pressures but increased the time over which airway pressure exceeded lower esophageal sphincter pressure. The gastric volume increased with breath durations that exceeded 1 s for both mouth pressure profiles. Breath duration of ∼1.0 s caused the least gastric inflation in most scenarios. Very low esophageal sphincter pressure favored a shift toward 0.5 s. High resistance and low compliance each increased gastric inflation and altered ideal breath times. The model illustrated a general theory of optimal rescue ventilation. Breath duration with an unprotected airway should be 1 s to minimize gastric inflation. Short pressure-driven and long duration-driven gastric inflation regimens provide a unifying explanation for results in past studies. Copyright © 2018 by Daedalus Enterprises.
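
    The pressure-threshold logic can be sketched as a toy two-pathway lumped-parameter model: flow enters the lungs through airway resistance against elastic recoil, and any mouth pressure above the lower esophageal sphincter (LES) opening pressure diverts flow to the stomach. All parameter values below (LES pressure, resistances, compliance) are hypothetical round numbers, not those of the paper's model:

    ```python
    import numpy as np

    def gastric_volume(p_peak, t_breath, p_les=10.0, r_aw=8.0, r_es=20.0,
                       c_rs=0.05, dt=1e-3):
        """Euler integration of a toy breath. Pressures in cm H2O, volumes in L,
        resistances in cm H2O/(L/s); mouth pressure follows a half-sine."""
        v_lung = v_gas = 0.0
        for t in np.arange(0.0, t_breath, dt):
            p_mouth = p_peak * np.sin(np.pi * t / t_breath)
            p_alv = v_lung / c_rs                      # elastic recoil pressure
            v_lung += dt * max(p_mouth - p_alv, 0.0) / r_aw
            if p_mouth > p_les:                        # sphincter forced open
                v_gas += dt * (p_mouth - p_les) / r_es
        return v_lung, v_gas
    ```

    In this sketch a peak pressure below the LES threshold produces no gastric inflation at all, while above the threshold a longer breath holds the sphincter open longer and increases gastric volume, mirroring the duration-driven regimen described in the abstract.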

  18. A physiologically based nonhomogeneous Poisson counter model of visual identification.

    PubMed

    Christensen, Jeppe H; Markussen, Bo; Bundesen, Claus; Kyllingsbæk, Søren

    2018-04-30

    A physiologically based nonhomogeneous Poisson counter model of visual identification is presented. The model was developed in the framework of a Theory of Visual Attention (Bundesen, 1990; Kyllingsbæk, Markussen, & Bundesen, 2012) and meant for modeling visual identification of objects that are mutually confusable and hard to see. The model assumes that the visual system's initial sensory response consists in tentative visual categorizations, which are accumulated by leaky integration of both transient and sustained components comparable with those found in spike density patterns of early sensory neurons. The sensory response (tentative categorizations) feeds independent Poisson counters, each of which accumulates tentative object categorizations of a particular type to guide overt identification performance. We tested the model's ability to predict the effect of stimulus duration on observed distributions of responses in a nonspeeded (pure accuracy) identification task with eight response alternatives. The time courses of correct and erroneous categorizations were well accounted for when the event-rates of competing Poisson counters were allowed to vary independently over time in a way that mimicked the dynamics of receptive field selectivity as found in neurophysiological studies. Furthermore, the initial sensory response yielded theoretical hazard rate functions that closely resembled empirically estimated ones. Finally, supplied with a Naka-Rushton type contrast gain control, the model provided an explanation for Bloch's law. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
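
    The race mechanism can be sketched with independent nonhomogeneous Poisson counters driven by a shared transient burst plus category-specific sustained rates. All rates below are made-up numbers, and the leaky integration and confusability structure of the actual model are omitted:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n_trials, T, n_steps = 2000, 0.2, 200      # 200 ms exposure, 1 ms bins
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps)

    # event rates (Hz): a transient component shared by all counters, plus a
    # sustained component that is strongest for the "correct" categorization
    transient = 400.0 * np.exp(-t / 0.02)
    rates = np.vstack([transient + s for s in (120.0, 40.0, 40.0)])

    counts = rng.poisson(rates * dt, size=(n_trials, 3, n_steps)).sum(axis=2)
    choice = counts.argmax(axis=1)             # report the counter with most events
    accuracy = (choice == 0).mean()
    ```

    Because the transient component is common to all counters, accuracy here is carried entirely by the sustained rate difference, loosely echoing how the model separates transient and sustained components of the sensory response.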

  19. The Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.

    ERIC Educational Resources Information Center

    Everett, James E.

    1993-01-01

    Addresses objections to the validity of assuming a Poisson loglinear model as the generating process for citations from one journal into another. Fluctuations in citation rate, serial dependence on citations, impossibility of distinguishing between rate changes and serial dependence, evidence for changes in Poisson rate, and transitivity…

  20. The BRST complex of homological Poisson reduction

    NASA Astrophysics Data System (ADS)

    Müller-Lennert, Martin

    2017-02-01

    BRST complexes are differential graded Poisson algebras. They are associated with a coisotropic ideal J of a Poisson algebra P and provide a description of the Poisson algebra (P/J)^J as their cohomology in degree zero. Using the notion of stable equivalence introduced in Felder and Kazhdan (Contemporary Mathematics 610, Perspectives in representation theory, 2014), we prove that any two BRST complexes associated with the same coisotropic ideal are quasi-isomorphic in the case P = R[V] where V is a finite-dimensional symplectic vector space and the bracket on P is induced by the symplectic structure on V. As a corollary, the cohomology of the BRST complexes is canonically associated with the coisotropic ideal J in the symplectic case. We do not require any regularity assumptions on the constraints generating the ideal J. We finally quantize the BRST complex rigorously in the presence of infinitely many ghost variables and discuss the uniqueness of the quantization procedure.

  1. Horizon feedback inflation

    NASA Astrophysics Data System (ADS)

    Fairbairn, Malcolm; Markkanen, Tommi; Rodriguez Roman, David

    2018-04-01

    We consider the effect of the Gibbons-Hawking radiation on the inflaton in the situation where it is coupled to a large number of spectator fields. We argue that this will lead to two important effects - a thermal contribution to the potential and a gradual change in parameters in the Lagrangian which results from thermodynamic and energy conservation arguments. We present a scenario of hilltop inflation where the field starts trapped at the origin before slowly experiencing a phase transition during which the field extremely slowly moves towards its zero temperature expectation value. We show that it is possible to obtain enough e-folds of expansion as well as the correct spectrum of perturbations without hugely fine-tuned parameters in the potential (albeit with many spectator fields). We also comment on how initial conditions for inflation can arise naturally in this situation.

  2. Guidelines for Use of the Approximate Beta-Poisson Dose-Response Model.

    PubMed

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2017-07-01

    For dose-response analysis in quantitative microbial risk assessment (QMRA), the exact beta-Poisson model is a two-parameter mechanistic dose-response model with parameters α > 0 and β > 0, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting PI(d) as the probability of infection at a given mean dose d, the widely used dose-response model PI(d) = 1 - (1 + d/β)^(-α) is an approximate formula for the exact beta-Poisson model. Notwithstanding the required conditions α < β and β > 1, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1 | α̂, β̂) as a validity measure (r is a random variable that follows a gamma distribution; α̂ and β̂ are the maximum likelihood estimates of α and β in the approximate model); and the constraint condition β̂ > (22α̂)^0.50 for 0.02 < α̂ < 2 as a rule of thumb to ensure an accurate approximation (e.g., Pr(0 < r < 1 | α̂, β̂) > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta-Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1 | α̂, β̂), the better the approximation. The results further showed that, among the total 85 models examined, 68 models were identified as valid approximate model applications, which all had a near perfect match to the corresponding exact beta-Poisson model dose-response curve. © 2016 Society for Risk Analysis.
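
    The approximation can be checked directly without evaluating the Kummer function: under the exact beta-Poisson model the dose is Poisson with mean d and each organism infects with probability p ~ Beta(α, β), so PI(d) = 1 - E_p[exp(-d·p)], which a Monte Carlo average estimates. The parameter pair below is illustrative and satisfies the rule of thumb above:

    ```python
    import numpy as np

    alpha, beta = 0.2, 10.0                  # beta > (22*alpha)**0.5, 0.02 < alpha < 2
    d = np.logspace(-2, 3, 50)               # mean doses
    p_approx = 1.0 - (1.0 + d / beta) ** (-alpha)   # widely used approximate formula

    # Monte Carlo estimate of the exact model: PI(d) = 1 - E[exp(-d p)], p ~ Beta
    rng = np.random.default_rng(3)
    p = rng.beta(alpha, beta, size=20_000)
    p_exact = 1.0 - np.exp(-np.outer(d, p)).mean(axis=1)

    err = np.max(np.abs(p_exact - p_approx))
    ```

    For parameters in the valid region the two curves agree to within roughly a percentage point over the whole dose range, consistent with the "near perfect match" the abstract reports for valid applications.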

  3. Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.

    PubMed

    Renner, Ian W; Warton, David I

    2013-03-01

    Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature. Copyright © 2013, The International Biometric Society.

  4. Bounce inflation cosmology with Standard Model Higgs boson

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Youping; Huang, Fa Peng; Zhang, Xinmin

    It is of great interest to connect cosmology in the early universe to the Standard Model of particle physics. In this paper, we try to construct a bounce inflation model with the Standard Model Higgs boson, where the one-loop correction is taken into account in the effective potential of the Higgs field. In this model, a Galileon term has been introduced to eliminate the ghost mode when the bounce happens. Moreover, due to the fact that the Fermion loop correction can make part of the Higgs potential negative, one naturally obtains a large equation of state (EoS) parameter in the contracting phase, which can eliminate the anisotropy problem. After the bounce, the model can drive the universe into the standard Higgs inflation phase, which can generate a nearly scale-invariant power spectrum.

  5. Differential expression analysis for RNAseq using Poisson mixed models

    PubMed Central

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny

    2017-01-01

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. PMID:28369632

  6. Differential expression analysis for RNAseq using Poisson mixed models.

    PubMed

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang

    2017-06-20

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
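
    The over-dispersion such a model targets can be reproduced with a minimal Poisson-lognormal simulation: a Gaussian random effect on the log scale makes the count variance exceed the mean. The group structure and parameter values below are illustrative, not MACAU's:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 20_000
    x = rng.binomial(1, 0.5, n)             # e.g. a two-group design
    u = rng.normal(0.0, 0.6, n)             # per-sample random effect on the log scale
    y = rng.poisson(np.exp(0.5 + 0.4 * x + u))

    m0 = y[x == 0].mean()                   # within-group mean
    v0 = y[x == 0].var()                    # within-group variance: exceeds the mean
    ```

    A second, correlated random effect (e.g. from a kinship matrix) is what the abstract's model adds on top of this independent over-dispersion term.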

  7. The Dependent Poisson Race Model and Modeling Dependence in Conjoint Choice Experiments

    ERIC Educational Resources Information Center

    Ruan, Shiling; MacEachern, Steven N.; Otter, Thomas; Dean, Angela M.

    2008-01-01

    Conjoint choice experiments are used widely in marketing to study consumer preferences amongst alternative products. We develop a class of choice models, belonging to the class of Poisson race models, that describe a "random utility" which lends itself to a process-based description of choice. The models incorporate a dependence structure which…

  8. Bayesian Poisson hierarchical models for crash data analysis: Investigating the impact of model choice on site-specific predictions.

    PubMed

    Khazraee, S Hadi; Johnson, Valen; Lord, Dominique

    2018-08-01

    The Poisson-gamma (PG) and Poisson-lognormal (PLN) regression models are among the most popular means for motor vehicle crash data analysis. Both models belong to the Poisson-hierarchical family of models. While numerous studies have compared the overall performance of alternative Bayesian Poisson-hierarchical models, little research has addressed the impact of model choice on the expected crash frequency prediction at individual sites. This paper sought to examine whether there are any trends among candidate models' predictions, e.g., that an alternative model's prediction for sites with certain conditions tends to be higher (or lower) than that from another model. In addition to the PG and PLN models, this research formulated a new member of the Poisson-hierarchical family of models: the Poisson-inverse gamma (PIGam). Three field datasets (from Texas, Michigan and Indiana) covering a wide range of over-dispersion characteristics were selected for analysis. This study demonstrated that the model choice can be critical when the calibrated models are used for prediction at new sites, especially when the data are highly over-dispersed. For all three datasets, the PIGam model would predict higher expected crash frequencies than would the PLN and PG models, in order, indicating a clear link between the models' predictions and the shape of their mixing distributions (i.e., gamma, lognormal, and inverse gamma, respectively). The thicker tail of the PIGam and PLN models (in order) may provide an advantage when the data are highly over-dispersed. The analysis results also illustrated a major deficiency of the Deviance Information Criterion (DIC) in comparing the goodness-of-fit of hierarchical models; models with drastically different sets of coefficients (and thus predictions for new sites) may yield similar DIC values, because the DIC only accounts for the parameters in the lowest (observation) level of the hierarchy and ignores the higher levels (regression coefficients
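
    The link between mixing-distribution shape and high-end predictions can be illustrated by matching gamma, lognormal, and inverse-gamma rate distributions on their first two moments and comparing upper quantiles. The parameters below are chosen so each has mean 1 and variance 1 (illustrative only, not the paper's fitted values):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 1_000_000
    # three mixing distributions for the Poisson rate, all with mean 1, variance 1
    rate_pg  = rng.gamma(shape=1.0, scale=1.0, size=n)                    # PG
    rate_pln = rng.lognormal(-np.log(2) / 2, np.sqrt(np.log(2)), size=n)  # PLN
    rate_pig = 2.0 / rng.gamma(shape=3.0, scale=1.0, size=n)              # PIGam

    q_pg, q_pln, q_pig = (np.quantile(r, 0.999)
                          for r in (rate_pg, rate_pln, rate_pig))
    ```

    Even with identical means and variances, the 99.9th percentiles order as gamma < lognormal < inverse gamma, consistent with the tail-thickness ranking the abstract reports for the models' predictions.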

  9. Extended Poisson process modelling and analysis of grouped binary data.

    PubMed

    Faddy, Malcolm J; Smith, David M

    2012-05-01

    A simple extension of the Poisson process results in binomially distributed counts of events in a time interval. A further extension generalises this to probability distributions under- or over-dispersed relative to the binomial distribution. Substantial levels of under-dispersion are possible with this modelling, but only modest levels of over-dispersion - up to Poisson-like variation. Although simple analytical expressions for the moments of these probability distributions are not available, approximate expressions for the mean and variance are derived, and used to re-parameterise the models. The modelling is applied in the analysis of two published data sets, one showing under-dispersion and the other over-dispersion. More appropriate assessment of the precision of estimated parameters and reliable model checking diagnostics follow from this more general modelling of these data sets. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Anisotropic inflation with a non-minimally coupled electromagnetic field to gravity

    NASA Astrophysics Data System (ADS)

    Adak, Muzaffer; Akarsu, Özgür; Dereli, Tekin; Sert, Özcan

    2017-11-01

    We consider the non-minimal model of gravity in the Y(R)F² form. We investigate a particular case of the model, for which the higher order derivatives are eliminated but the scalar curvature R is kept dynamical via the constraint Y_R F_mn F^mn = -2/κ². The effective fluid obtained can be represented by an interacting electromagnetic field and vacuum depending on Y(R); namely, the energy density of the vacuum tracks R while the energy density of the conventional electromagnetic field is dynamically scaled with the factor Y(R)/2. We give exact solutions for anisotropic inflation by assuming that the volume scale factor of the Universe exhibits a power-law expansion. The directional scale factors do not necessarily exhibit power-law expansion, which would give rise to a constant expansion anisotropy, but expand non-trivially and give rise to a non-monotonically evolving expansion anisotropy that eventually converges to a non-zero constant. Relying on this fact, we discuss anisotropic e-folding during inflation by considering the observed scale invariance in the CMB and demanding that the Universe undergo the same number of e-folds in all directions. We calculate the residual expansion anisotropy at the end of inflation, though as a result of the non-monotonic behaviour of the expansion anisotropy all the axes of the Universe undergo the same number of e-folds by the end of inflation. We also discuss the generation of the modified electromagnetic field during the first few e-folds of inflation and its persistence against the vacuum until the end of inflation.

  11. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    NASA Astrophysics Data System (ADS)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem on fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we show that when using the Univariate Poisson Model (UPM) estimates as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.
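
    The dependence of the tuning parameters on the acceptance rate can be sketched with a toy random-walk Metropolis sampler (a hypothetical illustration on a one-dimensional normal target, not the MPL posterior): the proposal scale is adjusted between sampling windows until the observed acceptance rate approaches a target value.

```python
import math
import random

random.seed(0)

def log_target(x):
    # Stand-in posterior: standard normal log-density (up to a constant).
    return -0.5 * x * x

def adaptive_rw_metropolis(n_iter, step, target_rate=0.44, window=100):
    """Random-walk Metropolis whose proposal scale is tuned to target_rate."""
    x, logp = 0.0, log_target(0.0)
    accepted_total, accepted_win, n_win = 0, 0, 0
    for i in range(1, n_iter + 1):
        prop = x + random.gauss(0.0, step)
        lp = log_target(prop)
        if math.log(random.random()) < lp - logp:
            x, logp = prop, lp
            accepted_total += 1
            accepted_win += 1
        if i % window == 0:
            n_win += 1
            rate = accepted_win / window
            # Decaying adjustment: grow the scale when accepting too often,
            # shrink it when accepting too rarely.
            step *= math.exp((rate - target_rate) / math.sqrt(n_win))
            accepted_win = 0
    return step, accepted_total / n_iter

step, rate = adaptive_rw_metropolis(20000, step=25.0)  # deliberately bad start
```

Even from a badly chosen initial scale, the windowed adjustment settles near a scale whose acceptance rate is close to the target, which is the kind of tuning the record says the MPL fit requires.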

  12. Poisson sigma models, reduction and nonlinear gauge theories

    NASA Astrophysics Data System (ADS)

    Signori, Daniele

    This dissertation comprises two main lines of research. Firstly, we study non-linear gauge theories for principal bundles, where the structure group is replaced by a Lie groupoid. We follow the approach of Moerdijk-Mrcun and establish its relation with the existing physics literature. In particular, we derive a new formula for the gauge transformation which closely resembles and generalizes the classical formulas found in Yang-Mills gauge theories. Secondly, we give a field theoretic interpretation of the BRST (Becchi-Rouet-Stora-Tyutin) and BFV (Batalin-Fradkin-Vilkovisky) methods for the reduction of coisotropic submanifolds of Poisson manifolds. The generalized Poisson sigma models that we define are related to the deformation quantization problems of coisotropic submanifolds using homotopical algebras.

  13. Modulus D-term inflation

    NASA Astrophysics Data System (ADS)

    Kadota, Kenji; Kobayashi, Tatsuo; Saga, Ikumi; Sumita, Keigo

    2018-04-01

    We propose a new model of single-field D-term inflation in supergravity, where the inflation is driven by a single modulus field which transforms non-linearly under the U(1) gauge symmetry. One of the notable features of our modulus D-term inflation scenario is that the global U(1) remains unbroken in the vacuum and hence our model is not plagued by the cosmic string problem which can exclude most of the conventional D-term inflation models proposed so far due to the CMB observations.

  14. Mixed Poisson distributions in exact solutions of stochastic autoregulation models.

    PubMed

    Iyer-Biswas, Srividya; Jayaprakash, C

    2014-11-01

    In this paper we study the interplay between stochastic gene expression and system design using simple stochastic models of autoactivation and autoinhibition. Using the Poisson representation, a technique whose particular usefulness in the context of nonlinear gene regulation models we elucidate, we find exact results for these feedback models in the steady state. Further, we exploit this representation to analyze the parameter spaces of each model, determine which dimensionless combinations of rates are the shape determinants for each distribution, and thus demarcate where in the parameter space qualitatively different behaviors arise. These behaviors include power-law-tailed distributions, bimodal distributions, and sub-Poisson distributions. We also show how these distribution shapes change when the strength of the feedback is tuned. Using our results, we reexamine how well the autoinhibition and autoactivation models serve their conventionally assumed roles as paradigms for noise suppression and noise exploitation, respectively.

  15. Poisson-Boltzmann-Nernst-Planck model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng Qiong; Wei Guowei; Department of Electrical and Computer Engineering, Michigan State University, East Lansing, Michigan 48824

    2011-05-21

    The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce the number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary, and relaxation based iterative procedure, to ensure efficient solution of the proposed PBNP equations. Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external voltages.

  16. Poisson-Boltzmann-Nernst-Planck model.

    PubMed

    Zheng, Qiong; Wei, Guo-Wei

    2011-05-21

    The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce the number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary, and relaxation based iterative procedure, to ensure efficient solution of the proposed PBNP equations. Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external voltages.

  17. Bringing consistency to simulation of population models--Poisson simulation as a bridge between micro and macro simulation.

    PubMed

    Gustafsson, Leif; Sternad, Mikael

    2007-10-01

    Population models concern collections of discrete entities such as atoms, cells, humans, animals, etc., where the focus is on the number of entities in a population. Because of the complexity of such models, simulation is usually needed to reproduce their complete dynamic and stochastic behaviour. Two main types of simulation models are used for different purposes, namely micro-simulation models, where each individual is described with its particular attributes and behaviour, and macro-simulation models based on stochastic differential equations, where the population is described in aggregated terms by the number of individuals in different states. Consistency between micro- and macro-models is a crucial but often neglected aspect. This paper demonstrates how the Poisson Simulation technique can be used to produce a population macro-model consistent with the corresponding micro-model. This is accomplished by defining Poisson Simulation in strictly mathematical terms as a series of Poisson processes that generate sequences of Poisson distributions with dynamically varying parameters. The method can be applied to any population model. It provides the unique stochastic and dynamic macro-model consistent with a correct micro-model. The paper also presents a general macro form for stochastic and dynamic population models. In an appendix Poisson Simulation is compared with Markov Simulation showing a number of advantages. Especially aggregation into state variables and aggregation of many events per time-step makes Poisson Simulation orders of magnitude faster than Markov Simulation. Furthermore, you can build and execute much larger and more complicated models with Poisson Simulation than is possible with the Markov approach.
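
    The core of Poisson Simulation, replacing each deterministic flow F·Δt of the macro-model by a Poisson-distributed number of events Po(F·Δt), can be sketched for a simple immigration-death population model (a hypothetical example with assumed rates, not taken from the paper):

```python
import math
import random

random.seed(7)

def poisson(lam):
    # Knuth's algorithm; fine for the small per-step rates used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def poisson_simulation(x0=0, immigration=50.0, death=0.5, dt=0.1, t_end=200.0):
    """Macro-model dx/dt = immigration - death*x with Poisson-distributed flows."""
    x, path = x0, []
    for _ in range(int(t_end / dt)):
        inflow = poisson(immigration * dt)     # events entering this time step
        outflow = poisson(death * x * dt)      # events leaving this time step
        x = max(x + inflow - outflow, 0)       # a count of entities stays >= 0
        path.append(x)
    return path

path = poisson_simulation()
# The deterministic macro-model settles at immigration/death = 100 entities;
# the Poisson simulation fluctuates around that level, as a micro-model would.
second_half = path[len(path) // 2:]
avg = sum(second_half) / len(second_half)
```

Because each flow is an integer-valued Poisson draw, the state stays a whole number of entities, which is exactly the consistency with the micro-model that the paper argues plain stochastic differential equations lack.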

  18. How thermal inflation can save minimal hybrid inflation in supergravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dimopoulos, Konstantinos; Owen, Charlotte

    2016-10-12

    Minimal hybrid inflation in supergravity has been ruled out by the 2015 Planck observations because the spectral index of the produced curvature perturbation falls outside observational bounds. To resurrect the model, a number of modifications have been put forward but many of them spoil the accidental cancellation that resolves the η-problem and require complicated Kähler constructions to counterbalance the lost cancellation. In contrast, in this paper the model is rendered viable by supplementing the scenario with a brief period of thermal inflation, which follows the reheating of primordial inflation. The scalar field responsible for thermal inflation requires a large non-zero vacuum expectation value (VEV) and a flat potential. We investigate the VEV of such a flaton field and its subsequent effect on the inflationary observables. We find that, for large VEV, minimal hybrid inflation in supergravity produces a spectral index within the 1-σ Planck bound and a tensor-to-scalar ratio which may be observable in the near future. The mechanism is applicable to other inflationary models.

  19. Tax revenue and inflation rate predictions in Banda Aceh using Vector Error Correction Model (VECM)

    NASA Astrophysics Data System (ADS)

    Maulia, Eva; Miftahuddin; Sofyan, Hizir

    2018-05-01

    A country has some important parameters for achieving economic welfare, such as tax revenues and inflation. One of the largest revenues in the state budget of Indonesia comes from the tax sector. Besides, the rate of inflation occurring in a country can be used as one measure of the economic problems that the country is facing. Given the importance of tax revenue and inflation rate control in achieving economic prosperity, it is necessary to analyze the relationship between tax revenue and the inflation rate and to forecast both. VECM (Vector Error Correction Model) was chosen as the method used in this research, because the data used are multivariate time series data. This study aims to produce a VECM model with optimal lag and to predict tax revenue and the inflation rate with the VECM model. The results show that the best model for the tax revenue and inflation rate data in Banda Aceh City is a VECM with optimal lag 3, or VECM(3). Of the seven models formed, the income tax revenue model is significant. The predicted tax revenue and inflation rate in Kota Banda Aceh for the next 6, 12 and 24 periods (months) obtained using VECM(3) are considered valid, since they have a minimum error value compared to the other models.

  20. Inflation from Minkowski space

    DOE PAGES

    Pirtskhalava, David; Santoni, Luca; Trincherini, Enrico; ...

    2014-12-23

    Here, we propose a class of scalar models that, once coupled to gravity, lead to cosmologies that smoothly and stably connect an inflationary quasi-de Sitter universe to a low, or even zero-curvature, maximally symmetric spacetime in the asymptotic past, strongly violating the null energy condition (Ḣ ≫ H²) at intermediate times. The models are deformations of the conformal galileon Lagrangian and are therefore based on symmetries, both exact and approximate, that ensure the quantum robustness of the whole picture. The resulting cosmological backgrounds can be viewed as regularized extensions of the galilean genesis scenario, or, equivalently, as 'early-time-complete' realizations of inflation. The late-time inflationary dynamics possesses phenomenologically interesting properties: it can produce a large tensor-to-scalar ratio within the regime of validity of the effective field theory and can lead to sizeable equilateral non-Gaussianities.

  1. Poisson-Like Spiking in Circuits with Probabilistic Synapses

    PubMed Central

    Moreno-Bote, Rubén

    2014-01-01

    Neuronal activity in cortex is variable both spontaneously and during stimulation, and it has the remarkable property that it is Poisson-like over broad ranges of firing rates covering from virtually zero to hundreds of spikes per second. The mechanisms underlying cortical-like spiking variability over such a broad continuum of rates are currently unknown. We show that neuronal networks endowed with probabilistic synaptic transmission, a well-documented source of variability in cortex, robustly generate Poisson-like variability over several orders of magnitude in their firing rate without fine-tuning of the network parameters. Other sources of variability, such as random synaptic delays or spike generation jittering, do not lead to Poisson-like variability at high rates because they cannot be sufficiently amplified by recurrent neuronal networks. We also show that probabilistic synapses predict Fano factor constancy of synaptic conductances. Our results suggest that synaptic noise is a robust and sufficient mechanism for the type of variability found in cortex. PMID:25032705
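
    One ingredient of this argument, that probabilistic synaptic release preserves Poisson statistics, can be checked in a toy simulation (a sketch, not the paper's recurrent network model): independently thinning a Poisson spike train with release probability p leaves the released-event counts Poisson, so their Fano factor (variance/mean) stays near 1.

```python
import math
import random

random.seed(3)

def poisson(lam):
    # Knuth's algorithm for Poisson draws at modest rates.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def fano_of_thinned_poisson(rate=40.0, window=0.5, p_release=0.25, trials=4000):
    """Count released events per window when each presynaptic spike is
    transmitted independently with probability p_release."""
    counts = []
    for _ in range(trials):
        spikes = poisson(rate * window)  # presynaptic spikes in one window
        released = sum(random.random() < p_release for _ in range(spikes))
        counts.append(released)
    m = sum(counts) / trials
    v = sum((c - m) ** 2 for c in counts) / (trials - 1)
    return m, v / m

mean_count, fano = fano_of_thinned_poisson()
# A thinned Poisson process is Poisson with rate p_release * rate, so the
# released-event counts have mean 5 per window and Fano factor close to 1.
```

The paper's stronger claim, that recurrent networks with probabilistic synapses generate Poisson-like variability across firing rates, requires the full network simulation; this only illustrates the thinning property of the synapse model.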

  2. Dielectric Self-Energy in Poisson-Boltzmann and Poisson-Nernst-Planck Models of Ion Channels

    PubMed Central

    Corry, Ben; Kuyucak, Serdar; Chung, Shin-Ho

    2003-01-01

    We demonstrated previously that the two continuum theories widely used in modeling biological ion channels give unreliable results when the radius of the conduit is less than two Debye lengths. The reason for this failure is the neglect of surface charges on the protein wall induced by permeating ions. Here we attempt to improve the accuracy of the Poisson-Boltzmann and Poisson-Nernst-Planck theories, when applied to channel-like environments, by including a specific dielectric self-energy term to overcome spurious shielding effects inherent in these theories. By comparing results with Brownian dynamics simulations, we show that the inclusion of an additional term in the equations yields significant qualitative improvements. The modified theories perform well in very wide and very narrow channels, but are less successful at intermediate sizes. The situation is worse in multi-ion channels because of the inability of the continuum theories to handle the ion-to-ion interactions correctly. Thus, further work is required if these continuum theories are to be reliably salvaged for quantitative studies of biological ion channels in all situations. PMID:12770869

  3. Microwave background anisotropies in quasiopen inflation

    NASA Astrophysics Data System (ADS)

    García-Bellido, Juan; Garriga, Jaume; Montes, Xavier

    1999-10-01

    Quasiopenness seems to be generic to multifield models of single-bubble open inflation. Instead of producing infinite open universes, these models actually produce an ensemble of very large but finite inflating islands. In this paper we study the possible constraints from CMB anisotropies on existing models of open inflation. The effect of supercurvature anisotropies combined with the quasiopenness of the inflating regions make some models incompatible with observations, and severely reduces the parameter space of others. Supernatural open inflation and the uncoupled two-field model seem to be ruled out due to these constraints for values of Ω0 ≲ 0.98. Others, such as the open hybrid inflation model with suitable parameters for the slow-roll potential, can be made compatible with observations.

  4. Simple cosmological model with inflation and late times acceleration

    NASA Astrophysics Data System (ADS)

    Szydłowski, Marek; Stachowski, Aleksander

    2018-03-01

    In the framework of polynomial Palatini cosmology, we investigate a simple homogeneous and isotropic cosmological model with matter in the Einstein frame. We show that in this model, during cosmic evolution, early inflation appears, followed by an accelerating phase of the expansion at late times. In this frame we obtain the Friedmann equation with matter and dark energy in the form of a scalar field with a potential whose form is determined in a covariant way by the Ricci scalar of the FRW metric. The energy densities of matter and dark energy are also parameterized through the Ricci scalar. Early inflation is obtained only for an infinitesimally small fraction of the energy density of matter. There exists an interaction between the matter and dark energy because the dark energy is decaying. To characterize inflation we calculate the slow-roll parameters and the constant-roll parameter in terms of the Ricci scalar. We have found a characteristic behaviour of the dark energy density as a function of cosmic time: it follows a logistic-like curve which interpolates between two almost constant-value phases. From the required number of e-folds we have found a bound on the model parameter.

  5. Poisson regression models outperform the geometrical model in estimating the peak-to-trough ratio of seasonal variation: a simulation study.

    PubMed

    Christensen, A L; Lundbye-Christensen, S; Dethlefsen, C

    2011-12-01

    Several statistical methods of assessing seasonal variation are available. Brookhart and Rothman [3] proposed a second-order moment-based estimator based on the geometrical model derived by Edwards [1], and reported that this estimator is superior in estimating the peak-to-trough ratio of seasonal variation compared with Edwards' estimator with respect to bias and mean squared error. Alternatively, seasonal variation may be modelled using a Poisson regression model, which provides flexibility in modelling the pattern of seasonal variation and adjustments for covariates. Based on a Monte Carlo simulation study, three estimators, one based on the geometrical model and two based on log-linear Poisson regression models, were evaluated with regard to bias and standard deviation (SD). We evaluated the estimators on data simulated according to schemes varying in seasonal variation and presence of a secular trend. All methods and analyses in this paper are available in the R package Peak2Trough [13]. Applying a Poisson regression model resulted in lower absolute bias and SD for data simulated according to the corresponding model assumptions. Poisson regression models had lower bias and SD for data simulated to deviate from the corresponding model assumptions than the geometrical model. This simulation study encourages the use of Poisson regression models in estimating the peak-to-trough ratio of seasonal variation as opposed to the geometrical model. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
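
    For a log-linear Poisson model with a single harmonic, log λ_t = a + b·cos(ωt) + c·sin(ωt), the peak-to-trough ratio is exp(2·sqrt(b² + c²)). A small sketch with simulated monthly counts and assumed coefficients (here the harmonic is fitted by a moment-based Fourier projection of log mean counts, rather than the full Poisson regression the paper evaluates):

```python
import math
import random

random.seed(11)

def poisson(lam):
    # Knuth's algorithm for Poisson draws at modest rates.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

A, B, C, YEARS = 3.0, 0.3, 0.2, 10       # assumed model coefficients
OMEGA = 2 * math.pi / 12                  # monthly data, yearly cycle
true_ptr = math.exp(2 * math.hypot(B, C))  # true peak-to-trough ratio

# Simulate monthly Poisson counts and average over years.
monthly_mean = []
for m in range(12):
    lam = math.exp(A + B * math.cos(OMEGA * m) + C * math.sin(OMEGA * m))
    monthly_mean.append(sum(poisson(lam) for _ in range(YEARS)) / YEARS)

# Project log mean counts onto the first harmonic (discrete Fourier coefficients).
y = [math.log(v) for v in monthly_mean]
b_hat = sum(y[m] * math.cos(OMEGA * m) for m in range(12)) / 6
c_hat = sum(y[m] * math.sin(OMEGA * m) for m in range(12)) / 6
ptr_hat = math.exp(2 * math.hypot(b_hat, c_hat))
```

The closed-form relation between the harmonic amplitude and the peak-to-trough ratio is what makes the Poisson regression estimators in the paper directly comparable with the geometrical model's estimate.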

  6. Gravitational waves from warm inflation

    NASA Astrophysics Data System (ADS)

    Li, Xi-Bin; Wang, He; Zhu, Jian-Yang

    2018-03-01

    A fundamental prediction of inflation is a nearly scale-invariant spectrum of gravitational waves. The features of such a signal provide extremely important information about the physics of the early universe. In this paper, we focus on several topics in warm inflation. First, we discuss the stability of warm inflation using nonequilibrium statistical mechanics, which gives a more fundamental physical illustration of the thermal properties of the model. Then, we calculate the power spectrum of gravitational waves generated during warm inflation, to which three components contribute: a thermal term, a quantum term, and a cross term combining the two. We also discuss some interesting properties of these terms and illustrate them in different panels. Since the warm inflation model has its own distinctive observational properties, we conclude with a discussion of observational effects that can distinguish it from cold inflation.

  7. Prediction of forest fires occurrences with area-level Poisson mixed models.

    PubMed

    Boubeta, Miguel; Lombardía, María José; Marey-Pérez, Manuel Francisco; Morales, Domingo

    2015-05-01

    The number of fires in forest areas of Galicia (north-west of Spain) during the summer period is quite high. Local authorities are interested in analyzing the factors that explain this phenomenon. Poisson regression models are good tools for describing and predicting the number of fires per forest areas. This work employs area-level Poisson mixed models for treating real data about fires in forest areas. A parametric bootstrap method is applied for estimating the mean squared errors of fires predictors. The developed methodology and software are applied to a real data set of fires in forest areas of Galicia. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. On supermatrix models, Poisson geometry, and noncommutative supersymmetric gauge theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klimčík, Ctirad

    2015-12-15

    We construct a new supermatrix model which represents a manifestly supersymmetric noncommutative regularisation of the UOSp(2|1) supersymmetric Schwinger model on the supersphere. Our construction is much simpler than those already existing in the literature and it was found by using Poisson geometry in a substantial way.

  9. Mutated hilltop inflation revisited

    NASA Astrophysics Data System (ADS)

    Pal, Barun Kumar

    2018-05-01

    In this work we re-investigate the pros and cons of mutated hilltop inflation. Applying the Hamilton-Jacobi formalism we solve the inflationary dynamics and find that inflation proceeds along the W₋₁ branch of the Lambert W function. Depending on the model parameter, the mutated hilltop model yields two types of inflationary solutions: one corresponds to a small inflaton excursion during observable inflation and the other describes large-field inflation. The inflationary observables from the curvature perturbation are in tune with current data for a wide range of the model parameter. The small-field branch predicts a negligible tensor-to-scalar ratio, r ~ O(10⁻⁴), while the large-field sector is capable of generating a high amplitude for tensor perturbations, r ~ O(10⁻¹). Also, the spectral index is almost independent of the model parameter, along with a very small negative scalar running. Finally we find that mutated hilltop inflation closely resembles the α-attractor class of inflationary models in the limit αφ ≫ 1.

  10. Inflation at the electroweak scale

    NASA Technical Reports Server (NTRS)

    Knox, Lloyd; Turner, Michael S.

    1993-01-01

    We present a model for slow-rollover inflation where the vacuum energy that drives inflation is of the order of G_F⁻²; unlike most models, the conversion of vacuum energy to radiation ('reheating') is moderately efficient. The scalar field responsible for inflation is a standard-model singlet, develops a vacuum expectation value of 4 × 10⁶ GeV, has a mass of about 1 GeV, and can play a role in electroweak phenomena. We also discuss models where the energy scale of inflation is somewhat larger, but still well below the unification scale.

  11. Zeroth Poisson Homology, Foliated Cohomology and Perfect Poisson Manifolds

    NASA Astrophysics Data System (ADS)

    Martínez-Torres, David; Miranda, Eva

    2018-01-01

    We prove that, for compact regular Poisson manifolds, the zeroth homology group is isomorphic to the top foliated cohomology group, and we give some applications. In particular, we show that, for regular unimodular Poisson manifolds, top Poisson and foliated cohomology groups are isomorphic. Inspired by the symplectic setting, we define what a perfect Poisson manifold is. We use these Poisson homology computations to provide families of perfect Poisson manifolds.

  12. Modelling inflation in transportation, communication and financial services using B-Spline time series model

    NASA Astrophysics Data System (ADS)

    Suparti; Prahutama, Alan; Santoso, Rukun

    2018-05-01

    Inflation is an increase in the price of goods and services in general, where the goods and services are the basic needs of society, or a decline in the selling power of a country's currency. A significant inflationary increase occurred in 2013. This increase was driven by significant increases in several inflation sectors/groups, i.e. transportation, communication and financial services; the foodstuff sector; and the housing, water, electricity, gas and fuel sectors. However, the most significant contributions occurred in the transportation, communication and financial services sector. Inflation in this sector is modelled using the B-Spline time series approach, where the response variable is Yt and the predictor is a significant lag (in this case Yt-1). In B-spline time series modelling, the order and the optimum knot points must be determined. The optimum knots are determined using Generalized Cross Validation (GCV). For inflation in the transportation, communication and financial services sector, the resulting model is a B-spline of order 2 with 2 knot points, producing a MAPE of less than 50%.

  13. Weather variability and the incidence of cryptosporidiosis: comparison of time series poisson regression and SARIMA models.

    PubMed

    Hu, Wenbiao; Tong, Shilu; Mengersen, Kerrie; Connell, Des

    2007-09-01

    Few studies have examined the relationship between weather variables and cryptosporidiosis in Australia. This paper examines the potential impact of weather variability on the transmission of cryptosporidiosis and explores the possibility of developing an empirical forecast system. Data on weather variables, notified cryptosporidiosis cases, and population size in Brisbane were supplied by the Australian Bureau of Meteorology, Queensland Department of Health, and Australian Bureau of Statistics for the period January 1, 1996-December 31, 2004. Time series Poisson regression and seasonal auto-regressive integrated moving average (SARIMA) models were used to examine the potential impact of weather variability on the transmission of cryptosporidiosis. Both the time series Poisson regression and SARIMA models show that seasonal and monthly maximum temperature at a prior moving average of 1 and 3 months were significantly associated with cryptosporidiosis. The models suggest that there may be 50 more cases a year for a 1 °C increase in average maximum temperature in Brisbane. Model assessments indicated that the SARIMA model had better predictive ability than the Poisson regression model (SARIMA: root mean square error (RMSE): 0.40, Akaike information criterion (AIC): -12.53; Poisson regression: RMSE: 0.54, AIC: -2.84). Furthermore, the analysis of residuals shows that the time series Poisson regression appeared to violate a modelling assumption, in that residual autocorrelation persisted. The results of this study suggest that weather variability (particularly maximum temperature) may have played a significant role in the transmission of cryptosporidiosis. A SARIMA model may be a better predictive model than a Poisson regression model for assessing the relationship between weather variability and the incidence of cryptosporidiosis.

  14. Using the Gamma-Poisson Model to Predict Library Circulations.

    ERIC Educational Resources Information Center

    Burrell, Quentin L.

    1990-01-01

    Argues that the gamma mixture of Poisson processes, for all its perceived defects, can be used to make predictions regarding future library book circulations of a quality adequate for general management requirements. The use of the model is extensively illustrated with data from two academic libraries. (Nine references) (CLB)
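    Under the gamma mixture of Poisson processes, each item's borrowing rate is Gamma(α, β)-distributed and its counts are Poisson; the posterior-mean prediction for the next period then has a simple closed form. A minimal sketch (the α, β values and the counts are illustrative, not Burrell's library data):

```python
def predict_next_period(x, t, alpha, beta):
    """Expected circulations next period for an item borrowed x times in t
    periods, assuming a Gamma(alpha, beta) prior on its Poisson rate
    (beta = rate parameter). Posterior rate is Gamma(alpha + x, beta + t)."""
    return (alpha + x) / (beta + t)

# Illustrative: prior mean rate 2 borrows/year (alpha=2, beta=1); an item
# borrowed 3 times last year is predicted to circulate 2.5 times next year,
# shrunk toward the collection-wide mean rather than naively extrapolated.
print(predict_next_period(3, 1, 2.0, 1.0))  # 2.5
```

The shrinkage is what makes such forecasts "adequate for general management requirements": items with little history are pulled toward the collection average.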

  15. Starobinsky-like inflation and neutrino masses in a no-scale SO(10) model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, John; Garcia, Marcos A.G.

    2016-11-08

    Using a no-scale supergravity framework, we construct an SO(10) model that makes predictions for cosmic microwave background observables similar to those of the Starobinsky model of inflation, and incorporates a double-seesaw model for neutrino masses consistent with oscillation experiments and late-time cosmology. We pay particular attention to the behaviour of the scalar fields during inflation and the subsequent reheating.

  16. Electronic health record analysis via deep poisson factor models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henao, Ricardo; Lu, James T.; Lucas, Joseph E.

    Electronic Health Record (EHR) phenotyping utilizes patient data captured through normal medical practice, to identify features that may represent computational medical phenotypes. These features may be used to identify at-risk patients and improve prediction of patient morbidity and mortality. We present a novel deep multi-modality architecture for EHR analysis (applicable to joint analysis of multiple forms of EHR data), based on Poisson Factor Analysis (PFA) modules. Each modality, composed of observed counts, is represented as a Poisson distribution, parameterized in terms of hidden binary units. Information from different modalities is shared via a deep hierarchy of common hidden units. Activation of these binary units occurs with probability characterized as Bernoulli-Poisson link functions, instead of more traditional logistic link functions. In addition, we demonstrate that PFA modules can be adapted to discriminative modalities. To compute model parameters, we derive efficient Markov Chain Monte Carlo (MCMC) inference that scales efficiently, with significant computational gains when compared to related models based on logistic link functions. To explore the utility of these models, we apply them to a subset of patients from the Duke-Durham patient cohort. We identified a cohort of over 12,000 patients with Type 2 Diabetes Mellitus (T2DM) based on diagnosis codes and laboratory tests out of our patient population of over 240,000. Examining the common hidden units uniting the PFA modules, we identify patient features that represent medical concepts. Experiments indicate that our learned features are better able to predict mortality and morbidity than clinical features identified previously in a large-scale clinical trial.
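    The Bernoulli-Poisson link mentioned in this record thresholds a latent Poisson count at one, so a binary unit activates with probability 1 − exp(−λ) rather than a logistic probability. A quick stdlib Monte Carlo check of that identity (the λ value is arbitrary):

```python
import math
import random

random.seed(0)

def rpois(lam):
    """Poisson draw by inversion (stdlib only)."""
    u, p, k = random.random(), math.exp(-lam), 0
    cum = p
    while u > cum:
        k += 1
        p *= lam / k
        cum += p
    return k

lam, n = 0.7, 200_000
# Bernoulli-Poisson link: b = 1{Poisson(lam) >= 1}
frac_active = sum(rpois(lam) >= 1 for _ in range(n)) / n
print(frac_active, 1 - math.exp(-lam))  # both ≈ 0.503
```

One appeal of this link in MCMC, as the abstract hints, is that the latent count representation keeps conditional distributions conjugate in ways a logistic link does not.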

  17. Electronic health record analysis via deep poisson factor models

    DOE PAGES

    Henao, Ricardo; Lu, James T.; Lucas, Joseph E.; ...

    2016-01-01

    Electronic Health Record (EHR) phenotyping utilizes patient data captured through normal medical practice, to identify features that may represent computational medical phenotypes. These features may be used to identify at-risk patients and improve prediction of patient morbidity and mortality. We present a novel deep multi-modality architecture for EHR analysis (applicable to joint analysis of multiple forms of EHR data), based on Poisson Factor Analysis (PFA) modules. Each modality, composed of observed counts, is represented as a Poisson distribution, parameterized in terms of hidden binary units. Information from different modalities is shared via a deep hierarchy of common hidden units. Activation of these binary units occurs with probability characterized as Bernoulli-Poisson link functions, instead of more traditional logistic link functions. In addition, we demonstrate that PFA modules can be adapted to discriminative modalities. To compute model parameters, we derive efficient Markov Chain Monte Carlo (MCMC) inference that scales efficiently, with significant computational gains when compared to related models based on logistic link functions. To explore the utility of these models, we apply them to a subset of patients from the Duke-Durham patient cohort. We identified a cohort of over 12,000 patients with Type 2 Diabetes Mellitus (T2DM) based on diagnosis codes and laboratory tests out of our patient population of over 240,000. Examining the common hidden units uniting the PFA modules, we identify patient features that represent medical concepts. Experiments indicate that our learned features are better able to predict mortality and morbidity than clinical features identified previously in a large-scale clinical trial.

  18. Modeling Inflation Using a Non-Equilibrium Equation of Exchange

    NASA Technical Reports Server (NTRS)

    Chamberlain, Robert G.

    2013-01-01

    Inflation is a change in the prices of goods that takes place without changes in the actual values of those goods. The Equation of Exchange, formulated clearly in a seminal paper by Irving Fisher in 1911, establishes an equilibrium relationship between the price index P (also known as "inflation"), the economy's aggregate output Q (also known as "the real gross domestic product"), the amount of money available for spending M (also known as "the money supply"), and the rate at which money is reused V (also known as "the velocity of circulation of money"). This paper offers first a qualitative discussion of what can cause these factors to change and how those causes might be controlled, then develops a quantitative model of inflation based on a non-equilibrium version of the Equation of Exchange. Causal relationships are different from equations in that the effects of changes in the causal variables take time to play out, often over significant amounts of time. In the model described here, wages track prices, but only after a distributed lag. Prices change whenever the money supply, aggregate output, or the velocity of circulation of money change, but only after a distributed lag. Similarly, the money supply depends on the supplies of domestic and foreign money, which depend on the monetary base and a variety of foreign transactions, respectively. The spreading of delays mitigates the shocks of sudden changes to important inputs, but the most important aspect of this model is that delays, which often have dramatic consequences in dynamic systems, are explicitly incorporated. Keywords: macroeconomics, inflation, equation of exchange, non-equilibrium, Athena Project
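    The distributed-lag idea can be illustrated with a toy non-equilibrium equation of exchange: prices relax toward the equilibrium level P* = MV/Q instead of jumping to it. Everything below (the lag length, the shock size) is an illustrative assumption, not the Athena model itself.

```python
# Equilibrium equation of exchange: M * V = P * Q  =>  P* = M * V / Q.
# Toy non-equilibrium version: P adjusts toward P* with a first-order lag.
M, V, Q = 100.0, 4.0, 400.0   # money supply, velocity, real output (arbitrary)
P = M * V / Q                 # start in equilibrium: P = 1.0
tau = 10.0                    # adjustment time in months, illustrative

M *= 1.2                      # one-time 20% money-supply shock
path = []
for _ in range(200):
    target = M * V / Q
    P += (target - P) / tau   # distributed-lag price adjustment
    path.append(P)

print(round(path[0], 3), round(path[-1], 3))  # partial jump at first, settles near 1.2
```

The shock's effect is spread over many periods, which is exactly the mitigation of sudden input changes the abstract describes.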

  19. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data.

    PubMed

    Sepúlveda, Nuno; Campino, Susana G; Assefa, Samuel A; Sutherland, Colin J; Pain, Arnab; Clark, Taane G

    2013-02-26

    The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data.
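    The Poisson-Gamma model used here has a negative binomial marginal for window coverage, so CNV candidates can be flagged by tail probabilities under a fit to reference (non-CNV) coverage. A stdlib sketch with made-up window counts (the actual pipeline, fitted to 3D7 resequencing data, is far more involved):

```python
import math

def nb_logpmf(k, r, p):
    """Negative binomial log-pmf (Poisson-Gamma marginal); mean r(1-p)/p."""
    return (math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
            + r * math.log(p) + k * math.log(1.0 - p))

# Hypothetical reference coverage (assumed CNV-free) used to fit the null model
reference = [100, 80, 125, 90, 115, 70, 105, 130, 85, 100]
m = sum(reference) / len(reference)
v = sum((c - m) ** 2 for c in reference) / (len(reference) - 1)
v = max(v, m * 1.001)          # guard: the NB fit needs overdispersion (v > m)
p = m / v                      # method-of-moments estimates
r = m * p / (1.0 - p)

def upper_tail(k, kmax=3000):
    return sum(math.exp(nb_logpmf(j, r, p)) for j in range(k, kmax))

def lower_tail(k):
    return sum(math.exp(nb_logpmf(j, r, p)) for j in range(0, k + 1))

# Test windows: normal, amplified (putative duplication), deleted
windows = [102, 300, 20]
amplified = [i for i, c in enumerate(windows) if upper_tail(c) < 1e-4]
deleted = [i for i, c in enumerate(windows) if lower_tail(c) < 1e-4]
print(amplified, deleted)  # [1] [2]
```

Fitting the overdispersion from reference data, rather than assuming a pure Poisson, is what keeps the asymmetric, heavy-tailed coverage distributions the authors observed from flooding the caller with false positives.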

  20. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data

    PubMed Central

    2013-01-01

    Background The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Results Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. Conclusions In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. PMID:23442253

  1. The transverse Poisson's ratio of composites.

    NASA Technical Reports Server (NTRS)

    Foye, R. L.

    1972-01-01

    An expression is developed that makes possible the prediction of Poisson's ratio for unidirectional composites with reference to any pair of orthogonal axes that are normal to the direction of the reinforcing fibers. This prediction appears to be a reasonable one in that it follows the trends of the finite element analysis and the bounding estimates, and has the correct limiting value for zero fiber content. It can only be expected to apply to composites containing stiff, circular, isotropic fibers bonded to a soft matrix material.

  2. Soft inflation. [in cosmology

    NASA Technical Reports Server (NTRS)

    Berkin, Andrew L.; Maeda, Kei-Ichi; Yokoyama, Jun'ichi

    1990-01-01

    The cosmology resulting from two coupled scalar fields is studied: one field drives either new inflation or chaotic-type inflation, and the other has an exponentially decaying potential. Such a potential may appear in the conformally transformed frame of generalized Einstein theories like the Jordan-Brans-Dicke theory. The constraints necessary for successful inflation are examined. Conventional GUT models such as SU(5) were found to be compatible with new inflation, while restrictions on the self-coupling constant are significantly loosened for chaotic inflation.

  3. Stochastic modeling for neural spiking events based on fractional superstatistical Poisson process

    NASA Astrophysics Data System (ADS)

    Konno, Hidetoshi; Tamura, Yoshiyasu

    2018-01-01

    In neural spike counting experiments, it is known that there are two main features: (i) the counting number has a fractional power-law growth with time and (ii) the waiting time (i.e., the inter-spike-interval) distribution has a heavy tail. The method of superstatistical Poisson processes (SSPPs) is examined to determine whether these main features are properly modeled. Although various mixed/compound Poisson processes can be generated by selecting a suitable distribution for the birth-rate of spiking neurons, only the second feature (ii) can be modeled by the method of SSPPs. The first feature (i), associated with the effect of long memory, cannot be modeled properly. It is then shown that the two main features can be modeled successfully by a class of fractional SSPP (FSSPP).
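    Feature (ii) is easy to reproduce: mixing an exponential waiting time over a gamma-distributed rate yields a Lomax (power-law) waiting-time distribution. A stdlib Monte Carlo sketch (the shape and scale values are arbitrary, not fitted to spike data):

```python
import random

random.seed(1)

n = 200_000
shape = 1.5                                  # gamma shape of the rate mixing law
w_mixed, w_exp = [], []
for _ in range(n):
    lam = random.gammavariate(shape, 1.0)    # superstatistical birth-rate draw
    w_mixed.append(random.expovariate(lam))  # waiting time, mixed over rates
    w_exp.append(random.expovariate(0.5))    # plain Poisson process, same mean (2)

def tail(ws, w0):
    return sum(w > w0 for w in ws) / len(ws)

# Marginal of the mixture is Lomax: P(W > w) = (1 + w) ** -shape, a heavy tail,
# versus the exponential tail exp(-w / 2) of the homogeneous Poisson process.
print(tail(w_mixed, 10.0), tail(w_exp, 10.0))
```

The mixed process inherits the heavy inter-event tail but still has linear mean count growth, which is why the fractional generalization is needed for feature (i).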

  4. Kähler-driven tribrid inflation

    NASA Astrophysics Data System (ADS)

    Antusch, Stefan; Nolde, David

    2012-11-01

    We discuss a new class of tribrid inflation models in supergravity, where the shape of the inflaton potential is dominated by effects from the Kähler potential. Tribrid inflation is a variant of hybrid inflation which is particularly suited for connecting inflation with particle physics, since the inflaton can be a D-flat combination of charged fields from the matter sector. In models of tribrid inflation studied so far, the inflaton potential was dominated by either loop corrections or by mixing effects with the waterfall field (as in "pseudosmooth" tribrid inflation). Here we investigate the third possibility, namely that tribrid inflation is dominantly driven by effects from higher-dimensional operators of the Kähler potential. We specify for which superpotential parameters the new regime is realized and show how it can be experimentally distinguished from the other two (loop-driven and "pseudosmooth") regimes.

  5. The Kramers-Kronig relations for usual and anomalous Poisson-Nernst-Planck models.

    PubMed

    Evangelista, Luiz Roberto; Lenzi, Ervin Kaminski; Barbero, Giovanni

    2013-11-20

    The consistency of the frequency response predicted by a class of electrochemical impedance expressions is analytically checked by invoking the Kramers-Kronig (KK) relations. These expressions are obtained in the context of Poisson-Nernst-Planck usual or anomalous diffusional models that satisfy Poisson's equation in a finite length situation. The theoretical results, besides being successful in interpreting experimental data, are also shown to obey the KK relations when these relations are modified accordingly.

  6. Modeling of Aerobrake Ballute Stagnation Point Temperature and Heat Transfer to Inflation Gas

    NASA Technical Reports Server (NTRS)

    Bahrami, Parviz A.

    2012-01-01

    A trailing Ballute drag device concept for spacecraft aerocapture is considered. A thermal model for calculation of the Ballute membrane temperature and the inflation gas temperature is developed. An algorithm capturing the most salient features of the concept is implemented. In conjunction with the thermal model, trajectory calculations for two candidate missions, the Titan Explorer and Neptune Orbiter missions, are used to estimate the stagnation point temperature and the inflation gas temperature. Radiation from both sides of the membrane at the stagnation point and conduction to the inflating gas are included. The results showed that radiation from the membrane and, to a much lesser extent, conduction to the inflating gas are likely to be the controlling heat transfer mechanisms, and that the increase in gas temperature due to aerodynamic heating is of secondary importance.

  7. Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data

    PubMed Central

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.

    2013-01-01

    Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects are used to account for pairings of subjects and repeated measures within those subjects, since comparing diseased to non-diseased subjects while minimizing bias is of importance. Essentially, non-parametric piecewise constant hazards are estimated and smoothed, allowing for time-varying covariates and segment-of-the-night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence of Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. This re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
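    The likelihood equivalence invoked here is easy to verify numerically for a constant hazard: the exponential-survival MLE (events / person-time) coincides with the MLE of a Poisson model in which log(time) enters as an offset. A toy check with made-up counts (not Sleep Heart Health Study data):

```python
import math

def poisson_offset_mle(events, person_time):
    """Numerically maximise the Poisson log-likelihood
    l(b) = y * (b + log t) - exp(b) * t over the log-rate b,
    i.e. a log-linear model with offset log(t)."""
    def loglik(b):
        return events * (b + math.log(person_time)) - math.exp(b) * person_time
    lo, hi = -10.0, 10.0
    for _ in range(100):            # ternary search; loglik is concave in b
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if loglik(m1) < loglik(m2):
            lo = m1
        else:
            hi = m2
    return math.exp((lo + hi) / 2)

events, person_time = 42, 150.0     # hypothetical transitions and time at risk
rate_poisson = poisson_offset_mle(events, person_time)
rate_survival = events / person_time  # exponential-survival MLE
print(rate_poisson, rate_survival)    # both ≈ 0.28
```

The same algebra extends term-by-term to piecewise constant hazards, which is what lets the authors fit survival-type transition models with Poisson machinery.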

  8. Studying Resist Stochastics with the Multivariate Poisson Propagation Model

    DOE PAGES

    Naulleau, Patrick; Anderson, Christopher; Chao, Weilun; ...

    2014-01-01

    Progress in the ultimate performance of extreme ultraviolet resist has arguably decelerated in recent years suggesting an approach to stochastic limits both in photon counts and material parameters. Here we report on the performance of a variety of leading extreme ultraviolet resist both with and without chemical amplification. The measured performance is compared to stochastic modeling results using the Multivariate Poisson Propagation Model. The results show that the best materials are indeed nearing modeled performance limits.
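    The photon-count side of those stochastic limits is plain Poisson counting statistics: relative dose noise in a pixel scales as 1/sqrt(N). A stdlib illustration (the pixel photon counts are arbitrary and unrelated to the measured resists):

```python
import math
import random
import statistics

random.seed(3)

def rpois(lam):
    """Poisson draw by inversion (stdlib only; lam must stay well below ~700)."""
    u, p, k = random.random(), math.exp(-lam), 0
    cum = p
    while u > cum:
        k += 1
        p *= lam / k
        cum += p
    return k

results = {}
for mean_photons in (10, 100, 400):
    counts = [rpois(mean_photons) for _ in range(5000)]
    rel_noise = statistics.stdev(counts) / statistics.mean(counts)
    results[mean_photons] = rel_noise
    print(mean_photons, round(rel_noise, 3), round(1 / math.sqrt(mean_photons), 3))
```

Quadrupling the absorbed photon count only halves the shot noise, which is one reason progress toward the stochastic limit decelerates.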

  9. No-scale inflation

    NASA Astrophysics Data System (ADS)

    Ellis, John; Garcia, Marcos A. G.; Nanopoulos, Dimitri V.; Olive, Keith A.

    2016-05-01

    Supersymmetry is the most natural framework for physics above the TeV scale, and the corresponding framework for early-Universe cosmology, including inflation, is supergravity. No-scale supergravity emerges from generic string compactifications and yields a non-negative potential, and is therefore a plausible framework for constructing models of inflation. No-scale inflation naturally yields predictions similar to those of the Starobinsky model based on R + R² gravity, with a tilted spectrum of scalar perturbations, n_s ∼ 0.96, and small values of the tensor-to-scalar perturbation ratio, r < 0.1, as favoured by Planck and other data on the cosmic microwave background (CMB). Detailed measurements of the CMB may provide insights into the embedding of inflation within string theory as well as its links to collider physics.
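    The quoted numbers follow from the standard leading-order large-N expressions for Starobinsky-like models, n_s ≈ 1 − 2/N and r ≈ 12/N², where N is the number of e-folds (a textbook result, used here as an assumption):

```python
# Starobinsky / no-scale attractor predictions at leading order in 1/N
for N in (50, 60):
    ns = 1 - 2 / N          # scalar spectral tilt
    r = 12 / N ** 2         # tensor-to-scalar ratio
    print(N, round(ns, 4), round(r, 4))
# N = 50..60 gives ns ≈ 0.96 and r ≈ 0.003-0.005, well below the r < 0.1 bound
```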

  10. A Conway-Maxwell-Poisson (CMP) model to address data dispersion on positron emission tomography.

    PubMed

    Santarelli, Maria Filomena; Della Latta, Daniele; Scipioni, Michele; Positano, Vincenzo; Landini, Luigi

    2016-10-01

    Positron emission tomography (PET) in medicine exploits the properties of positron-emitting unstable nuclei. The pairs of γ-rays emitted after annihilation are revealed by coincidence detectors and stored as projections in a sinogram. It is well known that radioactive decay follows a Poisson distribution; however, deviation from Poisson statistics occurs on PET projection data prior to reconstruction due to physical effects, measurement errors, correction of deadtime, scatter, and random coincidences. A model that describes the statistical behavior of measured and corrected PET data can aid in understanding the statistical nature of the data: it is a prerequisite to develop efficient reconstruction and processing methods and to reduce noise. The deviation from Poisson statistics in PET data could be described by the Conway-Maxwell-Poisson (CMP) distribution model, which is characterized by the centring parameter λ and the dispersion parameter ν, the latter quantifying the deviation from a Poisson distribution model. In particular, the parameter ν allows quantifying over-dispersion (ν<1) or under-dispersion (ν>1) of data. A simple and efficient method for λ and ν parameters estimation is introduced and assessed using Monte Carlo simulation for a wide range of activity values. The application of the method to simulated and experimental PET phantom data demonstrated that the CMP distribution parameters could detect deviation from the Poisson distribution both in raw and corrected PET data. It may be usefully implemented in image reconstruction algorithms and quantitative PET data analysis, especially in low counting emission data, as in dynamic PET data, where the method demonstrated the best accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
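    The dispersion behaviour described here can be checked directly from the CMP pmf, P(X = k) ∝ λ^k / (k!)^ν: ν = 1 recovers the Poisson, ν < 1 puts the variance above the mean, and ν > 1 below it. A stdlib sketch (the λ and ν values are arbitrary, not PET estimates):

```python
import math

def cmp_moments(lam, nu, kmax=400):
    """Mean and variance of a Conway-Maxwell-Poisson via truncated sums."""
    logw = [k * math.log(lam) - nu * math.lgamma(k + 1) for k in range(kmax)]
    mx = max(logw)
    w = [math.exp(v - mx) for v in logw]   # stabilised unnormalised weights
    z = sum(w)                             # truncated normalising constant
    mean = sum(k * wk for k, wk in enumerate(w)) / z
    ex2 = sum(k * k * wk for k, wk in enumerate(w)) / z
    return mean, ex2 - mean ** 2

m1, v1 = cmp_moments(4.0, 1.0)   # nu = 1: Poisson, mean = variance = 4
mo, vo = cmp_moments(4.0, 0.5)   # nu < 1: over-dispersed, variance > mean
mu_, vu = cmp_moments(4.0, 2.0)  # nu > 1: under-dispersed, variance < mean
print((round(m1, 3), round(v1, 3)), vo > mo, vu < mu_)
```

Estimating ν from corrected sinogram data then gives a single number summarising how far the counts have drifted from Poisson statistics.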

  11. Poisson's spot and Gouy phase

    NASA Astrophysics Data System (ADS)

    da Paz, I. G.; Soldati, Rodolfo; Cabral, L. A.; de Oliveira, J. G. G.; Sampaio, Marcos

    2016-12-01

    Recently there have been experimental results on Poisson spot matter-wave interferometry followed by theoretical models describing the relative importance of the wave and particle behaviors for the phenomenon. We propose an analytical theoretical model for Poisson's spot with matter waves based on the Babinet principle, in which we use the results for free propagation and single-slit diffraction. We take into account effects of loss of coherence and finite detection area using the propagator for a quantum particle interacting with an environment. We observe that the matter-wave Gouy phase plays a role in the existence of the central peak and thus corroborates the predominantly wavelike character of the Poisson's spot. Our model shows remarkable agreement with the experimental data for deuterium (D2) molecules.
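    The Babinet construction in this record has a one-line paraxial check: the on-axis field behind an opaque disc is the free field minus the on-axis field of the complementary circular aperture, U0[1 − exp(ika²/2z)], leaving a field of unit modulus, i.e. a central spot as bright as the unobstructed wave. (The on-axis aperture formula is a standard paraxial optics result, assumed here; the numbers are arbitrary.)

```python
import cmath
import math

wavelength = 500e-9   # 500 nm illumination, arbitrary
a = 1e-3              # disc radius: 1 mm
z = 1.0               # screen distance: 1 m
k = 2 * math.pi / wavelength

U_free = 1 + 0j                                                 # unobstructed plane wave
U_aperture = U_free * (1 - cmath.exp(1j * k * a**2 / (2 * z)))  # on-axis, circular aperture
U_disc = U_free - U_aperture                                    # Babinet's principle

print(abs(U_disc))  # ≈ 1.0: Poisson's spot matches the unobstructed intensity
```

Loss of coherence and finite detector area, which the paper models with an environment-coupled propagator, are what reduce this ideal unit-contrast spot in real matter-wave experiments.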

  12. Gauge fields and inflation

    NASA Astrophysics Data System (ADS)

    Maleknejad, A.; Sheikh-Jabbari, M. M.; Soda, J.

    2013-07-01

    The isotropy and homogeneity of the cosmic microwave background (CMB) favors “scalar driven” early Universe inflationary models. However, gauge fields and other non-scalar fields are far more common at all energy scales, in particular at high energies seemingly relevant to inflation models. Hence, in this review we consider the role and consequences, theoretical and observational, that gauge fields can have during the inflationary era. Gauge fields may be turned on in the background during inflation, or may become relevant at the level of cosmic perturbations. There have been two main classes of models with gauge fields in the background, models which show violation of the cosmic no-hair theorem and those which lead to isotropic FLRW cosmology, respecting the cosmic no-hair theorem. Models in which gauge fields are only turned on at the cosmic perturbation level, may source primordial magnetic fields. We also review specific observational features of these models on the CMB and/or the primordial cosmic magnetic fields. Our discussions will be mainly focused on the inflation period, with only a brief discussion on the post inflationary (p)reheating era. Large field models: The initial value of the inflaton field is large, generically super-Planckian, and it rolls slowly down toward the potential minimum at smaller φ values. For instance, chaotic inflation is one of the representative models of this class. The typical potential of large-field models has a monomial form, V(φ) = V0 φ^n. A simple analysis using the dynamical equations reveals that for a number of e-folds Ne larger than 60, we require super-Planckian initial field values, φ0 > 3 M. For these models, typically ε ∼ η ∼ 1/Ne. Small field models: The inflaton field is initially small and slowly evolves toward the potential minimum at larger φ values. The small-field models are characterized by the potential V(φ) = V0 (1 − (φ/μ)^p), which corresponds to a Taylor expansion about the origin, but more realistic …

  13. First-order inflation. [in cosmology

    NASA Technical Reports Server (NTRS)

    Turner, Michael S.

    1992-01-01

    I discuss the most recent model of inflation. In first-order inflation the inflationary epoch is associated with a first-order phase transition, with the most likely candidate being GUT symmetry breaking. The transition from the false-vacuum inflationary phase to the true-vacuum radiation-dominated phase proceeds through the nucleation and percolation of true-vacuum bubbles. The first successful and simplest model of first-order inflation, extended inflation, is discussed in some detail: evolution of the cosmic-scale factor, reheating, density perturbations, and the production of gravitational waves both from quantum fluctuations and bubble collisions. Particular attention is paid to the most critical issue in any model of first-order inflation: the requirements on the nucleation rate to ensure a graceful transition from the inflationary phase to the radiation-dominated phase.

  14. Topological inflation with graceful exit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marunović, Anja; Prokopec, Tomislav, E-mail: a.marunovic@uu.nl, E-mail: t.prokopec@uu.nl

    We investigate a class of models of topological inflation in which a super-Hubble-sized global monopole seeds inflation. These models are attractive since inflation starts from rather generic initial conditions, but their not so attractive feature is that, unless symmetry is again restored, inflation never ends. In this work we show that, in the presence of another nonminimally coupled scalar field, that is both quadratically and quartically coupled to the Ricci scalar, inflation naturally ends, representing an elegant solution to the graceful exit problem of topological inflation. While the monopole core grows during inflation, the growth stops after inflation, such that the monopole eventually enters the Hubble radius, and shrinks to its Minkowski space size, rendering it immaterial for the subsequent Universe's dynamics. Furthermore, we find that our model can produce cosmological perturbations that source CMB temperature fluctuations and seed large scale structure statistically consistent (within one standard deviation) with all available data. In particular, for small and (in our convention) negative nonminimal couplings, the scalar spectral index can be as large as n_s ≅ 0.955, which is about one standard deviation lower than the central value quoted by the most recent Planck Collaboration.

  15. Accidental inflation from Kähler uplifting

    NASA Astrophysics Data System (ADS)

    Ben-Dayan, Ido; Jing, Shenglin; Westphal, Alexander; Wieck, Clemens

    2014-03-01

    We analyze the possibility of realizing inflation with a subsequent dS vacuum in the Kähler uplifting scenario. The inclusion of several quantum corrections to the 4d effective action evades previous no-go theorems and allows for the construction of simple and successful models of string inflation. The predictions of several benchmark models are in accord with current observations, i.e., a red spectral index, negligible non-Gaussianity, and spectral distortions similar to the simplest models of inflation. A particularly interesting subclass of models are ``left-rolling'' ones, where the overall volume of the compactified dimensions shrinks during inflation. We call this phenomenon ``inflation by deflation'' (IBD), where deflation refers to the internal manifold. This subclass has the appealing features of being insensitive to initial conditions, avoiding the overshooting problem, and allowing for observable running α ~ 0.012 and enhanced tensor-to-scalar ratio r ~ 10^-5. The latter results differ significantly from many string inflation models.

  16. Inflation without inflaton: A model for dark energy

    NASA Astrophysics Data System (ADS)

    Falomir, H.; Gamboa, J.; Méndez, F.; Gondolo, P.

    2017-10-01

    The interaction between two initially causally disconnected regions of the Universe is studied using analogies of noncommutative quantum mechanics and the deformation of Poisson manifolds. These causally disconnected regions are governed by two independent Friedmann-Lemaître-Robertson-Walker (FLRW) metrics with scale factors a and b and cosmological constants Λa and Λb, respectively. The causality is turned on by positing a nontrivial Poisson bracket [Pα, Pβ] = εαβ κ/G, where G is Newton's gravitational constant and κ is a dimensionless parameter. The posited deformed Poisson bracket has an interpretation in terms of 3-cocycles, anomalies, and Poissonian manifolds. The modified FLRW equations acquire an energy-momentum tensor from which we explicitly obtain the equation of state parameter. The modified FLRW equations are solved numerically and the solutions are inflationary or oscillating depending on the values of κ. In this model, the accelerating and decelerating regime may be periodic. The analysis of the equation of state clearly shows the presence of dark energy. For completeness, the perturbative solution for κ ≪ 1 is also studied.

  17. Planck limits on non-canonical generalizations of large-field inflation models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Nina K.; Kinney, William H., E-mail: ninastei@buffalo.edu, E-mail: whkinney@buffalo.edu

    2017-04-01

    In this paper, we consider two case examples of Dirac-Born-Infeld (DBI) generalizations of canonical large-field inflation models, characterized by a reduced sound speed, cS < 1. The reduced speed of sound lowers the tensor-scalar ratio, improving the fit of the models to the data, but increases the equilateral-mode non-Gaussianity, fNL^equil, which the latest results from the Planck satellite constrain by a new upper bound. We examine constraints on these models in light of the most recent Planck and BICEP/Keck results, and find that they have a greatly decreased window of viability. The upper bound on fNL^equil corresponds to a lower bound on the sound speed and a corresponding lower bound on the tensor-scalar ratio of r ∼ 0.01, so that near-future Cosmic Microwave Background observations may be capable of ruling out entire classes of DBI inflation models. The result is, however, not universal: infrared-type DBI inflation models, where the speed of sound increases with time, are not subject to the bound.

  18. 46 CFR 131.580 - Servicing of inflatable liferafts, inflatable lifejackets, inflatable buoyant apparatus, and...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... lifejackets, inflatable buoyant apparatus, and inflated rescue boats. 131.580 Section 131.580 Shipping COAST... Inspections § 131.580 Servicing of inflatable liferafts, inflatable lifejackets, inflatable buoyant apparatus, and inflated rescue boats. (a) An inflatable liferaft or inflatable buoyant apparatus must be serviced...

  19. 46 CFR 131.580 - Servicing of inflatable liferafts, inflatable lifejackets, inflatable buoyant apparatus, and...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... lifejackets, inflatable buoyant apparatus, and inflated rescue boats. 131.580 Section 131.580 Shipping COAST... Inspections § 131.580 Servicing of inflatable liferafts, inflatable lifejackets, inflatable buoyant apparatus, and inflated rescue boats. (a) An inflatable liferaft or inflatable buoyant apparatus must be serviced...

  20. Structure analysis of tax revenue and inflation rate in Banda Aceh using vector error correction model with multiple alpha

    NASA Astrophysics Data System (ADS)

    Sofyan, Hizir; Maulia, Eva; Miftahuddin

    2017-11-01

    A country has several important parameters for achieving economic prosperity, such as tax revenue and the inflation rate. One of the largest revenues in Indonesia's State Budget comes from the tax sector, while the inflation rate can serve as an indicator of the economic problems a country faces. Given the importance of tax revenue and inflation-rate control in achieving economic prosperity, it is necessary to analyze the structure of the relationship between tax revenue and the inflation rate. This study aims to produce the best VECM (Vector Error Correction Model) with optimal lag under various significance levels (alpha), and to perform structural analysis using the Impulse Response Function (IRF) of the VECM models, to examine the relationship between tax revenue and inflation in Banda Aceh. The results showed that the best model for the tax revenue and inflation rate data of Banda Aceh City at alpha 0.01 is a VECM with optimal lag 2, while at alpha 0.05 and 0.1 it is a VECM with optimal lag 3. The VECM with alpha 0.01 yielded four significant models: the income tax model and the inflation rates of Banda Aceh overall, of health, and of education. The VECM with alpha 0.05 and 0.1 yielded one significant model, the income tax model. Based on these VECM models, two structural IRF analyses were formed to examine the relationship between tax revenue and inflation in Banda Aceh: the IRF with VECM(2) and the IRF with VECM(3).
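The error-correction mechanism that a VECM estimates can be illustrated with a minimal two-series sketch. This uses synthetic data, not the Banda Aceh series, and the 0.5 persistence parameter is purely illustrative: two I(1) series share a stochastic trend, and the change in one series reacts to the lagged equilibrium error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# A common stochastic trend (random walk) shared by both series, plus a
# mean-reverting equilibrium error z_t = 0.5 z_{t-1} + eta_t.
e = rng.normal(size=n)
eta = rng.normal(size=n)
x = np.cumsum(e)                    # first series: I(1)
z = np.zeros(n)
for t in range(1, n):
    z[t] = 0.5 * z[t - 1] + eta[t]
y = x + z                           # second series: cointegrated with x

# Error-correction regression: Delta y_t on the lagged equilibrium error.
# The slope estimates the adjustment speed, here 0.5 - 1 = -0.5.
dy = np.diff(y)
z_lag = z[:-1]
alpha = np.sum(z_lag * dy) / np.sum(z_lag ** 2)   # OLS slope, no intercept
print(round(alpha, 2))
```

A negative, significant adjustment coefficient is exactly what makes a VECM preferable to an unrestricted VAR in differences for cointegrated data.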

  1. A new model for CD8+ T cell memory inflation based upon a recombinant adenoviral vector

    PubMed Central

    Bolinger, Beatrice; Sims, Stuart; O’Hara, Geraldine; de Lara, Catherine; Tchilian, Elma; Firner, Sonja; Engeler, Daniel; Ludewig, Burkhard; Klenerman, Paul

    2013-01-01

    CD8+ T cell memory inflation, first described in murine cytomegalovirus (MCMV) infection, is characterized by the accumulation of high-frequency, functional antigen-specific CD8+ T cell pools with an effector-memory phenotype and enrichment in peripheral organs. Although persistence of antigen is considered essential, the rules underpinning memory inflation are still unclear. The MCMV model is, however, complicated by the virus's low-level persistence and stochastic reactivation. We developed a new model of memory inflation based upon a βgal-recombinant adenovirus vector (Ad-LacZ). After i.v. administration in C57BL/6 mice we observe marked memory inflation against the βgal96 epitope, while a second epitope, βgal497, undergoes classical memory formation. The inflationary T cell responses show kinetics, distribution, phenotype and functions similar to those seen in MCMV and are reproduced using alternative routes of administration. Memory inflation in this model is dependent on MHC Class II. As in MCMV, only the inflating epitope showed immunoproteasome independence. These data define a new model for memory inflation, which is fully replication-independent, internally controlled and reproduces the key immunologic features of the CD8+ T cell response. This model provides insight into the mechanisms responsible for memory inflation and, since it is based on a vaccine vector, is also relevant to novel T cell-inducing vaccines in humans. PMID:23509359

  2. Bayesian hierarchical modelling of continuous non-negative longitudinal data with a spike at zero: An application to a study of birds visiting gardens in winter.

    PubMed

    Swallow, Ben; Buckland, Stephen T; King, Ruth; Toms, Mike P

    2016-03-01

    The development of methods for dealing with continuous data with a spike at zero has lagged behind those for overdispersed or zero-inflated count data. We consider longitudinal ecological data corresponding to an annual average of 26 weekly maximum counts of birds, and are hence effectively continuous, bounded below by zero but also with a discrete mass at zero. We develop a Bayesian hierarchical Tweedie regression model that can directly accommodate the excess number of zeros common to this type of data, whilst accounting for both spatial and temporal correlation. Implementation of the model is conducted in a Markov chain Monte Carlo (MCMC) framework, using reversible jump MCMC to explore uncertainty across both parameter and model spaces. This regression modelling framework is very flexible and removes the need to make strong assumptions about mean-variance relationships a priori. It can also directly account for the spike at zero, whilst being easily applicable to other types of data and other model formulations. Whilst a correlative study such as this cannot prove causation, our results suggest that an increase in an avian predator may have led to an overall decrease in the number of one of its prey species visiting garden feeding stations in the United Kingdom. This may reflect a change in behaviour of house sparrows to avoid feeding stations frequented by sparrowhawks, or a reduction in house sparrow population size as a result of sparrowhawk increase. © 2015 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
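The "spike at zero" of a Tweedie (compound Poisson-gamma) response described above can be reproduced in a few lines: a sum of N ~ Poisson(λ) gamma terms is continuous on (0, ∞) but carries an exact point mass exp(−λ) at zero. The rate and gamma parameters below are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, shape, scale = 1.2, 2.0, 0.5   # illustrative Poisson rate and gamma parameters
n = 100_000

# Compound Poisson-gamma (a Tweedie distribution with 1 < p < 2):
# each observation is the sum of N ~ Poisson(lam) gamma-distributed terms.
counts = rng.poisson(lam, size=n)
totals = np.array([rng.gamma(shape, scale, k).sum() if k > 0 else 0.0
                   for k in counts])

zero_frac = np.mean(totals == 0.0)
print(zero_frac, np.exp(-lam))      # empirical spike at zero vs exp(-lam)
```

This is why a Tweedie regression can "directly account for the spike at zero" without a separate zero-inflation component: the zero mass is built into the distribution.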

  3. Bayesian hierarchical modelling of continuous non‐negative longitudinal data with a spike at zero: An application to a study of birds visiting gardens in winter

    PubMed Central

    Buckland, Stephen T.; King, Ruth; Toms, Mike P.

    2015-01-01

    The development of methods for dealing with continuous data with a spike at zero has lagged behind those for overdispersed or zero‐inflated count data. We consider longitudinal ecological data corresponding to an annual average of 26 weekly maximum counts of birds, and are hence effectively continuous, bounded below by zero but also with a discrete mass at zero. We develop a Bayesian hierarchical Tweedie regression model that can directly accommodate the excess number of zeros common to this type of data, whilst accounting for both spatial and temporal correlation. Implementation of the model is conducted in a Markov chain Monte Carlo (MCMC) framework, using reversible jump MCMC to explore uncertainty across both parameter and model spaces. This regression modelling framework is very flexible and removes the need to make strong assumptions about mean‐variance relationships a priori. It can also directly account for the spike at zero, whilst being easily applicable to other types of data and other model formulations. Whilst a correlative study such as this cannot prove causation, our results suggest that an increase in an avian predator may have led to an overall decrease in the number of one of its prey species visiting garden feeding stations in the United Kingdom. This may reflect a change in behaviour of house sparrows to avoid feeding stations frequented by sparrowhawks, or a reduction in house sparrow population size as a result of sparrowhawk increase. PMID:25737026

  4. Normal forms for Poisson maps and symplectic groupoids around Poisson transversals

    NASA Astrophysics Data System (ADS)

    Frejlich, Pedro; Mărcuț, Ioan

    2018-03-01

    Poisson transversals are submanifolds in a Poisson manifold which intersect all symplectic leaves transversally and symplectically. In this communication, we prove a normal form theorem for Poisson maps around Poisson transversals. A Poisson map pulls a Poisson transversal back to a Poisson transversal, and our first main result states that simultaneous normal forms exist around such transversals, for which the Poisson map becomes transversally linear, and intertwines the normal form data of the transversals. Our second result concerns symplectic integrations. We prove that a neighborhood of a Poisson transversal is integrable exactly when the Poisson transversal itself is integrable, and in that case we prove a normal form theorem for the symplectic groupoid around its restriction to the Poisson transversal, which puts all structure maps in normal form. We conclude by illustrating our results with examples arising from Lie algebras.

  5. Normal forms for Poisson maps and symplectic groupoids around Poisson transversals.

    PubMed

    Frejlich, Pedro; Mărcuț, Ioan

    2018-01-01

    Poisson transversals are submanifolds in a Poisson manifold which intersect all symplectic leaves transversally and symplectically. In this communication, we prove a normal form theorem for Poisson maps around Poisson transversals. A Poisson map pulls a Poisson transversal back to a Poisson transversal, and our first main result states that simultaneous normal forms exist around such transversals, for which the Poisson map becomes transversally linear, and intertwines the normal form data of the transversals. Our second result concerns symplectic integrations. We prove that a neighborhood of a Poisson transversal is integrable exactly when the Poisson transversal itself is integrable, and in that case we prove a normal form theorem for the symplectic groupoid around its restriction to the Poisson transversal, which puts all structure maps in normal form. We conclude by illustrating our results with examples arising from Lie algebras.

  6. Modular invariant inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Tatsuo; Nitta, Daisuke; Urakawa, Yuko

    2016-08-08

    Modular invariance is a striking symmetry in string theory, which may keep stringy corrections under control. In this paper, we investigate a phenomenological consequence of modular invariance, assuming that this symmetry is preserved in the four-dimensional (4D) low-energy effective field theory as well. As a concrete setup, we consider a modulus field T whose contribution to the 4D effective field theory remains invariant under the modular transformation, and study inflation driven by T. The modular invariance restricts the possible form of the scalar potential. As a result, large-field models of inflation are hardly realized. Meanwhile, a small-field model of inflation can still be accommodated in this restricted setup. The scalar potential traced during slow-roll inflation mimics the hilltop potential V_ht, but it also has a non-negligible deviation from V_ht. Detecting the primordial gravitational waves predicted in this model is rather challenging. Yet, we argue that it may still be possible to falsify this model by combining information from the reheating process, which can be determined completely within this setup.

  7. Quantum fluctuations and CMB anisotropies in one-bubble open inflation models

    NASA Astrophysics Data System (ADS)

    Yamamoto, Kazuhiro; Sasaki, Misao; Tanaka, Takahiro

    1996-10-01

    We first develop a method to calculate a complete set of mode functions that describe the quantum fluctuations generated in one-bubble open inflation models. We consider two classes of models. One is a single scalar field model proposed by Bucher, Goldhaber, and Turok and by us as an example of the open inflation scenario, and the other is a two-field model such as the ``supernatural'' inflation proposed by Linde and Mezhlumian. In both cases we assume the difference in the vacuum energy density between inside and outside the bubble is negligible. There are two kinds of mode functions. One kind has the usual continuous spectrum and the other has a discrete spectrum with characteristic wavelengths exceeding the spatial curvature scale. The latter can be further divided into two classes in terms of its origin. One is called the de Sitter supercurvature mode, which arises due to the global spacetime structure of de Sitter space, and the other is due to fluctuations of the bubble wall. We calculate the spectrum of quantum fluctuations in these models and evaluate the resulting large angular scale CMB anisotropies. We find there are ranges of model parameters that are consistent with observed CMB anisotropies.

  8. Testing Cosmic Inflation

    NASA Technical Reports Server (NTRS)

    Chuss, David

    2010-01-01

    The Cosmic Microwave Background (CMB) has provided a wealth of information about the history and physics of the early Universe. Much progress has been made in uncovering the emerging Standard Model of Cosmology by such experiments as COBE and WMAP, and ESA's Planck Surveyor will likely increase our knowledge even more. Despite the success of this model, mysteries remain. Currently understood physics does not offer a compelling explanation for the homogeneity, flatness, and the origin of structure in the Universe. Cosmic Inflation, a brief epoch of exponential expansion, has been posited to explain these observations. If inflation is a reality, it is expected to produce a background spectrum of gravitational waves that will leave a small polarized imprint on the CMB. Discovery of this signal would give the first direct evidence for inflation and provide a window into physics at scales beyond those accessible to terrestrial particle accelerators. I will briefly review aspects of the Standard Model of Cosmology and discuss our current efforts to design and deploy experiments to measure the polarization of the CMB with the precision required to test inflation.

  9. k-essence model of inflation, dark matter, and dark energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, Nilok; Majumdar, A. S.

    We investigate the possibility for k-essence dynamics to reproduce the primary features of inflation in the early universe, generate dark matter subsequently, and finally account for the presently observed acceleration. We first show that for a purely kinetic k-essence model the late-time energy density of the universe when expressed simply as a sum of a cosmological constant and a dark matter term leads to a static universe. We then study another k-essence model in which the Lagrangian contains a potential for the scalar field as well as a noncanonical kinetic term. We show that such a model generates the basic features of inflation in the early universe, and also gives rise to dark matter and dark energy at appropriate subsequent stages. Observational constraints on the parameters of this model are obtained.

  10. Inflection-point inflation in a hyper-charge oriented U(1)_X model

    DOE PAGES

    Okada, Nobuchika; Okada, Satomi; Raut, Digesh

    2017-03-31

    Inflection-point inflation is an interesting possibility to realize a successful slow-roll inflation when inflation is driven by a single scalar field with its value during inflation below the Planck mass (Φ_I ≲ M_Pl). In order for a renormalization group (RG) improved effective λΦ^4 potential to develop an inflection point, the running quartic coupling λ(Φ) must exhibit a minimum with an almost vanishing value in its RG evolution, namely λ(Φ_I) ≃ 0 and β_λ(Φ_I) ≃ 0, where β_λ is the beta function of the quartic coupling. In this paper, we consider the inflection-point inflation in the context of the minimal gauged U(1)_X extended Standard Model (SM), which is a generalization of the minimal U(1)_{B−L} model, and is constructed as a linear combination of the SM U(1)_Y and U(1)_{B−L} gauge symmetries. We identify the U(1)_X Higgs field with the inflaton field. For a successful inflection-point inflation to be consistent with the current cosmological observations, the mass ratios among the U(1)_X gauge boson, the right-handed neutrinos and the U(1)_X Higgs boson are fixed. Focusing on the case that the extra U(1)_X gauge symmetry is mostly aligned along the SM U(1)_Y direction, we investigate a consistency between the inflationary predictions and the latest LHC Run-2 results on the search for a narrow resonance with the di-lepton final state.

  11. Inflection-point inflation in a hyper-charge oriented U(1)_X model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okada, Nobuchika; Okada, Satomi; Raut, Digesh

    Inflection-point inflation is an interesting possibility to realize a successful slow-roll inflation when inflation is driven by a single scalar field with its value during inflation below the Planck mass (Φ_I ≲ M_Pl). In order for a renormalization group (RG) improved effective λΦ^4 potential to develop an inflection point, the running quartic coupling λ(Φ) must exhibit a minimum with an almost vanishing value in its RG evolution, namely λ(Φ_I) ≃ 0 and β_λ(Φ_I) ≃ 0, where β_λ is the beta function of the quartic coupling. In this paper, we consider the inflection-point inflation in the context of the minimal gauged U(1)_X extended Standard Model (SM), which is a generalization of the minimal U(1)_{B−L} model, and is constructed as a linear combination of the SM U(1)_Y and U(1)_{B−L} gauge symmetries. We identify the U(1)_X Higgs field with the inflaton field. For a successful inflection-point inflation to be consistent with the current cosmological observations, the mass ratios among the U(1)_X gauge boson, the right-handed neutrinos and the U(1)_X Higgs boson are fixed. Focusing on the case that the extra U(1)_X gauge symmetry is mostly aligned along the SM U(1)_Y direction, we investigate a consistency between the inflationary predictions and the latest LHC Run-2 results on the search for a narrow resonance with the di-lepton final state.

  12. Accidental inflation from Kähler uplifting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ben-Dayan, Ido; Westphal, Alexander; Wieck, Clemens

    2014-03-01

    We analyze the possibility of realizing inflation with a subsequent dS vacuum in the Kähler uplifting scenario. The inclusion of several quantum corrections to the 4d effective action evades previous no-go theorems and allows for the construction of simple and successful models of string inflation. The predictions of several benchmark models are in accord with current observations, i.e., a red spectral index, negligible non-Gaussianity, and spectral distortions similar to the simplest models of inflation. A particularly interesting subclass of models are "left-rolling" ones, where the overall volume of the compactified dimensions shrinks during inflation. We call this phenomenon "inflation by deflation" (IBD), where deflation refers to the internal manifold. This subclass has the appealing features of being insensitive to initial conditions, avoiding the overshooting problem, and allowing for an observable running α ∼ 0.012 and an enhanced tensor-to-scalar ratio r ∼ 10^−5. The latter results differ significantly from those of many string inflation models.

  13. Primordial black holes from single field models of inflation

    NASA Astrophysics Data System (ADS)

    García-Bellido, Juan; Ruiz Morales, Ester

    2017-12-01

    Primordial black holes (PBH) have been shown to arise from high peaks in the matter power spectra of multi-field models of inflation. Here we show, with a simple toy model, that it is also possible to generate a peak in the curvature power spectrum of single-field inflation. We assume that the effective dynamics of the inflaton field presents a near-inflection point which slows down the field right before the end of inflation and gives rise to a prominent spike in the fluctuation power spectrum at scales much smaller than those probed by Cosmic Microwave Background (CMB) and Large Scale Structure (LSS) observations. This peak will give rise, upon reentry during the radiation era, to PBH via gravitational collapse. The mass and abundance of these PBH are such that they could constitute the totality of the Dark Matter today. We satisfy all CMB and LSS constraints and predict a very broad range of PBH masses. Some of these PBH are light enough that they will evaporate before structure formation, leaving behind a large curvature fluctuation on small scales. This broad mass distribution of PBH as Dark Matter will be tested in the future by AdvLIGO and LISA interferometers.

  14. The emergence of gravity as a retro-causal post-inflation macro-quantum-coherent holographic vacuum Higgs-Goldstone field

    NASA Astrophysics Data System (ADS)

    Sarfatti, Jack; Levit, Creon

    2009-06-01

    We present a model for the origin of gravity, dark energy and dark matter: Dark energy and dark matter are residual pre-inflation false vacuum random zero point energy (w = −1) of large-scale negative, and short-scale positive pressure, respectively, corresponding to the "zero point" (incoherent) component of a superfluid (supersolid) ground state. Gravity, in contrast, arises from the 2nd order topological defects in the post-inflation virtual "condensate" (coherent) component. We predict, as a consequence, that the LHC will never detect exotic real on-mass-shell particles that can explain dark matter Ω_MDM ≈ 0.23. We also point out that the future holographic dark energy de Sitter horizon is a total absorber (in the sense of retro-causal Wheeler-Feynman action-at-a-distance electrodynamics) because it is an infinite redshift surface for static detectors. Therefore, the advanced Hawking-Unruh thermal radiation from the future de Sitter horizon is a candidate for the negative pressure dark vacuum energy.

  15. Background stratified Poisson regression analysis of cohort data.

    PubMed

    Richardson, David B; Langholz, Bryan

    2012-03-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
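The profiling idea described above can be sketched directly: for a fixed association parameter, each stratum intercept has a closed-form maximizer, so substituting it back reduces the likelihood to a one-dimensional function of the parameter of interest. This is a generic illustration on simulated data, not the authors' software; all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
S, n = 8, 300                      # strata and subjects per stratum (illustrative)
beta_true = 0.5
alpha = rng.normal(0.0, 1.0, S)    # background (stratum) log-rates: nuisance
x = rng.normal(size=(S, n))        # exposure of primary interest
T = rng.uniform(0.5, 2.0, (S, n))  # person-time at risk
y = rng.poisson(T * np.exp(alpha[:, None] + beta_true * x))

def profile_nll(beta):
    # For fixed beta, each stratum intercept has a closed-form MLE:
    # exp(alpha_s) = sum_i y_si / sum_i T_si exp(beta x_si).
    # Substituting it back profiles the nuisance parameters out.
    mu0 = T * np.exp(beta * x)                 # rate without the intercept
    ahat = np.log(y.sum(axis=1) / mu0.sum(axis=1))
    mu = mu0 * np.exp(ahat[:, None])
    return -(y * np.log(mu) - mu).sum()

beta_hat = minimize_scalar(profile_nll, bounds=(-2, 2), method='bounded').x
print(round(beta_hat, 2))
```

The profiled estimate coincides with what unconditional Poisson regression with one dummy per stratum would give, but no stratum coefficients are ever estimated explicitly, which is what makes the approach scale to many strata.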

  16. Poisson-Based Inference for Perturbation Models in Adaptive Spelling Training

    ERIC Educational Resources Information Center

    Baschera, Gian-Marco; Gross, Markus

    2010-01-01

    We present an inference algorithm for perturbation models based on Poisson regression. The algorithm is designed to handle unclassified input with multiple errors described by independent mal-rules. This knowledge representation provides an intelligent tutoring system with local and global information about a student, such as error classification…

  17. Extended nonlinear feedback model for describing episodes of high inflation

    NASA Astrophysics Data System (ADS)

    Szybisz, Martín A.; Szybisz, Leszek

    2017-01-01

    An extension of the nonlinear feedback (NLF) formalism to describe regimes of hyper- and high inflation in economy is proposed in the present work. In the NLF model the consumer price index (CPI) exhibits a finite-time singularity of the type 1/(t_c − t)^{(1−β)/β}, with β > 0, predicting a blow-up of the economy at a critical time t_c. However, this model fails to determine t_c in the case of weak hyperinflation regimes like, e.g., that which occurred in Israel. To overcome this difficulty, the NLF model is extended by introducing a parameter γ, which multiplies all terms with past growth rate index (GRI). In this novel approach the solution for the CPI is also analytic, being proportional to the Gaussian hypergeometric function 2F1(1/β, 1/β; 1 + 1/β; z), where z is a function of β, γ, and t_c. For z → 1 this hypergeometric function diverges, leading to a finite-time singularity from which a value of t_c can be determined. This singularity is also present in the GRI. It is shown that the interplay between the parameters β and γ may produce phenomena of multiple equilibria. An analysis of the severe hyperinflation that occurred in Hungary proves that the novel model is robust. When this model is used to examine data from Israel, a reasonable t_c is obtained. High-inflation regimes in Mexico and Iceland, which exhibit weaker inflation than that of Israel, are also successfully described.
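The claimed divergence of the hypergeometric solution as z → 1 can be checked numerically for the representative choice β = 1/2 (an illustrative value, not fitted to any country's data): the parameters become 2F1(2, 2; 3; z), and since c − a − b = −1 ≤ 0 the function blows up at z = 1.

```python
from scipy.special import hyp2f1

# For beta = 1/2 the solution involves 2F1(1/beta, 1/beta; 1 + 1/beta; z)
# = 2F1(2, 2; 3; z). Because c - a - b = 3 - 4 = -1 <= 0, the series
# diverges as z -> 1-, which is what produces the singularity at t_c.
beta = 0.5
a = b = 1.0 / beta
c = 1.0 + 1.0 / beta
for z in (0.9, 0.99, 0.999):
    print(z, hyp2f1(a, b, c, z))   # grows without bound as z -> 1
```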

  18. Nonthermal gravitino production in tribrid inflation

    NASA Astrophysics Data System (ADS)

    Antusch, Stefan; Dutta, Koushik

    2015-10-01

    We investigate nonthermal gravitino production after tribrid inflation in supergravity, which is a variant of supersymmetric hybrid inflation where three fields are involved in the inflationary model and where the inflaton field resides in the matter sector of the theory. In contrast to conventional supersymmetric hybrid inflation, where nonthermal gravitino production imposes severe constraints on the inflationary model, we find that the "nonthermal gravitino problem" is generically absent in models of tribrid inflation, mainly due to two effects: (i) With the inflaton in tribrid inflation (after inflation) being lighter than the waterfall field, the latter has a second decay channel with a much larger rate than for the decay into gravitinos. This reduces the branching ratio for the decay of the waterfall field into gravitinos. (ii) The inflaton generically decays later than the waterfall field, and it does not produce gravitinos when it decays. This leads to a dilution of the gravitino population from the decays of the waterfall field. The combination of both effects generically leads to a strongly reduced gravitino production in tribrid inflation.

  19. Quasi-neutral limit of Euler–Poisson system of compressible fluids coupled to a magnetic field

    NASA Astrophysics Data System (ADS)

    Yang, Jianwei

    2018-06-01

    In this paper, we consider the quasi-neutral limit of a three-dimensional Euler-Poisson system of compressible fluids coupled to a magnetic field. We prove that, as the Debye length tends to zero, periodic initial-value problems of the model have unique smooth solutions existing on the time interval where the ideal incompressible magnetohydrodynamic equations have a smooth solution. Meanwhile, it is proved that the smooth solutions converge to solutions of the incompressible magnetohydrodynamic equations with a sharp convergence rate in the quasi-neutral limit.

  20. 46 CFR 131.580 - Servicing of inflatable liferafts, inflatable lifejackets, inflatable buoyant apparatus, and...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... lifejackets, inflatable buoyant apparatus, and inflated rescue boats. 131.580 Section 131.580 Shipping COAST..., and inflated rescue boats. (a) An inflatable liferaft or inflatable buoyant apparatus must be serviced... maintenance of inflatable rescue boats must follow the manufacturers' instructions. Each repair, except an...

  1. 46 CFR 131.580 - Servicing of inflatable liferafts, inflatable lifejackets, inflatable buoyant apparatus, and...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... lifejackets, inflatable buoyant apparatus, and inflated rescue boats. 131.580 Section 131.580 Shipping COAST..., and inflated rescue boats. (a) An inflatable liferaft or inflatable buoyant apparatus must be serviced... maintenance of inflatable rescue boats must follow the manufacturers' instructions. Each repair, except an...

  2. 46 CFR 131.580 - Servicing of inflatable liferafts, inflatable lifejackets, inflatable buoyant apparatus, and...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... lifejackets, inflatable buoyant apparatus, and inflated rescue boats. 131.580 Section 131.580 Shipping COAST..., and inflated rescue boats. (a) An inflatable liferaft or inflatable buoyant apparatus must be serviced... maintenance of inflatable rescue boats must follow the manufacturers' instructions. Each repair, except an...

  3. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    PubMed

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
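A minimal sketch of the robust (modified) Poisson estimator for a risk ratio, written with a hand-rolled Newton-Raphson fit and a sandwich variance rather than any particular package. The simulated data and the true risk ratio of 2 are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
x = rng.binomial(1, 0.5, n)                 # binary exposure
risk = 0.10 * np.exp(np.log(2.0) * x)       # true risk ratio = 2
y = rng.binomial(1, risk)                   # common binary outcome

# "Robust" (modified) Poisson: fit a Poisson log-link model to the binary
# outcome by Newton-Raphson; exp(beta_1) then estimates the risk ratio.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    beta += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
mu = np.exp(X @ beta)

# Sandwich variance A^{-1} B A^{-1}: the "robust" part, correcting for the
# fact that a binary outcome is not truly Poisson-distributed.
A = X.T @ (mu[:, None] * X)
B = X.T @ (((y - mu) ** 2)[:, None] * X)
se = np.sqrt(np.diag(np.linalg.solve(A, np.linalg.solve(A, B).T)))

print(round(np.exp(beta[1]), 2), round(se[1], 3))   # risk ratio estimate and SE
```

Without the sandwich correction the naive Poisson standard errors would be too large for a binary outcome, which is why the robust variance is part of the method's definition.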

  4. Effects of Variable Inflationary Conditions on AN Inventory Model with Inflation-Proportional Demand Rate

    NASA Astrophysics Data System (ADS)

    Mirzazadeh, Abolfazl

    2009-08-01

    The inflation rate in most previous research has been considered constant and known over the time horizon, although the future rate of inflation is inherently uncertain and unstable and is difficult to predict accurately. Therefore, a time-varying inventory model for deteriorating items with allowable shortages is developed in this paper. The inflation rates (internal and external) are time-dependent and the demand rate is inflation-proportional. The inventory level is described by differential equations over the time horizon and the present value method is used. A numerical example is given to explain the results. Some particular cases that follow from the main problem are discussed and their results compared with those of the main model using numerical examples. It is found that shortages increase considerably in comparison with the case without variable inflationary conditions.

  5. Symmetry breaking patterns for inflation

    NASA Astrophysics Data System (ADS)

    Klein, Remko; Roest, Diederik; Stefanyszyn, David

    2018-06-01

    We study inflationary models where the kinetic sector of the theory has a non-linearly realised symmetry which is broken by the inflationary potential. We distinguish between kinetic symmetries which non-linearly realise an internal or space-time group, and which yield a flat or curved scalar manifold. This classification leads to well-known inflationary models such as monomial inflation and α-attractors, as well as a new model based on fixed couplings between a dilaton and many axions which non-linearly realises higher-dimensional conformal symmetries. In this model, inflation can be realised along the dilatonic direction, leading to a tensor-to-scalar ratio r ∼ 0.01 and a spectral index n_s ∼ 0.975. We refer to the new model as ambient inflation since inflation proceeds along an isometry of an anti-de Sitter ambient space-time, which fully determines the kinetic sector.

  6. Evolving Scale-Free Networks by Poisson Process: Modeling and Degree Distribution.

    PubMed

    Feng, Minyu; Qu, Hong; Yi, Zhang; Xie, Xiurui; Kurths, Jurgen

    2016-05-01

Since the great mathematician Leonhard Euler initiated the study of graph theory, networks have been one of the most significant research subjects across disciplines. In recent years, the proposition of the small-world and scale-free properties of complex networks in statistical physics made network science intriguing again for many researchers. One of the challenges of network science is to propose rational models for complex networks. In this paper, in order to reveal the influence of the vertex-generating mechanism of complex networks, we propose three novel models based on the homogeneous Poisson, nonhomogeneous Poisson, and birth-death processes, respectively, which can be regarded as typical scale-free networks and utilized to simulate practical networks. The degree distribution and exponent are analyzed and explained mathematically by different approaches. In the simulations, we display the modeling process and the degree distribution of empirical data using statistical methods, and assess the reliability of the proposed networks; the results show that our models follow the features of typical complex networks. Finally, some future challenges for complex systems are discussed.
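As a rough sketch of the vertex-generating idea described above (a simplified stand-in, not the authors' models; all parameters are hypothetical), the following Python snippet grows a network in which new vertices arrive according to a homogeneous Poisson process at each unit time step and attach preferentially by degree:

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's multiplicative method for a Poisson(lam) variate
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def grow_network(steps, lam, rng):
    """Grow a graph in which a Poisson(lam) number of vertices arrives at
    each unit time step; each newcomer attaches one edge to an existing
    vertex chosen with probability proportional to its degree."""
    degrees = [1, 1]  # seed graph: two vertices joined by one edge
    for _ in range(steps):
        for _ in range(poisson_sample(lam, rng)):
            # pick an attachment target with probability proportional to degree
            r = rng.uniform(0, sum(degrees))
            acc = 0.0
            for i, d in enumerate(degrees):
                acc += d
                if r <= acc:
                    degrees[i] += 1
                    break
            degrees.append(1)  # the newcomer starts with degree 1
    return degrees

rng = random.Random(0)
degs = grow_network(200, 1.5, rng)
print(len(degs), max(degs))
```

Because each arriving vertex contributes exactly one edge, the handshake invariant `sum(degrees) == 2 * (number of edges)` holds throughout the growth process.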

  7. The Role of Depressive Symptoms, Family Invalidation and Behavioral Impulsivity in the Occurrence and Repetition of Non-Suicidal Self-Injury in Chinese Adolescents: A 2-Year Follow-Up Study

    ERIC Educational Resources Information Center

    You, Jianing; Leung, Freedom

    2012-01-01

This study used zero-inflated Poisson regression analysis to examine the role of depressive symptoms, family invalidation, and behavioral impulsivity in the occurrence and repetition of non-suicidal self-injury among Chinese community adolescents over a 2-year period. Participants, 4782 high school students, were assessed twice during the…
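The zero-inflated Poisson idea used in this record can be illustrated with a minimal sketch. The snippet below (a hedged illustration, not the authors' code; the mixing probability and Poisson mean are hypothetical) simulates zero-inflated counts and flags excess zeros by comparing the observed zero fraction with the exp(−λ̂) implied by a plain Poisson fit:

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's multiplicative method for a Poisson(lam) variate
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(42)
pi_zero, lam = 0.3, 2.0  # hypothetical structural-zero probability and Poisson mean

# Zero-inflated Poisson draw: a structural zero with prob pi_zero,
# otherwise an ordinary Poisson(lam) count
counts = [0 if rng.random() < pi_zero else poisson_sample(lam, rng)
          for _ in range(5000)]

mean = sum(counts) / len(counts)
obs_zero_frac = counts.count(0) / len(counts)
pred_zero_frac = math.exp(-mean)  # zero probability implied by a plain Poisson fit
print(obs_zero_frac > pred_zero_frac)  # expected: True (excess zeros)
```

When zero inflation is present, the observed zero fraction exceeds the Poisson-implied one; this gap is the intuition behind formal score and Vuong-type tests for inflated zeros.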

  8. Formulation of the Multi-Hit Model With a Non-Poisson Distribution of Hits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vassiliev, Oleg N., E-mail: Oleg.Vassiliev@albertahealthservices.ca

    2012-07-15

Purpose: We proposed a formulation of the multi-hit single-target model in which the Poisson distribution of hits was replaced by a combination of two distributions: one for the number of particles entering the target and one for the number of hits a particle entering the target produces. Such an approach reflects the fact that radiation damage is a result of two different random processes: particle emission by a radiation source and interaction of particles with matter inside the target. Methods and Materials: The Poisson distribution is well justified for the first of the two processes. The second distribution depends on how a hit is defined. To test our approach, we assumed that the second distribution was also a Poisson distribution. The two distributions combined resulted in a non-Poisson distribution. We tested the proposed model by comparing it with previously reported data for DNA single- and double-strand breaks induced by protons and electrons, for survival of a range of cell lines, and variation of the initial slopes of survival curves with radiation quality for heavy-ion beams. Results: Analysis of cell survival equations for this new model showed that they had realistic properties overall, such as the initial and high-dose slopes of survival curves, the shoulder, and relative biological effectiveness (RBE). In most cases tested, a better fit of survival curves was achieved with the new model than with the linear-quadratic model. The results also suggested that the proposed approach may extend the multi-hit model beyond its traditional role in analysis of survival curves to predicting effects of radiation quality and analysis of DNA strand breaks. Conclusions: Our model, although conceptually simple, performed well in all tests. The model was able to consistently fit data for both cell survival and DNA single- and double-strand breaks. It correctly predicted the dependence of radiation effects on parameters of radiation quality.

  9. 46 CFR 122.730 - Servicing of inflatable liferafts, inflatable buoyant apparatus, inflatable life jackets, and...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... apparatus, inflatable life jackets, and inflated rescue boats. 122.730 Section 122.730 Shipping COAST GUARD..., inflatable life jackets, and inflated rescue boats. (a) An inflatable liferaft or inflatable buoyant... other standard specified by the Commandant. (e) Repair and maintenance of inflated rescue boats must be...

  10. 46 CFR 122.730 - Servicing of inflatable liferafts, inflatable buoyant apparatus, inflatable life jackets, and...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... apparatus, inflatable life jackets, and inflated rescue boats. 122.730 Section 122.730 Shipping COAST GUARD..., inflatable life jackets, and inflated rescue boats. (a) An inflatable liferaft or inflatable buoyant... other standard specified by the Commandant. (e) Repair and maintenance of inflated rescue boats must be...

  11. 46 CFR 122.730 - Servicing of inflatable liferafts, inflatable buoyant apparatus, inflatable life jackets, and...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... apparatus, inflatable life jackets, and inflated rescue boats. 122.730 Section 122.730 Shipping COAST GUARD..., inflatable life jackets, and inflated rescue boats. (a) An inflatable liferaft or inflatable buoyant... other standard specified by the Commandant. (e) Repair and maintenance of inflated rescue boats must be...

  12. Impacts of supersymmetric higher derivative terms on inflation models in supergravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aoki, Shuntaro; Yamada, Yusuke, E-mail: shun-soccer@akane.waseda.jp, E-mail: yuusuke-yamada@asagi.waseda.jp

    2015-07-01

We show the effects of supersymmetric higher derivative terms on inflation models in supergravity. The results show that such terms generically modify the effective kinetic coefficient of the inflaton during inflation if the cutoff scale of the higher derivative operators is sufficiently small. In such a case, the η-problem in supergravity does not occur, and we find that the effective potential of the inflaton generically becomes a power-type potential with a power smaller than two.

  13. Thermalized axion inflation: Natural and monomial inflation with small r

    NASA Astrophysics Data System (ADS)

    Ferreira, Ricardo Z.; Notari, Alessio

    2018-03-01

A safe way to reheat the Universe, in models of natural and quadratic inflation, is through shift-symmetric couplings between the inflaton ϕ and the Standard Model (SM), since they do not generate loop corrections to the potential V(ϕ). We consider such a coupling to SM gauge fields, of the form ϕFF̃/f, with sub-Planckian f. In this case, gauge fields can be exponentially produced already during inflation and thermalize via interactions with charged particles, as pointed out in previous work. This can lead to a plasma of temperature T during inflation, and the thermal masses gT of the gauge bosons can equilibrate the system. In addition, inflaton perturbations δϕ can also have a thermal spectrum if they have sufficiently large cross sections with the plasma. In this case, inflationary predictions are strongly modified: (1) scalar perturbations are thermal, and so enhanced over the vacuum, leading to a generic way to suppress the tensor-to-scalar ratio r; (2) the spectral index is n_s − 1 = η − 4ε. After presenting the relevant conditions for thermalization, we show that thermalized natural and monomial models of inflation agree with present observations and have r ≈ 10⁻³–10⁻², which is within reach of next-generation CMB experiments.

  14. Alchemical inflation: inflaton turns into Higgs

    NASA Astrophysics Data System (ADS)

    Nakayama, Kazunori; Takahashi, Fuminobu

    2012-11-01

We propose a new inflation model in which a gauge-singlet inflaton turns into the Higgs condensate after inflation. The inflationary path is characterized by a moduli space of supersymmetric vacua spanned by the inflaton and Higgs field. The inflation energy scale is related to the soft supersymmetry breaking, and the Hubble parameter during inflation is smaller than the gravitino mass. The initial conditions for successful inflation are naturally realized by a pre-inflation stage in which the Higgs plays the role of the waterfall field.

  15. NON-HOMOGENEOUS POISSON PROCESS MODEL FOR GENETIC CROSSOVER INTERFERENCE.

    PubMed

    Leu, Szu-Yun; Sen, Pranab K

    2014-01-01

The genetic crossover interference is usually modeled with a stationary renewal process to construct the genetic map. We propose two non-homogeneous, and dependent, Poisson process models applied to the known physical map. The crossover process is assumed to start from an origin and to occur sequentially along the chromosome. The increment rate depends on the position of the markers and the number of crossover events occurring between the origin and the markers. We show how to obtain parameter estimates for the process and use simulation studies and real Drosophila data to examine the performance of the proposed models.

  16. Questionable Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.

    ERIC Educational Resources Information Center

    Gleason, John M.

    1993-01-01

    This response to an earlier article on a combined log-linear/MDS model for mapping journals by citation analysis discusses the underlying assumptions of the Poisson model with respect to characteristics of the citation process. The importance of empirical data analysis is also addressed. (nine references) (LRW)

  17. Poisson process stimulation of an excitable membrane cable model.

    PubMed Central

    Goldfinger, M D

    1986-01-01

    The convergence of multiple inputs within a single-neuronal substrate is a common design feature of both peripheral and central nervous systems. Typically, the result of such convergence impinges upon an intracellularly contiguous axon, where it is encoded into a train of action potentials. The simplest representation of the result of convergence of multiple inputs is a Poisson process; a general representation of axonal excitability is the Hodgkin-Huxley/cable theory formalism. The present work addressed multiple input convergence upon an axon by applying Poisson process stimulation to the Hodgkin-Huxley axonal cable. The results showed that both absolute and relative refractory periods yielded in the axonal output a random but non-Poisson process. While smaller amplitude stimuli elicited a type of short-interval conditioning, larger amplitude stimuli elicited impulse trains approaching Poisson criteria except for the effects of refractoriness. These results were obtained for stimulus trains consisting of pulses of constant amplitude and constant or variable durations. By contrast, with or without stimulus pulse shape variability, the post-impulse conditional probability for impulse initiation in the steady-state was a Poisson-like process. For stimulus variability consisting of randomly smaller amplitudes or randomly longer durations, mean impulse frequency was attenuated or potentiated, respectively. Limitations and implications of these computations are discussed. PMID:3730505

  18. Protection from annual flooding is correlated with increased cholera prevalence in Bangladesh: a zero-inflated regression analysis.

    PubMed

    Carrel, Margaret; Voss, Paul; Streatfield, Peter K; Yunus, Mohammad; Emch, Michael

    2010-03-22

Alteration of natural or historical aquatic flows can have unintended consequences for regions where waterborne diseases are endemic and where the epidemiologic implications of such change are poorly understood. The implementation of flood protection measures for a portion of an intensely monitored population in Matlab, Bangladesh, allows us to examine whether cholera outcomes respond positively or negatively to measures designed to control river flooding. Using a zero-inflated negative binomial model, we examine how selected covariates can simultaneously distinguish household clusters reporting no cholera from those with positive counts, as well as distinguish residential areas with low counts from areas with high cholera counts. Our goal is to examine how residence within or outside a flood-protected area interacts with the probability of cholera presence and the effect of flood protection on the magnitude of cholera prevalence. In Matlab, living in a household that is protected from annual monsoon flooding appears to have no significant effect on whether the household experiences cholera, net of other covariates. However, counter-intuitively, among households where cholera is reported, living within the flood-protected region significantly increases the number of cholera cases. The construction of dams or other water impoundment strategies for economic or social motives can have profound and unanticipated consequences for waterborne disease. Our results indicate that the construction of a flood control structure in rural Bangladesh is correlated with an increase in cholera cases for residents protected from annual monsoon flooding. Such a finding requires attention both from the health community and from governments and non-governmental organizations involved in ongoing water management schemes.

  19. Topological defects in extended inflation

    NASA Technical Reports Server (NTRS)

    Copeland, Edmund J.; Kolb, Edward W.; Liddle, Andrew R.

    1990-01-01

    The production of topological defects, especially cosmic strings, in extended inflation models was considered. In extended inflation, the Universe passes through a first-order phase transition via bubble percolation, which naturally allows defects to form at the end of inflation. The correlation length, which determines the number density of the defects, is related to the mean size of bubbles when they collide. This mechanism allows a natural combination of inflation and large scale structure via cosmic strings.

  20. 46 CFR 185.730 - Servicing of inflatable liferafts, inflatable buoyant apparatus, inflatable life jackets, and...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... apparatus, inflatable life jackets, and inflated rescue boats. 185.730 Section 185.730 Shipping COAST GUARD... liferafts, inflatable buoyant apparatus, inflatable life jackets, and inflated rescue boats. (a) An... inflated rescue boats must be in accordance with the manufacturer's instructions. All repairs must be made...

  1. 46 CFR 185.730 - Servicing of inflatable liferafts, inflatable buoyant apparatus, inflatable life jackets, and...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... apparatus, inflatable life jackets, and inflated rescue boats. 185.730 Section 185.730 Shipping COAST GUARD... liferafts, inflatable buoyant apparatus, inflatable life jackets, and inflated rescue boats. (a) An... inflated rescue boats must be in accordance with the manufacturer's instructions. All repairs must be made...

  2. 46 CFR 185.730 - Servicing of inflatable liferafts, inflatable buoyant apparatus, inflatable life jackets, and...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... apparatus, inflatable life jackets, and inflated rescue boats. 185.730 Section 185.730 Shipping COAST GUARD... liferafts, inflatable buoyant apparatus, inflatable life jackets, and inflated rescue boats. (a) An... inflated rescue boats must be in accordance with the manufacturer's instructions. All repairs must be made...

  3. DISCRETE COMPOUND POISSON PROCESSES AND TABLES OF THE GEOMETRIC POISSON DISTRIBUTION.

    DTIC Science & Technology

A concise summary of the salient properties of discrete Poisson processes, with emphasis on comparing the geometric and logarithmic Poisson processes. Tables of the geometric Poisson process are given for 176 sets of parameter values. New discrete compound Poisson processes are also introduced. These processes have properties that are particularly relevant when the summation of several different Poisson processes is to be analyzed. This study provides the
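The geometric Poisson (Pólya–Aeppli) distribution tabulated in this report is a compound Poisson law: a Poisson number of clusters, each contributing a geometric cluster size. A minimal sampling sketch (parameter values are hypothetical, chosen only for illustration):

```python
import math
import random

def geometric_poisson(lam, theta, rng):
    """Draw one geometric Poisson (Polya-Aeppli) variate:
    a Poisson(lam) number of clusters, each of Geometric(theta)
    size with support {1, 2, ...}."""
    # Poisson count of clusters via Knuth's method
    L, n, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            break
        n += 1
    total = 0
    for _ in range(n):
        # geometric on {1, 2, ...} by inversion of the survival function
        total += 1 + int(math.log(rng.random()) / math.log(1.0 - theta))
    return total

rng = random.Random(7)
draws = [geometric_poisson(2.0, 0.4, rng) for _ in range(2000)]
print(sum(draws) / len(draws))  # should land near lam/theta = 5.0
```

The compound mean is λ/θ, since each of the λ expected clusters has expected size 1/θ; with λ = 2 and θ = 0.4 the sample mean should be close to 5.0.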

  4. Derivation of Poisson and Nernst-Planck equations in a bath and channel from a molecular model.

    PubMed

    Schuss, Z; Nadler, B; Eisenberg, R S

    2001-09-01

Permeation of ions from one electrolytic solution to another, through a protein channel, is a biological process of considerable importance. Permeation occurs on a time scale of micro- to milliseconds, far longer than the femtosecond time scales of atomic motion. Direct simulations of atomic dynamics are not yet possible for such long-time scales; thus, averaging is unavoidable. The question is what and how to average. In this paper, we average a Langevin model of ionic motion in a bulk solution and protein channel. The main result is a coupled system of averaged Poisson and Nernst-Planck equations (CPNP) involving conditional and unconditional charge densities and conditional potentials. The resulting NP equations contain the averaged force on a single ion, which is the sum of two components. The first component is the gradient of a conditional electric potential that is the solution of Poisson's equation with conditional and permanent charge densities and boundary conditions of the applied voltage. The second component is the self-induced force on an ion due to surface charges induced only by that ion at dielectric interfaces. The ion induces surface polarization charge that exerts a significant force on the ion itself, not present in earlier PNP equations. The proposed CPNP system is not complete, however, because the electric potential satisfies Poisson's equation with conditional charge densities, conditioned on the location of an ion, while the NP equations contain unconditional densities. The conditional densities are closely related to the well-studied pair-correlation functions of equilibrium statistical mechanics. We examine a specific closure relation, which on the one hand replaces the conditional charge densities by the unconditional ones in the Poisson equation, and on the other hand replaces the self-induced force in the NP equation by an effective self-induced force. This effective self-induced force is nearly zero in the baths but is approximately

  5. Fractional Poisson-Nernst-Planck Model for Ion Channels I: Basic Formulations and Algorithms.

    PubMed

    Chen, Duan

    2017-11-01

    In this work, we propose a fractional Poisson-Nernst-Planck model to describe ion permeation in gated ion channels. Due to the intrinsic conformational changes, crowdedness in narrow channel pores, binding and trapping introduced by functioning units of channel proteins, ionic transport in the channel exhibits a power-law-like anomalous diffusion dynamics. We start from continuous-time random walk model for a single ion and use a long-tailed density distribution function for the particle jump waiting time, to derive the fractional Fokker-Planck equation. Then, it is generalized to the macroscopic fractional Poisson-Nernst-Planck model for ionic concentrations. Necessary computational algorithms are designed to implement numerical simulations for the proposed model, and the dynamics of gating current is investigated. Numerical simulations show that the fractional PNP model provides a more qualitatively reasonable match to the profile of gating currents from experimental observations. Meanwhile, the proposed model motivates new challenges in terms of mathematical modeling and computations.

  6. Macro-level pedestrian and bicycle crash analysis: Incorporating spatial spillover effects in dual state count models.

    PubMed

    Cai, Qing; Lee, Jaeyoung; Eluru, Naveen; Abdel-Aty, Mohamed

    2016-08-01

This study attempts to explore the viability of dual-state models (i.e., zero-inflated and hurdle models) for traffic analysis zone (TAZ) based pedestrian and bicycle crash frequency analysis. Additionally, spatial spillover effects are explored in the models by employing exogenous variables from neighboring zones. The dual-state models, namely zero-inflated negative binomial and hurdle negative binomial models (with and without spatial effects), are compared with the conventional single-state model (i.e., negative binomial). The model comparison for pedestrian and bicycle crashes revealed that the models that considered observed spatial effects perform better than those that did not. Across the models with spatial spillover effects, the dual-state models, especially the zero-inflated negative binomial model, offered better performance compared to single-state models. Moreover, the model results clearly highlighted the importance of various traffic, roadway, and sociodemographic characteristics of the TAZ as well as neighboring TAZs on pedestrian and bicycle crash frequency.

  7. Decoding and modelling of time series count data using Poisson hidden Markov model and Markov ordinal logistic regression models.

    PubMed

    Sebastian, Tunny; Jeyaseelan, Visalakshi; Jeyaseelan, Lakshmanan; Anandan, Shalini; George, Sebastian; Bangdiwala, Shrikant I

    2018-01-01

Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. The issues related to the estimation of Poisson hidden Markov models, in which the observations come from a mixture of Poisson distributions and the parameters of the component Poisson distributions are governed by an m-state Markov chain with an unknown transition probability matrix, are explained here. These methods were applied to data on Vibrio cholerae counts reported every month over an 11-year span at Christian Medical College, Vellore, India. Using the Viterbi algorithm, the best estimate of the state sequence, and hence the transition probability matrix, was obtained. The mean passage times between the states were estimated. The 95% confidence interval for the mean passage time was estimated via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled 'Low', 'Moderate' and 'High', with mean counts of 1.4, 6.6 and 20.2 and estimated average durations of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
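The Viterbi decoding step mentioned above can be sketched directly. The snippet below is an illustrative re-implementation under assumed parameters (a two-state chain with hypothetical transition matrix and the 'Low'/'High' means quoted in the abstract), not the study's fitted model; it finds the most likely hidden-state path of a Poisson-emission HMM by dynamic programming in log space:

```python
import math

def pois_logpmf(k, lam):
    # log of the Poisson probability mass function
    return -lam + k * math.log(lam) - math.lgamma(k + 1)

def viterbi(obs, pi, A, lams):
    """Most likely hidden state path for a Poisson-emission HMM."""
    n_states = len(pi)
    delta = [math.log(pi[s]) + pois_logpmf(obs[0], lams[s]) for s in range(n_states)]
    back = []
    for o in obs[1:]:
        new, ptr = [], []
        for s in range(n_states):
            best_prev = max(range(n_states),
                            key=lambda r: delta[r] + math.log(A[r][s]))
            new.append(delta[best_prev] + math.log(A[best_prev][s])
                       + pois_logpmf(o, lams[s]))
            ptr.append(best_prev)
        delta, back = new, back + [ptr]
    # backtrack from the best final state
    path = [max(range(n_states), key=lambda s: delta[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Hypothetical 2-state chain: 'Low' (mean 1.4) vs 'High' (mean 20.2)
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.1, 0.9]]
lams = [1.4, 20.2]
obs = [1, 0, 2, 1, 18, 22, 25, 19, 1, 0]
print(viterbi(obs, pi, A, lams))
```

Low counts map onto the low-mean state and high counts onto the high-mean state, mirroring the 'Low'/'High' labelling of the estimated chain.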

  8. An economic order quantity model with shortage and inflation

    NASA Astrophysics Data System (ADS)

    Wulan, Elis Ratna; Nurjaman, Wildan

    2015-09-01

The effect of inflation has become a persistent characteristic and an increasingly significant problem of many developing economies, especially in third-world countries. While the simple, off-the-shelf classical EOQ model can be used to determine the optimal quantity of product to be produced or purchased, its non-inclusion of conflicting economic realities such as shortage and inflation renders its results quite uneconomical, hence the purpose of this study. A mathematical expression was developed for each of the cost components, the sum of which becomes the total inventory cost over the period (0,L), denoted TIC(0,L), where L is the planning horizon. Significant savings with increasing quantity were achieved based on differences in the varying price regimes. With the assumptions considered, and subject to the availability of reliable inventory cost elements, the developed model is found to produce a feasible and economic inventory stock level, as demonstrated with a numerical example of material supply at a manufacturing company.

  9. Primordial perturbations in multi-scalar inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abedi, Habib; Abbassi, Amir M., E-mail: h.abedi@ut.ac.ir, E-mail: amabasi@khayam.ut.ac.ir

    2017-07-01

Multiple-field models of inflation exhibit new features compared to single-field models. In this work, we study the hierarchy of parameters based on the Hubble expansion rate in curved field space and derive the system of flow equations that describes their evolution. We then focus on obtaining derivatives of the number of e-folds with respect to the scalar fields during inflation and at the hypersurface marking the end of inflation.

  10. Poisson Coordinates.

    PubMed

    Li, Xian-Ying; Hu, Shi-Min

    2013-02-01

    Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.

  11. Seven lessons from manyfield inflation in random potentials

    NASA Astrophysics Data System (ADS)

    Dias, Mafalda; Frazer, Jonathan; Marsh, M. C. David

    2018-01-01

We study inflation in models with many interacting fields subject to randomly generated scalar potentials. We use methods from non-equilibrium random matrix theory to construct the potentials and an adaptation of the `transport method' to evolve the two-point correlators during inflation. This construction allows, for the first time, for an explicit study of models with up to 100 interacting fields supporting a period of `approximately saddle-point' inflation. We determine the statistical predictions for observables by generating over 30,000 models with 2–100 fields supporting at least 60 efolds of inflation. These studies lead us to seven lessons: i) Manyfield inflation is not single-field inflation, ii) The larger the number of fields, the simpler and sharper the predictions, iii) Planck compatibility is not rare, but future experiments may rule out this class of models, iv) The smoother the potentials, the sharper the predictions, v) Hyperparameters can transition from stiff to sloppy, vi) Despite tachyons, isocurvature can decay, vii) Eigenvalue repulsion drives the predictions. We conclude that many of the `generic predictions' of single-field inflation can be emergent features of complex inflation models.

  12. Fibre inflation and α-attractors

    NASA Astrophysics Data System (ADS)

    Kallosh, Renata; Linde, Andrei; Roest, Diederik; Westphal, Alexander; Yamada, Yusuke

    2018-02-01

Fibre inflation is a specific string theory construction based on the Large Volume Scenario that produces an inflationary plateau. We outline its relation to α-attractor models for inflation, with the cosmological sector originating from certain string theory corrections leading to α = 2 and α = 1/2. Above a certain field range, the steepening effect of higher-order corrections leads first to the breakdown of single-field slow-roll and after that to the onset of 2-field dynamics: the overall volume of the extra dimensions starts to participate in the effective dynamics. Finally, we propose effective supergravity models of fibre inflation based on an anti-D3 (\overline{D3}) uplift term with a nilpotent superfield. Specific moduli-dependent \overline{D3}-induced geometries lead to cosmological fibre models but have in addition a de Sitter minimum exit. These supergravity models motivated by fibre inflation are relatively simple, stabilize the axions and disentangle the Hubble parameter from supersymmetry breaking.

  13. Coupling Poisson rectangular pulse and multiplicative microcanonical random cascade models to generate sub-daily precipitation timeseries

    NASA Astrophysics Data System (ADS)

    Pohle, Ina; Niebisch, Michael; Müller, Hannes; Schümberg, Sabine; Zha, Tingting; Maurer, Thomas; Hinz, Christoph

    2018-07-01

To simulate the impacts of within-storm rainfall variabilities on fast hydrological processes, long precipitation time series with high temporal resolution are required. Due to the limited availability of observed data, such time series are typically obtained from stochastic models. However, most existing rainfall models are limited in their ability to conserve the rainfall event statistics which are relevant for hydrological processes. Poisson rectangular pulse models are widely applied to generate long time series of alternating precipitation event durations and mean intensities as well as interstorm period durations. Multiplicative microcanonical random cascade (MRC) models are used to disaggregate precipitation time series from coarse to fine temporal resolution. To overcome the inconsistencies between the temporal structure of the Poisson rectangular pulse model and the MRC model, we developed a new coupling approach by introducing two modifications to the MRC model. These modifications comprise (a) a modified cascade model ("constrained cascade") which preserves the event durations generated by the Poisson rectangular pulse model by constraining the first and last interval of a precipitation event to contain precipitation and (b) continuous sigmoid functions of the multiplicative weights to consider the scale-dependency in the disaggregation of precipitation events of different durations. The constrained cascade model was evaluated in its ability to disaggregate observed precipitation events in comparison to existing MRC models. For that, we used a 20-year record of hourly precipitation at six stations across Germany. The constrained cascade model showed a pronounced better agreement with the observed data in terms of both the temporal pattern of the precipitation time series (e.g. the dry and wet spell durations and autocorrelations) and event characteristics (e.g. intra-event intermittency and intensity fluctuation within events). The constrained cascade model also
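A Poisson rectangular pulse generator of the kind the cascade model is coupled to can be sketched as an alternating-renewal simulation. The snippet below is a simplified illustration with hypothetical parameters, not the authors' calibrated model: exponential interstorm (dry) durations alternate with rectangular rain pulses of exponential duration and constant within-event intensity, aggregated to an hourly series:

```python
import random

def poisson_rect_pulse(horizon_h, mean_dry_h, mean_wet_h, mean_intensity, rng):
    """Alternating-renewal sketch of a Poisson rectangular pulse model:
    exponential interstorm durations, exponential event durations, and an
    exponentially distributed intensity held constant within each event."""
    series = [0.0] * horizon_h
    t = rng.expovariate(1.0 / mean_dry_h)  # time of the first storm
    while t < horizon_h:
        dur = rng.expovariate(1.0 / mean_wet_h)
        inten = rng.expovariate(1.0 / mean_intensity)  # mm/h, flat pulse
        h = int(t)
        while h < min(int(t + dur) + 1, horizon_h):
            # rainfall depth in hour h = intensity x covered fraction of the hour
            overlap = min(h + 1, t + dur) - max(h, t)
            if overlap > 0:
                series[h] += inten * overlap
            h += 1
        t += dur + rng.expovariate(1.0 / mean_dry_h)  # next storm arrival
    return series

rng = random.Random(1)
hourly = poisson_rect_pulse(24 * 30, mean_dry_h=40.0, mean_wet_h=6.0,
                            mean_intensity=1.5, rng=rng)
print(len(hourly), round(sum(hourly), 1))
```

A series like this would then be the coarse-resolution input that an MRC model disaggregates; the constrained cascade's role is to keep the wet spells generated here intact during disaggregation.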

  14. No-scale ripple inflation revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Tianjun; Li, Zhijin; Nanopoulos, Dimitri V., E-mail: tli@itp.ac.cn, E-mail: lizhijin@physics.tamu.edu, E-mail: dimitri@physics.tamu.edu

We revisit the no-scale ripple inflation model, where no-scale supergravity is modified by an additional term for the inflaton field in the Kähler potential. This term not only breaks one SU(N,1) symmetry explicitly, but also plays an important role in inflation. We generalize the superpotential in the no-scale ripple inflation model slightly. There exists a discrete Z_2 symmetry/parity in the scalar potential in general, which can be preserved or violated by the non-canonically normalized inflaton kinetic term. Thus, there are three inflation paths: one parity-invariant path, and the left and right paths of the parity-violating scenario. We show that inflation along the parity-invariant path and the right path is consistent with the Planck results. However, the gravitino mass for the parity-invariant path is so large that the inflation results become invalid once the inflaton supersymmetry-breaking soft mass term is considered. Thus, only inflation along the right path gives correct and consistent results. Notably, the tensor-to-scalar ratio in this case can be large, with a value around 0.05, which may be probed by the future Planck experiment.

  15. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity

    PubMed Central

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2014-01-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes. PMID:22684587

  16. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    PubMed

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
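
    For reference, an inhomogeneous Poisson spike train of the kind used for the LGN input in these two records can be generated by Lewis-Shedler thinning; the sinusoidal rate function below is an illustrative stand-in for a drifting-grating response, not a parameterization taken from the paper.

```python
import numpy as np

def thinned_poisson_train(rate_fn, t_max, rate_max, rng=None):
    """Inhomogeneous Poisson spike times on [0, t_max] by Lewis-Shedler
    thinning: candidates arrive at the bounding rate rate_max and are
    kept with probability rate_fn(t) / rate_max."""
    rng = np.random.default_rng(rng)
    t, spikes = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > t_max:
            return np.array(spikes)
        if rng.random() < rate_fn(t) / rate_max:
            spikes.append(t)

# Illustrative drifting-grating-like rate: 20 Hz mean, 4 Hz modulation.
rate = lambda t: 20.0 + 15.0 * np.sin(2.0 * np.pi * 4.0 * t)
train = thinned_poisson_train(rate, t_max=10.0, rate_max=35.0, rng=0)
assert train.size > 0 and train.max() <= 10.0
assert np.all(np.diff(train) > 0)      # spike times strictly increasing
```

    The paper's point is that such Poisson trains are noisier than real LGN discharges; a noisy leaky integrate-and-fire front end would produce more regular interspike intervals at the same mean rate.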

  17. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet(α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
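
    The sampling procedure at the root of these coalescents can be sketched directly: draw N i.i.d. Pareto(α) weights, normalize by their sum, and sample offspring multinomially from the normalized weights. The sketch below omits the β-size-biasing step and uses illustrative parameters.

```python
import numpy as np

def pareto_offspring(N, alpha, n_sample, rng=None):
    """Draw N i.i.d. Pareto(alpha) weights, normalize by their sum, and
    sample n_sample offspring multinomially from the normalized weights.
    The beta-size-biasing step of the paper is omitted in this sketch."""
    rng = np.random.default_rng(rng)
    x = rng.pareto(alpha, size=N) + 1.0   # numpy draws Lomax; +1 gives Pareto
    p = x / x.sum()
    return p, rng.multinomial(n_sample, p)

p, counts = pareto_offspring(N=1000, alpha=1.5, n_sample=50, rng=1)
assert np.isclose(p.sum(), 1.0) and p.min() > 0.0
assert counts.sum() == 50
```

    For α ∈ [1, 2), single weights occasionally dominate the normalized sum, which is what produces the multiple-merger (Λ-coalescent) genealogies in the large-N limit.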

  18. Revisiting non-Gaussianity from non-attractor inflation models

    NASA Astrophysics Data System (ADS)

    Cai, Yi-Fu; Chen, Xingang; Namjoo, Mohammad Hossein; Sasaki, Misao; Wang, Dong-Gang; Wang, Ziwei

    2018-05-01

    Non-attractor inflation is known as the only single-field inflationary scenario that can violate the non-Gaussianity consistency relation with the Bunch-Davies vacuum state and generate large local non-Gaussianity. However, it is also known that non-attractor inflation by itself is incomplete and should be followed by a phase of slow-roll attractor. Moreover, there is a transition process between these two phases. In the past literature, this transition was approximated as instantaneous and the evolution of non-Gaussianity in this phase was not fully studied. In this paper, we follow the detailed evolution of the non-Gaussianity through the transition phase into the slow-roll attractor phase, considering different types of transition. We find that the transition process has an important effect on the size of the local non-Gaussianity. We first compute the net contribution of the non-Gaussianities at the end of inflation in canonical non-attractor models. If the curvature perturbations keep evolving during the transition, such as in the case of a smooth transition or some sharp transition scenarios, the O(1) local non-Gaussianity generated in the non-attractor phase can be completely erased by the subsequent evolution, although the consistency relation remains violated. In extremal cases of sharp transition, where the super-horizon modes freeze immediately after the end of the non-attractor phase, the original non-attractor result can be recovered. We also study models with non-canonical kinetic terms, and find that the transition can typically contribute a suppression factor in the squeezed bispectrum, but the final local non-Gaussianity can still be made parametrically large.

  19. Self-reproduction in k-inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmer, Ferdinand; Winitzki, Sergei

    2006-09-15

    We study cosmological self-reproduction in models of inflation driven by a scalar field φ with a noncanonical kinetic term (k-inflation). We develop a general criterion for the existence of attractors and establish conditions selecting a class of k-inflation models that admit a unique attractor solution. We then consider quantum fluctuations on the attractor background. We show that the correlation length of the fluctuations is of order c_s H⁻¹, where c_s is the speed of sound. By computing the magnitude of field fluctuations, we determine the coefficients of Fokker-Planck equations describing the probability distribution of the spatially averaged field φ. The field fluctuations are generally large in the inflationary attractor regime; hence, eternal self-reproduction is a generic feature of k-inflation. This is established more formally by demonstrating the existence of stationary solutions of the relevant Fokker-Planck equations. We also show that there exists a (model-dependent) range φ_R < φ < φ_max within which large fluctuations are likely to drive the field towards the upper boundary φ = φ_max, where the semiclassical consideration breaks down. An exit from inflation into reheating without reaching φ_max will occur almost surely (with probability 1) only if the initial value of φ is below φ_R. In this way, strong self-reproduction effects constrain models of k-inflation.

  20. Inflation of Unreefed and Reefed Extraction Parachutes

    NASA Technical Reports Server (NTRS)

    Ray, Eric S.; Varela, Jose G.

    2015-01-01

    Data from the Orion and several other test programs have been used to reconstruct inflation parameters for 28 ft D_o extraction parachutes as well as the parent aircraft pitch response during extraction. The inflation force generated by extraction parachutes is recorded directly during tow tests but is usually inferred from the payload accelerometer during Low Velocity Airdrop Delivery (LVAD) flight test extractions. Inflation parameters depend on the type of parent aircraft, the number of canopies, and standard vs. high-altitude extraction conditions. For standard altitudes, single-canopy inflations are modeled as infinite mass, but the non-symmetric inflations in a cluster are modeled as finite mass. High-altitude extractions have necessitated reefing the extraction parachutes, which are best modeled as infinite mass for those conditions. Distributions of aircraft pitch profiles and inflation parameters have been generated for use in Monte Carlo simulations of payload extractions.

  1. Primordial anisotropies in gauged hybrid inflation

    NASA Astrophysics Data System (ADS)

    Akbar Abolhasani, Ali; Emami, Razieh; Firouzjahi, Hassan

    2014-05-01

    We study primordial anisotropies generated in the model of gauged hybrid inflation in which the complex waterfall field is charged under a U(1) gauge field. Primordial anisotropies are generated either actively during inflation or from inhomogeneities modulating the surface of end of inflation during the waterfall transition. We present a consistent δN mechanism to calculate the anisotropic power spectrum and bispectrum. We show that the primordial anisotropies generated at the surface of end of inflation do not depend on the number of e-folds and therefore do not produce dangerously large anisotropies associated with the IR modes. Furthermore, one can find the parameter space in which the anisotropies generated from the surface of end of inflation cancel the anisotropies generated during inflation, thereby relaxing the constraints on model parameters imposed from IR anisotropies. We also show that the gauge field fluctuations induce a red-tilted power spectrum, so the averaged power spectrum from the gauge field can change the total power spectrum from blue to red. Therefore, hybrid inflation, once gauged under a U(1) field, can be consistent with the cosmological observations.

  2. The Poisson-Helmholtz-Boltzmann model.

    PubMed

    Bohinc, K; Shrestha, A; May, S

    2011-10-01

    We present a mean-field model of a one-component electrolyte solution where the mobile ions interact not only via Coulomb interactions but also through a repulsive non-electrostatic Yukawa potential. Our choice of the Yukawa potential represents a simple model for solvent-mediated interactions between ions. We employ a local formulation of the mean-field free energy through the use of two auxiliary potentials, an electrostatic and a non-electrostatic potential. Functional minimization of the mean-field free energy leads to two coupled local differential equations, the Poisson-Boltzmann equation and the Helmholtz-Boltzmann equation. Their boundary conditions account for the sources of both the electrostatic and non-electrostatic interactions on the surface of all macroions that reside in the solution. We analyze a specific example, two like-charged planar surfaces with their mobile counterions forming the electrolyte solution. For this system we calculate the pressure between the two surfaces, and we analyze its dependence on the strength of the Yukawa potential and on the non-electrostatic interactions of the mobile ions with the planar macroion surfaces. In addition, we demonstrate that our mean-field model is consistent with the contact theorem, and we outline its generalization to arbitrary interaction potentials through the use of a Laplace transformation. © EDP Sciences / Società Italiana di Fisica / Springer-Verlag 2011
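
    As a simplified numerical companion, the linearized (Debye-Hückel) version of the Poisson-Boltzmann equation between two like-charged plates can be solved by finite differences and checked against its analytic cosh profile. This is only the electrostatic half of the model, and linearized at that; the paper's full treatment couples a nonlinear PB equation to a Helmholtz equation for the Yukawa part. All parameters below are illustrative, in dimensionless units.

```python
import numpy as np

# Linearized (Debye-Hueckel) Poisson-Boltzmann equation psi'' = kappa^2 psi
# between two plates held at fixed surface potential psi_s (dimensionless).
kappa, L, psi_s, n = 1.0, 4.0, 1.0, 401
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Tridiagonal finite-difference system over the interior nodes.
A = (np.diag(np.full(n - 2, -2.0 / h**2 - kappa**2))
     + np.diag(np.full(n - 3, 1.0 / h**2), 1)
     + np.diag(np.full(n - 3, 1.0 / h**2), -1))
b = np.zeros(n - 2)
b[0] -= psi_s / h**2     # Dirichlet boundary psi(0) = psi_s
b[-1] -= psi_s / h**2    # Dirichlet boundary psi(L) = psi_s
psi = np.concatenate(([psi_s], np.linalg.solve(A, b), [psi_s]))

exact = psi_s * np.cosh(kappa * (x - L / 2)) / np.cosh(kappa * L / 2)
assert np.max(np.abs(psi - exact)) < 1e-3   # matches the analytic profile
```

    The same tridiagonal machinery solves the Helmholtz-Boltzmann equation of the paper in the linear regime, with κ replaced by the inverse Yukawa range.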

  3. Accidental Kähler moduli inflation

    NASA Astrophysics Data System (ADS)

    Maharana, Anshuman; Rummel, Markus; Sumitomo, Yoske

    2015-09-01

    We study a model of accidental inflation in type IIB string theory where inflation occurs near the inflection point of a small Kähler modulus. A racetrack structure helps to alleviate the known concern that string-loop corrections may spoil Kähler Moduli Inflation unless there is significant suppression via the string coupling or a special brane setup. Also, the hierarchy of gauge group ranks required for the separation between moduli stabilization and inflationary dynamics is relaxed. The relaxation becomes more significant when we use the recently proposed D-term generated racetrack model.

  4. Cosmic microwave background probes models of inflation

    NASA Technical Reports Server (NTRS)

    Davis, Richard L.; Hodges, Hardy M.; Smoot, George F.; Steinhardt, Paul J.; Turner, Michael S.

    1992-01-01

    Inflation creates both scalar (density) and tensor (gravity wave) metric perturbations. We find that the tensor-mode contribution to the cosmic microwave background anisotropy on large-angular scales can only exceed that of the scalar mode in models where the spectrum of perturbations deviates significantly from scale invariance. If the tensor mode dominates at large-angular scales, then the value of DeltaT/T predicted on 1 deg is less than if the scalar mode dominates, and, for cold-dark-matter models, bias factors greater than 1 can be made consistent with Cosmic Background Explorer (COBE) DMR results.

  5. 46 CFR 185.730 - Servicing of inflatable liferafts, inflatable buoyant apparatus, inflatable life jackets, and...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... inflatable liferaft or inflatable buoyant apparatus must be serviced at a facility specifically approved by... apparatus, inflatable life jackets, and inflated rescue boats. 185.730 Section 185.730 Shipping COAST GUARD... Operational Readiness, Maintenance, and Inspection of Lifesaving Equipment § 185.730 Servicing of inflatable...

  6. 46 CFR 185.730 - Servicing of inflatable liferafts, inflatable buoyant apparatus, inflatable life jackets, and...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... inflatable liferaft or inflatable buoyant apparatus must be serviced at a facility specifically approved by... apparatus, inflatable life jackets, and inflated rescue boats. 185.730 Section 185.730 Shipping COAST GUARD... Operational Readiness, Maintenance, and Inspection of Lifesaving Equipment § 185.730 Servicing of inflatable...

  7. Evolutionary inference via the Poisson Indel Process.

    PubMed

    Bouchard-Côté, Alexandre; Jordan, Michael I

    2013-01-22

    We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments.

  8. Evolutionary inference via the Poisson Indel Process

    PubMed Central

    Bouchard-Côté, Alexandre; Jordan, Michael I.

    2013-01-01

    We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114–124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments. PMID:23275296

  9. Intertime jump statistics of state-dependent Poisson processes.

    PubMed

    Daly, Edoardo; Porporato, Amilcare

    2007-01-01

    A method to obtain the probability distribution of the interarrival times of jump occurrences in systems driven by state-dependent Poisson noise is proposed. The method uses the survivor function obtained from a modified version of the master equation associated with the stochastic process under analysis. A model for the timing of human activities shows the capability of state-dependent Poisson noise to generate power-law distributions. The application of the method to a model of neuron dynamics and to a hydrological model accounting for land-atmosphere interaction elucidates the origin of characteristic recurrence intervals and possible persistence in state-dependent Poisson models.
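
    The kind of process analyzed here can be simulated by thinning, after which the empirical survivor function of the interarrival times is directly available. The rate law lam(x) = lam0 * exp(-x) and the relaxing state variable below form an illustrative toy model, not one of the paper's applications.

```python
import numpy as np

def jump_times(t_max, lam0=2.0, decay=0.5, jump=1.0, rng=None):
    """State-dependent Poisson process with rate lam(x) = lam0 * exp(-x):
    each jump raises the state x by `jump`, suppressing the rate, and x
    relaxes as dx/dt = -decay * x between jumps.  Sampling is by thinning
    against the bounding rate lam0 (illustrative toy model)."""
    rng = np.random.default_rng(rng)
    t, x, out = 0.0, 0.0, []
    while True:
        dt = rng.exponential(1.0 / lam0)     # candidate from bounding rate
        t += dt
        if t > t_max:
            return np.array(out)
        x *= np.exp(-decay * dt)             # deterministic relaxation
        if rng.random() < np.exp(-x):        # accept with prob lam(x)/lam0
            x += jump
            out.append(t)

times = jump_times(500.0, rng=3)
inter = np.diff(times)
# Empirical survivor function S(q) = P(interarrival > q) at a few lags.
surv = [(inter > q).mean() for q in (0.5, 1.0, 2.0)]
assert times.size > 10 and np.all(inter > 0)
assert surv[0] >= surv[1] >= surv[2]
```

    The paper's method derives this survivor function analytically from a modified master equation; the simulation above is only the brute-force check.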

  10. Likelihood inference for COM-Poisson cure rate model with interval-censored data and Weibull lifetimes.

    PubMed

    Pal, Suvra; Balakrishnan, N

    2017-10-01

    In this paper, we consider a competing cause scenario and assume the number of competing causes to follow a Conway-Maxwell Poisson distribution, which can capture both the over- and underdispersion usually encountered in discrete data. Assuming that the population of interest has a cured component and that the data are interval-censored, as opposed to the usually considered right-censored data, the main contribution is in developing the steps of the expectation-maximization algorithm for the determination of the maximum likelihood estimates of the model parameters of the flexible Conway-Maxwell Poisson cure rate model with Weibull lifetimes. An extensive Monte Carlo simulation study is carried out to demonstrate the performance of the proposed estimation method. Model discrimination within the Conway-Maxwell Poisson distribution is addressed using the likelihood ratio test and information-based criteria to select a suitable competing cause distribution that provides the best fit to the data. A simulation study is also carried out to demonstrate the loss in efficiency when an improper competing cause distribution is selected, which justifies the use of a flexible family of distributions for the number of competing causes. Finally, the proposed methodology and the flexibility of the Conway-Maxwell Poisson distribution are illustrated with two known data sets from the literature: smoking cessation data and breast cosmesis data.
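
    The Conway-Maxwell-Poisson distribution underlying this cure rate model has pmf proportional to lam^y / (y!)^nu, with nu = 1 recovering the Poisson case. A small numerical sketch (illustrative parameters, normalizing constant truncated at a finite upper bound) confirms the dispersion behaviour the model relies on:

```python
import numpy as np

def com_poisson_pmf(y, lam, nu, y_max=200):
    """Conway-Maxwell-Poisson pmf, P(Y = y) proportional to lam^y / (y!)^nu,
    normalized by truncating the infinite sum at y_max.  nu = 1 recovers
    the Poisson; nu < 1 gives over- and nu > 1 underdispersion."""
    ys = np.arange(y_max + 1)
    log_w = ys * np.log(lam) - nu * np.cumsum(np.log(np.maximum(ys, 1)))
    log_w -= log_w.max()                 # stabilize before exponentiating
    w = np.exp(log_w)
    return w[y] / w.sum()

ys = np.arange(201)
for nu in (0.5, 1.0, 2.0):
    p = com_poisson_pmf(ys, lam=3.0, nu=nu)
    mean = (ys * p).sum()
    var = ((ys - mean) ** 2 * p).sum()
    if nu < 1.0:
        assert var > mean                # overdispersed
    elif nu > 1.0:
        assert var < mean                # underdispersed
    else:
        assert abs(var - mean) < 1e-6    # Poisson: equidispersed
```

    The cumulative sum of log factorials keeps the weights in log space, which matters because the normalizing constant of the COM-Poisson has no closed form.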

  11. Herd-level risk factors for Campylobacter fetus infection, Brucella seropositivity and within-herd seroprevalence of brucellosis in cattle in northern Nigeria.

    PubMed

    Mai, H M; Irons, P C; Kabir, J; Thompson, P N

    2013-09-01

    Brucellosis and campylobacteriosis are economically important diseases affecting bovine reproductive efficiency in Nigeria. A questionnaire-based survey was conducted in 271 cattle herds in Adamawa, Kaduna and Kano states of northern Nigeria using multistage cluster sampling. Serum from 4745 mature animals was tested for Brucella antibodies using the Rose-Bengal plate test, and positives were confirmed in a series-testing protocol using a competitive enzyme-linked immunosorbent assay. Preputial scrapings from 602 bulls were tested using culture and identification for Campylobacter fetus. For each disease, a herd was classified as positive if one or more animals tested positive. For each herd, information on potential management and environmental risk factors was collected through a questionnaire administered during an interview with the manager, owner or herdsman. Multiple logistic regression models were used to model the odds of herd infection for each disease. A zero-inflated Poisson model was used to model the count of Brucella-positive animals within herds, with the number tested as an exposure variable. The presence of small ruminants (sheep and/or goats) on the same farm, and the buying-in of >3 new animals in the previous year or failure to practice quarantine, were associated with increased odds of herd-level campylobacteriosis and brucellosis, as well as with increased within-herd counts of Brucella-positive animals. In addition, high rainfall, initial acquisition of animals from markets, the practice of gynaecological examination and failure to practice herd prophylactic measures were positively associated with the odds of C. fetus infection in the herd. Herd size >15, a pastoral management system and the presence of a handling facility on the farm were associated with increased odds, and gynaecological examination with reduced odds, of herd-level Brucella seropositivity.
Furthermore, the zero-inflated Poisson model showed that borrowing or sharing of bulls was associated with
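
    A minimal sketch of the count model used above: a zero-inflated Poisson likelihood with the number tested entering as an exposure term, fitted by direct maximization on simulated herd data. The parameter values, sample sizes, and the simulation itself are illustrative, not the study's data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def zip_negloglik(params, y, exposure):
    """Negative log-likelihood of a zero-inflated Poisson model in which
    each herd's Poisson mean is exposure_i * rate (the number of animals
    tested acting as the exposure term)."""
    logit_pi, log_rate = params
    pi = expit(logit_pi)
    mu = exposure * np.exp(log_rate)
    ll_pois = -mu + y * np.log(mu) - gammaln(y + 1)
    ll = np.where(y == 0,
                  np.log(pi + (1.0 - pi) * np.exp(-mu)),
                  np.log1p(-pi) + ll_pois)
    return -ll.sum()

# Simulated herds: true mixing fraction 0.4, true per-animal rate 0.1.
rng = np.random.default_rng(7)
tested = rng.integers(5, 40, size=300)
structural = rng.random(300) < 0.4
y = np.where(structural, 0, rng.poisson(0.1 * tested))

fit = minimize(zip_negloglik, x0=[0.0, np.log(0.05)],
               args=(y, tested), method="Nelder-Mead")
pi_hat, rate_hat = expit(fit.x[0]), np.exp(fit.x[1])
assert abs(pi_hat - 0.4) < 0.15 and abs(rate_hat - 0.1) < 0.05
```

    The zero-inflation component absorbs herds that are structurally uninfected, while the Poisson component with the exposure offset models the count of positives among the animals actually tested.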

  12. Tribrid Inflation in Supergravity

    NASA Astrophysics Data System (ADS)

    Antusch, Stefan; Dutta, Koushik; Kostka, Philipp M.

    We propose a novel class of F-term hybrid inflation models in supergravity (SUGRA) where the η-problem is resolved using either a Heisenberg symmetry or a shift symmetry of the Kähler potential. In addition to the inflaton and the waterfall field, this class (referred to as tribrid inflation) contains a third "driving" field which contributes the large vacuum energy during inflation by its F-term. In contrast to the "standard" hybrid scenario, it has several attractive features due to the property of vanishing inflationary superpotential (Winf = 0) during inflation. Quantum corrections induced by symmetry breaking terms in the superpotential generate a slope of the potential and lead to a spectral tilt consistent with recent WMAP observations.

  13. Numerical modelling of Mars supersonic disk-gap-band parachute inflation

    NASA Astrophysics Data System (ADS)

    Gao, Xinglong; Zhang, Qingbin; Tang, Qiangang

    2016-06-01

    The transient dynamic behaviour of supersonic disk-gap-band parachutes in a Mars entry environment, involving fluid-structure interaction, is studied. Based on the multi-material arbitrary Lagrangian-Eulerian method, the coupled dynamic model between a viscous compressible fluid and the flexible, large-deformation structure of the parachute is solved. The inflation performance of a parachute with a fixed forebody under different flow conditions is analysed. The decelerating parameters of the parachute, including drag area, opening loads, and coefficients, are obtained from supersonic wind tunnel test data from NASA. Meanwhile, the evolution of the three-dimensional shape of the disk-gap-band parachute during supersonic inflation is presented, and the structural dynamic behaviour of the parachute is predicted. Then, the influence of the presence of the capsule on the flow field of the parachute is investigated, and the wake of unsteady fluid and the distribution of shock waves around the supersonic parachute are presented. Finally, the structural dynamic response of the canopy fabric under high-pressure conditions is comparatively analysed. The results show that the disk-gap-band parachute inflates well without serious collapse. As the Mach number increases from 2.0 to 2.5, the drag coefficients gradually decrease, along with a small decrease in inflation time, which agrees with the test results and supports the validity of the method proposed in this paper.

  14. Statistical procedures for analyzing mental health services data.

    PubMed

    Elhai, Jon D; Calhoun, Patrick S; Ford, Julian D

    2008-08-15

    In mental health services research, analyzing service utilization data often poses serious problems, given the presence of substantially skewed data distributions. This article presents a non-technical introduction to statistical methods specifically designed to handle the complexly distributed datasets that represent mental health service use, including Poisson, negative binomial, zero-inflated, and zero-truncated regression models. A flowchart is provided to assist the investigator in selecting the most appropriate method. Finally, a dataset of mental health service use reported by medical patients is described, and a comparison of results across several different statistical methods is presented. Implications of matching data analytic techniques appropriately with the often complexly distributed datasets of mental health services utilization variables are discussed.
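
    Of the models in this flowchart, the zero-truncated Poisson is the one used when zeros cannot appear by design, e.g. when the sample contains only people who used a service at least once. A short sketch of its pmf and mean, with an illustrative rate:

```python
import numpy as np
from scipy.special import gammaln

def ztp_pmf(y, lam):
    """Zero-truncated Poisson pmf: Poisson(lam) conditioned on Y >= 1,
    appropriate when a zero count cannot enter the sample by design."""
    y = np.asarray(y)
    log_pois = -lam + y * np.log(lam) - gammaln(y + 1)
    return np.exp(log_pois) / (1.0 - np.exp(-lam))

lam = 2.5                                  # illustrative utilization rate
ys = np.arange(1, 200)
p = ztp_pmf(ys, lam)
assert np.isclose(p.sum(), 1.0)            # proper pmf on y >= 1
mean = (ys * p).sum()
assert np.isclose(mean, lam / (1.0 - np.exp(-lam)))   # known ZTP mean
```

    The inflation of the mean above lam reflects the discarded zeros; fitting an ordinary Poisson to zero-truncated data would overestimate the underlying rate.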

  15. Stochastic effects in hybrid inflation

    NASA Astrophysics Data System (ADS)

    Martin, Jérôme; Vennin, Vincent

    2012-02-01

    Hybrid inflation is a two-field model where inflation ends due to an instability. In the neighborhood of the instability point, the potential is very flat and the quantum fluctuations dominate over the classical motion of the inflaton and waterfall fields. In this article, we study this regime in the framework of stochastic inflation. We numerically solve the two coupled Langevin equations controlling the evolution of the fields and compute the probability distributions of the total number of e-folds and of the inflation exit point. Then, we discuss the physical consequences of our results, in particular, the question of how the quantum diffusion can affect the observable predictions of hybrid inflation.
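
    A minimal sketch of the numerical approach: Euler-Maruyama integration of two coupled Langevin equations in e-fold time, with an illustrative hybrid-type potential and toy parameter values rather than the paper's, returning the number of e-folds until the waterfall field develops.

```python
import numpy as np

def efolds_to_waterfall(phi0=1.8, m2=0.05, h=0.1, dN=0.005,
                        n_max=50000, psi_end=0.5, rng=None):
    """Euler-Maruyama integration, in e-fold time N, of the two coupled
    Langevin equations dX = -(dV/dX) dN / (3 h^2) + (h / 2 pi) sqrt(dN) xi
    for the inflaton phi and the waterfall field psi.  Illustrative
    hybrid-type potential V = (1 - psi^2)^2 + phi^2 psi^2 + m2 phi^2 / 2,
    so psi = 0 turns unstable once phi < sqrt(2).  Toy numbers throughout."""
    rng = np.random.default_rng(rng)
    phi, psi = phi0, 0.0
    amp = h / (2 * np.pi) * np.sqrt(dN)
    for step in range(1, n_max + 1):
        dv_phi = 2 * phi * psi**2 + m2 * phi
        dv_psi = psi * (2 * phi**2 - 4 + 4 * psi**2)
        phi += -dv_phi * dN / (3 * h**2) + amp * rng.standard_normal()
        psi += -dv_psi * dN / (3 * h**2) + amp * rng.standard_normal()
        if abs(psi) > psi_end:        # waterfall developed: inflation ends
            return step * dN
    return np.inf

# Distribution of exit times over a few realizations.
samples = [efolds_to_waterfall(rng=s) for s in range(5)]
assert all(np.isfinite(n) and n > 0 for n in samples)
```

    Repeating this over many realizations yields the probability distribution of the total number of e-folds that the paper studies; near the instability point the spread of this distribution is driven by the stochastic terms.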

  16. Inverse Jacobi multiplier as a link between conservative systems and Poisson structures

    NASA Astrophysics Data System (ADS)

    García, Isaac A.; Hernández-Bermejo, Benito

    2017-08-01

    Some aspects of the relationship between conservativeness of a dynamical system (namely the preservation of a finite measure) and the existence of a Poisson structure for that system are analyzed. From the local point of view, due to the flow-box theorem we restrict ourselves to neighborhoods of singularities. In this sense, we characterize Poisson structures around the typical zero-Hopf singularity in dimension 3 under the assumption of having a local analytic first integral with non-vanishing first jet by connecting with the classical Poincaré center problem. From the global point of view, we connect the property of being strictly conservative (the invariant measure must be positive) with the existence of a Poisson structure depending on the phase space dimension. Finally, weak conservativeness in dimension two is introduced by the extension of inverse Jacobi multipliers as weak solutions of its defining partial differential equation and some of its applications are developed. Examples including Lotka-Volterra systems, quadratic isochronous centers, and non-smooth oscillators are provided.

  17. A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data

    PubMed Central

    Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence

    2013-01-01

    Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011
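
    The hierarchical structure is easy to exhibit by simulation: a Gaussian latent layer under a Poisson observation layer yields marginal counts whose variance exceeds their mean, matching the closed-form moments of the Poisson log-normal. The parameters below are illustrative, not estimated from data.

```python
import numpy as np

# Hierarchical Poisson log-normal draw: a latent Gaussian log-intensity
# under a Poisson observation layer.  The marginal counts are overdispersed
# (variance > mean), the RNA-seq feature the model is built to capture.
rng = np.random.default_rng(0)
mu, sigma, n = 2.0, 0.8, 200_000

z = rng.normal(mu, sigma, size=n)   # latent log-intensity per sample
y = rng.poisson(np.exp(z))          # observed count per sample

m = np.exp(mu + sigma**2 / 2)                  # closed-form mean of Y
v = m + m**2 * (np.exp(sigma**2) - 1.0)        # closed-form variance of Y
assert y.var() > y.mean()                      # overdispersion
assert abs(y.mean() - m) / m < 0.05
assert abs(y.var() - v) / v < 0.2
```

    In the network-inference setting, the latent Gaussian layer is multivariate and its sparse precision matrix, selected by the Lasso penalty, encodes the gene network.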

  18. Exploring the effects of roadway characteristics on the frequency and severity of head-on crashes: case studies from Malaysian federal roads.

    PubMed

    Hosseinpour, Mehdi; Yahaya, Ahmad Shukri; Sadullah, Ahmad Farhan

    2014-01-01

    Head-on crashes are among the most severe collision types and of great concern to road safety authorities, which justifies greater efforts to reduce both their frequency and severity. To this end, it is first necessary to identify the factors associated with crash occurrence, which can be done by developing crash prediction models that relate crash outcomes to a set of contributing factors. This study identifies the factors affecting both the frequency and severity of head-on crashes that occurred on 448 segments of five federal roads in Malaysia. Data on road characteristics and crash history were collected on the study segments during a 4-year period between 2007 and 2010. The frequency of head-on crashes was modelled by developing and comparing seven count-data models: Poisson, standard negative binomial (NB), random-effect negative binomial, hurdle Poisson, hurdle negative binomial, zero-inflated Poisson, and zero-inflated negative binomial. To model crash severity, a random-effect generalized ordered probit model (REGOPM) was used, given that a head-on crash had occurred. With respect to crash frequency, the random-effect negative binomial (RENB) model was found to outperform the other models according to goodness-of-fit measures. Based on the results of the model, the variables horizontal curvature, terrain type, heavy-vehicle traffic, and access points were positively related to the frequency of head-on crashes, while posted speed limit and shoulder width decreased the crash frequency. With regard to crash severity, the results of the REGOPM showed that horizontal curvature, paved shoulder width, terrain type, and side friction were associated with more severe crashes, whereas land use, access points, and presence of a median reduced the probability of severe crashes. Based on the results of this study, some potential countermeasures are proposed to minimize the risk of head-on crashes.
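
    The model-comparison step described above can be sketched on simulated data: fit a Poisson and a zero-inflated Poisson model by maximum likelihood and compare AIC; with structural zeros present, the ZIP model should be preferred. The data-generating values are illustrative, not the Malaysian crash data, and only two of the seven candidate models are shown.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar
from scipy.special import expit, gammaln

# Simulated segment crash counts with structural zeros:
# 35% of segments never see a head-on crash, the rest are Poisson(2).
rng = np.random.default_rng(11)
y = np.where(rng.random(500) < 0.35, 0, rng.poisson(2.0, size=500))

def nll_poisson(log_mu):
    mu = np.exp(log_mu)
    return -np.sum(-mu + y * log_mu - gammaln(y + 1))

def nll_zip(params):
    logit_pi, log_mu = params
    pi, mu = expit(logit_pi), np.exp(log_mu)
    ll_zero = np.log(pi + (1.0 - pi) * np.exp(-mu))
    ll_pos = np.log1p(-pi) - mu + y * log_mu - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

fit_p = minimize_scalar(nll_poisson)
fit_z = minimize(nll_zip, x0=[0.0, 0.0], method="Nelder-Mead")
aic_p = 2 * 1 + 2 * fit_p.fun          # one parameter vs two
aic_z = 2 * 2 + 2 * fit_z.fun
assert aic_z < aic_p                   # excess zeros favour the ZIP model
```

    The study extends this comparison to negative binomial, hurdle, and random-effect variants, and selects the random-effect negative binomial on the same goodness-of-fit logic.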

  19. Beyond Inflation:. A Cyclic Universe Scenario

    NASA Astrophysics Data System (ADS)

    Turok, Neil; Steinhardt, Paul J.

    2005-08-01

    Inflation has been the leading early universe scenario for two decades, and has become an accepted element of the successful 'cosmic concordance' model. However, there are many puzzling features of the resulting theory. It requires both high energy and low energy inflation, with energy densities differing by a hundred orders of magnitude. The questions of why the universe started out undergoing high energy inflation, and why it will end up in low energy inflation, are unanswered. Rather than resort to anthropic arguments, we have developed an alternative cosmology, the cyclic universe [1], in which the universe exists in a very long-lived attractor state determined by the laws of physics. The model shares inflation's phenomenological successes without requiring an epoch of high energy inflation. Instead, the universe is made homogeneous and flat, and scale-invariant adiabatic perturbations are generated during an epoch of low energy acceleration like that seen today, but preceding the last big bang. Unlike inflation, the model requires low energy acceleration in order for a periodic attractor state to exist. The key challenge facing the scenario is that of passing through the cosmic singularity at t = 0. Substantial progress has been made at the level of linearised gravity, which is reviewed here. The challenge of extending this to nonlinear gravity and string theory remains.

  1. Open inflation in the landscape

    NASA Astrophysics Data System (ADS)

    Yamauchi, Daisuke; Linde, Andrei; Naruko, Atsushi; Sasaki, Misao; Tanaka, Takahiro

    2011-08-01

    The open inflation scenario is attracting renewed interest in the context of the string landscape. Since there are a large number of metastable de Sitter vacua in the string landscape, tunneling transitions to lower metastable vacua through bubble nucleation occur quite naturally, which leads to a natural realization of open inflation. Although the deviation of Ω0 from unity is constrained to be small by observational bounds, we argue that the effect of this small deviation on the large-angle CMB anisotropies can be significant for tensor-type perturbations in the open inflation scenario. We consider the situation in which there is a large hierarchy between the energy scale of the quantum tunneling and that of the slow-roll inflation in the nucleated bubble. If the potential just after tunneling is steep enough, a rapid-roll phase appears before the slow-roll inflation. In this case the power spectrum is basically determined by the Hubble rate during the slow-roll inflation. On the other hand, if such a rapid-roll phase is absent, the power spectrum keeps the memory of the high energy density there in the large angular components. Furthermore, the amplitude of the large angular components can be enhanced by the effects of the wall fluctuation mode if the bubble wall tension is small. Therefore, although even the dominant quadrupole component is suppressed by the factor (1-Ω0)², one can construct models in which the deviation of Ω0 from unity is large enough to produce measurable effects. We also consider a more general class of models, where the false vacuum decay may occur due to Hawking-Moss tunneling, as well as models involving more than one scalar field. We discuss scalar perturbations in these models and point out that a large set of such models is already ruled out by observational data, unless there was a very long stage of slow-roll inflation after the tunneling. These results show that observational data allow us to test various assumptions concerning

  2. Natural inflation and quantum gravity.

    PubMed

    de la Fuente, Anton; Saraswat, Prashant; Sundrum, Raman

    2015-04-17

    Cosmic inflation provides an attractive framework for understanding the early Universe and the cosmic microwave background. It can readily involve energies close to the scale at which quantum gravity effects become important. General considerations of black hole quantum mechanics suggest nontrivial constraints on any effective field theory model of inflation that emerges as a low-energy limit of quantum gravity, in particular, the constraint of the weak gravity conjecture. We show that higher-dimensional gauge and gravitational dynamics can elegantly satisfy these constraints and lead to a viable, theoretically controlled and predictive class of natural inflation models.

  3. A semi-nonparametric Poisson regression model for analyzing motor vehicle crash data.

    PubMed

    Ye, Xin; Wang, Ke; Zou, Yajie; Lord, Dominique

    2018-01-01

    This paper develops a semi-nonparametric Poisson regression model to analyze motor vehicle crash frequency data collected from rural multilane highway segments in California, US. Motor vehicle crash frequency on rural highway is a topic of interest in the area of transportation safety due to higher driving speeds and the resultant severity level. Unlike the traditional Negative Binomial (NB) model, the semi-nonparametric Poisson regression model can accommodate an unobserved heterogeneity following a highly flexible semi-nonparametric (SNP) distribution. Simulation experiments are conducted to demonstrate that the SNP distribution can well mimic a large family of distributions, including normal distributions, log-gamma distributions, bimodal and trimodal distributions. Empirical estimation results show that such flexibility offered by the SNP distribution can greatly improve model precision and the overall goodness-of-fit. The semi-nonparametric distribution can provide a better understanding of crash data structure through its ability to capture potential multimodality in the distribution of unobserved heterogeneity. When estimated coefficients in empirical models are compared, SNP and NB models are found to have a substantially different coefficient for the dummy variable indicating the lane width. The SNP model with better statistical performance suggests that the NB model overestimates the effect of lane width on crash frequency reduction by 83.1%.
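    The role of unobserved heterogeneity can be illustrated with a toy simulation (hypothetical numbers, not the paper's data): when the Poisson rate varies across observations following a bimodal mixture, the kind of shape a flexible SNP distribution can capture but a single gamma mixing distribution (as implied by the NB model) cannot, the resulting counts are overdispersed:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Unobserved heterogeneity on the log-rate: a bimodal mixture of two normals
component = rng.random(n) < 0.5
eps = np.where(component,
               rng.normal(-1.0, 0.3, n),
               rng.normal(+1.0, 0.3, n))

base_rate = 1.5                       # hypothetical segment-level mean rate
counts = rng.poisson(base_rate * np.exp(eps))

# Mixing inflates the variance well above the mean (overdispersion)
print(counts.mean(), counts.var())
```

    The NB model would fit this overdispersion with a unimodal gamma mixing distribution; the SNP distribution can additionally recover the bimodal shape itself.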

  4. Poisson-Lie duals of the η deformed symmetric space sigma model

    NASA Astrophysics Data System (ADS)

    Hoare, Ben; Seibold, Fiona K.

    2017-11-01

    Poisson-Lie dualising the η deformation of the G/H symmetric space sigma model with respect to the simple Lie group G is conjectured to give an analytic continuation of the associated λ deformed model. In this paper we investigate when the η deformed model can be dualised with respect to a subgroup G0 of G. Starting from the first-order action on the complexified group and integrating out the degrees of freedom associated to different subalgebras, we find it is possible to dualise when G0 is associated to a sub-Dynkin diagram. Additional U(1) factors built from the remaining Cartan generators can also be included. The resulting construction unifies both the Poisson-Lie dual with respect to G and the complete abelian dual of the η deformation in a single framework, with the integrated algebras unimodular in both cases. We speculate that extending these results to the path integral formalism may provide an explanation for why the η deformed AdS5 × S5 superstring is not one-loop Weyl invariant, that is, the couplings do not solve the equations of type IIB supergravity, yet its complete abelian dual and the λ deformed model are.

  5. Degravitation, inflation and the cosmological constant as an afterglow

    NASA Astrophysics Data System (ADS)

    Patil, Subodh P.

    2009-01-01

    In this report, we adopt the phenomenological approach of taking the degravitation paradigm seriously as a consistent modification of gravity in the IR, and investigate its consequences for various cosmological situations. We motivate degravitation — where Newton's constant is promoted to a scale-dependent filter function — as arising from either a small (resonant) mass for the graviton, or as an effect in semi-classical gravity. After addressing how the Bianchi identities are to be satisfied in such a setup, we turn our attention towards the cosmological consequences of degravitation. By considering the example filter function corresponding to a resonantly massive graviton (with a filter scale larger than the present horizon scale), we show that slow roll inflation, hybrid inflation and old inflation remain quantitatively unchanged. We also find that the degravitation mechanism inherits a memory of past energy densities in the present epoch in a way that is likely significant for present cosmological evolution. For example, if the universe underwent inflation in the past due to it having tunneled out of some false vacuum, we find that degravitation implies a remnant `afterglow' cosmological constant, whose scale immediately afterwards is parametrically suppressed by the filter scale (L) in Planck units, Λ ~ l_Pl²/L². We discuss circumstances through which this scenario reasonably yields the presently observed value for Λ ~ O(10⁻¹²⁰). We also find that in a universe still currently trapped in some false vacuum state, resonance graviton models of degravitation only degravitate initially Planck or GUT scale energy densities down to the presently observed value over timescales comparable to the filter scale. We argue that different functional forms for the filter function will yield similar conclusions. In this way, we argue that although the degravitation models we study have the potential to explain why the cosmological constant is not large in addition to

  6. Paretian Poisson Processes

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2008-05-01

    Many random populations can be modeled as a countable set of points scattered randomly on the positive half-line. The points may represent magnitudes of earthquakes and tornados, masses of stars, market values of public companies, etc. In this article we explore a specific class of such random populations, which we coin `Paretian Poisson processes'. This class is elemental in statistical physics, connecting together, in a deep and fundamental way, diverse issues including: the Poisson distribution of the Law of Small Numbers; Paretian tail statistics; the Fréchet distribution of Extreme Value Theory; the one-sided Lévy distribution of the Central Limit Theorem; scale-invariance, renormalization and fractality; and resilience to random perturbations.

  7. Observational constraints on successful model of quintessential Inflation

    NASA Astrophysics Data System (ADS)

    Geng, Chao-Qiang; Lee, Chung-Chi; Sami, M.; Saridakis, Emmanuel N.; Starobinsky, Alexei A.

    2017-06-01

    We study quintessential inflation using a generalized exponential potential V(φ) ∝ exp(-λ φⁿ/M_Plⁿ), n>1. The model admits slow-roll inflation at early times and leads to close-to-scaling behaviour in the post-inflationary era with an exit to dark energy at late times. We present detailed investigations of the inflationary stage in the light of the Planck 2015 results, study post-inflationary dynamics and analytically confirm the existence of an approximately scaling solution. Additionally, assuming that standard massive neutrinos are non-minimally coupled makes the field φ dominant once again at late times, giving rise to the present accelerated expansion of the Universe. We derive observational constraints on the field and time-dependent neutrino masses. In particular, for n=6 (8), the parameter λ is constrained to be log λ > -7.29 (-11.7); the model produces a spectral index of the power spectrum of primordial scalar (matter density) perturbations of ns = 0.959 ± 0.001 (0.961 ± 0.001) and a tiny tensor-to-scalar ratio, r < 1.72 × 10⁻² (2.32 × 10⁻²), respectively. Consequently, the upper bound on possible values of the sum of neutrino masses, Σ mν ≲ 2.5 eV, is significantly enhanced compared to that in the standard ΛCDM model.
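    For reference, the first slow-roll parameter for the quoted generalized exponential potential follows from the standard definition; this is a textbook computation under the stated form of V(φ), not a result copied from the paper:

```latex
V(\phi) = V_0 \exp\!\left(-\lambda\,\phi^{n}/M_{\rm Pl}^{n}\right), \qquad n > 1,
\qquad
\epsilon \equiv \frac{M_{\rm Pl}^{2}}{2}\left(\frac{V'}{V}\right)^{2}
          = \frac{\lambda^{2} n^{2}}{2}\left(\frac{\phi}{M_{\rm Pl}}\right)^{2(n-1)} .
```

    Since ε vanishes as φ → 0 and grows monotonically with φ for n > 1, the potential is flat near the origin, supporting slow roll, and steepens at large field values, consistent with the post-inflationary scaling behaviour described above.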

  8. 46 CFR 131.865 - Inflatable liferafts and inflatable buoyant apparatus.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Inflatable liferafts and inflatable buoyant apparatus... SUPPLY VESSELS OPERATIONS Markings for Fire Equipment and Emergency Equipment § 131.865 Inflatable liferafts and inflatable buoyant apparatus. The number of the inflatable liferaft or inflatable buoyant...

  9. 46 CFR 131.865 - Inflatable liferafts and inflatable buoyant apparatus.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 4 2011-10-01 2011-10-01 false Inflatable liferafts and inflatable buoyant apparatus... SUPPLY VESSELS OPERATIONS Markings for Fire Equipment and Emergency Equipment § 131.865 Inflatable liferafts and inflatable buoyant apparatus. The number of the inflatable liferaft or inflatable buoyant...

  10. Chameleon field dynamics during inflation

    NASA Astrophysics Data System (ADS)

    Saba, Nasim; Farhoudi, Mehrdad

    By studying the chameleon model during inflation, we investigate whether it can be a successful inflationary model, employing the typical potential commonly used in the literature. In the context of the slow-roll approximations, we obtain the e-folding number for the model to verify its ability to resolve the problems of standard big bang cosmology. Meanwhile, we apply the constraints on the form of the chosen potential and also on the equation-of-state parameter coupled to the scalar field. However, the results of the present analysis show that there is not much chance of having chameleonic inflation. Hence, we suggest that if, through some mechanism, the chameleon model can be reduced to the standard inflationary model, then it may cover the whole history of the universe from inflation up to late times.

  11. Fast and Accurate Poisson Denoising With Trainable Nonlinear Diffusion.

    PubMed

    Feng, Wensen; Qiao, Peng; Chen, Yunjin

    2018-06-01

    The degradation of the acquired signal by Poisson noise is a common problem in various imaging applications, such as medical imaging, night vision, and microscopy. Up to now, many state-of-the-art Poisson denoising techniques have concentrated mainly on achieving utmost performance, with little consideration for computational efficiency. Therefore, in this paper we aim to propose an efficient Poisson denoising model with both high computational efficiency and high recovery quality. To this end, we exploit the newly developed trainable nonlinear reaction diffusion (TNRD) model, which has proven to be an extremely fast image restoration approach with performance surpassing recent state-of-the-art methods. However, the straightforward direct gradient descent employed in the original TNRD-based denoising task is not applicable here. To solve this problem, we resort to the proximal gradient descent method. We retrain the model parameters, including the linear filters and influence functions, by taking into account the Poisson noise statistics, and end up with a well-trained nonlinear diffusion model specialized for Poisson denoising. The trained model provides strongly competitive results against state-of-the-art approaches, while bearing the properties of simple structure and high efficiency. Furthermore, our proposed model comes with an additional advantage: the diffusion process is well suited for parallel computation on graphics processing units (GPUs). For images of size , our GPU implementation takes less than 0.1 s to produce state-of-the-art Poisson denoising performance.
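    For context, a classic lightweight baseline for Poisson denoising (not the TNRD approach described above) is to apply a variance-stabilizing Anscombe transform so that standard Gaussian denoisers become applicable; a minimal sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Clean "image": a smooth intensity ramp (hypothetical data)
clean = np.linspace(1.0, 50.0, 10_000)
noisy = rng.poisson(clean).astype(float)   # signal-dependent Poisson noise

# Anscombe transform: 2*sqrt(x + 3/8) makes the noise approximately
# Gaussian with unit variance, independent of the intensity.
stabilised = 2.0 * np.sqrt(noisy + 3.0 / 8.0)

# After stabilisation the residual spread is ~1 across the whole ramp
residual = stabilised - 2.0 * np.sqrt(clean + 3.0 / 8.0)
print(residual.std())   # ~1
```

    After Gaussian denoising in the stabilised domain, an (exact unbiased) inverse transform maps the estimate back to the intensity scale.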

  12. Chaotic inflation from nonlinear sigma models in supergravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hellerman, Simeon; Kehayias, John; Yanagida, Tsutomu T.

    2015-02-11

    We present a common solution to the puzzles of the light Higgs or quark masses and the need for a shift symmetry and large field values in high scale chaotic inflation. One way to protect, for example, the Higgs from a large supersymmetric mass term is if it is the Nambu-Goldstone boson (NGB) of a nonlinear sigma model. However, it is well known that nonlinear sigma models (NLSMs) with nontrivial Kähler transformations are problematic to couple to supergravity. An additional field is necessary to make the Kähler potential of the NLSM invariant in supergravity. This field must have a shift symmetry, making it a candidate for the inflaton (or axion). We give an explicit example of such a model for the coset space SU(3)/SU(2) × U(1), with the Higgs as the NGB, including breaking the inflaton's shift symmetry and producing a chaotic inflation potential. This construction can also be applied to other models, such as one based on E₇/SO(10) × U(1) × U(1), which incorporates the first two generations of (light) quarks as the Nambu-Goldstone multiplets and has an axion in addition to the inflaton. Along the way we clarify and connect previous work on understanding NLSMs in supergravity and the origin of the extra field (which is the inflaton here), including a connection to Witten-Bagger quantization. This framework has wide applications to model building; a light particle from a NLSM requires, in supergravity, exactly the structure for chaotic inflation or an axion.

  13. Inflation and the Capital Budgeting Process.

    DTIC Science & Technology

    1985-04-01

    model. [10:22] Friend, Landskroner and Losq assert that the traditional capital asset pricing model (CAPM) ... value (NPV) capital budgeting model is used extensively in this report, and the Consumer Price Index - Urban (CPI-U) and the Wholesale Price Index (WPI) ... general price level adjustments into the capital budgeting model. The consideration of inflation risk is also warranted. The effects of inflation

  14. Nonlinear Pressurization and Modal Analysis Procedure for Dynamic Modeling of Inflatable Structures

    NASA Technical Reports Server (NTRS)

    Smalley, Kurt B.; Tinker, Michael L.; Saxon, Jeff (Technical Monitor)

    2002-01-01

    An introduction and a set of guidelines for finite element dynamic modeling of nonrigidized inflatable structures is provided. A two-step approach is presented, involving 1) nonlinear static pressurization of the structure and updating of the stiffness matrix, and 2) linear normal modes analysis using the updated stiffness. Advantages of this approach are that it provides physical realism in modeling of pressure stiffening, and it maintains the analytical convenience of a standard linear eigensolution once the stiffness has been modified. Demonstration of the approach is accomplished through the creation and test verification of an inflated cylinder model using a large commercial finite element code. Good frequency and mode shape comparisons are obtained with test data and previous modeling efforts, verifying the accuracy of the technique. Problems encountered in the application of the approach, as well as their solutions, are discussed in detail.

  16. An efficient parallel sampling technique for Multivariate Poisson-Lognormal model: Analysis with two crash count datasets

    DOE PAGES

    Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.

    2015-11-19

    Our study investigates the multivariate Poisson-lognormal (MVPLN) model, which jointly models crash frequency and severity while accounting for correlations. Ordinary univariate count models analyze crashes of different severity levels separately, ignoring the correlations among severity levels. The MVPLN model can incorporate a general correlation structure and accounts for the overdispersion in the data, which leads to a superior fit. However, the traditional estimation approach for the MVPLN model is computationally expensive, which often limits its use in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach for the MVPLN model, which significantly reduces the model estimation time. Two MVPLN models are developed using pedestrian-vehicle crash data collected in New York City from 2002 to 2006 and highway-injury data from Washington State (5-year data from 1990 to 1994). The Deviance Information Criterion (DIC) is used to evaluate model fit. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and Negative Binomial models. Moreover, the correlations among the latent effects of different severity levels are found to be significant in both datasets, which justifies the importance of jointly modeling crash frequency and severity while accounting for correlations.
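    The core idea of the MVPLN model, correlated lognormal latent effects driving jointly distributed Poisson counts, can be sketched with a small simulation (all parameter values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# Bivariate normal latent effects on the log-rates of two severity levels,
# with a positive cross-covariance mimicking correlated severities.
mu = np.array([0.0, -0.5])
cov = np.array([[0.4, 0.3],
                [0.3, 0.5]])
eps = rng.multivariate_normal(mu, cov, size=n)

rates = np.exp(eps)            # lognormal Poisson rates per severity level
counts = rng.poisson(rates)    # joint crash counts

# The latent correlation carries through to the observed counts
print(np.corrcoef(counts[:, 0], counts[:, 1])[0, 1])   # noticeably positive
```

    A univariate count model per severity level would discard exactly this cross-count correlation, which is what the MVPLN's joint structure recovers.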

  17. The observational constraint on constant-roll inflation

    NASA Astrophysics Data System (ADS)

    Gao, Qing

    2018-07-01

    We discuss constant-roll inflation with constant ɛ2 and constant \bar η. Using the method of Bessel function approximation, the analytical expressions for the scalar and tensor power spectra, the scalar and tensor spectral tilts, and the tensor-to-scalar ratio are derived up to first order in ɛ1. The model with constant ɛ2 is ruled out by the observations at the 3σ confidence level, and the model with constant \bar η is consistent with the observations at the 1σ confidence level. The potential for the model with constant \bar η is also obtained from the Hamilton-Jacobi equation. Although the observations constrain constant-roll inflation to be close to slow-roll inflation, the ns-r results from constant-roll inflation are not the same as those from slow-roll inflation even when \bar η ~ 0.01.

  18. Inflatable Dark Matter.

    PubMed

    Davoudiasl, Hooman; Hooper, Dan; McDermott, Samuel D

    2016-01-22

    We describe a general scenario, dubbed "inflatable dark matter," in which the density of dark matter particles can be reduced through a short period of late-time inflation in the early Universe. The overproduction of dark matter that is predicted within many otherwise well-motivated models of new physics can be elegantly remedied within this context. Thermal relics that would otherwise be disfavored can easily be accommodated within this class of scenarios, including dark matter candidates that are very heavy or very light. Furthermore, the nonthermal abundance of grand unified theory or Planck scale axions can be brought to acceptable levels without invoking anthropic tuning of initial conditions. A period of late-time inflation could have occurred over a wide range of scales from ∼MeV to the weak scale or above, and could have been triggered by physics within a hidden sector, with small but not necessarily negligible couplings to the standard model.

  19. Study of non-Hodgkin's lymphoma mortality associated with industrial pollution in Spain, using Poisson models

    PubMed Central

    Ramis, Rebeca; Vidal, Enrique; García-Pérez, Javier; Lope, Virginia; Aragonés, Nuria; Pérez-Gómez, Beatriz; Pollán, Marina; López-Abente, Gonzalo

    2009-01-01

    Background Non-Hodgkin's lymphomas (NHLs) have been linked to proximity to industrial areas, but evidence regarding the health risk posed by residence near pollutant industries is very limited. The European Pollutant Emission Register (EPER) is a public register that furnishes valuable information on industries that release pollutants to air and water, along with their geographical location. This study sought to explore the relationship between NHL mortality in small areas in Spain and environmental exposure to pollutant emissions from EPER-registered industries, using three Poisson-regression-based mathematical models. Methods Observed cases were drawn from mortality registries in Spain for the period 1994–2003. Industries were grouped into the following sectors: energy; metal; mineral; organic chemicals; waste; paper; food; and use of solvents. Populations having an industry within a radius of 1, 1.5, or 2 kilometres from the municipal centroid were deemed to be exposed. Municipalities outside those radii were considered as reference populations. The relative risks (RRs) associated with proximity to pollutant industries were estimated using the following methods: Poisson Regression; mixed Poisson model with random provincial effect; and spatial autoregressive modelling (BYM model). Results Only proximity of paper industries to population centres (>2 km) could be associated with a greater risk of NHL mortality (mixed model: RR:1.24, 95% CI:1.09–1.42; BYM model: RR:1.21, 95% CI:1.01–1.45; Poisson model: RR:1.16, 95% CI:1.06–1.27). Spatial models yielded higher estimates. Conclusion The reported association between exposure to air pollution from the paper, pulp and board industry and NHL mortality is independent of the model used. Inclusion of spatial random effects terms in the risk estimate improves the study of associations between environmental exposures and mortality. The EPER could be of great utility when studying the effects of industrial pollution
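    For intuition on how relative risks of the kind reported above arise from Poisson regression, consider a two-group Poisson model with a log link and a population offset; with hypothetical aggregated counts (not the study's data), the MLE of the RR and its Wald interval have closed forms:

```python
import math

# Hypothetical aggregated data: observed deaths and person-years in
# exposed municipalities vs reference municipalities.
cases_exposed, py_exposed = 310, 1_000_000
cases_ref, py_ref = 250, 1_000_000

# For a two-group Poisson model, the MLE of the relative risk is the
# ratio of the two observed rates.
rr = (cases_exposed / py_exposed) / (cases_ref / py_ref)

# Wald 95% CI on the log scale; se(log RR) = sqrt(1/a + 1/b)
se = math.sqrt(1 / cases_exposed + 1 / cases_ref)
lo = rr * math.exp(-1.96 * se)
hi = rr * math.exp(+1.96 * se)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# -> RR = 1.24, 95% CI (1.05, 1.46)
```

    The mixed and spatial (BYM) models in the study generalize this by adding provincial random effects and spatially structured terms to the linear predictor.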

  20. Higgs Inflation in f(Φ, R) Theory

    NASA Astrophysics Data System (ADS)

    Chakravarty, Girish Kumar; Mohanty, Subhendra; Singh, Naveen K.

    2014-02-01

    We generalize the scalar-curvature coupling model ξΦ²R of Higgs inflation to ξΦᵃRᵇ to study inflation. We compute the amplitude and spectral index of curvature perturbations generated during inflation and fix the parameters of the model by comparing these with the Planck + WP data. We find that if the scalar self-coupling λ is in the range 10⁻⁵-0.1, the parameter a in the range 2.3-3.6, and b in the range 0.77-0.22 at the Planck scale, one can have a viable inflation model even for ξ ≃ 1. The tensor-to-scalar ratio r in this model is small, and our model with scalar-curvature couplings is not ruled out by observational limits on r, unlike the pure (λ/4)Φ⁴ theory. By requiring the curvature coupling parameter to be of order unity, we evade the problem of unitarity violation in scalar-graviton scatterings which plagues the ξΦ²R Higgs inflation models. We conclude that the Higgs field may still be a good candidate for the inflaton in the early universe if one considers higher-dimensional curvature couplings.

  1. Poisson-Gaussian Noise Reduction Using the Hidden Markov Model in Contourlet Domain for Fluorescence Microscopy Images

    PubMed Central

    Yang, Sejung; Lee, Byung-Uk

    2015-01-01

    In certain image acquisition processes, as in fluorescence microscopy or astronomy, only a limited number of photons can be collected due to various physical constraints. The resulting images suffer from signal-dependent noise, which can be modeled by a Poisson distribution, and a low signal-to-noise ratio. However, the majority of research on noise reduction algorithms focuses on signal-independent Gaussian noise. In this paper, we model noise as a combination of Poisson and Gaussian probability distributions to construct a more accurate model, and adopt the contourlet transform, which provides a sparse representation of the directional components in images. We also apply hidden Markov models within a framework that neatly describes the spatial and interscale dependencies that are characteristic of the transform coefficients of natural images. An effective denoising algorithm for Poisson-Gaussian noise is proposed using the contourlet transform, hidden Markov models, and noise estimation in the transform domain. We supplement the algorithm with cycle spinning and Wiener filtering for further improvements. We finally show experimental results with simulations and fluorescence microscopy images that demonstrate the improved performance of the proposed approach. PMID:26352138
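    The mixed noise model at the heart of the paper can be simulated directly; a minimal sketch with hypothetical gain and read-noise values shows how the observed variance decomposes into a signal-dependent Poisson part and a constant Gaussian part:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical fluorescence signal: constant photon flux per pixel
flux = np.full(100_000, 20.0)
gain, read_noise_sd = 1.0, 3.0

# Mixed model: Poisson photon counts plus additive Gaussian read noise
observed = gain * rng.poisson(flux) + rng.normal(0.0, read_noise_sd, flux.size)

# Variance decomposes as gain^2 * flux + read_noise_sd^2
print(observed.var())   # ~ 20 + 9 = 29
```

    This variance decomposition is what transform-domain noise estimation exploits: the Poisson part scales with intensity while the Gaussian part does not.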

  2. The relationship between inflation and inflation uncertainty. Empirical evidence for the newest EU countries.

    PubMed

    Viorica, Daniela; Jemna, Danut; Pintilescu, Carmen; Asandului, Mircea

    2014-01-01

    The objective of this paper is to verify the hypotheses presented in the literature on the causal relationship between inflation and its uncertainty for the newest EU countries. To ensure the robustness of the results, four models for inflation uncertainty are estimated in parallel: ARCH(1), GARCH(1,1), EGARCH(1,1,1), and PARCH(1,1,1). The Granger method is used to test the causality between the two variables. The working hypothesis is that groups of countries with a similar political and economic background in 1990 are likely to be characterized by the same causal relationship between inflation and inflation uncertainty. Empirical results partially confirm this hypothesis. JEL codes: C22, E31, E37.
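    The GARCH(1,1) specification used here models inflation uncertainty as a conditional variance that feeds back on past shocks. A minimal simulation (hypothetical coefficients) shows the resulting volatility clustering, the signature these models capture:

```python
import numpy as np

rng = np.random.default_rng(5)
T = 5_000

# GARCH(1,1): sigma2[t] = omega + alpha * e[t-1]^2 + beta * sigma2[t-1]
omega, alpha, beta = 0.1, 0.1, 0.8   # hypothetical coefficients
e = np.empty(T)
sigma2 = np.empty(T)
sigma2[0] = omega / (1 - alpha - beta)   # unconditional variance
e[0] = rng.normal(0.0, np.sqrt(sigma2[0]))
for t in range(1, T):
    sigma2[t] = omega + alpha * e[t - 1] ** 2 + beta * sigma2[t - 1]
    e[t] = rng.normal(0.0, np.sqrt(sigma2[t]))

# Volatility clustering: squared shocks are autocorrelated even though
# the shocks themselves are serially uncorrelated.
sq = e ** 2
autocorr = np.corrcoef(sq[:-1], sq[1:])[0, 1]
print(autocorr)   # positive
```

    The conditional variance series sigma2 is what serves as the inflation-uncertainty proxy in Granger causality tests of the kind the paper runs.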

  3. 46 CFR 122.730 - Servicing of inflatable liferafts, inflatable buoyant apparatus, inflatable life jackets, and...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Inspection of Lifesaving Equipment § 122.730 Servicing of inflatable liferafts, inflatable buoyant apparatus... apparatus must be serviced at a facility specifically approved by the Commandant for the particular brand... apparatus, inflatable life jackets, and inflated rescue boats. 122.730 Section 122.730 Shipping COAST GUARD...

  4. 46 CFR 122.730 - Servicing of inflatable liferafts, inflatable buoyant apparatus, inflatable life jackets, and...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Inspection of Lifesaving Equipment § 122.730 Servicing of inflatable liferafts, inflatable buoyant apparatus... apparatus must be serviced at a facility specifically approved by the Commandant for the particular brand... apparatus, inflatable life jackets, and inflated rescue boats. 122.730 Section 122.730 Shipping COAST GUARD...

  5. INFLATE: INFlate Landing Apparatus Technology

    NASA Astrophysics Data System (ADS)

    Koryanov, V. V. K.; Da-Poian, V. D. P.

    2018-02-01

    Our project, named INFLATE (INFlatable Landing Apparatus Technology), aims at reducing space landing risks and constraints and thereby optimizing space missions (reducing cost, mass, and risk while at the same time improving performance).

  6. Poisson property of the occurrence of flip-flops in a model membrane.

    PubMed

    Arai, Noriyoshi; Akimoto, Takuma; Yamamoto, Eiji; Yasui, Masato; Yasuoka, Kenji

    2014-02-14

    How do lipid molecules in membranes perform a flip-flop? The flip-flops of lipid molecules play a crucial role in the formation and flexibility of membranes. However, little has been determined about the behavior of flip-flops, either experimentally or in molecular dynamics simulations. Here, we provide numerical results on the flip-flops of model lipid molecules in a model membrane and investigate their statistical properties, using millisecond-order coarse-grained molecular simulations (dissipative particle dynamics). We find that there are three distinct kinds of flip-flop, which can be clearly characterized by their paths on the free energy surface. Furthermore, we find that the distribution of the number of flip-flops is well fitted by the Poisson distribution, and the probability density function for the inter-occurrence times of flip-flops coincides with that of the forward recurrence times. These results indicate that the occurrence of flip-flops is a Poisson process, which will play an important role in the flexibility of membranes.
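
    The Poisson property described above can be checked on synthetic data. The sketch below is illustrative only (not the paper's DPD simulation; the event rate is an arbitrary assumption): it builds a homogeneous Poisson process from exponential inter-occurrence gaps and confirms that counts in fixed windows have mean approximately equal to variance.

```python
import random

# Illustrative check of the Poisson property (not the paper's simulation):
# the event rate below is an arbitrary assumption. A homogeneous Poisson
# process is built from exponential inter-event gaps; counts in unit-length
# windows should then have mean approximately equal to variance.
random.seed(1)
rate = 2.0      # hypothetical flip-flop rate (events per unit time)
T = 10000.0     # total observation time

t, times = 0.0, []
while True:
    t += random.expovariate(rate)   # exponential inter-occurrence times
    if t >= T:
        break
    times.append(t)

counts = [0] * int(T)               # events per unit-length window
for ev in times:
    counts[int(ev)] += 1

mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(mean, var)  # for a Poisson process both are close to the rate
```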

  7. Computational knee ligament modeling using experimentally determined zero-load lengths.

    PubMed

    Bloemker, Katherine H; Guess, Trent M; Maletsky, Lorin; Dodd, Kevin

    2012-01-01

    This study presents a subject-specific method of determining the zero-load lengths of the cruciate and collateral ligaments in computational knee modeling. Three cadaver knees were tested in a dynamic knee simulator. The cadaver knees also underwent manual envelope of motion testing to find their passive range of motion in order to determine the zero-load lengths for each ligament bundle. Computational multibody knee models were created for each knee and model kinematics were compared to experimental kinematics for a simulated walk cycle. One-dimensional non-linear spring damper elements were used to represent cruciate and collateral ligament bundles in the knee models. This study found that knee kinematics were highly sensitive to altering of the zero-load length. The results also suggest optimal methods for defining each of the ligament bundle zero-load lengths, regardless of the subject. These results verify the importance of the zero-load length when modeling the knee joint and verify that manual envelope of motion measurements can be used to determine the passive range of motion of the knee joint. It is also believed that the method described here for determining zero-load length can be used for in vitro or in vivo subject-specific computational models.

  8. Computational Knee Ligament Modeling Using Experimentally Determined Zero-Load Lengths

    PubMed Central

    Bloemker, Katherine H; Guess, Trent M; Maletsky, Lorin; Dodd, Kevin

    2012-01-01

    This study presents a subject-specific method of determining the zero-load lengths of the cruciate and collateral ligaments in computational knee modeling. Three cadaver knees were tested in a dynamic knee simulator. The cadaver knees also underwent manual envelope of motion testing to find their passive range of motion in order to determine the zero-load lengths for each ligament bundle. Computational multibody knee models were created for each knee and model kinematics were compared to experimental kinematics for a simulated walk cycle. One-dimensional non-linear spring damper elements were used to represent cruciate and collateral ligament bundles in the knee models. This study found that knee kinematics were highly sensitive to altering of the zero-load length. The results also suggest optimal methods for defining each of the ligament bundle zero-load lengths, regardless of the subject. These results verify the importance of the zero-load length when modeling the knee joint and verify that manual envelope of motion measurements can be used to determine the passive range of motion of the knee joint. It is also believed that the method described here for determining zero-load length can be used for in vitro or in vivo subject-specific computational models. PMID:22523522

  9. Variability of the 2014-present inflation source at Mauna Loa volcano revealed using time-dependent modeling

    NASA Astrophysics Data System (ADS)

    Johanson, I. A.; Miklius, A.; Okubo, P.; Montgomery-Brown, E. K.

    2017-12-01

    Mauna Loa volcano is the largest active volcano on Earth and in the 20th century produced roughly one eruption every seven years. The 33-year quiescence since its last eruption in 1984 has been punctuated by three inflation episodes in which magma likely entered the shallow plumbing system but was not erupted. The most recent began in 2014 and is ongoing. Unlike prior inflation episodes, the current one is accompanied by a significant increase in shallow seismicity, a pattern that is similar to earlier pre-eruptive periods. We apply the Kalman-filter-based Network Inversion Filter (NIF) to the 2014-present inflation episode using data from a 27-station continuous GPS network on Mauna Loa. The model geometry consists of a point volume source and a tabular, dike-like body, which have previously been shown to provide a good fit to deformation data from a 2004-2009 inflation episode. The tabular body is discretized into 1 km × 1 km segments. For each day, the NIF solves for the rates of opening on the tabular body segments (subject to smoothing and positivity constraints), the volume change rate in the point source, and the slip rate on a deep décollement fault surface, which is constrained to be constant (no transient slip allowed). The Kalman filter in the NIF provides for smoothing both forwards and backwards in time. The model shows that the 2014-present inflation episode occurred as several sub-events, rather than steady inflation, and shows some spatial variability in the location of the inflation sub-events. In the model, opening in the tabular body is initially concentrated below the volcano's summit, in an area roughly outlined by shallow seismicity. In October 2015, opening in the tabular body shifts to be centered beneath the southwest portion of the summit and seismicity becomes concentrated in this area. By late 2016, the opening rate on the tabular body decreases and is once again under the central part of the summit. This modeling approach has allowed us to track these
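
    The NIF is built on the Kalman filter recursion. As a toy analogue only (a scalar random-walk state with invented noise variances, nothing like the NIF's spatially distributed opening and slip model), the predict/update cycle looks like:

```python
# Toy scalar Kalman filter, only to illustrate the predict/update recursion
# behind Kalman-filter-based inversion; the NIF itself estimates spatially
# distributed opening and slip rates. All noise variances here are invented.
def kalman_1d(observations, q=0.01, r=0.25, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in observations:
        # Predict: random-walk state x_t = x_{t-1} + w, with Var(w) = q
        p = p + q
        # Update with observation z = x + v, with Var(v) = r
        k = p / (p + r)             # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

obs = [1.0, 1.2, 0.9, 1.1, 1.0]
print(kalman_1d(obs)[-1])  # filtered estimate near 1.0
```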

  10. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisicaro, G., E-mail: giuseppe.fisicaro@unibas.ch; Goedecker, S.; Genovese, L.

    2016-01-07

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann, allowing the minimization problem to be solved iteratively with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.

  11. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments.

    PubMed

    Fisicaro, G; Genovese, L; Andreussi, O; Marzari, N; Goedecker, S

    2016-01-07

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann, allowing the minimization problem to be solved iteratively with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.
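
    The conjugate gradient approach mentioned above can be illustrated in miniature. The sketch below solves a plain 1D Poisson problem −u″ = f with zero Dirichlet boundaries by unpreconditioned CG; the grid size and source term are arbitrary assumptions, and the actual solver treats the much harder generalized 3D problem with dielectric screening and preconditioning.

```python
# Miniature unpreconditioned conjugate gradient solve of a plain 1D Poisson
# problem -u'' = f with zero Dirichlet boundaries. Grid size and source are
# arbitrary assumptions; the paper's solver treats the generalized 3D problem
# with dielectric screening, preconditioning, and several boundary conditions.
n = 64                  # interior grid points
h = 1.0 / (n + 1)
f = [1.0] * n           # constant source term

def apply_A(v):
    # Standard 3-point stencil for -u'' on a uniform grid.
    out = []
    for i in range(n):
        left = v[i - 1] if i > 0 else 0.0
        right = v[i + 1] if i < n - 1 else 0.0
        out.append((2.0 * v[i] - left - right) / h ** 2)
    return out

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

u = [0.0] * n
r = f[:]                # residual b - A u for u = 0
p = r[:]
rs = dot(r, r)
for _ in range(500):
    Ap = apply_A(p)
    alpha = rs / dot(p, Ap)
    u = [ui + alpha * pi for ui, pi in zip(u, p)]
    r = [ri - alpha * api for ri, api in zip(r, Ap)]
    rs_new = dot(r, r)
    if rs_new < 1e-16:
        break
    p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
    rs = rs_new

# The continuous solution is u(x) = x(1 - x)/2, and the 3-point stencil is
# exact for this quadratic, so u should match it at the grid points.
err = max(abs(u[i] - ((i + 1) * h) * (1 - (i + 1) * h) / 2) for i in range(n))
print(err)
```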

  12. Galerkin methods for Boltzmann-Poisson transport with reflection conditions on rough boundaries

    NASA Astrophysics Data System (ADS)

    Morales Escalante, José A.; Gamba, Irene M.

    2018-06-01

    We consider in this paper the mathematical and numerical modeling of reflective boundary conditions (BC) associated to Boltzmann-Poisson systems, including diffusive reflection in addition to specularity, in the context of electron transport in semiconductor device modeling at nano scales, and their implementation in Discontinuous Galerkin (DG) schemes. We study these BC on the physical boundaries of the device and develop a numerical approximation to model an insulating boundary condition, or equivalently, a pointwise zero flux mathematical condition for the electron transport equation. Such a condition balances the incident and reflective momentum flux at the microscopic level, pointwise at the boundary, in the case of a more general mixed reflection with momentum-dependent specularity probability p(k⃗). We compare the computational prediction of physical observables given by the numerical implementation of these different reflection conditions in our DG scheme for BP models, and observe that the diffusive condition influences the kinetic moments over the whole domain in position space.

  13. Primordial blackholes and gravitational waves for an inflection-point model of inflation

    NASA Astrophysics Data System (ADS)

    Choudhury, Sayantan; Mazumdar, Anupam

    2014-06-01

    In this article we provide a new closed relationship between the cosmic abundance of primordial gravitational waves and primordial blackholes originating from initial inflationary perturbations, for inflection-point models of inflation where inflation occurs below the Planck scale. From the current Planck constraints on the tensor-to-scalar ratio and the running of the spectral tilt, and from the abundance of dark matter content in the universe, we can deduce a strict bound on the current abundance of primordial blackholes, 9.99712 × 10⁻³ < Ω_PBH h² < 9.99736 × 10⁻³.

  14. Schrödinger-Poisson-Vlasov-Poisson correspondence

    NASA Astrophysics Data System (ADS)

    Mocz, Philip; Lancaster, Lachlan; Fialkov, Anastasia; Becerra, Fernando; Chavanis, Pierre-Henri

    2018-04-01

    The Schrödinger-Poisson equations describe the behavior of a superfluid Bose-Einstein condensate under self-gravity with a 3D wave function. As ℏ/m →0 , m being the boson mass, the equations have been postulated to approximate the collisionless Vlasov-Poisson equations also known as the collisionless Boltzmann-Poisson equations. The latter describe collisionless matter with a 6D classical distribution function. We investigate the nature of this correspondence with a suite of numerical test problems in 1D, 2D, and 3D along with analytic treatments when possible. We demonstrate that, while the density field of the superfluid always shows order unity oscillations as ℏ/m →0 due to interference and the uncertainty principle, the potential field converges to the classical answer as (ℏ/m )2. Thus, any dynamics coupled to the superfluid potential is expected to recover the classical collisionless limit as ℏ/m →0 . The quantum superfluid is able to capture rich phenomena such as multiple phase-sheets, shell-crossings, and warm distributions. Additionally, the quantum pressure tensor acts as a regularizer of caustics and singularities in classical solutions. This suggests the exciting prospect of using the Schrödinger-Poisson equations as a low-memory method for approximating the high-dimensional evolution of the Vlasov-Poisson equations. As a particular example we consider dark matter composed of ultralight axions, which in the classical limit (ℏ/m →0 ) is expected to manifest itself as collisionless cold dark matter.

  15. Is there scale-dependent bias in single-field inflation?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Putter, Roland; Doré, Olivier; Green, Daniel, E-mail: rdputter@caltech.edu, E-mail: Olivier.P.Dore@jpl.nasa.gov, E-mail: drgreen@cita.utoronto.ca

    2015-10-01

    Scale-dependent halo bias due to local primordial non-Gaussianity provides a strong test of single-field inflation. While it is universally understood that single-field inflation predicts negligible scale-dependent bias compared to current observational uncertainties, there is still disagreement on the exact level of scale-dependent bias at a level that could strongly impact inferences made from future surveys. In this paper, we clarify this confusion and derive in various ways that there is exactly zero scale-dependent bias in single-field inflation. Much of the current confusion follows from the fact that single-field inflation does predict a mode coupling of matter perturbations at the level of f_NL^local ≈ −5/3, which naively would lead to scale-dependent bias. However, we show explicitly that this mode coupling cancels out when perturbations are evaluated at a fixed physical scale rather than a fixed coordinate scale. Furthermore, we show how the absence of scale-dependent bias can be derived easily in any gauge. This result can then be incorporated into a complete description of the observed galaxy clustering, including the previously studied general relativistic terms, which are important at the same level as scale-dependent bias of order f_NL^local ∼ 1. This description will allow us to draw unbiased conclusions about inflation from future galaxy clustering data.

  16. Baby Skyrme model and fermionic zero modes

    NASA Astrophysics Data System (ADS)

    Queiruga, J. M.

    2016-09-01

    In this work we investigate some features of the fermionic sector of the supersymmetric version of the baby Skyrme model. We find that, in the background of Bogomol'nyi-Prasad-Sommerfield compact baby Skyrmions, fermionic zero modes are confined to the defect core. Further, we show that, while three Supersymmetry (SUSY) generators are broken in the defect core, SUSY is completely restored outside. We study also the effect of a D-term deformation of the model. Such a deformation allows for the existence of fermionic zero modes and broken SUSY outside the compact defect.

  17. Harnessing the theoretical foundations of the exponential and beta-Poisson dose-response models to quantify parameter uncertainty using Markov Chain Monte Carlo.

    PubMed

    Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward

    2013-09-01

    Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose-response model parameters are estimated using limited epidemiological data is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
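
    For reference, the exponential model and the conventional approximation to the beta-Poisson model discussed above have simple closed forms. The sketch below uses illustrative parameter values, not estimates from the case study data.

```python
import math

# Closed forms for the exponential dose-response model and the conventional
# approximation to the beta-Poisson model; parameter values are illustrative
# only, not estimates from the case study data analyzed in the paper.
def exponential_dr(dose, r):
    # Each ingested organism independently causes infection with probability r.
    return 1.0 - math.exp(-r * dose)

def beta_poisson_approx(dose, alpha, beta):
    # Conventional approximation, reasonable roughly when beta >> 1 and
    # alpha << beta (the kind of criteria the paper proposes to validate).
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

for d in (1.0, 10.0, 100.0):
    print(d, exponential_dr(d, 0.01), beta_poisson_approx(d, 0.3, 50.0))
```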

  18. Perceptual disturbances predicted in zero-g through three-dimensional modeling.

    PubMed

    Holly, Jan E

    2003-01-01

    Perceptual disturbances in zero-g and 1-g differ. For example, the vestibular coriolis (or "cross-coupled") effect is weaker in zero-g. In 1-g, blindfolded subjects rotating on-axis experience perceptual disturbances upon head tilt, but the effects diminish in zero-g. Head tilts during centrifugation in zero-g and 1-g are investigated here by means of three-dimensional modeling, using a model that was previously used to explain the zero-g reduction of the on-axis vestibular coriolis effect. The model's foundation comprises the laws of physics, including linear-angular interactions in three dimensions. Addressed is the question: In zero-g, will the vestibular coriolis effect be as weak during centrifugation as during on-axis rotation? Centrifugation in 1-g was simulated first, with the subject supine, head toward center. The most noticeable result concerned direction of head yaw. For clockwise centrifuge rotation, greater perceptual effects arose in simulations during yaw counterclockwise (as viewed from the top of the head) than for yaw clockwise. Centrifugation in zero-g was then simulated with the same "supine" orientation. The result: In zero-g the simulated vestibular coriolis effect was greater during centrifugation than during on-axis rotation. In addition, clockwise-counterclockwise differences did not appear in zero-g, in contrast to the differences that appear in 1-g.

  19. Hilltop supernatural inflation and SUSY unified models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kohri, Kazunori; Lim, C.S.; Lin, Chia-Min

    2014-01-01

    In this paper, we consider high-scale (100 TeV) supersymmetry (SUSY) breaking and realize the idea of hilltop supernatural inflation in concrete particle physics models based on flipped SU(5) and Pati-Salam models in the framework of supersymmetric grand unified theories (SUSY GUTs). The inflaton can be a flat direction including a right-handed sneutrino, and the waterfall field is a GUT Higgs. The spectral index is n_s = 0.96, which fits very well with recent data from the PLANCK satellite. There are neither thermal nor non-thermal gravitino problems. Non-thermal leptogenesis can result from the decay of the right-handed sneutrino, which plays (part of) the role of the inflaton.

  20. Hilltop supernatural inflation and SUSY unified models

    NASA Astrophysics Data System (ADS)

    Kohri, Kazunori; Lim, C. S.; Lin, Chia-Min; Mimura, Yukihiro

    2014-01-01

    In this paper, we consider high-scale (100 TeV) supersymmetry (SUSY) breaking and realize the idea of hilltop supernatural inflation in concrete particle physics models based on flipped SU(5) and Pati-Salam models in the framework of supersymmetric grand unified theories (SUSY GUTs). The inflaton can be a flat direction including a right-handed sneutrino, and the waterfall field is a GUT Higgs. The spectral index is n_s = 0.96, which fits very well with recent data from the PLANCK satellite. There are neither thermal nor non-thermal gravitino problems. Non-thermal leptogenesis can result from the decay of the right-handed sneutrino, which plays (part of) the role of the inflaton.

  1. Lindley frailty model for a class of compound Poisson processes

    NASA Astrophysics Data System (ADS)

    Kadilar, Gamze Özel; Ata, Nihal

    2013-10-01

    The Lindley distribution gains importance in survival analysis from its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model in which misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, it is appropriate to consider discrete frailty distributions in some circumstances. In this paper, frailty models with a discrete compound Poisson process for Lindley-distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. Then, the fit of the models to an earthquake data set from Turkey is examined.
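
    The Lindley density underlying the failure-time model has the simple closed form f(x) = θ²/(1+θ) (1+x) e^(−θx) for x > 0, a mixture of exponential and gamma components. A quick numerical check (with an arbitrary θ, not a value fitted to the earthquake data) confirms it integrates to one:

```python
import math

# Lindley density f(x) = theta^2/(1 + theta) * (1 + x) * exp(-theta x), x > 0,
# a two-component mixture of exponential(theta) and gamma(2, theta) densities.
# The value of theta below is an arbitrary illustration.
def lindley_pdf(x, theta):
    return theta ** 2 / (1.0 + theta) * (1.0 + x) * math.exp(-theta * x)

# Crude numerical check that the density integrates to about one:
theta, h = 1.5, 0.001
total = sum(lindley_pdf(i * h, theta) for i in range(1, 40000)) * h
print(total)  # ≈ 1.0
```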

  2. Very low scale Coleman-Weinberg inflation with nonminimal coupling

    NASA Astrophysics Data System (ADS)

    Kaneta, Kunio; Seto, Osamu; Takahashi, Ryo

    2018-03-01

    We study viable small-field Coleman-Weinberg (CW) inflation models with the help of nonminimal coupling to gravity. The simplest small-field CW inflation model (with a low-scale potential minimum) is incompatible with the cosmological constraint on the scalar spectral index. However, there are possibilities to make the model realistic. First, we revisit the CW inflation model supplemented with a linear potential term. We next consider the CW inflation model with a logarithmic nonminimal coupling and illustrate that the model can open a new viable parameter space that includes the model with a linear potential term. We also show parameter spaces where the Hubble scale during inflation can be as small as 10⁻⁴ GeV, 1 GeV, 10⁴ GeV, and 10⁸ GeV for numbers of e-folds of 40, 45, 50, and 55, respectively, with other cosmological constraints being satisfied.

  3. Nonlocal Poisson-Fermi double-layer models: Effects of nonuniform ion sizes on double-layer structure

    NASA Astrophysics Data System (ADS)

    Xie, Dexuan; Jiang, Yi

    2018-05-01

    This paper reports a nonuniform ionic size nonlocal Poisson-Fermi double-layer model (nuNPF) and a uniform ionic size nonlocal Poisson-Fermi double-layer model (uNPF) for an electrolyte mixture of multiple ionic species, variable voltages on electrodes, and variable induced charges on boundary segments. The finite element solvers of nuNPF and uNPF are developed and applied to typical double-layer tests defined on a rectangular box, a hollow sphere, and a hollow rectangle with a charged post. Numerical results show that nuNPF can significantly improve the quality of the ionic concentrations and electric fields generated from uNPF, implying that the effect of nonuniform ion sizes is a key consideration in modeling the double-layer structure.

  4. Oscillatory Reduction in Option Pricing Formula Using Shifted Poisson and Linear Approximation

    NASA Astrophysics Data System (ADS)

    Nur Rachmawati, Ro'fah; Irene; Budiharto, Widodo

    2014-03-01

    An option is one of the derivative instruments that can help investors improve their expected return and minimize their risks. However, the Black-Scholes formula that is generally used to determine the price of an option does not involve a skewness factor, and it is difficult to apply in the computing process because it produces oscillation for skewness values close to zero. In this paper, we construct an option pricing formula that involves skewness by modifying the Black-Scholes formula using the shifted Poisson model and transforming it into the form of a linear approximation in the complete market to reduce the oscillation. The result is a linear approximation formula that can predict the price of an option very accurately and successfully reduces the oscillations in the calculation process.
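
    For context, the unmodified Black-Scholes call price that the paper starts from can be computed directly; the shifted-Poisson and linear-approximation corrections themselves are not reproduced here, and the inputs below are illustrative.

```python
import math

# Plain Black-Scholes European call price, i.e. the uncorrected baseline the
# paper modifies; the shifted-Poisson and linear-approximation corrections
# are not reproduced here. Input values below are illustrative.
def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# At-the-money call: spot 100, strike 100, 1 year, 5% rate, 20% volatility.
print(bs_call(100.0, 100.0, 1.0, 0.05, 0.2))  # ≈ 10.45
```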

  5. Observational constraints on successful model of quintessential Inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geng, Chao-Qiang; Lee, Chung-Chi; Sami, M.

    We study quintessential inflation using a generalized exponential potential V(φ) ∝ exp(−λφⁿ/M_Plⁿ), n > 1; the model admits slow-roll inflation at early times and leads to close-to-scaling behaviour in the post-inflationary era with an exit to dark energy at late times. We present detailed investigations of the inflationary stage in the light of the Planck 2015 results, study post-inflationary dynamics, and analytically confirm the existence of an approximately scaling solution. Additionally, assuming that standard massive neutrinos are non-minimally coupled makes the field φ dominant once again at late times, giving rise to the present accelerated expansion of the Universe. We derive observational constraints on the field and time-dependent neutrino masses. In particular, for n = 6 (8), the parameter λ is constrained to log λ > −7.29 (−11.7); the model produces a spectral index of the power spectrum of primordial scalar (matter density) perturbations n_s = 0.959 ± 0.001 (0.961 ± 0.001) and a tiny tensor-to-scalar ratio, r < 1.72 × 10⁻² (2.32 × 10⁻²), respectively. Consequently, the upper bound on possible values of the sum of neutrino masses, Σm_ν ≲ 2.5 eV, is significantly enhanced compared to that in the standard ΛCDM model.

  6. Tribrid Inflation in Supergravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antusch, Stefan; Dutta, Koushik; Kostka, Philipp M.

    2010-02-10

    We propose a novel class of F-term hybrid inflation models in supergravity (SUGRA) where the eta-problem is resolved using either a Heisenberg symmetry or a shift symmetry of the Kaehler potential. In addition to the inflaton and the waterfall field, this class (referred to as tribrid inflation) contains a third 'driving' field which contributes the large vacuum energy during inflation by its F-term. In contrast to the 'standard' hybrid scenario, it has several attractive features due to the property of vanishing inflationary superpotential (W_inf = 0) during inflation. While the symmetries of the Kaehler potential ensure a flat inflaton potential at tree-level, quantum corrections induced by symmetry breaking terms in the superpotential generate a slope of the potential and lead to a spectral tilt consistent with recent WMAP observations.

  7. The Inflatable Poster

    NASA Astrophysics Data System (ADS)

    Tackley, P. J.

    2004-12-01

    Inflatable devices are frequently used in advertising in order to grab the attention of consumers: one sees, for example, 20 foot tall inflatable drink containers, inflatable cell phones, inflatable bubble gum packets, as well as blimps wafting majestically over major sports events. More usefully, inflatable representations of scientifically-interesting items are widely available, including astronauts, space shuttles, dinosaurs and globes and can help to build and inspire the interest of the general public, and in particular children, in such ideas. How can such concepts be adapted to improve poster presentations? Possibility one is to use relevant existing commercially-available inflatables to dress the poster: skeletons, astronauts, globes and so forth. More exciting is to develop custom inflatables that represent three-dimensional renderings of objects that the poster is describing. Examples of individual objects might be an inflatable slab, inflatable avalanche, inflatable plume, or its larger cousin, the 10 foot high inflatable superplume or 20 foot high inflatable megaplume. More elaborately, inflatables might represent isosurfaces in three-dimensional spherical convection, although other fabrication methods may be more suitable. More simply, inflatable spheres could be imprinted with the planform of convection, geoid, or other spherical fields of geophysical interest. Finally, it should be possible to put an entire poster on an inflatable object, possibly small ones (balloons) to hand out. A major concern, however, is that the presenter may use such techniques to inflate their scientific findings, or to present overblown ideas.

  8. The Relationship between Inflation and Inflation Uncertainty. Empirical Evidence for the Newest EU Countries

    PubMed Central

    Viorica, Daniela; Jemna, Danut; Pintilescu, Carmen; Asandului, Mircea

    2014-01-01

    The objective of this paper is to verify the hypotheses presented in the literature on the causal relationship between inflation and its uncertainty, for the newest EU countries. To ensure the robustness of the results, four models for inflation uncertainty are estimated in parallel: ARCH(1), GARCH(1,1), EGARCH(1,1,1) and PARCH(1,1,1). The Granger method is used to test the causality between the two variables. The working hypothesis is that groups of countries with a similar political and economic background in 1990 are likely to be characterized by the same causal relationship between inflation and inflation uncertainty. Empirical results partially confirm this hypothesis. JEL classification: C22, E31, E37. PMID:24633073
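
    The GARCH(1,1) recursion at the core of such uncertainty models is σ²ₜ = ω + α ε²ₜ₋₁ + β σ²ₜ₋₁. A simulation sketch with illustrative coefficients (not the paper's estimates for any country):

```python
import math
import random

# GARCH(1,1) conditional-variance recursion used in such uncertainty models:
#   sigma2_t = omega + alpha * eps_{t-1}^2 + beta * sigma2_{t-1}
# The coefficients below are illustrative, not estimates for any EU country.
random.seed(0)
omega, alpha, beta = 0.1, 0.1, 0.8
sigma2 = omega / (1.0 - alpha - beta)   # start at the unconditional variance
eps = 0.0
series = []
for _ in range(5000):
    sigma2 = omega + alpha * eps ** 2 + beta * sigma2
    eps = math.sqrt(sigma2) * random.gauss(0.0, 1.0)
    series.append(eps)

# The sample variance should sit near omega / (1 - alpha - beta) = 1.0.
sample_var = sum(e * e for e in series) / len(series)
print(sample_var)
```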

  9. Multi-material Additive Manufacturing of Metamaterials with Giant, Tailorable Negative Poisson's Ratios.

    PubMed

    Chen, Da; Zheng, Xiaoyu

    2018-06-14

    Nature has evolved a recurring strategy to achieve unusual mechanical properties by coupling variable elastic moduli, from a few GPa to below a kPa, within a single tissue. The ability to produce multi-material, three-dimensional (3D) micro-architectures with high fidelity incorporating dissimilar components has been a major challenge in man-made materials. Here we show multi-modulus metamaterials whose architectural element is comprised of encoded elasticity ranging from rigid to soft. We found that, in contrast to ordinary architected materials whose negative Poisson's ratio is dictated by their geometry, this type of metamaterial is capable of displaying Poisson's ratios from extreme negative to zero, independent of its 3D micro-architecture. The resulting low-density metamaterials are capable of achieving functionally graded, distributed strain amplification within the metamaterial with uniform micro-architectures. Simultaneous tuning of Poisson's ratio and moduli within the 3D multi-materials could open up a broad array of material-by-design applications, ranging from flexible armor and artificial muscles to actuators and bio-mimetic materials.

  10. Extended inflation from higher dimensional theories

    NASA Technical Reports Server (NTRS)

    Holman, Richard; Kolb, Edward W.; Vadas, Sharon L.; Wang, Yun

    1990-01-01

    The possibility is considered that higher dimensional theories may, upon reduction to four dimensions, allow extended inflation to occur. Two separate models are analyzed. One is a very simple toy model consisting of higher dimensional gravity coupled to a scalar field whose potential allows for a first-order phase transition. The other is a more sophisticated model incorporating the effects of non-trivial field configurations (monopole, Casimir, and fermion bilinear condensate effects) that yield a non-trivial potential for the radius of the internal space. It was found that extended inflation does not occur in these models. It was also found that the bubble nucleation rate in these theories is time dependent, unlike the case in the original version of extended inflation.

  11. Dental Caries and Enamel Defects in Very Low Birth Weight Adolescents

    PubMed Central

    Nelson, S.; Albert, J.M.; Lombardi, G.; Wishnek, S.; Asaad, G.; Kirchner, H.L.; Singer, L.T.

    2011-01-01

    Objectives The purpose of this study was to examine developmental enamel defects and dental caries in very low birth weight adolescents with high risk (HR-VLBW) and low risk (LR-VLBW) compared to full-term (term) adolescents. Methods The sample consisted of 224 subjects (80 HR-VLBW, 59 LR-VLBW, 85 term adolescents) recruited from an ongoing longitudinal study. Sociodemographic and medical information was available from birth. Dental examination of the adolescent at the 14-year visit included: enamel defects (opacity and hypoplasia); decayed, missing, filled teeth of incisors and molars (DMFT-IM) and of overall permanent teeth (DMFT); Simplified Oral Hygiene Index for debris/calculus on teeth, and sealant presence. A caregiver questionnaire completed simultaneously assessed dental behavior, access, insurance status and prevention factors. Hierarchical analysis utilized the zero-inflated negative binomial model and zero-inflated Poisson model. Results The zero-inflated negative binomial model controlling for sociodemographic variables indicated that the LR-VLBW group had an estimated 75% increase (p < 0.05) in number of demarcated opacities in the incisors and first molar teeth compared to the term group. Hierarchical modeling indicated that demarcated opacities were a significant predictor of DMFT-IM after control for relevant covariates. The term adolescents had significantly increased DMFT-IM and DMFT scores compared to the LR-VLBW adolescents. Conclusion LR-VLBW was a significant risk factor for increased enamel defects in the permanent incisors and first molars. Term children had increased caries compared to the LR-VLBW group. The effect of birth group and enamel defects on caries has to be investigated longitudinally from birth. PMID:20975268

  12. Stability issues of nonlocal gravity during primordial inflation

    NASA Astrophysics Data System (ADS)

    Belgacem, Enis; Cusin, Giulia; Foffa, Stefano; Maggiore, Michele; Mancarella, Michele

    2018-01-01

    We study the cosmological evolution of some nonlocal gravity models, when the initial conditions are set during a phase of primordial inflation. We examine in particular three models, the so-called RT, RR and Δ4 models, previously introduced by our group. We find that, during inflation, the RT model has a viable background evolution, but at the level of cosmological perturbations develops instabilities that make it nonviable. In contrast, the RR and Δ4 models have a viable evolution even when their initial conditions are set during a phase of primordial inflation.

  13. Zero-Inertial Recession for a Kinematic Wave Model

    USDA-ARS?s Scientific Manuscript database

    Kinematic-wave models of surface irrigation assume a fixed relationship between depth and discharge (typically, normal depth). When surface irrigation inflow is cut off, the calculated upstream flow depth goes to zero, since the discharge is zero. For short time steps, use of the Kinematic Wave mode...

  14. Generic analysis of kinetically driven inflation

    NASA Astrophysics Data System (ADS)

    Saitou, Rio

    2018-04-01

    We perform a model-independent analysis of kinetically driven inflation (KDI), which (partially) includes generalized G-inflation and ghost inflation. We evaluate the background evolution, splitting it into the inflationary attractor and the perturbation around it. We also consider the quantum fluctuation of the scalar mode with the usual scaling and derive the spectral index, ignoring contributions from second-order products of slow-roll parameters. Using these formalisms, we find that within our generic framework models of KDI that possess a shift symmetry of the scalar field cannot create quantum fluctuations consistent with observations. Breaking the shift symmetry, we obtain a few essential conditions for viable models of KDI associated with a graceful exit.

  15. Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.

    PubMed

    Mao, Tianqi; Wang, Zhaocheng; Wang, Qi

    2017-01-23

    Single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak-illuminance environments. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, existing literature deals only with a simplified channel model that considers the effects of Poisson noise introduced by the SPAD but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse-amplitude-modulation (PAM) under a Poisson-Gaussian mixed noise model, where Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that the proposed GAT-based detector can reduce the computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and both detectors are capable of accurately demodulating the SPAD-based PAM signals.
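
    The GAT step can be sketched in a few lines. The 3/8 offset and unit photon gain below are the standard textbook choice, an assumption here; the paper may use a scaled variant:

```python
import numpy as np

def generalized_anscombe(z, sigma):
    """Generalized Anscombe transform for Poisson-Gaussian data
    (unit photon gain, zero-mean Gaussian noise with std `sigma`).
    Maps signal-dependent noise to approximately unit-variance
    Gaussian noise, so a hard-decision AWGN detector can be applied."""
    arg = z + 3.0 / 8.0 + sigma ** 2
    return 2.0 * np.sqrt(np.maximum(arg, 0.0))

# Example: received SPAD counts with thermal-noise std of 2 electrons.
counts = np.array([0.0, 5.0, 20.0])
stabilized = generalized_anscombe(counts, sigma=2.0)
```

    After this transform, PAM levels can be detected with ordinary Euclidean thresholds in the stabilized domain.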

  16. Inflatable Dark Matter

    DOE PAGES

    Davoudiasl, Hooman; Hooper, Dan; McDermott, Samuel D.

    2016-01-22

    We describe a general scenario, dubbed “Inflatable Dark Matter”, in which the density of dark matter particles can be reduced through a short period of late-time inflation in the early universe. The overproduction of dark matter that is predicted within many otherwise well-motivated models of new physics can be elegantly remedied within this context, without the need to tune underlying parameters or to appeal to anthropic considerations. Thermal relics that would otherwise be disfavored can easily be accommodated within this class of scenarios, including dark matter candidates that are very heavy or very light. Furthermore, the non-thermal abundance of GUT- or Planck-scale axions can be brought to acceptable levels, without invoking anthropic tuning of initial conditions. Additionally, a period of late-time inflation could have occurred over a wide range of scales from ~ MeV to the weak scale or above, and could have been triggered by physics within a hidden sector, with small but not necessarily negligible couplings to the Standard Model.

  17. Constraints on single-field inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pirtskhalava, David; Santoni, Luca; Trincherini, Enrico

    2016-06-28

    Many alternatives to canonical slow-roll inflation have been proposed over the years, one of the main motivations being to have a model capable of generating observable values of non-Gaussianity. In this work, we (re-)explore the physical implications of a great majority of such models within a single effective field theory framework (including novel models with large non-Gaussianity discussed for the first time below). The constraints we apply, both theoretical and experimental, are found to be rather robust, determined to a great extent by just three parameters: the coefficients of the quadratic EFT operators (δN)² and δNδE, and the slow-roll parameter ε. This allows us to significantly limit the majority of single-field alternatives to canonical slow-roll inflation. While the existing data still leave some room for most of the considered models, the situation would change dramatically if the current upper limit on the tensor-to-scalar ratio decreased to r < 10⁻². Apart from inflationary models driven by plateau-like potentials, the single-field model that would have a chance of surviving this bound is the recently proposed slow-roll inflation with weakly broken galileon symmetry. In contrast to canonical slow-roll inflation, the latter model can support r < 10⁻² even if driven by a convex potential, as well as generate observable values for the amplitude of non-Gaussianity.

  18. Attractor behaviour in multifield inflation

    NASA Astrophysics Data System (ADS)

    Carrilho, Pedro; Mulryne, David; Ronayne, John; Tenkanen, Tommi

    2018-06-01

    We study multifield inflation in scenarios where the fields are coupled non-minimally to gravity via ξ_I (φ^I)^n g^{μν} R_{μν}, where ξ_I are coupling constants, φ^I the fields driving inflation, g_{μν} the space-time metric, R_{μν} the Ricci tensor, and n > 0. We consider the so-called α-attractor models in two formulations of gravity: in the usual metric case where R_{μν} = R_{μν}(g_{μν}), and in the Palatini formulation where R_{μν} is an independent variable. As the main result, we show that, regardless of the underlying theory of gravity, the field-space curvature in the Einstein frame has no influence on the inflationary dynamics in the limit of large ξ_I, and one effectively retains the single-field case. However, the gravity formulation does play an important role: in the metric case the result means that multifield models approach the single-field α-attractor limit, whereas in the Palatini case the attractor behaviour is lost also in the case of multifield inflation. We discuss what this means for distinguishing between different models of inflation.

  19. Bayesian semi-parametric analysis of Poisson change-point regression models: application to policy making in Cali, Colombia.

    PubMed

    Park, Taeyoung; Krafty, Robert T; Sánchez, Alvaro I

    2012-07-27

    A Poisson regression model with an offset assumes a constant baseline rate after accounting for measured covariates, which may lead to biased estimates of coefficients in an inhomogeneous Poisson process. To correctly estimate the effect of time-dependent covariates, we propose a Poisson change-point regression model with an offset that allows a time-varying baseline rate. When the nonconstant pattern of a log baseline rate is modeled with a nonparametric step function, the resulting semi-parametric model involves a model component of varying dimension and thus requires a sophisticated varying-dimensional inference to obtain correct estimates of model parameters of fixed dimension. To fit the proposed varying-dimensional model, we devise a state-of-the-art MCMC-type algorithm based on partial collapse. The proposed model and methods are used to investigate an association between daily homicide rates in Cali, Colombia and policies that restrict the hours during which the legal sale of alcoholic beverages is permitted. While simultaneously identifying the latent changes in the baseline homicide rate which correspond to the incidence of sociopolitical events, we explore the effect of policies governing the sale of alcohol on homicide rates and seek a policy that balances the economic and cultural dependence on alcohol sales against the health of the public.
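
    The core of such a model can be sketched as a Poisson log-likelihood whose log-baseline is a step function and whose exposure enters as an offset. This is an illustrative sketch with hypothetical names, not the authors' partially collapsed MCMC sampler:

```python
import numpy as np

def poisson_step_loglik(y, x, exposure, tau, beta0_segments, beta1):
    """Log-likelihood (up to the log(y!) constant) of a Poisson
    change-point regression with offset:
        log(mu_t) = log(exposure_t) + b0(t) + beta1 * x_t,
    where the baseline b0(t) is a step function with change-point
    indices `tau` and per-segment levels `beta0_segments`."""
    # Assign each observation to its baseline segment.
    seg = np.searchsorted(np.asarray(tau), np.arange(len(y)), side="right")
    log_mu = np.log(exposure) + np.asarray(beta0_segments)[seg] + beta1 * x
    return float(np.sum(y * log_mu - np.exp(log_mu)))

# Toy data: the baseline rate drops after observation index 3.
y = np.array([4, 5, 1, 0, 1])
x = np.zeros(5)
ll_step = poisson_step_loglik(y, x, exposure=np.ones(5),
                              tau=[3], beta0_segments=[1.0, -0.5], beta1=0.0)
ll_const = poisson_step_loglik(y, x, exposure=np.ones(5),
                               tau=[3], beta0_segments=[1.0, 1.0], beta1=0.0)
```

    On this toy series the step baseline fits better than the constant one, which is the bias the abstract warns about when a constant baseline is wrongly assumed.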

  20. Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.

    PubMed

    Hougaard, P; Lee, M L; Whitmore, G A

    1997-12-01

    Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution, which is demonstrated to give a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
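
    The mixing mechanism described above is easy to check numerically: gamma-distributed rates fed into a Poisson sampler yield marginally negative binomial counts whose variance m + m²/k exceeds the mean m. A sketch of the gamma case (the paper's inverse Gaussian mixture works analogously):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each subject's Poisson rate is gamma-distributed (shape k, mean m),
# so the counts are marginally negative binomial with mean m and
# variance m + m**2 / k, i.e. overdispersed relative to Poisson.
k, m, n = 2.0, 5.0, 200_000
rates = rng.gamma(shape=k, scale=m / k, size=n)
counts = rng.poisson(rates)

mean, var = counts.mean(), counts.var()
```

    With k = 2 and m = 5 the theoretical variance is 17.5, well above the mean of 5.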

  1. Low-Mass Inflation Systems for Inflatable Structures

    NASA Technical Reports Server (NTRS)

    Thunnissen, Daniel P.; Webster, Mark S.; Engelbrecht, Carl S.

    1995-01-01

    The use of inflatable space structures has often been proposed for aerospace and planetary applications. Communication, power generation, and very-long-baseline interferometry are just three potential applications of inflatable technology. The success of inflatable structures depends on the development of a low-mass inflation system. This paper describes two design studies performed to develop such a system. The first study takes advantage of existing onboard propulsion gases to reduce the overall system mass. The second study assumes that there is no onboard propulsion system. Both studies employ advanced components developed for the Pluto fast flyby spacecraft to further reduce mass. The studies examined four different types of systems: hydrazine, nitrogen and water, nitrogen, and xenon. This work shows that all of these systems can be built for a small space structure with masses lower than 0.5 kilograms.

  2. Inflation in a closed universe

    NASA Astrophysics Data System (ADS)

    Ratra, Bharat

    2017-11-01

    To derive a power spectrum for energy density inhomogeneities in a closed universe, we study a spatially-closed inflation-modified hot big bang model whose evolutionary history is divided into three epochs: an early slowly-rolling scalar field inflation epoch and the usual radiation and nonrelativistic matter epochs. (For our purposes it is not necessary to consider a final dark energy dominated epoch.) We derive general solutions of the relativistic linear perturbation equations in each epoch. The constants of integration in the inflation epoch solutions are determined from de Sitter invariant quantum-mechanical initial conditions in the Lorentzian section of the inflating closed de Sitter space derived from Hawking's prescription that the quantum state of the universe only include field configurations that are regular on the Euclidean (de Sitter) sphere section. The constants of integration in the radiation and matter epoch solutions are determined from joining conditions derived by requiring that the linear perturbation equations remain nonsingular at the transitions between epochs. The matter epoch power spectrum of gauge-invariant energy density inhomogeneities is not a power law, and depends on spatial wave number in the way expected for a generalization to the closed model of the standard flat-space scale-invariant power spectrum. The power spectrum we derive appears to differ from a number of other closed inflation model power spectra derived assuming different (presumably non de Sitter invariant) initial conditions.

  3. New type of hill-top inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barvinsky, A.O.; Department of Physics, Tomsk State University, Lenin Ave. 36, Tomsk 634050; Department of Physics and Astronomy, Pacific Institute for Theoretical Physics, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1

    2016-01-20

    We suggest a new type of hill-top inflation originating from the initial conditions in the form of the microcanonical density matrix for the cosmological model with a large number of quantum fields conformally coupled to gravity. Initial conditions for inflation are set up by cosmological instantons describing underbarrier oscillations in the vicinity of the inflaton potential maximum. These periodic oscillations of the inflaton field and cosmological scale factor are obtained within the approximation of two coupled oscillators subject to the slow roll regime in the Euclidean time. This regime is characterized by rapid oscillations of the scale factor on the background of a slowly varying inflaton, which guarantees smallness of slow roll parameters ϵ and η of the following inflation stage. A hill-like shape of the inflaton potential is shown to be generated by logarithmic loop corrections to the tree-level asymptotically shift-invariant potential in the non-minimal Higgs inflation model and R²-gravity. The solution to the problem of hierarchy between the Planckian scale and the inflation scale is discussed within the concept of conformal higher spin fields, which also suggests the mechanism bringing the model below the gravitational cutoff and, thus, protecting it from large graviton loop corrections.

  4. New type of hill-top inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barvinsky, A.O.; Nesterov, D.V.; Kamenshchik, A.Yu., E-mail: barvin@td.lpi.ru, E-mail: Alexander.Kamenshchik@bo.infn.it, E-mail: nesterov@td.lpi.ru

    2016-01-01

    We suggest a new type of hill-top inflation originating from the initial conditions in the form of the microcanonical density matrix for the cosmological model with a large number of quantum fields conformally coupled to gravity. Initial conditions for inflation are set up by cosmological instantons describing underbarrier oscillations in the vicinity of the inflaton potential maximum. These periodic oscillations of the inflaton field and cosmological scale factor are obtained within the approximation of two coupled oscillators subject to the slow roll regime in the Euclidean time. This regime is characterized by rapid oscillations of the scale factor on the background of a slowly varying inflaton, which guarantees smallness of slow roll parameters ε and η of the following inflation stage. A hill-like shape of the inflaton potential is shown to be generated by logarithmic loop corrections to the tree-level asymptotically shift-invariant potential in the non-minimal Higgs inflation model and R²-gravity. The solution to the problem of hierarchy between the Planckian scale and the inflation scale is discussed within the concept of conformal higher spin fields, which also suggests the mechanism bringing the model below the gravitational cutoff and, thus, protecting it from large graviton loop corrections.

  5. Negative running can prevent eternal inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kinney, William H.; Freese, Katherine, E-mail: whkinney@buffalo.edu, E-mail: ktfreese@umich.edu

    Current data from the Planck satellite and the BICEP2 telescope favor, at around the 2σ level, negative running of the spectral index of curvature perturbations from inflation. We show that for negative running α < 0, the curvature perturbation amplitude has a maximum on scales larger than our current horizon size. A condition for the absence of eternal inflation is that the curvature perturbation amplitude always remain below unity on superhorizon scales. For current bounds on n_s from Planck, this corresponds to an upper bound of the running α < −9 × 10⁻⁵, so that even tiny running of the scalar spectral index is sufficient to prevent eternal inflation from occurring, as long as the running remains negative on scales outside the horizon. In single-field inflation models, negative running is associated with a finite duration of inflation: we show that eternal inflation may not occur even in cases where inflation lasts as long as 10⁴ e-folds.
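
    The claim that the amplitude peaks on superhorizon scales can be seen from the standard running parametrization of the curvature power spectrum (a sketch with pivot scale k_*, consistent with the abstract but not a substitute for the paper's derivation):

```latex
\ln \Delta_{\mathcal{R}}^{2}(k) = \ln \Delta_{\mathcal{R}}^{2}(k_*)
  + (n_s - 1)\ln\frac{k}{k_*}
  + \frac{\alpha}{2}\ln^{2}\frac{k}{k_*},
\qquad
\frac{d\ln \Delta_{\mathcal{R}}^{2}}{d\ln k} = 0
\;\Rightarrow\;
\ln\frac{k_{\max}}{k_*} = \frac{1 - n_s}{\alpha}.
```

    For n_s < 1 and α < 0 the extremum is a maximum at k_max < k_*, i.e. on scales larger than the pivot; imposing Δ²(k_max) < 1 is then satisfied once the running is sufficiently negative, giving bounds of the kind quoted.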

  6. Species abundance in a forest community in South China: A case of poisson lognormal distribution

    USGS Publications Warehouse

    Yin, Z.-Y.; Ren, H.; Zhang, Q.-M.; Peng, S.-L.; Guo, Q.-F.; Zhou, G.-Y.

    2005-01-01

    Case studies on the Poisson lognormal distribution of species abundance have been rare, especially in forest communities. We propose a numerical method to fit the Poisson lognormal to the species abundance data at an evergreen mixed forest in the Dinghushan Biosphere Reserve, South China. Plants in the tree, shrub and herb layers in 25 quadrats of 20 m × 20 m, 5 m × 5 m, and 1 m × 1 m were surveyed. Results indicated that: (i) for each layer, the observed species abundance with a similarly small median, mode, and a variance larger than the mean was reverse J-shaped and followed well the zero-truncated Poisson lognormal; (ii) the coefficient of variation, skewness and kurtosis of abundance, and the two Poisson lognormal parameters (μ and σ) for the shrub layer were closer to those for the herb layer than to those for the tree layer; and (iii) from the tree to the shrub to the herb layer, σ and the coefficient of variation decreased, whereas diversity increased. We suggest that: (i) the species abundance distributions in the three layers reflect the overall community characteristics; (ii) the Poisson lognormal can describe the species abundance distribution in diverse communities with a few abundant species but many rare species; and (iii) 1/σ could serve as an alternative measure of diversity.
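
    A numerical fit rests on evaluating the Poisson-lognormal pmf, which has no closed form. One standard route is Gauss-Hermite quadrature over the lognormal rate; this is an illustrative sketch, not necessarily the authors' method:

```python
import numpy as np
from scipy.special import gammaln

def poisson_lognormal_pmf(k, mu, sigma, nodes=60):
    """Approximate P(K = k) for a Poisson-lognormal mixture, where the
    Poisson rate is lognormal(mu, sigma), via Gauss-Hermite quadrature."""
    t, w = np.polynomial.hermite.hermgauss(nodes)
    lam = np.exp(mu + np.sqrt(2.0) * sigma * t)       # quadrature rates
    k = np.atleast_1d(k).astype(float)[:, None]
    # Poisson pmf at each quadrature rate, in log space for stability.
    log_pois = k * np.log(lam) - lam - gammaln(k + 1.0)
    return (w * np.exp(log_pois)).sum(axis=1) / np.sqrt(np.pi)

def zero_truncated_pmf(k, mu, sigma):
    """Zero-truncated version used for observed species abundances."""
    p0 = poisson_lognormal_pmf(0, mu, sigma)[0]
    return poisson_lognormal_pmf(k, mu, sigma) / (1.0 - p0)
```

    Maximizing the log of the zero-truncated pmf over (μ, σ) for the observed abundance vector then yields the fitted parameters.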

  7. Electrostatic forces in the Poisson-Boltzmann systems

    NASA Astrophysics Data System (ADS)

    Xiao, Li; Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2013-09-01

    Continuum modeling of electrostatic interactions based upon numerical solutions of the Poisson-Boltzmann equation has been widely used in structural and functional analyses of biomolecules. A limitation of the numerical strategies is that it is conceptually difficult to incorporate these types of models into molecular mechanics simulations, mainly because of the issue in assigning atomic forces. In this theoretical study, we first derived the Maxwell stress tensor for molecular systems obeying the full nonlinear Poisson-Boltzmann equation. We further derived formulations of analytical electrostatic forces given the Maxwell stress tensor and discussed the relations of the formulations with those published in the literature. We showed that the formulations derived from the Maxwell stress tensor require a weaker condition for its validity, applicable to nonlinear Poisson-Boltzmann systems with a finite number of singularities such as atomic point charges and the existence of discontinuous dielectric as in the widely used classical piece-wise constant dielectric models.
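
    For reference, in the linear-dielectric, Boltzmann-ion case the stress tensor the abstract refers to takes the textbook form below (a sketch of the standard result; the paper derives and generalizes it for the full nonlinear Poisson-Boltzmann setting with singular charges and discontinuous dielectrics):

```latex
\mathbf{T} = \varepsilon\,\mathbf{E}\otimes\mathbf{E}
  - \Big[\tfrac{1}{2}\,\varepsilon\,E^{2}
  + k_{B}T \sum_{i} c_{i}^{\infty}\big(e^{-z_{i}e\phi/k_{B}T} - 1\big)\Big]\mathbf{I},
\qquad
\mathbf{F} = \oint_{S} \mathbf{T}\cdot\hat{\mathbf{n}}\,dS,
```

    where E = −∇φ, the bracketed scalar combines electrostatic and ionic osmotic pressure, and the force on a solute follows by integrating T over any surface S enclosing it.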

  8. Repairable-conditionally repairable damage model based on dual Poisson processes.

    PubMed

    Lind, B K; Persson, L M; Edgren, M R; Hedlöf, I; Brahme, A

    2003-09-01

    The advent of intensity-modulated radiation therapy makes it increasingly important to model the response accurately when large volumes of normal tissues are irradiated by controlled graded dose distributions aimed at maximizing tumor cure and minimizing normal tissue toxicity. The cell survival model proposed here is very useful and flexible for accurate description of the response of healthy tissues as well as tumors in classical and truly radiobiologically optimized radiation therapy. The repairable-conditionally repairable (RCR) model distinguishes between two different types of damage, namely the potentially repairable, which may also be lethal, i.e. if unrepaired or misrepaired, and the conditionally repairable, which may be repaired or may lead to apoptosis if it has not been repaired correctly. When potentially repairable damage is being repaired, for example by nonhomologous end joining, conditionally repairable damage may require in addition a high-fidelity correction by homologous repair. The induction of both types of damage is assumed to be described by Poisson statistics. The resultant cell survival expression has the unique ability to fit most experimental data well at low doses (the initial hypersensitive range), intermediate doses (on the shoulder of the survival curve), and high doses (on the quasi-exponential region of the survival curve). The complete Poisson expression can be approximated well by a simple bi-exponential cell survival expression, S(D) = e^(−aD) + bD·e^(−cD), where the first term describes the survival of undamaged cells and the last term represents survival after complete repair of sublethal damage. The bi-exponential expression makes it easy to derive D_0, D_q, n and α, β values to facilitate comparison with classical cell survival models.
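
    The bi-exponential approximation quoted in the abstract is directly computable. A minimal sketch with illustrative (not fitted) parameters:

```python
import numpy as np

def rcr_survival(dose, a, b, c):
    """Bi-exponential approximation of the repairable-conditionally
    repairable (RCR) model: S(D) = exp(-a*D) + b*D*exp(-c*D).
    First term: fraction of cells left undamaged; second term:
    cells fully repaired after sublethal damage."""
    dose = np.asarray(dose, dtype=float)
    return np.exp(-a * dose) + b * dose * np.exp(-c * dose)

# Example dose points in Gy with illustrative tissue parameters.
doses = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
surv = rcr_survival(doses, a=1.0, b=0.8, c=0.6)
```

    Fitting a, b and c to clonogenic survival data then reproduces the hypersensitive, shoulder and quasi-exponential regions with one expression.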

  9. Assessment of Poisson, probit and linear models for genetic analysis of presence and number of black spots in Corriedale sheep.

    PubMed

    Peñagaricano, F; Urioste, J I; Naya, H; de los Campos, G; Gianola, D

    2011-04-01

    Black skin spots are associated with pigmented fibres in wool, an important quality fault. Our objective was to assess alternative models for genetic analysis of presence (BINBS) and number (NUMBS) of black spots in Corriedale sheep. During 2002-08, 5624 records from 2839 animals in two flocks, aged 1 through 6 years, were taken at shearing. Four models were considered: linear and probit for BINBS and linear and Poisson for NUMBS. All models included flock-year and age as fixed effects and animal and permanent environmental as random effects. Models were fitted to the whole data set and were also compared based on their predictive ability in cross-validation. Estimates of heritability ranged from 0.154 to 0.230 for BINBS and 0.269 to 0.474 for NUMBS. For BINBS, the probit model fitted slightly better to the data than the linear model. Predictions of random effects from these models were highly correlated, and both models exhibited similar predictive ability. For NUMBS, the Poisson model, with a residual term to account for overdispersion, performed better than the linear model in goodness of fit and predictive ability. Predictions of random effects from the Poisson model were more strongly correlated with those from BINBS models than those from the linear model. Overall, the use of probit or linear models for BINBS and of a Poisson model with a residual for NUMBS seems a reasonable choice for genetic selection purposes in Corriedale sheep. © 2010 Blackwell Verlag GmbH.

  10. Issues on generating primordial anisotropies at the end of inflation

    NASA Astrophysics Data System (ADS)

    Emami, Razieh; Firouzjahi, Hassan

    2012-01-01

    We revisit the idea of generating primordial anisotropies at the end of inflation in models of inflation with gauge fields. To be specific, we consider the charged hybrid inflation model where the waterfall field is charged under a U(1) gauge field, so the surface of end of inflation is controlled both by the inflaton and the gauge field. Using the δN formalism properly, we find that the anisotropies generated at the end of inflation from the gauge field fluctuations are exponentially suppressed on cosmological scales. This is because the gauge field evolves exponentially during inflation, while in order to generate appreciable anisotropies at the end of inflation the spectator gauge field has to be frozen. We argue that this is a generic feature, that is, one cannot generate observable anisotropies at the end of inflation within an FRW background.

  11. Inflatable Structures Technology Handbook. Chapter 21; Inflatable Habitats

    NASA Technical Reports Server (NTRS)

    Kennedy, Kriss J.; Raboin, Jasen; Spexarth, Gary; Valle, Gerard

    2000-01-01

    The technologies required to design, fabricate, and utilize an inflatable module for space applications have been demonstrated and proven by the TransHab team during the development phase of the program. Through testing and hands-on development, several issues about inflatable space structures have been addressed, such as ease of manufacturing, structural integrity, micrometeorite protection, folding, and vacuum deployment. The TransHab inflatable technology development program has proven that not only are inflatable structures a viable option, but they also offer significant advantages over conventional metallic structures.

  12. RE-INFLATED WARM JUPITERS AROUND RED GIANTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez, Eric D.; Fortney, Jonathan J.

    2016-02-10

    Since the discovery of the first transiting hot Jupiters, models have sought to explain the anomalously large radii of highly irradiated gas giants. We now know that the size of hot Jupiter radius anomalies scales strongly with a planet's level of irradiation and numerous models like tidal heating, ohmic dissipation, and thermal tides have since been developed to help explain these inflated radii. In general, however, these models can be grouped into two broad categories: models that directly inflate planetary radii by depositing a fraction of the incident irradiation into the interior and models that simply slow a planet's radiative cooling, allowing it to retain more heat from formation and thereby delay contraction. Here we present a new test to distinguish between these two classes of models. Gas giants orbiting at moderate orbital periods around post-main-sequence stars will experience enormous increases to their irradiation as their host stars move up the sub-giant and red-giant branches. If hot Jupiter inflation works by depositing irradiation into the planet's deep interiors then planetary radii should increase in response to the increased irradiation. This means that otherwise non-inflated gas giants at moderate orbital periods of >10 days can re-inflate as their host stars evolve. Here we explore the circumstances that can lead to the creation of these “re-inflated” gas giants and examine how the existence or absence of such planets can be used to place unique constraints on the physics of the hot Jupiter inflation mechanism. Finally, we explore the prospects for detecting this potentially important undiscovered population of planets.

  13. Cosmic Inflation

    ScienceCinema

    Lincoln, Don

    2018-01-16

    In 1964, scientists discovered a faint radio hiss coming from the heavens and realized that the hiss wasn’t just noise. It was a message from eons ago; specifically the remnants of the primordial fireball, cooled to about 3 degrees above absolute zero. Subsequent research revealed that the radio hiss was the same in every direction. The temperature of the early universe was uniform to better than a part in a hundred thousand. And this was weird. According to the prevailing theory, the two sides of the universe have never been in contact. So how could two places that had never been in contact be so similar? One possible explanation was proposed in 1979. Called inflation, the theory required that early in the history of the universe, the universe expanded faster than the speed of light. Confused? Watch this video as Fermilab’s Dr. Don Lincoln makes sense of this mind-bending idea.

  14. Cosmic Inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lincoln, Don

    In 1964, scientists discovered a faint radio hiss coming from the heavens and realized that the hiss wasn’t just noise. It was a message from eons ago; specifically, the remnants of the primordial fireball, cooled to about 3 degrees above absolute zero. Subsequent research revealed that the radio hiss was the same in every direction. The temperature of the early universe was uniform to better than a part in a hundred thousand. And this was weird. According to the prevailing theory, the two sides of the universe have never been in contact. So how could two places that had never been in contact be so similar? One possible explanation was proposed in 1979. Called inflation, the theory required that early in the history of the universe, the universe expanded faster than the speed of light. Confused? Watch this video as Fermilab’s Dr. Don Lincoln makes sense of this mind-bending idea.

  15. Antibiotic Resistances in Livestock: A Comparative Approach to Identify an Appropriate Regression Model for Count Data

    PubMed Central

    Hüls, Anke; Frömke, Cornelia; Ickstadt, Katja; Hille, Katja; Hering, Johanna; von Münchhausen, Christiane; Hartmann, Maria; Kreienbrock, Lothar

    2017-01-01

    Antimicrobial resistance in livestock is a matter of general concern. To develop hygiene measures and methods for resistance prevention and control, epidemiological studies on a population level are needed to detect factors associated with antimicrobial resistance in livestock holdings. In general, regression models are used to describe these relationships between environmental factors and resistance outcome. Besides the study design, the correlation structures of the different outcomes of antibiotic resistance and structural zero measurements on the resistance outcome as well as on the exposure side are challenges for the epidemiological model building process. The use of appropriate regression models that acknowledge these complexities is essential to assure valid epidemiological interpretations. The aims of this paper are (i) to explain the model building process comparing several competing models for count data (negative binomial model, quasi-Poisson model, zero-inflated model, and hurdle model) and (ii) to compare these models using data from a cross-sectional study on antibiotic resistance in animal husbandry. These goals are essential to evaluate which model is most suitable to identify potential prevention measures. The dataset used as an example in our analyses was generated initially to study the prevalence and associated factors for the appearance of cefotaxime-resistant Escherichia coli in 48 German fattening pig farms. For each farm, the outcome was the count of samples with resistant bacteria. There was almost no overdispersion and only moderate evidence of excess zeros in the data. Our analyses show that it is essential to evaluate regression models in studies analyzing the relationship between environmental factors and antibiotic resistances in livestock. After model comparison based on evaluation of model predictions, Akaike information criterion, and Pearson residuals, here the hurdle model was judged to be the most appropriate model.
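
    The model-comparison workflow this record describes (fit competing count models, then rank them by AIC) can be sketched with simulated data. This is an illustrative aside, not the study's code; the parameter values are invented, and a coarse grid search stands in for a proper optimizer:

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)

# Simulate counts with structural zeros: with probability pi the count
# is 0, otherwise Poisson(lam) -- a zero-inflated Poisson (ZIP) model.
pi_true, lam_true, n = 0.3, 2.5, 2000
structural_zero = rng.random(n) < pi_true
y = np.where(structural_zero, 0, rng.poisson(lam_true, n))

def poisson_loglik(y, lam):
    return np.sum(y * np.log(lam) - lam - gammaln(y + 1))

def zip_loglik(y, pi, lam):
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))   # P(Y = 0) under ZIP
    ll_pos = np.log(1 - pi) + y * np.log(lam) - lam - gammaln(y + 1)
    return np.sum(np.where(y == 0, ll_zero, ll_pos))

# Poisson MLE is the sample mean; the ZIP fit is a coarse grid search.
lam_pois = y.mean()
pis = np.linspace(0.01, 0.8, 80)
lams = np.linspace(0.5, 5.0, 90)
ll = np.array([[zip_loglik(y, p, l) for l in lams] for p in pis])
i, j = np.unravel_index(ll.argmax(), ll.shape)

aic_pois = 2 * 1 - 2 * poisson_loglik(y, lam_pois)
aic_zip = 2 * 2 - 2 * ll[i, j]
# With genuine excess zeros, the ZIP AIC should be markedly lower.
print(aic_pois > aic_zip)
```

    With little overdispersion and only moderate excess zeros, as in the record's pig-farm data, the AIC gap shrinks and a simpler model can win the comparison.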

  16. Unitarity and predictiveness in new Higgs inflation

    NASA Astrophysics Data System (ADS)

    Fumagalli, Jacopo; Mooij, Sander; Postma, Marieke

    2018-03-01

    In new Higgs inflation the Higgs kinetic terms are non-minimally coupled to the Einstein tensor, allowing the Higgs field to play the role of the inflaton. The new interaction is non-renormalizable, and the model only describes physics below some cutoff scale. Even if the unknown UV physics does not affect the tree level inflaton potential significantly, it may still enter at loop level and modify the running of the Standard Model (SM) parameters. This is analogous to what happens in the original model for Higgs inflation. A key difference, though, is that in new Higgs inflation the inflationary predictions are sensitive to this running. Thus the boundary conditions at the EW scale as well as the unknown UV completion may leave a signature on the inflationary parameters. However, this dependence can be evaded if the kinetic terms of the SM fermions and gauge fields are non-minimally coupled to gravity as well. Our approach to determine the model's UV dependence and the connection between low and high scale physics can be used in any particle physics model of inflation.

  17. Goldstone inflation

    NASA Astrophysics Data System (ADS)

    Croon, Djuna; Sanz, Verónica; Setford, Jack

    2015-10-01

    Identifying the inflaton with a pseudo-Goldstone boson explains the flatness of its potential. Successful Goldstone Inflation should also be robust against UV corrections, such as from quantum gravity: in the language of the effective field theory this implies that all scales are sub-Planckian. In this paper we present scenarios which realise both requirements by examining the structure of Goldstone potentials arising from Coleman-Weinberg contributions. We focus on single-field models, for which we notice that both bosonic and fermionic contributions are required and that spinorial fermion representations can generate the right potential shape. We then evaluate the constraints on non-Gaussianity from higher-derivative interactions, finding that axiomatic constraints on Goldstone boson scattering prevail over the current CMB measurements. The fit to CMB data can be connected to the UV completions for Goldstone Inflation, finding relations in the spectrum of new resonances. Finally, we show how hybrid inflation can be realised in the same context, where both the inflaton and the waterfall fields share a common origin as Goldstones.

  18. Supernatural A-Term Inflation

    NASA Astrophysics Data System (ADS)

    Lin, Chia-Min; Cheung, Kingman

    Following Ref. 10, we explore the parameter space of the case where the supersymmetry (SUSY) breaking scale is lower, for example in a gauge-mediated SUSY breaking model. During inflation, the potential takes the form of a constant V_0 plus the MSSM (or A-term) inflation potential. We show that the model works for a wide range of V_0 with the soft SUSY breaking mass m ∼ O(1) TeV. The implication for MSSM (or A-term) inflation is that the flat directions lifted by the non-renormalizable terms in the superpotential W = λ_p φ^p / (p M_P^{p-3}), with p = 4 and p = 5, are also suitable inflaton candidates for λ_p = O(1), provided there is an additional false vacuum term V_0 of appropriate magnitude. The flat directions corresponding to p = 6 also work for 0 ≲ V_0/M_P^4 ≲ 10^{-40}.

  19. Deformation mechanisms in negative Poisson's ratio materials - Structural aspects

    NASA Technical Reports Server (NTRS)

    Lakes, R.

    1991-01-01

    Poisson's ratio in materials is governed by the following aspects of the microstructure: the presence of rotational degrees of freedom, non-affine deformation kinematics, or anisotropic structure. Several structural models are examined. The non-affine kinematics are seen to be essential for the production of negative Poisson's ratios for isotropic materials containing central force linkages of positive stiffness. Non-central forces combined with pre-load can also give rise to a negative Poisson's ratio in isotropic materials. A chiral microstructure with non-central force interaction or non-affine deformation can also exhibit a negative Poisson's ratio. Toughness and damage resistance in these materials may be affected by the Poisson's ratio itself, as well as by generalized continuum aspects associated with the microstructure.

  20. Modeling motor vehicle crashes using Poisson-gamma models: examining the effects of low sample mean values and small sample size on the estimation of the fixed dispersion parameter.

    PubMed

    Lord, Dominique

    2006-07-01

    There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or negative binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attribute of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum
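
    The low-mean instability this record studies can be reproduced in a few lines. The sketch below is illustrative only (invented parameter values) and uses the simplest of the estimators named above, the method of moments, which under the NB2 variance function Var(Y) = μ + αμ² gives α̂ = (s² − ȳ)/ȳ²:

```python
import numpy as np

rng = np.random.default_rng(1)

def mom_dispersion(y):
    # Method-of-moments estimate under Var(Y) = mu + alpha * mu^2:
    # alpha_hat = (s^2 - ybar) / ybar^2.
    ybar, s2 = y.mean(), y.var(ddof=1)
    return (s2 - ybar) / ybar**2

mu, alpha, n_small = 0.5, 1.0, 20   # low sample mean, small sample
est = []
for _ in range(2000):
    # Poisson-gamma mixture: gamma-distributed rates, then Poisson counts.
    lam = rng.gamma(shape=1.0 / alpha, scale=alpha * mu, size=n_small)
    y = rng.poisson(lam)
    if y.mean() > 0:                 # alpha_hat is undefined if all zeros
        est.append(mom_dispersion(y))
est = np.array(est)

# Even though the true alpha is 1.0, many replications return a negative
# estimate (apparent under-dispersion) -- the unreliability at issue.
print(round((est < 0).mean(), 3))
```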

  1. Analysis of multinomial models with unknown index using data augmentation

    USGS Publications Warehouse

    Royle, J. Andrew; Dorazio, R.M.; Link, W.A.

    2007-01-01

    Multinomial models with unknown index ('sample size') arise in many practical settings. In practice, Bayesian analysis of such models has proved difficult because the dimension of the parameter space is not fixed, being in some cases a function of the unknown index. We describe a data augmentation approach to the analysis of this class of models that provides for a generic and efficient Bayesian implementation. Under this approach, the data are augmented with all-zero detection histories. The resulting augmented dataset is modeled as a zero-inflated version of the complete-data model where an estimable zero-inflation parameter takes the place of the unknown multinomial index. Interestingly, data augmentation can be justified as being equivalent to imposing a discrete uniform prior on the multinomial index. We provide three examples involving estimating the size of an animal population, estimating the number of diabetes cases in a population using the Rasch model, and the motivating example of estimating the number of species in an animal community with latent probabilities of species occurrence and detection.

  2. Inflation and Growth: Positive or Negative Relationship?

    NASA Astrophysics Data System (ADS)

    Berument, Hakan; Inamlik, Ali; Olgun, Hasan

    This study has been motivated by two developments: first, the vast literature on the relationship between inflation and growth, which is abundantly endowed with diverse theoretical explanations and contradictory evidence; and second, the unique experience of the Turkish economy with inflation and growth. A preliminary examination of the Turkish data pointed to a negative relation between inflation and growth. Moreover, there is unanimous agreement among students of the Turkish economy that many factors have contributed to inflation in this country. In view of these facts, this paper employs a VAR model which enables us to identify the sources of the shocks and control for external factors. In addition, VAR models have high predictive power and enable the researcher to observe the impulse response functions. The study employs Generalised Impulse Response analysis. In the empirical experiments, oil prices, money supply, government spending and taxes have been taken as the most likely determinants of inflation. The study shows that there is a negative relationship between inflation and output growth in Turkey and that the underlying explanatory factor is the real exchange rate. This result is robust.

  3. A Boussinesq-scaled, pressure-Poisson water wave model

    NASA Astrophysics Data System (ADS)

    Donahue, Aaron S.; Zhang, Yao; Kennedy, Andrew B.; Westerink, Joannes J.; Panda, Nishant; Dawson, Clint

    2015-02-01

    Through the use of Boussinesq scaling we develop and test a model for resolving non-hydrostatic pressure profiles in nonlinear wave systems over varying bathymetry. A Green-Naghdi type polynomial expansion is used to resolve the pressure profile along the vertical axis; this is then inserted into the pressure-Poisson equation, retaining terms up to a prescribed order, and solved using a weighted residual approach. The model shows rapid convergence with increasing order of polynomial expansion, which can be greatly improved through the application of asymptotic rearrangement. Models of Boussinesq scaling of the fully nonlinear O(μ^2) and weakly nonlinear O(μ^N) type are presented, and the analytical and numerical properties of the O(μ^2) and O(μ^4) models are discussed. Optimal basis functions in the Green-Naghdi expansion are determined through manipulation of the free parameters which arise from the Boussinesq scaling. The optimal O(μ^2) model has dispersion accuracy equivalent to a Padé [2,2] approximation with one extra free parameter. The optimal O(μ^4) model obtains dispersion accuracy equivalent to a Padé [4,4] approximation with two free parameters, which can be used to optimize shoaling or nonlinear properties. The O(μ^4) model shows excellent agreement with experimental data.
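
    The "Padé [2,2]" dispersion accuracy mentioned in this record can be checked numerically against the exact linear dispersion relation c² = (g/k) tanh(kh). The sketch below is an illustrative aside, not code from the paper; it uses the standard [2,2] Padé approximant of tanh(kh)/(kh):

```python
import numpy as np

# Exact nondimensional phase speed squared: c^2 / (g h) = tanh(kh) / (kh).
# Its [2,2] Pade approximant in kh, matching the Taylor expansion
# 1 - (kh)^2/3 + 2(kh)^4/15 - ..., is (1 + (kh)^2/15) / (1 + 2(kh)^2/5).
kh = np.linspace(0.01, np.pi, 200)
exact = np.tanh(kh) / kh
pade22 = (1 + kh**2 / 15) / (1 + 2 * kh**2 / 5)

# The relative error stays within a few percent out to kh ~ pi, which is
# why Pade-type rearrangements improve Boussinesq dispersion so markedly.
err = np.max(np.abs(pade22 - exact) / exact)
print(round(err, 4))
```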

  4. Clockwork inflation

    NASA Astrophysics Data System (ADS)

    Kehagias, Alex; Riotto, Antonio

    2017-04-01

    We investigate the recently proposed clockwork mechanism delivering light degrees of freedom with suppressed interactions and show, with various examples, that it can be efficiently implemented in inflationary scenarios to generate flat inflaton potentials and small density perturbations without fine-tunings. We also study the clockwork graviton in de Sitter and, interestingly, we find that the corresponding clockwork charge is site-dependent. As a consequence, the amount of tensor modes is generically suppressed with respect to the standard cases where the clockwork set-up is not adopted. This point can be made a virtue in resurrecting models of inflation which were supposed to be ruled out because of the excessive amount of tensor modes from inflation.

  5. 2-dimensional models of rapidly rotating stars I. Uniformly rotating zero age main sequence stars

    NASA Astrophysics Data System (ADS)

    Roxburgh, I. W.

    2004-12-01

    We present results for 2-dimensional models of rapidly rotating main sequence stars for the case where the angular velocity Ω is constant throughout the star. The algorithm used solves for the structure on equipotential surfaces and iteratively updates the total potential, solving Poisson's equation by Legendre polynomial decomposition; the algorithm can readily be extended to include rotation constant on cylinders. We show that this only requires a small number of Legendre polynomials to accurately represent the solution. We present results for models of homogeneous zero age main sequence stars of mass 1, 2, 5, 10 M⊙ with a range of angular velocities up to break-up. The models have a composition X=0.70, Z=0.02 and were computed using the OPAL equation of state and OPAL/Alexander opacities, and a mixing length model of convection modified to include the effect of rotation. The models all show a decrease in luminosity L and polar radius R_p with increasing angular velocity, the magnitude of the decrease varying with mass but of the order of a few percent for rapid rotation, and an increase in equatorial radius R_e. Due to the contribution of the gravitational multipole moments, the parameter Ω^2 R_e^3/GM can exceed unity in very rapidly rotating stars and R_e/R_p can exceed 1.5.

  6. New Old Inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dvali, Gia

    2003-10-03

    We propose a new class of inflationary solutions to the standard cosmological problems (horizon, flatness, monopole,...), based on a modification of old inflation. These models do not require a potential which satisfies the normal inflationary slow-roll conditions. Our universe arises from a single tunneling event as the inflaton leaves the false vacuum. Subsequent dynamics (arising from either the oscillations of the inflaton field or thermal effects) keep a second field trapped in a false minimum, resulting in an evanescent period of inflation (with roughly 50 e-foldings) inside the bubble. This easily allows the bubble to grow sufficiently large to contain our present horizon volume. Reheating is accomplished when the inflaton driving the last stage of inflation rolls down to the true vacuum, and adiabatic density perturbations arise from moduli-dependent Yukawa couplings of the inflaton to matter fields. Our scenario has several robust predictions, including virtual absence of gravity waves, a possible absence of tilt in scalar perturbations, and a higher degree of non-Gaussianity than other models. It also naturally incorporates a solution to the cosmological moduli problem.

  7. Fisher zeros and conformality in lattice models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meurice, Yannick; Bazavov, Alexei; Berg, Bernd A.

    2012-10-01

    Fisher zeros are the zeros of the partition function in the complex beta=2N_c/g^2 plane. When they pinch the real axis, finite size scaling allows one to distinguish between first and second order transitions and to estimate exponents. On the other hand, a gap signals confinement, and the method can be used to explore the boundary of the conformal window. We present recent numerical results for 2D O(N) sigma models, 4D U(1) and SU(2) pure gauge and SU(3) gauge theory with N_f=4 and 12 flavors. We discuss attempts to understand some of these results using analytical methods. We discuss the 2-lattice matching and qualitative aspects of the renormalization group (RG) flows in the Migdal-Kadanoff approximation, in particular how RG flows starting at large beta seem to move around regions where bulk transitions occur. We consider the effects of the boundary conditions on the nonperturbative part of the average energy and on the Fisher zeros for the 1D O(2) model.

  8. Deflation of the cosmological constant associated with inflation and dark energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geng, Chao-Qiang; Lee, Chung-Chi, E-mail: geng@phys.nthu.edu.tw, E-mail: chungchi@mx.nthu.edu.tw

    2016-06-01

    In order to solve the fine-tuning problem of the cosmological constant, we propose a simple model with the vacuum energy non-minimally coupled to the inflaton field. In this model, the vacuum energy decays to the inflaton during the pre-inflation and inflation eras, so that the cosmological constant effectively deflates from the Planck mass scale to a much smaller one after inflation and plays the role of dark energy at late times in the universe. We show that our deflationary scenario is applicable to arbitrary slow-roll inflation models. We also take two specific inflation potentials to illustrate our results.

  9. Cytomegalovirus Reinfections Stimulate CD8 T-Memory Inflation.

    PubMed

    Trgovcich, Joanne; Kincaid, Michelle; Thomas, Alicia; Griessl, Marion; Zimmerman, Peter; Dwivedi, Varun; Bergdall, Valerie; Klenerman, Paul; Cook, Charles H

    2016-01-01

    Cytomegalovirus (CMV) has been shown to induce large populations of CD8 T-effector memory cells that, unlike central memory cells, persist in large quantities following infection, a phenomenon commonly termed "memory inflation". Although murine models to date have shown very large and persistent CMV-specific T-cell expansions following infection, there is considerable variability in CMV-specific T-memory responses in humans. Historically, such memory inflation in humans has been assumed to be a consequence of reactivation events during the life of the host. Because basic information about CMV infection/re-infection and reactivation in immune-competent humans is not available, we used a murine model to test how primary infection, reinfection, and reactivation stimuli influence memory inflation. We show that low-titer infections induce "partial" memory inflation of both mCMV-specific CD8 T-cells and antibody. We show further that reinfection with different strains can boost partial memory inflation. Finally, we show preliminary results suggesting that a single strong reactivation stimulus does not stimulate memory inflation. Altogether, our results suggest that while high-titer primary infections can induce memory inflation, reinfections during the life of a host may be more important than previously appreciated.

  10. Intermediate inflation from a non-canonical scalar field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rezazadeh, K.; Karami, K.; Karimi, P., E-mail: rezazadeh86@gmail.com, E-mail: KKarami@uok.ac.ir, E-mail: parvin.karimi67@yahoo.com

    2015-09-01

    We study intermediate inflation in a non-canonical scalar field framework with a power-like Lagrangian. We show that, in contrast with the standard canonical intermediate inflation, our non-canonical model is compatible with the observational results of Planck 2015. Also, we estimate the equilateral non-Gaussianity parameter, which is in good agreement with the prediction of Planck 2015. Then, we obtain an approximation for the energy scale at the initial time of inflation and show that it can be of the order of the Planck energy scale, i.e. M_P ∼ 10^18 GeV. We will see that after a short period of time, inflation enters the slow-roll regime, whose energy scale is of order M_P/100 ∼ 10^16 GeV, and the horizon exit takes place at this energy scale. We also examine an idea in our non-canonical model to overcome the central drawback of intermediate inflation, namely that inflation never ends. We solve this problem without significantly disturbing the nature of the intermediate inflation until the time of horizon exit.

  11. Evading the Lyth bound in hybrid natural inflation

    NASA Astrophysics Data System (ADS)

    Hebecker, A.; Kraus, S. C.; Westphal, A.

    2013-12-01

    Generically, the gravitational-wave or tensor-mode contribution to the primordial curvature spectrum of inflation is tiny if the field range of the inflaton is much smaller than the Planck scale. We show that this pessimistic conclusion is naturally avoided in a rather broad class of small-field models. More specifically, we consider models where an axionlike shift symmetry keeps the inflaton potential flat (up to nonperturbative cosine-shaped modulations), but inflation nevertheless ends in a waterfall regime, as is typical for hybrid inflation. In such hybrid natural inflation scenarios (examples are provided by Wilson line inflation and fluxbrane inflation), the slow-roll parameter ɛ can be sizable during an early period (relevant for the cosmic microwave background spectrum). Subsequently, ɛ quickly becomes very small before the tachyonic instability eventually terminates the slow-roll regime. In this scenario, one naturally generates a considerable tensor-mode contribution in the curvature spectrum, collecting nevertheless the required amount of e-foldings during the final period of inflation. While nonobservation of tensors by Planck is certainly not a problem, a discovery in the medium- to long-term future is realistic.

  12. Reheating-volume measure for random-walk inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winitzki, Sergei; Yukawa Institute of Theoretical Physics, Kyoto University, Kyoto

    2008-09-15

    The recently proposed 'reheating-volume' (RV) measure promises to solve the long-standing problem of extracting probabilistic predictions from cosmological multiverse scenarios involving eternal inflation. I give a detailed description of the new measure and its applications to generic models of eternal inflation of random-walk type. For those models I derive a general formula for RV-regulated probability distributions that is suitable for numerical computations. I show that the results of the RV cutoff in random-walk type models are always gauge invariant and independent of the initial conditions at the beginning of inflation. In a toy model where equal-time cutoffs lead to the 'youngness paradox', the RV cutoff yields unbiased results that are distinct from previously proposed measures.

  13. Gravitational waves and large field inflation

    NASA Astrophysics Data System (ADS)

    Linde, Andrei

    2017-02-01

    According to the famous Lyth bound, one can confirm large field inflation by finding tensor modes with sufficiently large tensor-to-scalar ratio r. Here we will try to answer two related questions: is it possible to rule out all large field inflationary models by not finding tensor modes with r above some critical value, and what can we say about the scale of inflation by measuring r? However, in order to answer these questions one should distinguish between two different definitions of the large field inflation and three different definitions of the scale of inflation. We will examine these issues using the theory of cosmological α-attractors as a convenient testing ground.

  14. Leptogenesis from gravity waves in models of inflation.

    PubMed

    Alexander, Stephon H S; Peskin, Michael E; Sheikh-Jabbari, M M

    2006-03-03

    We present a new mechanism for creating the observed cosmic matter-antimatter asymmetry which satisfies all three Sakharov conditions from one common thread, gravitational waves. We generate lepton number through the gravitational anomaly in the lepton number current. The source term comes from elliptically polarized gravity waves that are produced during inflation if the inflaton field contains a CP-odd component. The amount of matter asymmetry generated in our model can be of realistic size for the parameters within the range of some inflationary scenarios and grand unified theories.

  15. Pati-Salam version of subcritical hybrid inflation

    NASA Astrophysics Data System (ADS)

    Bryant, B. Charles; Raby, Stuart

    2016-05-01

    In this paper we present a model of subcritical hybrid inflation with a Pati-Salam (PS) symmetry group. Both the inflaton and waterfall fields contribute to the necessary e-foldings of inflation, while only the waterfall field spontaneously breaks PS; hence monopoles produced during inflation are diluted during the inflationary epoch. The model is able to produce a tensor-to-scalar ratio r < 0.09, consistent with the latest BICEP2/Keck and Planck data, as well as scalar density perturbations and a spectral index n_s consistent with Planck data. For particular values of the parameters, we find r = 0.084 and n_s = 0.963. The energy density during inflation is directly related to the PS breaking scale, v_PS. The model also incorporates a Z_4^R symmetry which can resolve the μ problem and suppress dimension-5 operators for proton decay, leaving an exact R parity. Finally, the model allows for a complete three-family extension with a D_4 family symmetry which reproduces low-energy precision electroweak and LHC data.

  16. Map scale effects on estimating the number of undiscovered mineral deposits

    USGS Publications Warehouse

    Singer, D.A.; Menzie, W.D.

    2008-01-01

    Estimates of numbers of undiscovered mineral deposits, fundamental to assessing mineral resources, are affected by map scale. Where consistently defined deposits of a particular type are estimated, spatial and frequency distributions of deposits are linked in that some frequency distributions can be generated by processes randomly in space whereas others are generated by processes suggesting clustering in space. Possible spatial distributions of mineral deposits and their related frequency distributions are affected by map scale and associated inclusions of non-permissive or covered geological settings. More generalized map scales are more likely to cause inclusion of geologic settings that are not really permissive for the deposit type, or that include unreported cover over permissive areas, resulting in the appearance of deposit clustering. Thus, overly generalized map scales can cause deposits to appear clustered. We propose a model that captures the effects of map scale and the related inclusion of non-permissive geologic settings on estimates of numbers of deposits: the zero-inflated Poisson distribution. Effects of map scale as represented by the zero-inflated Poisson distribution suggest that the appearance of deposit clustering should diminish as mapping becomes more detailed, because the number of inflated zeros would decrease with more detailed maps. Based on observed worldwide relationships between map scale and areas permissive for deposit types, mapping at a scale with twice the detail should cut the permissive area of a porphyry copper tract to 29% and of a volcanic-hosted massive sulfide tract to 50% of their original sizes. Thus some direct benefits of mapping an area at a more detailed scale are indicated by significant reductions in areas permissive for deposit types, increased deposit density and, as a consequence, reduced uncertainty in the estimate of the number of undiscovered deposits. Exploration enterprises benefit from reduced areas requiring
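
    The zero-inflated Poisson mechanism this record proposes is easy to sketch numerically. The values below are hypothetical, purely to illustrate how an inflated zero mass mimics deposit clustering; this is an aside, not the authors' code:

```python
import numpy as np
from scipy.stats import poisson

# Zero-inflated Poisson: with probability pi a tract contributes a
# structural zero (non-permissive or covered geology swept in by a
# generalized map); otherwise the deposit count is Poisson(lam).
def zip_pmf(k, pi, lam):
    base = (1 - pi) * poisson.pmf(k, lam)
    return np.where(k == 0, pi + base, base)

pi, lam = 0.4, 1.5                  # hypothetical illustrative values
k = np.arange(10)
p = zip_pmf(k, pi, lam)

# The inflated zero mass exceeds the pure-Poisson zero probability,
# which reads as apparent deposit clustering; mapping at a more
# detailed scale corresponds to shrinking pi toward zero.
print(round(float(p[0]), 4), round(float(poisson.pmf(0, lam)), 4))
```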

  17. Fractional Poisson Fields and Martingales

    NASA Astrophysics Data System (ADS)

    Aletti, Giacomo; Leonenko, Nikolai; Merzbach, Ely

    2018-02-01

    We present new properties for the Fractional Poisson process (FPP) and the Fractional Poisson field on the plane. A martingale characterization for FPPs is given. We extend this result to Fractional Poisson fields, obtaining some other characterizations. The fractional differential equations are studied. We consider a more general Mixed-Fractional Poisson process and show that this process is the stochastic solution of a system of fractional differential-difference equations. Finally, we give some simulations of the Fractional Poisson field on the plane.
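
    As context for this record, the classical homogeneous Poisson field on the plane (the special case the fractional versions generalize) can be simulated in a few lines; a minimal sketch with invented intensity and window values, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Homogeneous planar Poisson point process with intensity lam on the
# window [0, a] x [0, b]: the total count is Poisson(lam * a * b), and
# conditional on the count the points are i.i.d. uniform in the window.
def poisson_field(lam, a, b, rng):
    n = rng.poisson(lam * a * b)
    return rng.uniform(low=[0.0, 0.0], high=[a, b], size=(n, 2))

pts = poisson_field(lam=50.0, a=2.0, b=1.0, rng=rng)

# Counts in disjoint sub-windows are independent Poisson variables;
# e.g. the left half [0,1] x [0,1] has expected count 50 here.
left = int(np.sum(pts[:, 0] < 1.0))
print(pts.shape, left)
```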

  18. Does the first chaotic inflation model in supergravity provide the best fit to the Planck data?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linde, Andrei

    2015-02-23

    I describe the first model of chaotic inflation in supergravity, which was proposed by Goncharov and the present author in 1983. The inflaton potential of this model has a plateau-type behavior V_0(1 − (8/3)e^{−√6|ϕ|}) at large values of the inflaton field. This model predicts n_s = 1 − 2/N ≈ 0.967 and r = 4/(3N^2) ≈ 4×10^{−4}, in good agreement with the Planck data. I propose a slight generalization of this model, which allows one to describe not only inflation but also dark energy and supersymmetry breaking.

  19. Development of Inflatable Entry Systems Technologies

    NASA Technical Reports Server (NTRS)

    Player, Charles J.; Cheatwood, F. McNeil; Corliss, James

    2005-01-01

    Achieving the objectives of NASA's Vision for Space Exploration will require the development of new technologies, which will in turn require higher fidelity modeling and analysis techniques, and innovative testing capabilities. Development of entry systems technologies can be especially difficult due to the lack of facilities and resources available to test these new technologies in mission relevant environments. This paper discusses the technology development process to bring inflatable aeroshell technology from Technology Readiness Level 2 (TRL-2) to TRL-7. This paper focuses mainly on two projects: Inflatable Reentry Vehicle Experiment (IRVE), and Inflatable Aeroshell and Thermal Protection System Development (IATD). The objectives of IRVE are to conduct an inflatable aeroshell flight test that demonstrates exoatmospheric deployment and inflation, reentry survivability and stability, and predictable drag performance. IATD will continue the development of the technology by conducting exploration specific trade studies and feeding forward those results into three more flight tests. Through an examination of these projects, and other potential projects, this paper discusses some of the risks, issues, and unexpected benefits associated with the development of inflatable entry systems technology.

  20. Structural testing and analysis of a braided, inflatable fabric torus structure

    NASA Astrophysics Data System (ADS)

    Young, Andrew C.; Davids, William G.; Whitney, Daniel J.; Clapp, Joshua D.; Goupee, Andrew J.

    2017-10-01

    Inflatable structural members have military, disaster relief, aerospace and other important applications as they possess low mass, can be stored in a relatively small volume and have significant load-carrying capacity once pressurized. Of particular interest to the present research is the Hypersonic Inflatable Aerodynamic Decelerator (HIAD) structure under development by NASA. In order to make predictions about the structural response of the HIAD system, it is necessary to understand the response of individual inflatable tori composing the HIAD structure. These inflatable members present unique challenges to structural testing and modeling due to their internal inflation pressure and relative compliance. Structural testing was performed on a braided, inflatable, toroidal structural member with axial reinforcing cords. The internal inflation pressure, magnitude of enforced displacement and loading methodology were varied. In-plane and out-of-plane experimental results were compared to model predictions using a three dimensional, corotational, flexibility-based fiber-beam finite element model including geometric and material nonlinearities, as well as the effects of inflation pressure. It was found that in order to approximate the load-deformation response observed in experimentation it is necessary to carefully control the test and model boundary conditions and loading scheme.

  1. A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution.

    PubMed

    Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep

    2017-01-01

    The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section.
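    The first class above (Poisson marginals with dependence) includes the classical common-shock construction, where a shared Poisson component induces correlation: Y₁ = X₀ + X₁ and Y₂ = X₀ + X₂ with independent Poisson X's gives Poisson marginals and Cov(Y₁, Y₂) = λ₀. A stdlib Python sketch of this construction (one of several models the review covers, chosen here for illustration):

```python
import math
import random

def poisson_draw(lam, rng):
    """One Poisson(lam) draw via Knuth's multiplication method."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def bivariate_poisson(lam0, lam1, lam2, rng):
    """Common-shock pair: Y1 ~ Poisson(lam0 + lam1) and
    Y2 ~ Poisson(lam0 + lam2), with Cov(Y1, Y2) = lam0 induced by
    the shared component X0."""
    x0 = poisson_draw(lam0, rng)
    return x0 + poisson_draw(lam1, rng), x0 + poisson_draw(lam2, rng)
```

    A limitation the review discusses for this class is that the covariance is necessarily nonnegative, which motivates the other two classes of constructions.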

  3. Observational constraints on Tachyon and DBI inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Sheng; Liddle, Andrew R., E-mail: sl277@sussex.ac.uk, E-mail: arl@roe.ac.uk

    2014-03-01

    We present a systematic method for evaluation of perturbation observables in non-canonical single-field inflation models within the slow-roll approximation, which allied with field redefinitions enables predictions to be established for a wide range of models. We use this to investigate various non-canonical inflation models, including Tachyon inflation and DBI inflation. The Lambert W function will be used extensively in our method for the evaluation of observables. In the Tachyon case, in the slow-roll approximation the model can be approximated by a canonical field with a redefined potential, which yields predictions in better agreement with observations than the canonical equivalents. For DBI inflation models we consider contributions from both the scalar potential and the warp geometry. In the case of a quartic potential, we find a formula for the observables under both non-relativistic (sound speed c_s² ∼ 1) and relativistic (c_s² ≪ 1) behaviour of the scalar DBI inflaton. For a quadratic potential we find two branches in the non-relativistic c_s² ∼ 1 case, determined by the competition of model parameters, while for the relativistic case c_s² → 0, we find consistency with results already in the literature. We present a comparison to the latest Planck satellite observations. Most of the non-canonical models we investigate, including the Tachyon, are better fits to data than canonical models with the same potential, but we find that DBI models in the slow-roll regime have difficulty in matching the data.

  4. A design pathfinder with material correlation points for inflatable systems

    NASA Astrophysics Data System (ADS)

    Fulcher, Jared Terrell

    The incorporation of inflatable structures into aerospace systems can produce significant advantages in stowed volume to mechanical effectiveness and overall weight. Many applications of these ultra-lightweight systems are designed to precisely control internal or external surfaces, or both, to achieve desired performance. The modeling of these structures becomes complex due to the material nonlinearities inherent to the majority of construction materials used in inflatable structures. Furthermore, accurately modeling the response and behavior of the interfacing boundaries that are common to many inflatable systems will lead to better understanding of the entire class of structures. The research presented involved using nonlinear finite element simulations correlated with photogrammetry testing to develop a procedure for defining material properties for commercially available polyurethane-coated woven nylon fabric, which is representative of coated materials that have been proven materials for use in many inflatable systems. Further, the new material model was used to design and develop an inflatable pathfinder system which employs only internal pressure to control an assembly of internal membranes. This canonical inflatable system will be used for exploration and development of general understanding of efficient design methodology and analysis of future systems. Canonical structures are incorporated into the design of the phased pathfinder system to allow for more universal insight. Nonlinear finite element simulations were performed to evaluate the effect of various boundary conditions, loading configurations, and material orientations on the geometric precision of geometries representing typical internal/external surfaces commonly incorporated into inflatable pathfinder system. The response of the inflatable system to possible damage was also studied using nonlinear finite element simulations. Development of a correlated material model for analysis of the

  5. On a Poisson homogeneous space of bilinear forms with a Poisson-Lie action

    NASA Astrophysics Data System (ADS)

    Chekhov, L. O.; Mazzocco, M.

    2017-12-01

    Let 𝒜 be the space of bilinear forms on C^N with defining matrices A, endowed with a quadratic Poisson structure of reflection equation type. The paper begins with a short description of previous studies of the structure, and then this structure is extended to systems of bilinear forms whose dynamics is governed by the natural action A ↦ BABᵀ of the Poisson-Lie group GL_N on 𝒜. A classification is given of all possible quadratic brackets on (B, A) ∈ GL_N × 𝒜 preserving the Poisson property of the action, thus endowing 𝒜 with the structure of a Poisson homogeneous space. Besides the product Poisson structure on GL_N × 𝒜, there are two other (mutually dual) structures, which (unlike the product Poisson structure) admit reductions by the Dirac procedure to a space of bilinear forms with block upper triangular defining matrices. Further generalisations of this construction are considered, to triples (B, C, A) ∈ GL_N × GL_N × 𝒜 with the Poisson action A ↦ BACᵀ, and it is shown that 𝒜 then acquires the structure of a Poisson symmetric space. Generalisations to chains of transformations and to the quantum and quantum affine algebras are investigated, as well as the relations between constructions of Poisson symmetric spaces and the Poisson groupoid. Bibliography: 30 titles.

  6. Inflation with a constant rate of roll

    NASA Astrophysics Data System (ADS)

    Motohashi, Hayato; Starobinsky, Alexei A.; Yokoyama, Jun'ichi

    2015-09-01

    We consider an inflationary scenario where the rate of inflaton roll defined by φ̈/(Hφ̇) remains constant. The rate of roll is small for slow-roll inflation, while a generic rate of roll leads to the interesting case of 'constant-roll' inflation. We find a general exact solution for the inflaton potential required for such inflaton behaviour. In this model, due to non-slow evolution of the background, the would-be decaying mode of linear scalar (curvature) perturbations may not be neglected. It can even grow for some values of the model parameter, while the other mode always remains constant. However, this always occurs for unstable solutions which are not attractors for the given potential. The most interesting particular cases of constant-roll inflation remaining viable with the most recent observational data are quadratic hilltop inflation (with cutoff) and natural inflation (with an additional negative cosmological constant). In these cases even-order slow-roll parameters approach non-negligible constants while the odd ones are asymptotically vanishing in the quasi-de Sitter regime.

  7. Origin of density fluctuations in extended inflation

    NASA Technical Reports Server (NTRS)

    Kolb, Edward W.; Salopek, David S.; Turner, Michael S.

    1990-01-01

    The density fluctuations (both curvature and isocurvature) that arise due to quantum fluctuations in a simple model of extended inflation based upon the Jordan-Brans-Dicke theory are calculated. Curvature fluctuations arise due to quantum fluctuations in the Brans-Dicke field, in general have a nonscale-invariant spectrum, and can have an amplitude that is cosmologically acceptable and interesting without having to tune any coupling constant to a very small value. The density perturbations that arise due to the inflaton field are subdominant. If there are other massless fields in the theory, e.g., an axion or an ilion, then isocurvature fluctuations arise in these fields too. Production of gravitational waves and the massless particles associated with excitations of the Brans-Dicke field are also discussed. Several attempts at more realistic models of extended inflation are also analyzed. The importance of the Einstein conformal frame in calculating curvature fluctuations is emphasized. When viewed in this frame, extended inflation closely resembles slow-rollover inflation with an exponential potential and the usual formula for the amplitude of curvature perturbations applies.

  9. Last stand of single small field inflation

    NASA Astrophysics Data System (ADS)

    Bramante, Joseph; Lehman, Landon; Martin, Adam; Downes, Sean

    2014-07-01

    By incorporating both the tensor-to-scalar ratio and the measured value of the spectral index, we set a bound on solo small field inflation of Δφ/m_Pl ≥ 1.00 √(r/0.1). Unlike previous bounds which require monotonic ε_V, |η_V| < 1, and 60 e-folds of inflation, the bound remains valid for nonmonotonic ε_V, |η_V| ≳ 1, and for inflation which occurs only over the eight e-folds which have been observed on the cosmic microwave background. The negative value of the spectral index over the observed eight e-folds is what makes the bound strong; we illustrate this by surveying single field models and finding that for r ≳ 0.1 and eight e-folds of inflation, there is no simple potential which reproduces observed cosmic microwave background perturbations and remains sub-Planckian. Models that are sub-Planckian after eight e-folds must be patched together with a second epoch of inflation that fills out the remaining ∼50 e-folds. This second, post-cosmic microwave background epoch is characterized by extremely small ε_V and therefore an increasing scalar power spectrum. Using the fact that large power can overabundantly produce primordial black holes, we bound the maximum energy level of the second phase of inflation.

  10. G-warm inflation

    NASA Astrophysics Data System (ADS)

    Herrera, Ramón

    2017-05-01

    A warm inflationary universe in the context of the Galileon model, or G-model, is studied. Under a general formalism we study the inflationary dynamics and the cosmological perturbations, considering a coupling of the form G(φ, X) = g(φ)X. As a concrete example, we consider an exponential potential together with the cases in which the dissipation and Galilean coefficients are constants. Also, we study the weak regime, given by the condition R < 1 + 3gHφ̇, and the strong regime, in which 1 + 3gHφ̇ < R. We obtain constraints on the parameters of the model during inflation, assuming the condition for warm inflation in which the temperature T > H, the conditions for the weak and strong regimes, together with the consistency relation r = r(n_s) from Planck data.

  11. Universal Poisson Statistics of mRNAs with Complex Decay Pathways.

    PubMed

    Thattai, Mukund

    2016-01-19

    Messenger RNA (mRNA) dynamics in single cells are often modeled as a memoryless birth-death process with a constant probability per unit time that an mRNA molecule is synthesized or degraded. This predicts a Poisson steady-state distribution of mRNA number, in close agreement with experiments. This is surprising, since mRNA decay is known to be a complex process. The paradox is resolved by realizing that the Poisson steady state generalizes to arbitrary mRNA lifetime distributions. A mapping between mRNA dynamics and queueing theory highlights an identifiability problem: a measured Poisson steady state is consistent with a large variety of microscopic models. Here, I provide a rigorous and intuitive explanation for the universality of the Poisson steady state. I show that the mRNA birth-death process and its complex decay variants all take the form of the familiar Poisson law of rare events, under a nonlinear rescaling of time. As a corollary, not only steady-states but also transients are Poisson distributed. Deviations from the Poisson form occur only under two conditions, promoter fluctuations leading to transcriptional bursts or nonindependent degradation of mRNA molecules. These results place severe limits on the power of single-cell experiments to probe microscopic mechanisms, and they highlight the need for single-molecule measurements.
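    The queueing-theory mapping can be checked numerically: births form a Poisson process and each molecule gets an independent lifetime of arbitrary law (an M/G/∞ queue), so the count of surviving molecules is Poisson regardless of the lifetime distribution. A stdlib Python sketch, with illustrative names and rates not taken from the paper:

```python
import random

def mrna_count_at(t_obs, birth_rate, lifetime_sampler, rng):
    """Simulate one cell: mRNA synthesis events form a Poisson process
    of rate birth_rate; each molecule lives for an independently drawn
    lifetime. Returns the number of molecules still present at t_obs
    (an M/G/infinity queue, hence a Poisson-distributed count)."""
    t, alive = 0.0, 0
    while True:
        t += rng.expovariate(birth_rate)        # next synthesis event
        if t > t_obs:
            return alive
        if t + lifetime_sampler(rng) > t_obs:   # not yet degraded at t_obs
            alive += 1
```

    With a deterministic lifetime τ (a decidedly non-exponential decay law) and t_obs ≫ τ, the count is exactly Poisson with mean birth_rate·τ, illustrating the identifiability problem: the steady state carries no signature of the decay mechanism.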

  12. Deployment Simulation Methods for Ultra-Lightweight Inflatable Structures

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Johnson, Arthur R.

    2003-01-01

    Two dynamic inflation simulation methods are employed for modeling the deployment of folded thin-membrane tubes. The simulations are necessary because ground tests include gravity effects and may poorly represent deployment in space. The two simulation methods are referred to as the Control Volume (CV) method and the Arbitrary Lagrangian Eulerian (ALE) method. They are available in the LS-DYNA nonlinear dynamic finite element code. Both methods are suitable for modeling the interactions between the inflation gas and the thin-membrane tube structures. The CV method only considers the pressure induced by the inflation gas in the simulation, while the ALE method models the actual flow of the inflation gas. Thus, the transient fluid properties at any location within the tube can be predicted by the ALE method. Deployment simulations of three packaged tube models, namely coiled, Z-folded, and telescopically-folded configurations, are performed. Results predicted by both methods for the telescopically-folded configuration are correlated and computational efficiency issues are discussed.

  13. A general framework of automorphic inflation

    NASA Astrophysics Data System (ADS)

    Schimmrigk, Rolf

    2016-05-01

    Automorphic inflation is an application of the framework of automorphic scalar field theory, based on the theory of automorphic forms and representations. In this paper the general framework of automorphic and modular inflation is described in some detail, with emphasis on the resulting stratification of the space of scalar field theories in terms of the group theoretic data associated to the shift symmetry, as well as the automorphic data that specifies the potential. The class of theories based on Eisenstein series provides a natural generalization of the model of j-inflation considered previously.

  14. Adiabatic elimination for systems with inertia driven by compound Poisson colored noise.

    PubMed

    Li, Tiejun; Min, Bin; Wang, Zhiming

    2014-02-01

    We consider the dynamics of systems driven by compound Poisson colored noise in the presence of inertia. We study the limit when the frictional relaxation time and the noise autocorrelation time both tend to zero. We show that the Itô and Marcus stochastic calculuses naturally arise depending on these two time scales, and an extra intermediate type occurs when the two time scales are comparable. This leads to three different limiting regimes which are supported by numerical simulations. Furthermore, we establish that when the resulting compound Poisson process tends to the Wiener process in the frequent jump limit the Itô and Marcus calculuses, respectively, tend to the classical Itô and Stratonovich calculuses for Gaussian white noise, and the crossover type calculus tends to a crossover between the Itô and Stratonovich calculuses. Our results would be very helpful for understanding relevant experiments when jump type noise is involved.
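    The driving term here is a compound Poisson process, X(t) = Σ_{i≤N(t)} J_i, where N(t) is a Poisson counting process and the J_i are i.i.d. jump sizes. A stdlib Python sketch of one realization; the Gaussian jump law in the usage below is an assumption for illustration, not the paper's choice:

```python
import random

def compound_poisson(t, rate, jump_sampler, rng):
    """Value at time t of a compound Poisson process: jump times form
    a Poisson process of the given rate (exponential inter-jump gaps),
    and each jump size is drawn independently from jump_sampler."""
    x, s = 0.0, 0.0
    while True:
        s += rng.expovariate(rate)   # waiting time to the next jump
        if s > t:
            return x
        x += jump_sampler(rng)
```

    By Wald's identity, E[X(t)] = rate · t · E[J], which is the property the frequent-jump (Wiener) limit discussed in the abstract preserves.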

  15. The effect of cuff presence and cuff inflation on airway pressure in a canine tracheostomy tube model.

    PubMed

    Wignall, Jamie R; Baines, Stephen J

    2014-01-01

    To evaluate the effect of cuff presence and cuff inflation on airway pressure in an inspiratory model of canine tracheostomy. Ex vivo experimental study. Cadaver tracheas from Beagle dogs were attached aborally to a vacuum. Airway pressure and flow rate were measured before and after placement of tracheostomy tubes. None. Adult uncuffed and cuffed tracheostomy tubes (sizes 4, 6, 8, and 10) were placed within tracheas. Cuffs were investigated without inflation and at maximum cuff inflation. Airway pressure was measured at constant airflow rates of 30 and 60 L/min. At set flow rates, airway pressures of tracheostomy tubes were compared to that of the intact trachea. A size 4 uncuffed tracheostomy tube showed the lowest airway pressure and a size 4 cuffed tracheostomy tube with inflation showed the highest airway pressures. For sizes 6, 8, and 10 tubes, the presence of a cuff with and without inflation significantly increased airway pressure. Inflation of a cuff always significantly increased airway pressure. Similar pressures were seen for sizes 4 and 6 uncuffed tubes. Cuffed tracheostomy tubes should not be used unless specifically indicated due to increased airway pressure.

  16. Estimating cavity tree and snag abundance using negative binomial regression models and nearest neighbor imputation methods

    Treesearch

    Bianca N.I. Eskelson; Hailemariam Temesgen; Tara M. Barrett

    2009-01-01

    Cavity tree and snag abundance data are highly variable and contain many zero observations. We predict cavity tree and snag abundance from variables that are readily available from forest cover maps or remotely sensed data using negative binomial (NB), zero-inflated NB, and zero-altered NB (ZANB) regression models as well as nearest neighbor (NN) imputation methods....
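    The negative binomial and zero-inflated variants named here have compact closed forms; a stdlib Python sketch of the NB pmf in its mean/overdispersion parameterization (Var = μ + αμ²; α → 0 recovers the Poisson) and its zero-inflated companion, with illustrative parameter names:

```python
import math

def nb_pmf(k, mu, alpha):
    """Negative binomial pmf with mean mu and overdispersion alpha,
    written as a Poisson-gamma mixture with gamma shape r = 1/alpha."""
    r = 1.0 / alpha
    log_p = (math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
             + r * math.log(r / (r + mu)) + k * math.log(mu / (r + mu)))
    return math.exp(log_p)

def zinb_pmf(k, mu, alpha, pi0):
    """Zero-inflated NB: an extra point mass pi0 at zero on top of NB."""
    base = nb_pmf(k, mu, alpha)
    return pi0 + (1.0 - pi0) * base if k == 0 else (1.0 - pi0) * base
```

    The zero-altered (hurdle) ZANB model mentioned above differs in truncating the NB part at zero rather than mixing in extra zeros.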

  17. Power suppression at large scales in string inflation

    NASA Astrophysics Data System (ADS)

    Cicoli, Michele; Downes, Sean; Dutta, Bhaskar

    2013-12-01

    We study a possible origin of the anomalous suppression of the power spectrum at large angular scales in the cosmic microwave background within the framework of explicit string inflationary models where inflation is driven by a closed string modulus parameterizing the size of the extra dimensions. In this class of models the apparent power loss at large scales is caused by the background dynamics which involves a sharp transition from a fast-roll power law phase to a period of Starobinsky-like slow-roll inflation. An interesting feature of this class of string inflationary models is that the number of e-foldings of inflation is inversely proportional to the string coupling to a positive power. Therefore once the string coupling is tuned to small values in order to trust string perturbation theory, enough e-foldings of inflation are automatically obtained without the need of extra tuning. Moreover, in the less tuned cases the sharp transition responsible for the power loss takes place just before the last 50-60 e-foldings of inflation. We illustrate these general claims in the case of Fibre Inflation where we study the strength of this transition in terms of the attractor dynamics, finding that it induces a pivot from a blue to a redshifted power spectrum which can explain the apparent large scale power loss. We compute the effects of this pivot for example cases and demonstrate how magnitude and duration of this effect depend on model parameters.

  18. Temperature of the inflaton and duration of inflation from Wilkinson microwave anisotropy probe data.

    PubMed

    Bhattacharya, Kaushik; Mohanty, Subhendra; Rangarajan, Raghavan

    2006-03-31

    If the initial state of the inflaton field is taken to have a thermal distribution instead of the conventional zero particle vacuum state, then the curvature power spectrum gets modified by a temperature dependent factor such that the fluctuation spectrum of the microwave background radiation is enhanced at larger angles. We compare this modified cosmic microwave background spectrum with Wilkinson microwave anisotropy probe data to obtain an upper bound on the temperature of the inflaton at the time our current horizon crossed the horizon during inflation. We further conclude that there must be additional e-foldings of inflation beyond what is needed to solve the horizon problem.

  19. Fuzzy classifier based support vector regression framework for Poisson ratio determination

    NASA Astrophysics Data System (ADS)

    Asoodeh, Mojtaba; Bagheripour, Parisa

    2013-09-01

    Poisson ratio is considered as one of the most important rock mechanical properties of hydrocarbon reservoirs. Determination of this parameter through laboratory measurement is time, cost, and labor intensive. Furthermore, laboratory measurements do not provide continuous data along the reservoir intervals. Hence, a fast, accurate, and inexpensive way of determining Poisson ratio which produces continuous data over the whole reservoir interval is desirable. For this purpose, support vector regression (SVR) method based on statistical learning theory (SLT) was employed as a supervised learning algorithm to estimate Poisson ratio from conventional well log data. SVR is capable of accurately extracting the implicit knowledge contained in conventional well logs and converting the gained knowledge into Poisson ratio data. Structural risk minimization (SRM) principle which is embedded in the SVR structure in addition to empirical risk minimization (ERM) principle provides a robust model for finding quantitative formulation between conventional well log data and Poisson ratio. Although satisfying results were obtained from an individual SVR model, it had flaws of overestimation in low Poisson ratios and underestimation in high Poisson ratios. These errors were eliminated through implementation of fuzzy classifier based SVR (FCBSVR). The FCBSVR significantly improved accuracy of the final prediction. This strategy was successfully applied to data from carbonate reservoir rocks of an Iranian Oil Field. Results indicated that SVR predicted Poisson ratio values are in good agreement with measured values.

  20. Poisson's Ratio of a Hyperelastic Foam Under Quasi-static and Dynamic Loading

    DOE PAGES

    Sanborn, Brett; Song, Bo

    2018-06-03

    Poisson's ratio is a material constant representing compressibility of material volume. However, when soft, hyperelastic materials such as silicone foam are subjected to large deformation into densification, the Poisson's ratio may rather significantly change, which warrants careful consideration in modeling and simulation of impact/shock mitigation scenarios where foams are used as isolators. The evolution of Poisson's ratio of silicone foam materials has not yet been characterized, particularly under dynamic loading. In this study, radial and axial measurements of specimen strain are conducted simultaneously during quasi-static and dynamic compression tests to determine the Poisson's ratio of silicone foam. The Poisson's ratio of silicone foam exhibited a transition from compressible to nearly incompressible at a threshold strain that coincided with the onset of densification in the material. Poisson's ratio as a function of engineering strain was different at quasi-static and dynamic rates. Here, the Poisson's ratio behavior is presented and can be used to improve constitutive modeling of silicone foams subjected to a broad range of mechanical loading.
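    The quantity tracked by the paired measurements is simply ν = −ε_radial/ε_axial; a worked illustration in Python (the strain values below are hypothetical, not the paper's data):

```python
def poissons_ratio(radial_strain, axial_strain):
    """Engineering Poisson's ratio from simultaneous radial and axial
    strain readings: nu = -eps_radial / eps_axial."""
    return -radial_strain / axial_strain

# Hypothetical foam in compression (axial strain negative): before
# densification the radial expansion is small (compressible); near
# densification it approaches the incompressible limit nu -> 0.5.
nu_early = poissons_ratio(0.005, -0.10)   # compressible regime
nu_dense = poissons_ratio(0.045, -0.10)   # near-incompressible regime
```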

  2. Spacetime Curvature and Higgs Stability after Inflation.

    PubMed

    Herranen, M; Markkanen, T; Nurmi, S; Rajantie, A

    2015-12-11

    We investigate the dynamics of the Higgs field at the end of inflation in the minimal scenario consisting of an inflaton field coupled to the standard model only through the nonminimal gravitational coupling ξ of the Higgs field. Such a coupling is required by renormalization of the standard model in curved space, and in the current scenario also by vacuum stability during high-scale inflation. We find that for ξ≳1, rapidly changing spacetime curvature at the end of inflation leads to significant production of Higgs particles, potentially triggering a transition to a negative-energy Planck scale vacuum state and causing an immediate collapse of the Universe.

  3. The chaotic regime of D-term inflation

    NASA Astrophysics Data System (ADS)

    Buchmüller, W.; Domcke, V.; Schmitz, K.

    2014-11-01

    We consider D-term inflation for small couplings of the inflaton to matter fields. Standard hybrid inflation then ends at a critical value of the inflaton field that exceeds the Planck mass. During the subsequent waterfall transition the inflaton continues its slow-roll motion, whereas the waterfall field rapidly grows by quantum fluctuations. Beyond the decoherence time, the waterfall field becomes classical and approaches a time-dependent minimum, which is determined by the value of the inflaton field and the self-interaction of the waterfall field. During the final stage of inflation, the effective inflaton potential is essentially quadratic, which leads to the standard predictions of chaotic inflation. The model illustrates how the decay of a false vacuum of GUT-scale energy density can end in a period of 'chaotic inflation'.

  4. Main sequence models for massive zero-metal stars

    NASA Technical Reports Server (NTRS)

    Cary, N.

    1974-01-01

    Zero-age main-sequence models for stars of 20, 10, 5, and 2 solar masses with no heavy elements are constructed for three different possible primordial helium abundances: Y=0.00, Y=0.23, and Y=0.30. The latter two values of Y bracket the range of primordial helium abundances cited by Wagoner. With the exceptions of the two 20 solar mass models that contain helium, these models are found to be self-consistent in the sense that the formation of carbon through the triple-alpha process during pre-main-sequence contraction is not sufficient to bring the CN cycle into competition with the proton-proton chain on the ZAMS. The zero-metal models of the present study have higher surface and central temperatures, higher central densities, smaller radii, and smaller convective cores than do the population I models with the same masses.

  5. The interactive bending wrinkling behaviour of inflated beams

    PubMed Central

    Liu, Y. P.; Tan, H. F.; Wadee, M. K.

    2016-01-01

    A model is proposed based on a Fourier series method to analyse the interactive bending wrinkling behaviour of inflated beams. The whole wrinkling evolution is tracked and divided into three stages by identifying the bifurcations of the equilibrium path. The critical wrinkling and failure moments of the inflated beam can then be predicted. The global–local interactive buckling pattern is elucidated by the proposed theoretical model and also verified by non-contact experimental tests. Finally, the effects of geometric parameters, internal pressure and boundary conditions on the buckling of inflated beams are investigated. The results reveal that the interactive buckling characteristics of an inflated beam under bending are more sensitive to the dimensions of the structure and boundary conditions. We find that for beams which are simply supported at both ends, or clamped and simply supported, the boundary conditions may prevent wrinkle formation. The results provide significant support for our understanding of the bending wrinkling behaviour of inflated beams. PMID:27713665

  6. Is inflation from unwinding fluxes IIB?

    NASA Astrophysics Data System (ADS)

    Gautason, Fridrik Freyr; Schillo, Marjorie; Van Riet, Thomas

    2017-03-01

    In this paper we argue that the mechanism of unwinding inflation is naturally present in warped compactifications of type IIB string theory with local throats. The unwinding of flux is caused by its annihilation against branes. The resulting inflaton potential is linear with periodic modulations. We initiate an analysis of the inflationary dynamics and cosmological observables, which are highly constrained by moduli stabilization. For the simplified model of single-Kähler Calabi-Yau spaces we find that many, though not all, of the consistency constraints can be satisfied. In particular, in this simple model, geometric constraints are in tension with obtaining the observed amplitude of the scalar power spectrum. However, we do find 60 e-folds of inflation with a trans-Planckian field excursion, which offers the hope that slightly more complicated models can lead to a fully consistent explicit construction of large field inflation of this kind.

  7. Curvature perturbation and waterfall dynamics in hybrid inflation

    NASA Astrophysics Data System (ADS)

    Akbar Abolhasani, Ali; Firouzjahi, Hassan; Sasaki, Misao

    2011-10-01

    We investigate the parameter space of the hybrid inflation model, with special attention paid to the dynamics of the waterfall field and the curvature perturbations induced from its quantum fluctuations. Depending on the inflaton field value at the time of the phase transition and the sharpness of the phase transition, inflation can have multiple extended stages. We find that for models with a mild phase transition the induced curvature perturbation from the waterfall field is too large to satisfy the COBE normalization. We investigate the model parameter space where the curvature perturbations from the waterfall quantum fluctuations vary between the results of standard hybrid inflation and the results obtained here.

  8. Dynamic Deployment Simulations of Inflatable Space Structures

    NASA Technical Reports Server (NTRS)

    Wang, John T.

    2005-01-01

    The feasibility of using the Control Volume (CV) method and the Arbitrary Lagrangian Eulerian (ALE) method in LS-DYNA to simulate the dynamic deployment of inflatable space structures is investigated. The CV and ALE methods were used to predict the inflation deployments of three folded tube configurations. The CV method was found to be a simple and computationally efficient method that may be adequate for modeling slow inflation deployment, since the inertia of the inflation gas can be neglected. The ALE method was found to be very computationally intensive, since it involves solving three conservation equations for the fluid as well as dealing with complex fluid-structure interactions.

  9. Seasonally adjusted birth frequencies follow the Poisson distribution.

    PubMed

    Barra, Mathias; Lindstrøm, Jonas C; Adams, Samantha S; Augestad, Liv A

    2015-12-15

    Variations in birth frequencies have an impact on activity planning in maternity wards. Previous studies of this phenomenon have commonly included elective births. A Danish study of spontaneous births found that birth frequencies were well modelled by a Poisson process. Somewhat unexpectedly, there were also weekly variations in the frequency of spontaneous births. Another study claimed that birth frequencies follow the Benford distribution. Our objective was to test these results. We analysed 50,017 spontaneous births at Akershus University Hospital in the period 1999-2014. To investigate the Poisson distribution of these births, we plotted their variance over a sliding average. We specified various Poisson regression models, with the number of births on a given day as the outcome variable. The explanatory variables included various combinations of years, months, days of the week and the digit sum of the date. The relationship between the variance and the average fits well with an underlying Poisson process. A Benford distribution was disproved by a goodness-of-fit test (p < 0.01). The fundamental model with year and month as explanatory variables is significantly improved (p < 0.001) by adding day of the week as an explanatory variable. Altogether 7.5% more children are born on Tuesdays than on Sundays. The digit sum of the date is non-significant as an explanatory variable (p = 0.23), nor does it increase the explained variance. INTERPRETATION: Spontaneous births are well modelled by a time-dependent Poisson process when monthly and day-of-the-week variation is included. The frequency is highest in the summer months of June and July; Friday and Tuesday stand out as particularly busy days, and the activity level is at its lowest during weekends.
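
    The Poisson check described above (variance tracking the mean within strata of constant rate, with day-of-week differences) can be sketched on simulated daily counts; all rates below are illustrative, not the hospital's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative day-of-week birth rates (births/day), Sunday..Saturday;
# Tuesday is set ~7.5% above Sunday, echoing the reported weekday effect.
rates = np.array([8.0, 8.3, 8.6, 8.4, 8.3, 8.5, 8.2])
days = np.tile(np.arange(7), 520)           # roughly ten years of days
counts = rng.poisson(rates[days])           # simulated daily birth counts

# Within each weekday stratum the rate is constant, so for a Poisson
# process the index of dispersion (variance/mean) should be close to 1.
dispersion = np.array([counts[days == d].var(ddof=1) / counts[days == d].mean()
                       for d in range(7)])
```

Ignoring the weekday structure and pooling all days would inflate the dispersion slightly, which is why the abstract's time-dependent model includes day of the week.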

  10. Minimum risk wavelet shrinkage operator for Poisson image denoising.

    PubMed

    Cheng, Wu; Hirakawa, Keigo

    2015-05-01

    The pixel values of images taken by an image sensor are said to be corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients, where the modeling of coefficients is enabled by the Skellam distribution analysis. We extend these results by solving for shrinkage operators for Skellam that minimize the risk functional in the multiscale Poisson image denoising setting. The minimum risk shrinkage operator of this kind effectively produces denoised wavelet coefficients with minimum attainable L2 error.

  11. [A simple model for describing pressure-volume curves in free balloon dilatation with reference to the dynamics of inflation: hydraulic aspects].

    PubMed

    Bloss, P; Werner, C

    2000-06-01

    We propose a simple model to describe pressure-time and pressure-volume curves for the free balloon (balloon in air) of balloon catheters, taking into account the dynamics of the inflation device. On the basis of our investigations of the flow-rate dependence of characteristic parameters of the pressure-time curves, the appropriateness of this simple model is demonstrated using a representative example. Basic considerations lead to the following assumptions: (i) the flow within the shaft of the catheter is laminar, and (ii) the volume decrease of the liquid used for inflation due to pressurization can be neglected if the liquid is carefully degassed prior to inflation, and if the total volume of the liquid in the system is less than 2 ml. Taking into account the dynamics of the inflation device used for pumping the liquid into the proximal end of the shaft during inflation, the inflation process can be subdivided into the following three phases: initial phase, filling phase and dilatation phase. For these three phases, the transformation of the time into the volume coordinates is given. On the basis of our model, the following parameters of the balloon catheter can be determined from a measured pressure-time curve: (1) the resistance to flow of the liquid through the shaft of the catheter and the resulting pressure drop across the shaft, (2) the residual volume and residual pressure of the balloon, and (3) the volume compliance of the balloon catheter with and without the inflation device.

  12. Modeling spiking behavior of neurons with time-dependent Poisson processes.

    PubMed

    Shinomoto, S; Tsubo, Y

    2001-10-01

    Three kinds of interval statistics, as represented by the coefficient of variation, the skewness coefficient, and the correlation coefficient of consecutive intervals, are evaluated for three kinds of time-dependent Poisson processes: pulse regulated, sinusoidally regulated, and doubly stochastic. Among these three processes, the sinusoidally regulated and doubly stochastic Poisson processes, in the case when the spike rate varies slowly compared with the mean interval between spikes, are found to be consistent with the three statistical coefficients exhibited by data recorded from neurons in the prefrontal cortex of monkeys.
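
    The sinusoidally regulated process described above is easy to simulate by thinning, and the coefficient of variation of the inter-spike intervals can then be checked directly. A sketch with illustrative parameters (chosen so the rate varies slowly compared with the mean interval, as in the abstract's regime of interest):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sinusoidally regulated Poisson process: rate(t) = r0 * (1 + a*sin(2*pi*f*t)).
# The modulation period (5 s) is slow compared with the mean interval (~33 ms).
r0, a, f, T = 30.0, 0.5, 0.2, 2000.0
r_max = r0 * (1.0 + a)

# Thinning: simulate at the maximal rate, keep each point with prob rate/r_max.
n = rng.poisson(r_max * T)
t = np.sort(rng.uniform(0.0, T, n))
keep = rng.uniform(size=n) < (1.0 + a * np.sin(2.0 * np.pi * f * t)) / (1.0 + a)
spikes = t[keep]

isi = np.diff(spikes)
cv = isi.std(ddof=1) / isi.mean()   # > 1: slow rate modulation broadens the ISIs
```

A homogeneous Poisson process has CV = 1; the excess above 1 here is what lets such interval statistics discriminate between the candidate processes.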

  13. A Bayesian destructive weighted Poisson cure rate model and an application to a cutaneous melanoma data.

    PubMed

    Rodrigues, Josemar; Cancho, Vicente G; de Castro, Mário; Balakrishnan, N

    2012-12-01

    In this article, we propose a new Bayesian flexible cure rate survival model, which generalises the stochastic model of Klebanov et al. [Klebanov LB, Rachev ST and Yakovlev AY. A stochastic-model of radiation carcinogenesis--latent time distributions and their properties. Math Biosci 1993; 113: 51-75], and has much in common with the destructive model formulated by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de São Carlos, São Carlos-SP. Brazil, 2009 (accepted in Lifetime Data Analysis)]. In our approach, the accumulated number of lesions or altered cells follows a compound weighted Poisson distribution. This model is more flexible than the promotion time cure model in terms of dispersion. Moreover, it possesses an interesting and realistic interpretation of the biological mechanism of the occurrence of the event of interest as it includes a destructive process of tumour cells after an initial treatment or the capacity of an individual exposed to irradiation to repair altered cells that results in cancer induction. In other words, what is recorded is only the damaged portion of the original number of altered cells not eliminated by the treatment or repaired by the repair system of an individual. Markov Chain Monte Carlo (MCMC) methods are then used to develop Bayesian inference for the proposed model. Also, some discussions on the model selection and an illustration with a cutaneous melanoma data set analysed by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de São Carlos, São Carlos-SP. Brazil, 2009 (accepted in Lifetime Data Analysis)] are presented.

  14. Supergravity contributions to inflation in models with non-minimal coupling to gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Kumar; Dutta, Koushik; Domcke, Valerie

    2017-03-01

    This paper provides a systematic study of supergravity contributions relevant for inflationary model building in Jordan frame supergravity. In this framework, canonical kinetic terms in the Jordan frame result in the separation of the Jordan frame scalar potential into a tree-level term and a supergravity contribution which is potentially dangerous for sustaining inflation. We show that if the vacuum energy necessary for driving inflation originates dominantly from the F-term of an auxiliary field (i.e. not the inflaton), the supergravity corrections to the Jordan frame scalar potential are generically suppressed. Moreover, these supergravity contributions identically vanish if the superpotential vanishes along the inflationary trajectory. On the other hand, if the F-term associated with the inflaton dominates the vacuum energy, the supergravity contributions are generically comparable to the globally supersymmetric contributions. In addition, the non-minimal coupling to gravity inherent to Jordan frame supergravity significantly impacts the inflationary model depending on the size and sign of this coupling. We discuss the phenomenology of some representative inflationary models, and point out the relation to the recently much discussed cosmological 'attractor' models.

  15. Clustered mixed nonhomogeneous Poisson process spline models for the analysis of recurrent event panel data.

    PubMed

    Nielsen, J D; Dean, C B

    2008-09-01

    A flexible semiparametric model for analyzing longitudinal panel count data arising from mixtures is presented. Panel count data refers here to count data on recurrent events collected as the number of events that have occurred within specific follow-up periods. The model assumes that the counts for each subject are generated by mixtures of nonhomogeneous Poisson processes with smooth intensity functions modeled with penalized splines. Time-dependent covariate effects are also incorporated into the process intensity using splines. Discrete mixtures of these nonhomogeneous Poisson process spline models extract functional information from underlying clusters representing hidden subpopulations. The motivating application is an experiment to test the effectiveness of pheromones in disrupting the mating pattern of the cherry bark tortrix moth. Mature moths arise from hidden, but distinct, subpopulations and monitoring the subpopulation responses was of interest. Within-cluster random effects are used to account for correlation structures and heterogeneity common to this type of data. An estimating equation approach to inference requiring only low moment assumptions is developed and the finite sample properties of the proposed estimating functions are investigated empirically by simulation.
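
    Panel counts of the kind described above record only how many events fell in each follow-up window; for a nonhomogeneous Poisson process, each window count is Poisson with mean equal to the integrated intensity over the window. A small simulation sketch (hypothetical intensity and panel times, and no random effects, unlike the full model):

```python
import numpy as np

rng = np.random.default_rng(2)

def cumulative_intensity(t):
    """Hypothetical smooth cumulative intensity Lambda(t) = 2t + sin(t)."""
    return 2.0 * t + np.sin(t)

# Follow-up windows: counts are observed only between these panel times.
panels = np.array([0.0, 1.0, 2.5, 4.0, 6.0])
window_means = np.diff(cumulative_intensity(panels))   # Poisson mean per window

# 500 independent subjects sharing the same intensity function.
counts = rng.poisson(window_means, size=(500, window_means.size))
```

The paper's model replaces the fixed intensity here with penalized splines and layers cluster-specific mixtures and within-cluster random effects on top of this basic panel-count structure.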

  16. Warm natural inflation

    NASA Astrophysics Data System (ADS)

    Mishra, Hiranmaya; Mohanty, Subhendra; Nautiyal, Akhilesh

    2012-04-01

    In warm inflation models there is the requirement of generating large dissipative couplings of the inflaton with radiation, while at the same time, not de-stabilising the flatness of the inflaton potential due to radiative corrections. One way to achieve this without fine tuning unrelated couplings is by supersymmetry. In this Letter we show that if the inflaton and other light fields are pseudo-Nambu-Goldstone bosons then the radiative corrections to the potential are suppressed and the thermal corrections are small as long as the temperature is below the symmetry breaking scale. In such models it is possible to fulfil the contrary requirements of an inflaton potential which is stable under radiative corrections and the generation of a large dissipative coupling of the inflaton field with other light fields. We construct a warm inflation model which gives the observed CMB-anisotropy amplitude and spectral index where the symmetry breaking is at the GUT scale.

  17. Poisson Growth Mixture Modeling of Intensive Longitudinal Data: An Application to Smoking Cessation Behavior

    ERIC Educational Resources Information Center

    Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David

    2012-01-01

    Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively…

  18. What is the cause of confidence inflation in the Life Events Inventory (LEI) paradigm?

    PubMed

    Von Glahn, Nicholas R; Otani, Hajime; Migita, Mai; Langford, Sara J; Hillard, Erin E

    2012-01-01

    Briefly imagining, paraphrasing, or explaining an event causes people to increase their confidence that this event occurred during childhood: the imagination inflation effect. The mechanisms responsible for the effect were investigated with a new paradigm. In Experiment 1, event familiarity (defined as processing fluency) was varied by asking participants to rate each event once, three times, or five times. No inflation was found, indicating that familiarity does not account for the effect. In Experiment 2, richness of memory representation was manipulated by asking participants to generate zero, three, or six details. Confidence increased from the initial to the final rating in the three- and six-detail conditions, indicating that the effect is based on reality-monitoring errors. However, greater inflation in the three-detail condition than in the six-detail condition indicated that there is a boundary condition. These results were also consistent with an alternative hypothesis, the mental workload hypothesis.

  19. Large Occurrence Patterns of New Zealand Deep Earthquakes: Characterization by Use of a Switching Poisson Model

    NASA Astrophysics Data System (ADS)

    Shaochuan, Lu; Vere-Jones, David

    2011-10-01

    The paper studies the statistical properties of deep earthquakes around North Island, New Zealand. We first evaluate the catalogue coverage and completeness of deep events according to cusum (cumulative sum) statistics and earlier literature. The epicentral, depth, and magnitude distributions of deep earthquakes are then discussed. It is worth noting that strong grouping effects are observed in the epicentral distribution of these deep earthquakes. Also, although the spatial distribution of deep earthquakes does not change, their occurrence frequencies vary from time to time, active in one period, relatively quiescent in another. The depth distribution of deep earthquakes also hardly changes except for events with focal depth less than 100 km. On the basis of spatial concentration we partition deep earthquakes into several groups—the Taupo-Bay of Plenty group, the Taranaki group, and the Cook Strait group. Second-order moment analysis via the two-point correlation function reveals only very small-scale clustering of deep earthquakes, presumably limited to some hot spots only. We also suggest that some models usually used for shallow earthquakes fit deep earthquakes unsatisfactorily. Instead, we propose a switching Poisson model for the occurrence patterns of deep earthquakes. The goodness-of-fit test suggests that the time-varying activity is well characterized by a switching Poisson model. Furthermore, detailed analysis carried out on each deep group by use of switching Poisson models reveals similar time-varying behavior in occurrence frequencies in each group.
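
    A switching Poisson model of the kind proposed above alternates the occurrence rate between an active and a quiescent level. A minimal simulation sketch with illustrative rates (exponential sojourn times in each state) shows the overdispersion such switching produces in yearly counts:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two occurrence rates (events/year) and exponential sojourn times per state.
rate = {0: 2.0, 1: 12.0}      # quiescent vs active (illustrative values)
switch_rate = 0.5             # mean dwell time of 2 years in each state

T, t, state = 500.0, 0.0, 0
events = []
while t < T:
    end = min(t + rng.exponential(1.0 / switch_rate), T)
    k = rng.poisson(rate[state] * (end - t))
    events.extend(np.sort(rng.uniform(t, end, k)))
    t, state = end, 1 - state
events = np.array(events)

yearly = np.histogram(events, bins=np.arange(0.0, T + 1.0))[0]
# A stationary Poisson model would give variance/mean near 1;
# rate switching inflates the index of dispersion well above 1.
dispersion = yearly.var(ddof=1) / yearly.mean()
```

This time-varying activity, active in one period and relatively quiescent in another, is exactly the pattern a goodness-of-fit test against a plain Poisson model would flag.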

  20. Robustness of inflation to inhomogeneous initial conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clough, Katy; Lim, Eugene A.; DiNunno, Brandon S.

    We consider the effects of inhomogeneous initial conditions in both the scalar field profile and the extrinsic curvature on different inflationary models. In particular, we compare the robustness of small field inflation to that of large field inflation, using numerical simulations with Einstein gravity in 3+1 dimensions. We find that small field inflation can fail in the presence of subdominant gradient energies, suggesting that it is much less robust to inhomogeneities than large field inflation, which withstands dominant gradient energies. However, we also show that small field inflation can be successful even if some regions of spacetime start out in the region of the potential that does not support inflation. In the large field case, we confirm previous results that inflation is robust if the inflaton occupies the inflationary part of the potential. Furthermore, we show that increasing initial scalar gradients will not form sufficiently massive inflation-ending black holes if the initial hypersurface is approximately flat. Finally, we consider the large field case with a varying extrinsic curvature K, such that some regions are initially collapsing. We find that this may again lead to local black holes, but overall the spacetime remains inflationary if the spacetime is open, which confirms previous theoretical studies.

  1. Robustness of inflation to inhomogeneous initial conditions

    NASA Astrophysics Data System (ADS)

    Clough, Katy; Lim, Eugene A.; DiNunno, Brandon S.; Fischler, Willy; Flauger, Raphael; Paban, Sonia

    2017-09-01

    We consider the effects of inhomogeneous initial conditions in both the scalar field profile and the extrinsic curvature on different inflationary models. In particular, we compare the robustness of small field inflation to that of large field inflation, using numerical simulations with Einstein gravity in 3+1 dimensions. We find that small field inflation can fail in the presence of subdominant gradient energies, suggesting that it is much less robust to inhomogeneities than large field inflation, which withstands dominant gradient energies. However, we also show that small field inflation can be successful even if some regions of spacetime start out in the region of the potential that does not support inflation. In the large field case, we confirm previous results that inflation is robust if the inflaton occupies the inflationary part of the potential. Furthermore, we show that increasing initial scalar gradients will not form sufficiently massive inflation-ending black holes if the initial hypersurface is approximately flat. Finally, we consider the large field case with a varying extrinsic curvature K, such that some regions are initially collapsing. We find that this may again lead to local black holes, but overall the spacetime remains inflationary if the spacetime is open, which confirms previous theoretical studies.

  2. Survival analysis of clinical mastitis data using a nested frailty Cox model fit as a mixed-effects Poisson model.

    PubMed

    Elghafghuf, Adel; Dufour, Simon; Reyher, Kristen; Dohoo, Ian; Stryhn, Henrik

    2014-12-01

    Mastitis is a complex disease affecting dairy cows and is considered to be the most costly disease of dairy herds. The hazard of mastitis is a function of many factors, both managerial and environmental, making its control a difficult issue to milk producers. Observational studies of clinical mastitis (CM) often generate datasets with a number of characteristics which influence the analysis of those data: the outcome of interest may be the time to occurrence of a case of mastitis, predictors may change over time (time-dependent predictors), the effects of factors may change over time (time-dependent effects), there are usually multiple hierarchical levels, and datasets may be very large. Analysis of such data often requires expansion of the data into the counting-process format, leading to larger datasets, thus complicating the analysis and requiring excessive computing time. In this study, a nested frailty Cox model with time-dependent predictors and effects was applied to Canadian Bovine Mastitis Research Network data in which 10,831 lactations of 8035 cows from 69 herds were followed through lactation until the first occurrence of CM. The model was fit to the data as a Poisson model with nested normally distributed random effects at the cow and herd levels. Risk factors associated with the hazard of CM during the lactation were identified, such as parity, calving season, herd somatic cell score, pasture access, fore-stripping, and proportion of treated cases of CM in a herd. The analysis showed that most of the predictors had a strong effect early in lactation and also demonstrated substantial variation in the baseline hazard among cows and between herds. A small simulation study for a setting similar to the real data was conducted to evaluate the Poisson maximum likelihood estimation approach with both Gaussian quadrature method and Laplace approximation. Further, the performance of the two methods was compared with the performance of a widely used estimation

  3. Climatology of Station Storm Rainfall in the Continental United States: Parameters of the Bartlett-Lewis and Poisson Rectangular Pulses Models

    NASA Technical Reports Server (NTRS)

    Hawk, Kelly Lynn; Eagleson, Peter S.

    1992-01-01

    The parameters of two stochastic models of point rainfall, the Bartlett-Lewis model and the Poisson rectangular pulses model, are estimated for each month of the year from the historical records of hourly precipitation at more than seventy first-order stations in the continental United States. The parameters are presented both in tabular form and as isopleths on maps. The Poisson rectangular pulses parameters are useful in implementing models of the land surface water balance. The Bartlett-Lewis parameters are useful in disaggregating precipitation to a time period shorter than that of existing observations. Information is also included on a floppy disk.
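
    In the Poisson rectangular pulses model, storm origins arrive as a Poisson process and each storm contributes a rectangular pulse with random duration and intensity; the rainfall series is the superposition of active pulses. A sketch with illustrative parameter values (not the estimated station parameters tabulated in the report):

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative parameters: storm arrival rate (1/h), mean duration (h),
# mean intensity (mm/h). The report tabulates such values per station/month.
lam, mean_dur, mean_int = 0.01, 6.0, 2.0
T = 24 * 365                                  # one year of hours

n = rng.poisson(lam * T)                      # number of storm origins
start = rng.uniform(0.0, T, n)                # Poisson arrivals in [0, T)
duration = rng.exponential(mean_dur, n)
intensity = rng.exponential(mean_int, n)

# Hourly rainfall: superpose the rectangular pulses on an hourly grid.
# (A pulse shorter than an hour may fall between grid points; acceptable
# for this illustration.)
hours = np.arange(T)
rain = np.zeros(T)
for s, d, i in zip(start, duration, intensity):
    rain[(hours >= s) & (hours < s + d)] += i
```

The expected annual depth is roughly lam * T * mean_dur * mean_int, which is what ties the fitted parameters to the station's climatological water balance.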

  4. Educational Aspirations: Markov and Poisson Models. Rural Industrial Development Project Working Paper Number 14, August 1971.

    ERIC Educational Resources Information Center

    Kayser, Brian D.

    The fit of educational aspirations of Illinois rural high school youths to 3 related one-parameter mathematical models was investigated. The models used were the continuous-time Markov chain model, the discrete-time Markov chain, and the Poisson distribution. The sample of 635 students responded to questionnaires from 1966 to 1969 as part of an…

  5. Seeing Double with K2: Testing Re-inflation with Two Remarkably Similar Planets around Red Giant Branch Stars

    NASA Astrophysics Data System (ADS)

    Grunblatt, Samuel K.; Huber, Daniel; Gaidos, Eric; Lopez, Eric D.; Howard, Andrew W.; Isaacson, Howard T.; Sinukoff, Evan; Vanderburg, Andrew; Nofi, Larissa; Yu, Jie; North, Thomas S. H.; Chaplin, William; Foreman-Mackey, Daniel; Petigura, Erik; Ansdell, Megan; Weiss, Lauren; Fulton, Benjamin; Lin, Douglas N. C.

    2017-12-01

    Despite more than 20 years since the discovery of the first gas giant planet with an anomalously large radius, the mechanism for planet inflation remains unknown. Here, we report the discovery of K2-132b, an inflated gas giant planet found with the NASA K2 Mission, and a revised mass for another inflated planet, K2-97b. These planets orbit on ≈9 day orbits around host stars that recently evolved into red giants. We constrain the irradiation history of these planets using models constrained by asteroseismology and Keck/High Resolution Echelle Spectrometer spectroscopy and radial velocity measurements. We measure planet radii of 1.31 ± 0.11 R_J and 1.30 ± 0.07 R_J, respectively. These radii are typical for planets receiving the current irradiation, but not the former, zero-age main-sequence irradiation of these planets. This suggests that the current sizes of these planets are directly correlated to their current irradiation. Our precise constraints of the masses and radii of the stars and planets in these systems allow us to constrain the planetary heating efficiency of both systems as 0.03 (+0.03, −0.02)%. These results are consistent with a planet re-inflation scenario, but suggest that the efficiency of planet re-inflation may be lower than previously theorized. Finally, we discuss the agreement within 10% of the stellar masses and radii, and the planet masses, radii, and orbital periods of both systems, and speculate that this may be due to selection bias in searching for planets around evolved stars.

  6. Unidimensional factor models imply weaker partial correlations than zero-order correlations.

    PubMed

    van Bork, Riet; Grasman, Raoul P P P; Waldorp, Lourens J

    2018-06-01

    In this paper we present a new implication of the unidimensional factor model. We prove that the partial correlation between two observed variables that load on one factor given any subset of other observed variables that load on this factor lies between zero and the zero-order correlation between these two observed variables. We implement this result in an empirical bootstrap test that rejects the unidimensional factor model when partial correlations are identified that are either stronger than the zero-order correlation or have a different sign than the zero-order correlation. We demonstrate the use of the test in an empirical data example with data consisting of fourteen items that measure extraversion.
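
    The implication proved above can be checked numerically at the population level: build a one-factor covariance matrix, then compare the zero-order correlation with the partial correlation given the remaining indicators. The loadings below are illustrative, not from the extraversion data:

```python
import numpy as np

# One-factor model: Sigma = l l' + Psi, with unit-variance indicators.
loadings = np.array([0.8, 0.7, 0.6, 0.5])          # illustrative loadings
psi = 1.0 - loadings**2                             # unique variances
sigma = np.outer(loadings, loadings) + np.diag(psi)

zero_order = sigma[0, 1]                            # corr(x1, x2) = 0.8 * 0.7

# Partial correlation of x1, x2 given the remaining indicators,
# read off from the precision matrix.
prec = np.linalg.inv(sigma)
partial = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

# Per the paper's result, partial lies between 0 and zero_order; a sample
# estimate outside that interval is evidence against unidimensionality.
```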

  7. Rapid roll inflation with conformal coupling

    NASA Astrophysics Data System (ADS)

    Kofman, Lev; Mukohyama, Shinji

    2008-02-01

    Usual inflation is realized with a slow rolling scalar field minimally coupled to gravity. In contrast, we consider dynamics of a scalar with a flat effective potential, conformally coupled to gravity. Surprisingly, it contains an attractor inflationary solution with the rapidly rolling inflaton field. We discuss models with the conformal inflaton with a flat potential (including hybrid inflation). There is no generation of cosmological fluctuations from the conformally coupled inflaton. We consider realizations of modulated (inhomogeneous reheating) or curvaton cosmological fluctuations in these models. We also implement these unusual features for the popular string-theoretic warped inflationary scenario, based on the interacting D3-anti-D3 branes. The original warped brane inflation suffers a large inflaton mass due to conformal coupling to 4-dimensional gravity. Instead of considering this as a problem and trying to cure it with extra engineering, we show that warped inflation with the conformally coupled, rapidly rolling inflaton is still possible with N=37 e-foldings, which requires a low energy scale of inflation, 1-100 TeV. Coincidentally, the same warping numerology can be responsible for the hierarchy. It is shown that the scalars associated with angular isometries of the warped geometry of the compact manifold (e.g. S3 of Klebanov-Strassler (KS) geometry) have solutions identical to conformally coupled modes and also cannot be responsible for cosmological fluctuations. We discuss other possibilities.

  8. Poisson Regression Analysis of Illness and Injury Surveillance Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frome E.L., Watkins J.P., Ellis E.D.

    2012-12-12

    The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational database and are used to obtain stratified tables of health event counts and person time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in tabular and graphical form, and interpretation of model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine if interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. In the second example the score test indicates considerable over-dispersion and a more detailed analysis attributes the over-dispersion to
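The workflow described in this abstract (a log-linear main-effects Poisson model with a person-time offset, followed by a dispersion check) can be sketched as follows. The data are hypothetical, and the small IRLS fitting routine is a generic stand-in for the authors' tooling:

```python
import numpy as np

# Hypothetical stratified table: absence counts and person-years at risk,
# classified by age group and gender (illustrative numbers, not DOE data).
events = np.array([4., 7., 12., 20., 3., 6., 10., 18.])
pyears = np.array([1000., 950., 900., 800., 1100., 1050., 980., 850.])
age    = np.array([0., 1., 2., 3., 0., 1., 2., 3.])
female = np.array([0., 0., 0., 0., 1., 1., 1., 1.])

X = np.column_stack([np.ones_like(age), age, female])  # main-effects design
offset = np.log(pyears)                                # person-time offset

def fit_poisson(X, y, offset, n_iter=50):
    """Log-linear Poisson regression, log E[y] = offset + X @ beta, via IRLS."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(offset + X @ beta)
        z = X @ beta + (y - mu) / mu          # working response (offset removed)
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta

beta = fit_poisson(X, events, offset)
mu = np.exp(offset + X @ beta)

# Residual deviance and a Pearson dispersion estimate: values of the
# Pearson chi-square divided by residual df well above 1 suggest
# extra-Poisson variation, as discussed in the abstract.
deviance = 2 * np.sum(events * np.log(events / mu) - (events - mu))
dispersion = np.sum((events - mu) ** 2 / mu) / (len(events) - X.shape[1])
```

With an intercept in the model, the fitted means reproduce the total event count, which is a quick sanity check on convergence.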

  9. Inflation in random Gaussian landscapes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masoumi, Ali; Vilenkin, Alexander; Yamada, Masaki, E-mail: ali@cosmos.phy.tufts.edu, E-mail: vilenkin@cosmos.phy.tufts.edu, E-mail: Masaki.Yamada@tufts.edu

    2017-05-01

    We develop analytic and numerical techniques for studying the statistics of slow-roll inflation in random Gaussian landscapes. As an illustration of these techniques, we analyze small-field inflation in a one-dimensional landscape. We calculate the probability distributions for the maximal number of e-folds and for the spectral index of density fluctuations n_s and its running α_s. These distributions have a universal form, insensitive to the correlation function of the Gaussian ensemble. We outline possible extensions of our methods to a large number of fields and to models of large-field inflation. These methods do not suffer from potential inconsistencies inherent in the Brownian motion technique, which has been used in most of the earlier treatments.
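A brute-force version of the kind of computation described here can be sketched in a few lines: draw a one-dimensional Gaussian random potential and evaluate the slow-roll e-folds along a rolling interval. This is illustrative only; the mode distribution, units, and integration choices below are arbitrary assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(7)

def gaussian_landscape(n_modes=64):
    """One realization of a smooth one-dimensional Gaussian random potential,
    built as a random Fourier series (a simple way to realize a Gaussian
    ensemble; the mode frequency distribution is an arbitrary choice)."""
    k = rng.normal(1.0, 0.3, n_modes)
    a = rng.standard_normal(n_modes) / np.sqrt(n_modes)
    b = rng.standard_normal(n_modes) / np.sqrt(n_modes)
    def V(phi):
        ph = np.outer(np.atleast_1d(phi), k)   # (points, modes)
        return np.cos(ph) @ a + np.sin(ph) @ b
    return V

def slow_roll_efolds(V, phi_start, phi_end, n=4000):
    """Slow-roll e-folds N = |integral of (V / V') dphi| in reduced Planck
    units, via finite differences and the trapezoid rule."""
    phi = np.linspace(phi_start, phi_end, n)
    v = V(phi)
    dv = np.gradient(v, phi)
    f = v / dv
    return abs(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(phi)))
```

Repeating this over many realizations gives an empirical distribution of e-fold counts; the analytic treatment in the paper replaces such sampling.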

  10. London equation for monodromy inflation

    NASA Astrophysics Data System (ADS)

    Kaloper, Nemanja; Lawrence, Albion

    2017-03-01

    We focus on the massive gauge theory formulation of axion monodromy inflation. We argue that a gauge symmetry hidden in these models is the key mechanism protecting inflation from dangerous field theory and quantum gravity corrections. The effective theory of large-field inflation is dual to a massive U(1) 4-form gauge theory, which is similar to a massive gauge theory description of superconductivity. The gauge theory explicitly realizes the old Julia-Toulouse proposal for a low-energy description of a gauge theory in a defect condensate. While we work mostly with the example of a quadratic axion potential induced by flux monodromy, we discuss how other types of potentials can arise from the inclusion of gauge-invariant corrections to the theory.

  11. Bispectrum from open inflation

    NASA Astrophysics Data System (ADS)

    Sugimura, Kazuyuki; Komatsu, Eiichiro

    2013-11-01

    We calculate the bispectrum of primordial curvature perturbations, ζ, generated during "open inflation." Inflation occurs inside a bubble nucleated via quantum tunneling from the background false vacuum state. Our universe lives inside the bubble, which can be described as a Friedmann-Lemaître-Robertson-Walker (FLRW) universe with negative spatial curvature, undergoing slow-roll inflation. We pay special attention to the issue of an initial state for quantum fluctuations. A "vacuum state" defined by a positive-frequency mode in de Sitter space charted by open coordinates is different from the Euclidean vacuum (which is equivalent to the so-called "Bunch-Davies vacuum" defined by a positive-frequency mode in de Sitter space charted by flat coordinates). Quantum tunneling (bubble nucleation) then modifies the initial state away from the original Euclidean vacuum. While most previous studies of modifications of the initial quantum state introduce, by hand, an initial time at which the quantum state is modified as well as the form of the modification, in open inflation models an effective initial time emerges naturally and the form of the modification is fixed by quantum tunneling. Therefore, open inflation enables a self-consistent computation of the effect of a modified initial state on the bispectrum. We find a term which goes as ⟨ζ_k1 ζ_k2 ζ_k3⟩ ∝ 1/(k1² k3⁴) in the so-called squeezed configurations, k3 << k1 ≈ k2, in agreement with previous studies of modifications of the initial state. The bispectrum in the exact folded limit, e.g., k1 = k2 + k3, is also enhanced and remains finite. However, these terms are exponentially suppressed when the wavelength of ζ is smaller than the curvature radius of the universe. The leading-order bispectrum is equal to the usual one from single-field slow-roll inflation; the terms specific to open inflation arise only at sub-leading order when the wavelength of ζ is smaller than the curvature radius.

  12. DL_MG: A Parallel Multigrid Poisson and Poisson-Boltzmann Solver for Electronic Structure Calculations in Vacuum and Solution.

    PubMed

    Womack, James C; Anton, Lucian; Dziedzic, Jacek; Hasnip, Phil J; Probert, Matt I J; Skylaris, Chris-Kriton

    2018-03-13

    The solution of the Poisson equation is a crucial step in electronic structure calculations, yielding the electrostatic potential, a key component of the quantum mechanical Hamiltonian. In recent decades, theoretical advances and increases in computer performance have made it possible to simulate the electronic structure of extended systems in complex environments. This requires the solution of more complicated variants of the Poisson equation, featuring nonhomogeneous dielectric permittivities, ionic concentrations with nonlinear dependencies, and diverse boundary conditions. The analytic solutions generally used to solve the Poisson equation in vacuum (or with homogeneous permittivity) are not applicable in these circumstances, and numerical methods must be used. In this work, we present DL_MG, a flexible, scalable, and accurate solver library, developed specifically to tackle the challenges of solving the Poisson equation in modern large-scale electronic structure calculations on parallel computers. Our solver is based on the multigrid approach and uses an iterative high-order defect correction method to improve the accuracy of solutions. Using two chemically relevant model systems, we tested the accuracy and computational performance of DL_MG when solving the generalized Poisson and Poisson-Boltzmann equations, demonstrating excellent agreement with analytic solutions and efficient scaling to ∼10⁹ unknowns and hundreds of CPU cores. We also applied DL_MG in actual large-scale electronic structure calculations, using the ONETEP linear-scaling electronic structure package to study a 2615-atom protein-ligand complex with routinely available computational resources. In these calculations, the overall execution time with DL_MG was not significantly greater than the time required for calculations using a conventional FFT-based solver.
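The high-order defect correction mentioned above can be illustrated in one dimension: solve with a cheap second-order discretization, then iteratively correct the residual ("defect") measured with a fourth-order stencil. This is a sketch of the principle only; DL_MG applies it in 3D with a multigrid inner solver, and the function names here are illustrative:

```python
import numpy as np

def solve_second_order(rho, h):
    """Direct solve of -u'' = rho with homogeneous Dirichlet boundaries,
    using the standard second-order three-point stencil."""
    n = len(rho)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return np.linalg.solve(A, rho)

def apply_fourth_order(u, h):
    """Fourth-order five-point stencil for -u'' on the interior grid, using
    an odd extension across the homogeneous Dirichlet boundaries for ghost
    values."""
    up = np.concatenate([[-u[0], 0.0], u, [0.0, -u[-1]]])
    return (up[:-4] - 16 * up[1:-3] + 30 * up[2:-2]
            - 16 * up[3:-1] + up[4:]) / (12 * h**2)

def defect_correction(rho, h, n_iter=5):
    """High-order solution via defect correction: each cheap low-order
    solve reduces the residual of the high-order operator."""
    u = solve_second_order(rho, h)
    for _ in range(n_iter):
        defect = rho - apply_fourth_order(u, h)   # high-order residual
        u = u + solve_second_order(defect, h)     # low-order correction
    return u
```

On a manufactured solution the corrected answer is far more accurate than the plain second-order solve, which is the point of the method.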

  13. G-warm inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herrera, Ramón, E-mail: ramon.herrera@pucv.cl

    A warm inflationary universe in the context of the Galileon model, or G-model, is studied. Under a general formalism we study the inflationary dynamics and the cosmological perturbations, considering a coupling of the form G(φ, X) = g(φ)X. As a concrete example, we consider an exponential potential together with cases in which the dissipation and Galilean coefficients are constants. Also, we study the weak regime, given by the condition R < 1 + 3gHφ̇, and the strong regime, in which 1 < R + 3gHφ̇. Additionally, we obtain constraints on the parameters during the evolution of G-warm inflation, assuming the condition for warm inflation in which the temperature T > H, the conditions for the weak and strong regimes, and the consistency relation r = r(n_s) from Planck data.

  14. Distinguishing between extra natural inflation and natural inflation after BICEP2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kohri, Kazunori; Lim, C.S.; Lin, Chia-Min, E-mail: kohri@post.kek.jp, E-mail: lim@lab.twcu.ac.jp, E-mail: lin@chuo-u.ac.jp

    2014-08-01

    In this paper, we carefully calculate the tensor-to-scalar ratio, the running spectral index, and the running of the running of the spectrum for (extra) natural inflation, in order to compare with recent BICEP2 data, Planck satellite data, and future 21 cm data. We find that the predictions for the running spectral index and the running of the running differ between natural inflation and extra natural inflation. Near-future observations of the running spectral index will provide only marginal accuracy, which may not allow us to clearly distinguish extra natural inflation from natural inflation unless the experimental accuracy can be further improved.

  15. Dynamic Characterization of an Inflatable Concentrator for Solar Thermal Propulsion

    NASA Technical Reports Server (NTRS)

    Leigh, Larry; Hamidzadeh, Hamid; Tinker, Michael L.; Rodriguez, Pedro I. (Technical Monitor)

    2001-01-01

    An inflatable structural system that is a technology demonstrator for solar thermal propulsion and other applications is characterized for structural dynamic behavior both experimentally and computationally. The inflatable structure is a pressurized assembly developed for use in orbit to support a Fresnel lens or inflatable lenticular element for focusing sunlight into a solar thermal rocket engine. When the engine temperature reaches a pre-set level, the propellant is injected into the engine, absorbs heat from an exchanger, and is expanded through the nozzle to produce thrust. The inflatable structure is a passively adaptive system in that a regulator and relief valve are utilized to maintain pressure within design limits during the full range of orbital conditions. Modeling and test activities are complicated by the fact that the polyimide film material used for construction of the inflatable is nonlinear, with modulus varying as a function of frequency, temperature, and level of excitation. Modal vibration testing and finite element modeling are described in detail in this paper. The test database is used for validation and modification of the model. This work is highly significant because of the current interest in inflatable structures for space application, and because of the difficulty in accurately modeling such systems.

  16. Towards an explicit model of large field inflation

    NASA Astrophysics Data System (ADS)

    Dorronsoro, Juan Diaz; Schillo, Marjorie

    2018-05-01

    The unwinding inflation mechanism is studied in a type IIB flux compactification where all moduli are stabilized using flux, non-perturbative effects, and the leading α' corrections of the large volume scenario. We consider the backreaction on the geometry due to the presence of anti-D3 branes as well as the backreaction of inflation on the Kähler moduli, and compute the resulting corrections to the slow-roll potential. By taking large flux numbers, we are able to find inflationary epochs where backreaction effects are under control, the inflaton traverses a super-Planckian field range, and the resulting amplitude of scalar perturbations is consistent with observation.

  17. The Other Inflation

    ERIC Educational Resources Information Center

    Aristides

    1976-01-01

    The other inflation is grade inflation, the label affixed to the indisputable rise in the grade-point averages of undergraduates at public and private, elite and community colleges and universities across the country. The effects of grade inflation upon academic performance were assessed. (Author/RK)

  18. First observational tests of eternal inflation.

    PubMed

    Feeney, Stephen M; Johnson, Matthew C; Mortlock, Daniel J; Peiris, Hiranya V

    2011-08-12

    The eternal inflation scenario predicts that our observable Universe resides inside a single bubble embedded in a vast inflating multiverse. We present the first observational tests of eternal inflation, performing a search for cosmological signatures of collisions with other bubble universes in cosmic microwave background data from the WMAP satellite. We conclude that the WMAP 7-year data do not warrant augmenting the ΛCDM model (cold dark matter with a cosmological constant) with bubble collisions, constraining the average number of detectable bubble collisions on the full sky to N_s < 1.6 at 68% C.L. Data from the Planck satellite can be used to test the bubble-collision hypothesis more definitively.

  19. Inflatable Vessel and Method

    NASA Technical Reports Server (NTRS)

    Raboin, Jasen L. (Inventor); Valle, Gerard D. (Inventor); Edeen, Gregg A. (Inventor); delaFuente, Horacio M. (Inventor); Schneider, William C. (Inventor); Spexarth, Gary R. (Inventor); Pandya, Shalini Gupta (Inventor); Johnson, Christopher J. (Inventor)

    2003-01-01

    An inflatable module comprising a structural core and an inflatable shell, wherein the inflatable shell is sealingly attached to the structural core. In its launch, or pre-deployed, configuration, the wall thickness of the inflatable shell is collapsed by vacuum. Also in this configuration, the inflatable shell is collapsed and efficiently folded around the structural core. Upon deployment, the wall thickness of the inflatable shell is inflated, whereby the inflatable shell itself is inflated around the structural core, defining therein a large enclosed volume. A plurality of removable shelves are arranged interior to the structural core in the launch configuration. The structural core also includes at least one longeron that, in conjunction with the shelves, primarily constitutes the rigid, strong, and lightweight load-bearing structure of the module during launch. The removable shelves are detachable from their arrangement in the launch configuration so that, when the module is in its deployed configuration and launch loads no longer exist, the shelves can be rearranged to provide a module interior arrangement suitable for human habitation and work. In the preferred embodiment, to provide efficiency in structural load paths and attachments, the shape of the inflatable shell is a cylinder with semi-toroidal ends.

  20. Ion flux through membrane channels--an enhanced algorithm for the Poisson-Nernst-Planck model.

    PubMed

    Dyrka, Witold; Augousti, Andy T; Kotulska, Malgorzata

    2008-09-01

    A novel algorithmic scheme for the numerical solution of the 3D Poisson-Nernst-Planck model is proposed. The algorithmic improvements are universal and independent of the detailed physical model. They include three major steps: an adjustable gradient-based step value, an adjustable relaxation coefficient, and an optimized segmentation of the modeled space. The enhanced algorithm significantly accelerates the speed of computation and reduces the computational demands. The theoretical model was tested on a regular artificial channel and validated on a real protein channel, alpha-hemolysin, proving its efficiency.
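One of the algorithmic ideas listed above, an adjustable relaxation coefficient, can be sketched generically for a damped fixed-point iteration. The update rule (grow the coefficient while the residual shrinks, cut it when the residual grows) is illustrative; the paper's scheme for the full 3D Poisson-Nernst-Planck system is more elaborate:

```python
import numpy as np

def adaptive_relaxation(g, x0, omega=0.5, tol=1e-10, max_iter=500):
    """Solve x = g(x) by damped iteration x <- (1 - w) x + w g(x), adapting
    the relaxation coefficient w to the residual history: relax less
    cautiously while converging, damp harder when the residual grows."""
    x = np.asarray(x0, dtype=float)
    res_prev = np.inf
    for _ in range(max_iter):
        gx = g(x)
        res = np.linalg.norm(gx - x)
        if res < tol:
            break
        if res < res_prev:
            omega = min(1.0, omega * 1.1)   # converging: increase w
        else:
            omega = max(1e-3, omega * 0.5)  # residual grew: cut w
        x = (1 - omega) * x + omega * gx
        res_prev = res
    return x
```

The same pattern applies whether g is a scalar map or one self-consistency sweep of a discretized nonlinear PDE.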