A test of inflated zeros for Poisson regression models.
He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan
2017-01-01
Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require fitting a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach not only better controls the type I error rate but also yields more power.
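As a side illustration of the excess-zeros phenomenon this abstract discusses (a minimal sketch, not the authors' test statistic), the zero-inflated Poisson mass function mixes a point mass at zero with an ordinary Poisson component, so the probability of a zero is inflated above the plain Poisson value. The parameter values below are chosen arbitrarily for illustration:

```python
import math

def poisson_pmf(y, lam):
    """Standard Poisson probability mass function."""
    return math.exp(-lam) * lam**y / math.factorial(y)

def zip_pmf(y, lam, pi):
    """Zero-inflated Poisson: a point mass at zero (weight pi)
    mixed with a Poisson(lam) component (weight 1 - pi)."""
    base = (1.0 - pi) * poisson_pmf(y, lam)
    return pi + base if y == 0 else base

lam, pi = 2.0, 0.3
print(poisson_pmf(0, lam))  # zero probability under the plain Poisson
print(zip_pmf(0, lam, pi))  # strictly larger under the ZIP mixture
```

For any pi > 0 the ZIP zero probability equals pi + (1 - pi)e^(-lam), which exceeds the Poisson value e^(-lam); testing whether pi = 0 is exactly the "inflated zeros" question the paper addresses.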
Application of zero-inflated Poisson mixed models in prognostic factors of hepatitis C.
Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed
2013-01-01
In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the approaches that help protect people from infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. To evaluate the performance of the proposed mixed model, the standard errors of the estimators were compared. The results from the mixed PR model showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors. The results showed that the mixed zero-inflated Poisson model provided the better fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
Zero-inflated Conway-Maxwell Poisson Distribution to Analyze Discrete Data.
Sim, Shin Zhu; Gupta, Ramesh C; Ong, Seng Huat
2018-01-09
In this paper, we study the zero-inflated Conway-Maxwell Poisson (ZICMP) distribution and develop a regression model. Score and likelihood ratio tests are also implemented for testing the inflation/deflation parameter. Simulation studies are carried out to examine the performance of these tests. A data example is presented to illustrate the concepts. In this example, the proposed model is compared to the well-known zero-inflated Poisson (ZIP) and zero-inflated generalized Poisson (ZIGP) regression models. It is shown that the fit of the ZICMP model is comparable to or better than that of these models.
Modeling health survey data with excessive zero and K responses.
Lin, Ting Hsiang; Tsai, Min-Hsiao
2013-04-30
Zero-inflated Poisson regression is a popular tool for analyzing data with excessive zeros. Although much work has already been done on fitting zero-inflated data, most models depend heavily on special features of the individual data; specifically, a sizable group of respondents endorse the same answers, giving the data peaks. In this paper, we propose a new model with the flexibility to model excessive counts other than zero. The model is a mixture of multinomial logistic and Poisson regression, in which the multinomial logistic component models the occurrence of excessive counts, including zeros, K (where K is a positive integer), and all other values, while the Poisson regression component models the counts that are assumed to follow a Poisson distribution. Two examples illustrate the models on data whose counts contain many ones and sixes. As a result, the zero-inflated and K-inflated models exhibit a better fit than the zero-inflated Poisson and standard Poisson regressions. Copyright © 2012 John Wiley & Sons, Ltd.
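A bare-bones sketch of the zero-and-K-inflated mass function underlying this abstract (without the covariate-driven multinomial logistic part the authors use; the mixture weights here are fixed constants for illustration):

```python
import math

def poisson_pmf(y, lam):
    """Standard Poisson probability mass function."""
    return math.exp(-lam) * lam**y / math.factorial(y)

def zero_k_inflated_pmf(y, lam, p0, pK, K):
    """Three-component mixture: extra mass p0 at zero, extra mass pK
    at the inflated count K, and a Poisson(lam) component carrying
    the remaining weight 1 - p0 - pK."""
    p = (1.0 - p0 - pK) * poisson_pmf(y, lam)
    if y == 0:
        p += p0
    if y == K:
        p += pK
    return p

# Example: inflate zeros and sixes, as in the article's second data example.
print(zero_k_inflated_pmf(6, lam=2.0, p0=0.2, pK=0.1, K=6))
```

In the authors' full model, p0 and pK are themselves modeled through a multinomial logistic regression on covariates rather than held fixed.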
Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.
Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram
2017-02-01
In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of these are "true zeros," indicating that the drug-adverse event pair cannot occur; they are distinguished from the other zero counts, which are modeled zeros and simply indicate that the drug-adverse event pair has not occurred, or has not been reported, yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs with disproportionately high reporting rates, also called signals. The maximum likelihood estimates of the model parameters are obtained using the expectation-maximization algorithm. The test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test performs similarly to the Poisson model based likelihood ratio test when the estimated percentage of true zeros in the database is small. Both methods are applied to six selected drugs, with varying percentages of observed zero-count cells, from the 2006 to 2011 Adverse Event Reporting System database.
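The expectation-maximization step mentioned in this abstract has a standard form for the plain two-parameter ZIP model. The sketch below (simulated data, no covariates or stratification, so not the authors' full method) shows the E-step posterior for structural zeros and the M-step parameter updates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate ZIP data: a structural zero with probability pi_true,
# otherwise a Poisson(lam_true) draw.
n, pi_true, lam_true = 5000, 0.3, 2.0
structural = rng.random(n) < pi_true
y = np.where(structural, 0, rng.poisson(lam_true, size=n))

# EM for the two-parameter ZIP model.
pi_hat, lam_hat = 0.5, float(y.mean())
for _ in range(200):
    # E-step: posterior probability that an observed zero is structural.
    p0 = pi_hat / (pi_hat + (1.0 - pi_hat) * np.exp(-lam_hat))
    z = np.where(y == 0, p0, 0.0)
    # M-step: update the mixing weight and the Poisson mean.
    pi_hat = float(z.mean())
    lam_hat = float(y.sum() / (n - z.sum()))

print(round(pi_hat, 3), round(lam_hat, 3))
```

With only zeros carrying posterior mass for the structural component, the M-step for the Poisson mean divides the total count by the effective number of non-structural observations.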
Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach.
Mohammadi, Tayeb; Kheiri, Soleiman; Sedehi, Morteza
2016-01-01
Recognizing the factors affecting the number of blood donations and blood deferrals has a major impact on blood transfusion. There is a positive correlation between the variables "number of blood donations" and "number of blood deferrals": as the number of returns for donation increases, so does the number of deferrals. On the other hand, because many donors never return to donate, there is an extra zero frequency for both variables. In this study, in order to accommodate the correlation and explain the excessive zero frequency, a bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donations and the number of blood deferrals. The data were analyzed using a Bayesian approach with noninformative priors, in the presence and absence of covariates. The model parameters, that is, the correlation, zero-inflation parameter, and regression coefficients, were estimated through MCMC simulation. Finally, the double-Poisson, bivariate Poisson, and bivariate zero-inflated Poisson models were fitted to the data and compared using the deviance information criterion (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models.
Distribution-free Inference of Zero-inflated Binomial Data for Longitudinal Studies.
He, H; Wang, W J; Hu, J; Gallop, R; Crits-Christoph, P; Xia, Y L
2015-10-01
Count responses with structural zeros are very common in medical and psychosocial research, especially in alcohol and HIV research, and the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models are widely used for modeling such outcomes. However, because alcohol drinking outcomes such as days of drinking are counts within a given period, their distributions are bounded above by an upper limit (the total days in the period) and thus inherently follow a binomial or, in the presence of structural zeros, zero-inflated binomial (ZIB) distribution, rather than a Poisson or zero-inflated Poisson (ZIP) distribution. In this paper, we develop a new semiparametric approach for modeling ZIB-like count responses for cross-sectional as well as longitudinal data. We illustrate this approach with both simulated and real study data.
Silva, Fabyano Fonseca; Tunin, Karen P.; Rosa, Guilherme J.M.; da Silva, Marcos V.B.; Azevedo, Ana Luisa Souza; da Silva Verneque, Rui; Machado, Marco Antonio; Packer, Irineu Umberto
2011-01-01
Nowadays, an important and interesting alternative in the control of tick infestation in cattle is to select resistant animals and identify the respective quantitative trait loci (QTLs) and DNA markers for later use in breeding programs. The number of ticks per animal is a discrete count trait that could potentially follow a Poisson distribution. However, in the case of an excess of zeros, due to the occurrence of many noninfected animals, zero-inflated Poisson (ZIP) and generalized zero-inflated Poisson (GZIP) distributions may provide a better description of the data. Thus, the objective here was to compare, through simulation, Poisson and ZIP models (simple and generalized) with classical approaches for QTL mapping with count phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr × Holstein) population. It was concluded that, when working with zero-inflated data, it is advisable to use the generalized or simple ZIP model for analysis. On the other hand, when working with data that contain zeros but are not zero-inflated, the Poisson model or a data transformation approach, such as the square-root or Box-Cox transformation, is applicable. PMID:22215960
Modeling Zero-Inflated and Overdispersed Count Data: An Empirical Study of School Suspensions
ERIC Educational Resources Information Center
Desjardins, Christopher David
2016-01-01
The purpose of this article is to develop a statistical model that best explains variability in the number of school days suspended. Number of school days suspended is a count variable that may be zero-inflated and overdispersed relative to a Poisson model. Four models were examined: Poisson, negative binomial, Poisson hurdle, and negative…
IRT-ZIP Modeling for Multivariate Zero-Inflated Count Data
ERIC Educational Resources Information Center
Wang, Lijuan
2010-01-01
This study introduces an item response theory-zero-inflated Poisson (IRT-ZIP) model to investigate psychometric properties of multiple items and predict individuals' latent trait scores for multivariate zero-inflated count data. In the model, two link functions are used to capture two processes of the zero-inflated count data. Item parameters are…
Marginalized zero-inflated Poisson models with missing covariates.
Benecha, Habtamu K; Preisser, John S; Divaris, Kimon; Herring, Amy H; Das, Kalyan
2018-05-11
Unlike zero-inflated Poisson regression, marginalized zero-inflated Poisson (MZIP) models for counts with excess zeros provide estimates with direct interpretations for the overall effects of covariates on the marginal mean. In the presence of missing covariates, MZIP and many other count data models are ordinarily fitted using complete case analysis methods due to lack of appropriate statistical methods and software. This article presents an estimation method for MZIP models with missing covariates. The method, which is applicable to other missing data problems, is illustrated and compared with complete case analysis by using simulations and dental data on the caries preventive effects of a school-based fluoride mouthrinse program. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Zero adjusted models with applications to analysing helminths count data.
Chipeta, Michael G; Ngwira, Bagrey M; Simoonga, Christopher; Kazembe, Lawrence N
2014-11-27
It is common in public health and epidemiology that the outcome of interest is a count of event occurrences. Analyzing these data with classical linear models is mostly inappropriate, even after transformation of the outcome variable, due to overdispersion. Zero-adjusted mixture count models, such as zero-inflated and hurdle count models, are applied to count data when overdispersion and excess zeros exist. The main objective of the current paper is to apply such models to analyze risk factors associated with human helminths (S. haematobium), particularly in a case where there is a high proportion of zero counts. The data were collected during a community-based randomised control trial assessing the impact of mass drug administration (MDA) with praziquantel in Malawi, and a school-based cross-sectional epidemiology survey in Zambia. Count data models, including traditional models (Poisson and negative binomial), zero-modified models (zero-inflated Poisson and zero-inflated negative binomial) and hurdle models (Poisson logit hurdle and negative binomial logit hurdle), were fitted and compared. Using the Akaike information criterion (AIC), the negative binomial logit hurdle (NBLH) and zero-inflated negative binomial (ZINB) models showed the best performance in both datasets, and they also captured zero counts better than the other models. This paper showed that zero-modified NBLH and ZINB models are more appropriate methods for the analysis of data with excess zeros. The choice between hurdle and zero-inflated models should be based on the aim and endpoints of the study.
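The hurdle versus zero-inflated choice discussed in this abstract comes down to how zeros are generated. A small sketch of the two mass functions (illustrative parameter values, Poisson components rather than the negative binomial ones the paper favors) makes the structural contrast concrete:

```python
import math

def poisson_pmf(y, lam):
    """Standard Poisson probability mass function."""
    return math.exp(-lam) * lam**y / math.factorial(y)

def zip_pmf(y, lam, pi):
    """Zero-inflated: zeros arise from both the point mass and the
    Poisson component."""
    base = (1.0 - pi) * poisson_pmf(y, lam)
    return pi + base if y == 0 else base

def hurdle_pmf(y, lam, p0):
    """Hurdle: all zeros come from the hurdle part; positive counts
    follow a zero-truncated Poisson."""
    if y == 0:
        return p0
    return (1.0 - p0) * poisson_pmf(y, lam) / (1.0 - math.exp(-lam))

# If the hurdle's zero probability is set equal to the ZIP's total
# zero probability, the two models coincide distributionally; they
# differ in interpretation (one vs. two sources of zeros) and in how
# covariates enter each part.
lam, pi = 2.0, 0.3
p0 = zip_pmf(0, lam, pi)
```

This equivalence at matched parameters is why, as the abstract notes, the choice between hurdle and zero-inflated formulations should rest on the study's aims and endpoints rather than fit alone.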
Sileshi, G
2006-10-01
Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for a formal appraisal of variability, which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, negative binomial and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz's Bayesian information criteria were used to compare the various models. Over 50% of the counts were zeros, even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model corrected for overdispersion and the standard negative binomial model provided a better description of the probability distribution for seven of the 11 insect groups than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common phenomena in insect count data. If not properly modelled, these properties can invalidate normal distribution assumptions, resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
Jiang, Honghua; Ni, Xiao; Huster, William; Heilmann, Cory
2015-01-01
Hypoglycemia has long been recognized as a major barrier to achieving normoglycemia with intensive diabetic therapies. It is a common safety concern for diabetes patients. Therefore, it is important to apply appropriate statistical methods when analyzing hypoglycemia data. Here, we carried out bootstrap simulations to investigate the performance of four commonly used statistical models (Poisson, negative binomial, analysis of covariance [ANCOVA], and rank ANCOVA) based on data from a diabetes clinical trial. The zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models were also evaluated. Simulation results showed that the Poisson model inflated the type I error, while the negative binomial model was overly conservative. However, after adjusting for dispersion, both the Poisson and negative binomial models yielded slightly inflated type I errors that were close to the nominal level, and reasonable power. The ANCOVA model controlled the type I error reasonably well. The rank ANCOVA model was associated with the greatest power and reasonable control of the type I error. Inflated type I errors were observed with the ZIP and ZINB models.
Aguero-Valverde, Jonathan
2013-01-01
In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero-inflated models. This research compares random effects, zero-inflated, and zero-inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of the precision of posterior crash frequency estimates, since the precision of these estimates is vital for ranking sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset analyzed, it was found that once random effects are included in the zero-inflated models, the probability of being in the zero state is drastically reduced, and the zero-inflated models degenerate to their non-zero-inflated counterparts. Also, fixing the random effects over time significantly increases the fit of the models and the precision of the crash frequency estimates. It was found that the rankings of the fixed-over-time random effects models are very consistent among them. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments at the top of the ranking. Copyright © 2012 Elsevier Ltd. All rights reserved.
Modeling number of claims and prediction of total claim amount
NASA Astrophysics Data System (ADS)
Acar, Aslıhan Şentürk; Karabey, Uǧur
2017-07-01
In this study we focus on annual number of claims of a private health insurance data set which belongs to a local insurance company in Turkey. In addition to Poisson model and negative binomial model, zero-inflated Poisson model and zero-inflated negative binomial model are used to model the number of claims in order to take into account excess zeros. To investigate the impact of different distributional assumptions for the number of claims on the prediction of total claim amount, predictive performances of candidate models are compared by using root mean square error (RMSE) and mean absolute error (MAE) criteria.
A review on models for count data with extra zeros
NASA Astrophysics Data System (ADS)
Zamri, Nik Sarah Nik; Zamzuri, Zamira Hasanah
2017-04-01
Zero-inflated models are typically used to model count data with excess zeros. The extra zeros may be structural zeros or random zeros that occur by chance. Such data are commonly found in various disciplines, including finance, insurance, biomedicine, econometrics, ecology, and the health sciences. As found in the literature, the most popular zero-inflated models are the zero-inflated Poisson and zero-inflated negative binomial. Recently, more complex models have been developed to account for overdispersion and unobserved heterogeneity, and more extended distributions have also been considered for modelling data with this feature. In this paper, we review the related literature and provide a summary of recent developments in models for count data with extra zeros.
Lee, J-H; Han, G; Fulp, W J; Giuliano, A R
2012-06-01
The Poisson model can be applied to the count of events occurring within a specific time period. The main feature of the Poisson model is the assumption that the mean and variance of the count data are equal. However, this equal mean-variance relationship rarely occurs in observational data. In most cases, the observed variance is larger than the assumed variance, which is called overdispersion. Further, when the observed data involve excessive zero counts, the problem of overdispersion results in underestimating the variance of the estimated parameter, and thus produces a misleading conclusion. We illustrated the use of four models for overdispersed count data that may be attributed to excessive zeros. These are Poisson, negative binomial, zero-inflated Poisson and zero-inflated negative binomial models. The example data in this article deal with the number of incidents involving human papillomavirus infection. The four models resulted in differing statistical inferences. The Poisson model, which is widely used in epidemiology research, underestimated the standard errors and overstated the significance of some covariates.
Marginalized zero-inflated negative binomial regression with application to dental caries
Preisser, John S.; Das, Kalyan; Long, D. Leann; Divaris, Kimon
2015-01-01
The zero-inflated negative binomial regression model (ZINB) is often employed in diverse fields such as dentistry, health care utilization, highway safety, and medicine to examine relationships between exposures of interest and overdispersed count outcomes exhibiting many zeros. The regression coefficients of ZINB have latent class interpretations for a susceptible subpopulation at risk for the disease/condition under study with counts generated from a negative binomial distribution and for a non-susceptible subpopulation that provides only zero counts. The ZINB parameters, however, are not well-suited for estimating overall exposure effects, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. In this paper, a marginalized zero-inflated negative binomial regression (MZINB) model for independent responses is proposed to model the population marginal mean count directly, providing straightforward inference for overall exposure effects based on maximum likelihood estimation. Through simulation studies, the finite sample performance of MZINB is compared to marginalized zero-inflated Poisson, Poisson, and negative binomial regression. The MZINB model is applied in the evaluation of a school-based fluoride mouthrinse program on dental caries in 677 children. PMID:26568034
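The marginalization idea in this abstract can be illustrated with the simpler Poisson case (a sketch with made-up values; the ZINB version replaces the Poisson mean with a negative binomial mean but the mean relation is analogous). In a standard zero-inflated model, the count-component mean applies only to the susceptible latent class, so the overall mean is smaller:

```python
# Latent-class parameterization: pi is the non-susceptible (always-zero)
# probability, lam the mean within the susceptible class.
pi, lam = 0.3, 2.0
marginal_mean = (1 - pi) * lam  # overall mean across the whole mixture

# A marginalized model parameterizes the marginal mean mu directly
# (so covariate effects apply to the overall population) and recovers
# the latent-class mean as lam = mu / (1 - pi).
mu = marginal_mean
lam_recovered = mu / (1 - pi)
print(marginal_mean, lam_recovered)
```

This is why regression coefficients in the marginalized model have direct overall-exposure interpretations, while the latent-class coefficients of standard ZIP/ZINB models do not.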
Growth Curve Models for Zero-Inflated Count Data: An Application to Smoking Behavior
ERIC Educational Resources Information Center
Liu, Hui; Powers, Daniel A.
2007-01-01
This article applies growth curve models to longitudinal count data characterized by an excess of zero counts. We discuss a zero-inflated Poisson regression model for longitudinal data in which the impact of covariates on the initial counts and the rate of change in counts over time is the focus of inference. Basic growth curve models using a…
Sentürk, Damla; Dalrymple, Lorien S; Nguyen, Danh V
2014-11-30
We propose functional linear models for zero-inflated count data, with a focus on the functional hurdle and functional zero-inflated Poisson (ZIP) models. Whereas the hurdle model assumes the counts come from a mixture of a degenerate distribution at zero and a zero-truncated Poisson distribution, the ZIP model considers a mixture of a degenerate distribution at zero and a standard Poisson distribution. We extend the generalized functional linear model framework with a functional predictor and multiple cross-sectional predictors to model counts generated by a mixture distribution. We propose an estimation procedure for functional hurdle and ZIP models, called penalized reconstruction, geared towards error-prone and sparsely observed longitudinal functional predictors. The approach relies on dimension reduction and pooling of information across subjects, involving basis expansions and penalized maximum likelihood techniques. The developed functional hurdle model is applied to modeling hospitalizations within the first 2 years from initiation of dialysis, with a high percentage of zeros, in the Comprehensive Dialysis Study participants. Hospitalization counts are modeled as a function of sparse longitudinal measurements of serum albumin concentrations, patient demographics, and comorbidities. Simulation studies are used to study finite sample properties of the proposed method and include comparisons with an adaptation of standard principal components regression. Copyright © 2014 John Wiley & Sons, Ltd.
Marginalized zero-altered models for longitudinal count data.
Tabb, Loni Philip; Tchetgen, Eric J Tchetgen; Wellenius, Greg A; Coull, Brent A
2016-10-01
Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias.
Lyashevska, Olga; Brus, Dick J; van der Meer, Jaap
2016-01-01
The objective of the study was to provide a general procedure for mapping species abundance when the data are zero-inflated and spatially correlated counts. The bivalve species Macoma balthica was observed on a 500×500 m grid in the Dutch part of the Wadden Sea. In total, 66% of the 3451 counts were zeros. A zero-inflated Poisson mixture model was used to relate counts to environmental covariates. Two models were considered, one with fewer covariates (model "small") than the other (model "large"). The models contained two processes: a Bernoulli process (species prevalence) and a Poisson process (species intensity, when the Bernoulli process predicts presence). The model was used to make predictions for sites where only environmental data are available. Predicted prevalences and intensities show that the model "small" predicts a lower mean prevalence and a higher mean intensity than the model "large". Yet the product of prevalence and intensity, which might be called the unconditional intensity, is very similar. Cross-validation showed that the model "small" performed slightly better, but the difference was small. The proposed methodology may be generally applicable, but is computer intensive.
Rakitzis, Athanasios C; Castagliola, Philippe; Maravelakis, Petros E
2018-02-01
In this work, we study upper-sided cumulative sum control charts that are suitable for monitoring geometrically inflated Poisson processes. We assume that a process is properly described by a two-parameter extension of the zero-inflated Poisson distribution, which can be used for modeling count data with an excessive number of zero and non-zero values. Two different upper-sided cumulative sum-type schemes are considered, both suitable for the detection of increasing shifts in the average of the process. Aspects of their statistical design are discussed and their performance is compared under various out-of-control situations. Changes in both parameters of the process are considered. Finally, the monitoring of the monthly cases of poliomyelitis in the USA is given as an illustrative example.
Naya, Hugo; Urioste, Jorge I; Chang, Yu-Mei; Rodrigues-Motta, Mariana; Kremer, Roberto; Gianola, Daniel
2008-01-01
Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP) models under four simulation scenarios. All models performed reasonably well under the same scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to a small number of rams in the dataset, the median of this variance suggested a scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age related processes seem to drive the number of dark spots in this breed of sheep. PMID:18558072
Zero-inflated count models for longitudinal measurements with heterogeneous random effects.
Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M
2017-08-01
Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention and covariate-specific heterogeneity can produce biased covariate and random effect estimates. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States with 1383 individuals.
Tang, Wan; Lu, Naiji; Chen, Tian; Wang, Wenjuan; Gunzler, Douglas David; Han, Yu; Tu, Xin M
2015-10-30
Zero-inflated Poisson (ZIP) and negative binomial (ZINB) models are widely used to model zero-inflated count responses. These models extend the Poisson and negative binomial (NB) to address excessive zeros in the count response. By adding a degenerate distribution centered at 0 and interpreting it as describing a non-risk group in the population, the ZIP (ZINB) models a two-component population mixture. As in applications of Poisson and NB, the key difference between ZIP and ZINB is the allowance for overdispersion by the ZINB in its NB component in modeling the count response for the at-risk group. Overdispersion arising in practice too often does not follow the NB, and applications of ZINB to such data yield invalid inference. If sources of overdispersion are known, other parametric models may be used to directly model the overdispersion. Such models too are subject to assumed distributions. Further, this approach may not be applicable if information about the sources of overdispersion is unavailable. In this paper, we propose a distribution-free alternative and compare its performance with these popular parametric models as well as a moment-based approach proposed by Yu et al. [Statistics in Medicine 2013; 32: 2390-2405]. Like the generalized estimating equations, the proposed approach requires no elaborate distribution assumptions. Compared with the approach of Yu et al., it is more robust to overdispersed zero-inflated responses. We illustrate our approach with both simulated and real study data. Copyright © 2015 John Wiley & Sons, Ltd.
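The ZIP/ZINB overdispersion distinction drawn in this abstract can be made concrete with first and second moments. This is a sketch under the usual ZIP and NB2 parameterizations, not code from the paper.

```python
def zip_moments(pi_risk, lam):
    """Mean and variance of a ZIP count where the at-risk group
    (probability pi_risk) follows Poisson(lam) and the rest are
    structural zeros.  The variance exceeds the mean whenever
    pi_risk < 1, i.e. zero inflation alone induces overdispersion."""
    mean = pi_risk * lam
    var = pi_risk * lam * (1.0 + lam * (1.0 - pi_risk))
    return mean, var

def nb2_variance(mu, alpha):
    """NB2 variance function Var = mu + alpha * mu**2: the alpha term is
    the extra overdispersion a ZINB grants its at-risk component beyond
    what the ZIP's Poisson component allows."""
    return mu + alpha * mu ** 2
```

Setting pi_risk = 1 recovers the Poisson equality of mean and variance, which is the assumption the zero-inflated extensions relax.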
Hurdle models for multilevel zero-inflated data via h-likelihood.
Molas, Marek; Lesaffre, Emmanuel
2010-12-30
Count data often exhibit overdispersion. One type of overdispersion arises when there is an excess of zeros in comparison with the standard Poisson distribution. Zero-inflated Poisson and hurdle models have been proposed to perform a valid likelihood-based analysis to account for the surplus of zeros. Further, data often arise in clustered, longitudinal or multiple-membership settings. The proper analysis needs to reflect the design of a study. Typically random effects are used to account for dependencies in the data. We examine the h-likelihood estimation and inference framework for hurdle models with random effects for complex designs. We extend the h-likelihood procedures to fit hurdle models, thereby extending h-likelihood to truncated distributions. Two applications of the methodology are presented. Copyright © 2010 John Wiley & Sons, Ltd.
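The hurdle structure described here separates the zero process from the positive counts via a truncated distribution. A minimal sketch of that probability mass function, assuming a zero-truncated Poisson for the positive part:

```python
import math

def hurdle_pmf(y, p_zero, lam):
    """Two-part hurdle model: zeros come only from the hurdle part
    (probability p_zero); positive counts follow a zero-truncated
    Poisson(lam) carrying the remaining mass 1 - p_zero."""
    if y == 0:
        return p_zero
    poisson = math.exp(-lam) * lam ** y / math.factorial(y)
    return (1.0 - p_zero) * poisson / (1.0 - math.exp(-lam))
```

Unlike a ZIP, the hurdle model attributes every zero to the first part, which is why fitting it requires likelihoods for truncated distributions, as the abstract notes.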
Neelon, Brian; Chang, Howard H; Ling, Qiang; Hastings, Nicole S
2016-12-01
Motivated by a study exploring spatiotemporal trends in emergency department use, we develop a class of two-part hurdle models for the analysis of zero-inflated areal count data. The models consist of two components-one for the probability of any emergency department use and one for the number of emergency department visits given use. Through a hierarchical structure, the models incorporate both patient- and region-level predictors, as well as spatially and temporally correlated random effects for each model component. The random effects are assigned multivariate conditionally autoregressive priors, which induce dependence between the components and provide spatial and temporal smoothing across adjacent spatial units and time periods, resulting in improved inferences. To accommodate potential overdispersion, we consider a range of parametric specifications for the positive counts, including truncated negative binomial and generalized Poisson distributions. We adopt a Bayesian inferential approach, and posterior computation is handled conveniently within standard Bayesian software. Our results indicate that the negative binomial and generalized Poisson hurdle models vastly outperform the Poisson hurdle model, demonstrating that overdispersed hurdle models provide a useful approach to analyzing zero-inflated spatiotemporal data. © The Author(s) 2014.
Amaliana, Luthfatul; Sa'adah, Umu; Wayan Surya Wardhani, Ni
2017-12-01
Tetanus Neonatorum is an infectious disease that can be prevented by immunization. The number of Tetanus Neonatorum cases in East Java Province was the highest in Indonesia until 2015. The Tetanus Neonatorum data contain overdispersion and a substantial proportion of zero inflation. Negative Binomial (NB) regression is an alternative method when overdispersion occurs in Poisson regression. However, data containing both overdispersion and zero inflation are more appropriately analyzed using Zero-Inflated Negative Binomial (ZINB) regression. The purposes of this study are: (1) to model Tetanus Neonatorum cases in East Java Province, with a 71.05 percent proportion of zero inflation, using NB and ZINB regression, and (2) to obtain the best model. The results of this study indicate that ZINB is better than NB regression, with a smaller AIC.
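The model comparison in this abstract rests on the AIC. A minimal sketch of that criterion; the log-likelihoods and parameter counts below are hypothetical, purely to illustrate the NB-versus-ZINB comparison.

```python
def aic(loglik, n_params):
    """Akaike information criterion: 2k - 2*loglik; smaller is better.
    The extra zero-inflation parameters of a ZINB must buy enough
    log-likelihood to offset the 2-per-parameter penalty."""
    return 2.0 * n_params - 2.0 * loglik

# Hypothetical fitted log-likelihoods, for illustration only.
aic_nb = aic(-412.7, 5)    # NB regression, 5 parameters
aic_zinb = aic(-398.2, 7)  # ZINB regression, 2 extra parameters
best = "ZINB" if aic_zinb < aic_nb else "NB"
```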
Modeling animal-vehicle collisions using diagonal inflated bivariate Poisson regression.
Lao, Yunteng; Wu, Yao-Jan; Corey, Jonathan; Wang, Yinhai
2011-01-01
Two types of animal-vehicle collision (AVC) data are commonly adopted for AVC-related risk analysis research: reported AVC data and carcass removal data. One issue with these two data sets is that they were found to have significant discrepancies by previous studies. In order to model these two types of data together and provide a better understanding of highway AVCs, this study adopts a diagonal inflated bivariate Poisson regression method, an inflated version of bivariate Poisson regression model, to fit the reported AVC and carcass removal data sets collected in Washington State during 2002-2006. The diagonal inflated bivariate Poisson model not only can model paired data with correlation, but also handle under- or over-dispersed data sets as well. Compared with three other types of models, double Poisson, bivariate Poisson, and zero-inflated double Poisson, the diagonal inflated bivariate Poisson model demonstrates its capability of fitting two data sets with remarkable overlapping portions resulting from the same stochastic process. Therefore, the diagonal inflated bivariate Poisson model provides researchers a new approach to investigating AVCs from a different perspective involving the three distribution parameters (λ(1), λ(2) and λ(3)). The modeling results show the impacts of traffic elements, geometric design and geographic characteristics on the occurrences of both reported AVC and carcass removal data. It is found that the increase of some associated factors, such as speed limit, annual average daily traffic, and shoulder width, will increase the numbers of reported AVCs and carcass removals. Conversely, the presence of some geometric factors, such as rolling and mountainous terrain, will decrease the number of reported AVCs. Published by Elsevier Ltd.
Arab, Ali; Holan, Scott H.; Wikle, Christopher K.; Wildhaber, Mark L.
2012-01-01
Ecological studies involving counts of abundance, presence–absence or occupancy rates often produce data having a substantial proportion of zeros. Furthermore, these types of processes are typically multivariate and only adequately described by complex nonlinear relationships involving externally measured covariates. Ignoring these aspects of the data and implementing standard approaches can lead to models that fail to provide adequate scientific understanding of the underlying ecological processes, possibly resulting in a loss of inferential power. One method of dealing with data having excess zeros is to consider the class of univariate zero-inflated generalized linear models. However, this class of models fails to address the multivariate and nonlinear aspects associated with the data usually encountered in practice. Therefore, we propose a semiparametric bivariate zero-inflated Poisson model that takes into account both of these data attributes. The general modeling framework is hierarchical Bayes and is suitable for a broad range of applications. We demonstrate the effectiveness of our model through a motivating example on modeling catch per unit area for multiple species using data from the Missouri River Benthic Fishes Study, implemented by the United States Geological Survey.
Mallick, Himel; Tiwari, Hemant K.
2016-01-01
Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain an enormous number of zeros due to excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely encountered in practice.
PMID:27066062
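The coordinate-descent machinery mentioned in the abstract above reduces, per coefficient, to a soft-thresholding step. This sketch shows the generic operator and the adaptive-weight idea, not the authors' EM-nested implementation.

```python
def soft_threshold(z, gamma):
    """Soft-thresholding operator, the closed-form coordinate update for
    a LASSO-type penalty: shrink z toward zero by gamma, snapping small
    values exactly to zero (this is how coordinate descent drops
    variables during selection)."""
    if z > gamma:
        return z - gamma
    if z < -gamma:
        return z + gamma
    return 0.0

# Adaptive LASSO rescales the threshold per coefficient by a data-driven
# weight w_j, so well-supported coefficients are shrunk less.
def adaptive_update(z, gamma, w):
    return soft_threshold(z, gamma * w)
```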
Zero-state Markov switching count-data models: an empirical assessment.
Malyshkina, Nataliya V; Mannering, Fred L
2010-01-01
In this study, a two-state Markov switching count-data model is proposed as an alternative to zero-inflated models to account for the preponderance of zeros sometimes observed in transportation count data, such as the number of accidents occurring on a roadway segment over some period of time. For this accident-frequency case, zero-inflated models assume the existence of two states: one of the states is a zero-accident count state, which has accident probabilities that are so low that they cannot be statistically distinguished from zero, and the other state is a normal-count state, in which counts can be non-negative integers that are generated by some counting process, for example, a Poisson or negative binomial. While zero-inflated models have come under some criticism with regard to accident-frequency applications, one fact is undeniable: in many applications they provide a statistically superior fit to the data. The Markov switching approach we propose seeks to overcome some of the criticism associated with the zero-accident state of the zero-inflated model by allowing individual roadway segments to switch between zero and normal-count states over time. An important advantage of this Markov switching approach is that it allows for the direct statistical estimation of the specific roadway-segment state (i.e., zero-accident or normal-count state) whereas traditional zero-inflated models do not. To demonstrate the applicability of this approach, a two-state Markov switching negative binomial model (estimated with Bayesian inference) and standard zero-inflated negative binomial models are estimated using five-year accident frequencies on Indiana interstate highway segments. It is shown that the Markov switching model is a viable alternative and results in a superior statistical fit relative to the zero-inflated models.
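The switching mechanism described above can be illustrated with a short simulation: a segment moves between a zero state and a Poisson count state over time. All transition probabilities and the rate below are illustrative assumptions, not estimates from the study.

```python
import math
import random

def poisson_draw(rng, lam):
    # Knuth's method for a Poisson(lam) variate
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_segment(n_periods, stay_zero, stay_normal, lam, seed=1):
    """Sketch of the two-state idea: a roadway segment switches over time
    between a zero-accident state and a normal Poisson(lam) count state,
    rather than being fixed in one state as in a zero-inflated model."""
    rng = random.Random(seed)
    state, counts = 0, []  # state 0 = zero state, 1 = normal-count state
    for _ in range(n_periods):
        if state == 0:
            state = 0 if rng.random() < stay_zero else 1
        else:
            state = 1 if rng.random() < stay_normal else 0
        counts.append(poisson_draw(rng, lam) if state == 1 else 0)
    return counts
```

With sticky transition probabilities, the simulated series shows the same preponderance of zeros that motivates zero-inflated fits, while each segment still visits both states.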
Buu, Anne; Johnson, Norman J.; Li, Runze; Tan, Xianming
2011-01-01
Zero-inflated count data are very common in health surveys. This study develops new variable selection methods for the zero-inflated Poisson regression model. Our simulations demonstrate the negative consequences which arise from the ignorance of zero-inflation. Among the competing methods, the one-step SCAD method is recommended because it has the highest specificity, sensitivity, exact fit, and lowest estimation error. The design of the simulations is based on the special features of two large national databases commonly used in the alcoholism and substance abuse field so that our findings can be easily generalized to the real settings. Applications of the methodology are demonstrated by empirical analyses on the data from a well-known alcohol study. PMID:21563207
Song, X X; Zhao, Q; Tao, T; Zhou, C M; Diwan, V K; Xu, B
2018-05-30
Records of absenteeism from primary schools are valuable data for infectious disease surveillance. However, the analysis of absenteeism data is complicated by clustering at zero, non-independence and overdispersion. This study aimed to generate an appropriate model to handle the absenteeism data collected in a project granted by the European Commission for infectious disease surveillance in rural China, and to evaluate the validity and timeliness of the resulting model for early warnings of infectious disease outbreaks. Four steps were taken: (1) building a 'well-fitting' model by the zero-inflated Poisson model with random effects (ZIP-RE) using the absenteeism data from the first implementation year; (2) applying the resulting model to predict the 'expected' number of absenteeism events in the second implementation year; (3) computing the differences between the observations and the expected values (O-E values) to generate an alternative series of data; (4) evaluating the early warning validity and timeliness of the observational data and model-based O-E values via the EARS-3C algorithms with regard to the detection of real cluster events. The results indicate that ZIP-RE and its corresponding O-E values could improve the detection of aberrations, reduce false-positive signals and are applicable to zero-inflated data.
Yang, Songshan; Cranford, James A; Jester, Jennifer M; Li, Runze; Zucker, Robert A; Buu, Anne
2017-02-28
This study proposes a time-varying effect model for examining group differences in trajectories of zero-inflated count outcomes. The motivating example demonstrates that this zero-inflated Poisson model allows investigators to study group differences in different aspects of substance use (e.g., the probability of abstinence and the quantity of alcohol use) simultaneously. The simulation study shows that the accuracy of estimation of trajectory functions improves as the sample size increases; the accuracy under equal group sizes is only higher when the sample size is small (100). In terms of the performance of the hypothesis testing, the type I error rates are close to their corresponding significance levels under all settings. Furthermore, the power increases as the alternative hypothesis deviates more from the null hypothesis, and the rate of this increasing trend is higher when the sample size is larger. Moreover, the hypothesis test for the group difference in the zero component tends to be less powerful than the test for the group difference in the Poisson component. Copyright © 2016 John Wiley & Sons, Ltd.
Wan, Wai-Yin; Chan, Jennifer S K
2009-08-01
For time series of count data, correlated measurements, clustering as well as excessive zeros occur simultaneously in biomedical applications. Ignoring such effects might contribute to misleading treatment outcomes. A generalized mixture Poisson geometric process (GMPGP) model and a zero-altered mixture Poisson geometric process (ZMPGP) model are developed from the geometric process model, which was originally developed for modelling positive continuous data and was extended to handle count data. These models are motivated by evaluating the trend development of new tumour counts for bladder cancer patients as well as by identifying useful covariates which affect the count level. The models are implemented using Bayesian method with Markov chain Monte Carlo (MCMC) algorithms and are assessed using deviance information criterion (DIC).
Xie, Haiyi; Tao, Jill; McHugo, Gregory J; Drake, Robert E
2013-07-01
Count data with skewness and many zeros are common in substance abuse and addiction research. Zero-adjusting models, especially zero-inflated models, have become increasingly popular in analyzing this type of data. This paper reviews and compares five mixed-effects Poisson family models commonly used to analyze count data with a high proportion of zeros by analyzing a longitudinal outcome: number of smoking quit attempts from the New Hampshire Dual Disorders Study. The findings of our study indicated that count data with many zeros do not necessarily require zero-inflated or other zero-adjusting models. For rare event counts or count data with small means, a simpler model such as the negative binomial model may provide a better fit. Copyright © 2013 Elsevier Inc. All rights reserved.
Stamm, John W.; Long, D. Leann; Kincade, Megan E.
2012-01-01
Over the past five to ten years, zero-inflated count regression models have been increasingly applied to the analysis of dental caries indices (e.g., DMFT, dfms, etc). The main reason for that is linked to the broad decline in children’s caries experience, such that dmf and DMF indices more frequently generate low or even zero counts. This article specifically reviews the application of zero-inflated Poisson and zero-inflated negative binomial regression models to dental caries, with emphasis on the description of the models and the interpretation of fitted model results given the study goals. The review finds that interpretations provided in the published caries research are often imprecise or inadvertently misleading, particularly with respect to failing to discriminate between inference for the class of susceptible persons defined by such models and inference for the sampled population in terms of overall exposure effects. Recommendations are provided to enhance the use as well as the interpretation and reporting of results of count regression models when applied to epidemiological studies of dental caries. PMID:22710271
A new multivariate zero-adjusted Poisson model with applications to biomedicine.
Liu, Yin; Tian, Guo-Liang; Tang, Man-Lai; Yuen, Kam Chuen
2018-05-25
Recently, although advances have been made in modeling multivariate count data, existing models have several limitations: (i) the multivariate Poisson log-normal model (Aitchison and Ho, ) cannot be used to fit multivariate count data with excess zero-vectors; (ii) the multivariate zero-inflated Poisson (ZIP) distribution (Li et al., 1999) cannot be used to model zero-truncated/deflated count data and is difficult to apply in high-dimensional cases; (iii) the Type I multivariate zero-adjusted Poisson (ZAP) distribution (Tian et al., 2017) can only model multivariate count data with a special correlation structure in which the random components are all positively or all negatively correlated. In this paper, we first introduce a new multivariate ZAP distribution, based on a multivariate Poisson distribution, which allows a more flexible dependency structure between components; that is, some of the correlation coefficients can be positive while others are negative. We then develop its important distributional properties and provide efficient statistical inference methods for the multivariate ZAP model with or without covariates. Two real data examples in biomedicine are used to illustrate the proposed methods. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Variable selection for distribution-free models for longitudinal zero-inflated count responses.
Chen, Tian; Wu, Pan; Tang, Wan; Zhang, Hui; Feng, Changyong; Kowalski, Jeanne; Tu, Xin M
2016-07-20
Zero-inflated count outcomes arise quite often in research and practice. Parametric models such as the zero-inflated Poisson and zero-inflated negative binomial are widely used to model such responses. Like most parametric models, they are quite sensitive to departures from assumed distributions. Recently, new approaches have been proposed to provide distribution-free, or semi-parametric, alternatives. These methods extend the generalized estimating equations to provide robust inference for population mixtures defined by zero-inflated count outcomes. In this paper, we propose methods to extend smoothly clipped absolute deviation (SCAD)-based variable selection methods to these new models. Variable selection has been gaining popularity in modern clinical research studies, as determining differential treatment effects of interventions for different subgroups has become the norm, rather than the exception, in the era of patient-centered outcome research. Such moderation analysis in general creates many explanatory variables in regression analysis, and the advantages of SCAD-based methods over their traditional counterparts render them a great choice for addressing this important and timely issue in clinical research. We illustrate the proposed approach with both simulated and real study data. Copyright © 2016 John Wiley & Sons, Ltd.
Analyzing hospitalization data: potential limitations of Poisson regression.
Weaver, Colin G; Ravani, Pietro; Oliver, Matthew J; Austin, Peter C; Quinn, Robert R
2015-08-01
Poisson regression is commonly used to analyze hospitalization data when outcomes are expressed as counts (e.g. number of days in hospital). However, data often violate the assumptions on which Poisson regression is based. More appropriate extensions of this model, while available, are rarely used. We compared hospitalization data between 206 patients treated with hemodialysis (HD) and 107 treated with peritoneal dialysis (PD) using Poisson regression and compared results from standard Poisson regression with those obtained using three other approaches for modeling count data: negative binomial (NB) regression, zero-inflated Poisson (ZIP) regression and zero-inflated negative binomial (ZINB) regression. We examined the appropriateness of each model and compared the results obtained with each approach. During a mean 1.9 years of follow-up, 183 of 313 patients (58%) were never hospitalized (indicating an excess of 'zeros'). The data also displayed overdispersion (variance greater than mean), violating another assumption of the Poisson model. Using four criteria, we determined that the NB and ZINB models performed best. According to these two models, patients treated with HD experienced similar hospitalization rates as those receiving PD {NB rate ratio (RR): 1.04 [bootstrapped 95% confidence interval (CI): 0.49-2.20]; ZINB summary RR: 1.21 (bootstrapped 95% CI 0.60-2.46)}. Poisson and ZIP models fit the data poorly and had much larger point estimates than the NB and ZINB models [Poisson RR: 1.93 (bootstrapped 95% CI 0.88-4.23); ZIP summary RR: 1.84 (bootstrapped 95% CI 0.88-3.84)]. We found substantially different results when modeling hospitalization data, depending on the approach used. Our results argue strongly for a sound model selection process and improved reporting around statistical methods used for modeling count data. © The Author 2015. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
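The two Poisson assumptions this abstract reports being violated, equidispersion and the implied zero fraction, are easy to check before fitting. A minimal diagnostic sketch (not the authors' analysis; the example counts in the test are fabricated for illustration):

```python
import math

def count_diagnostics(counts):
    """Two quick checks before trusting plain Poisson regression:
    the variance/mean ratio (values > 1 indicate overdispersion) and the
    observed zero fraction versus the Poisson-implied exp(-mean)."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    zero_obs = counts.count(0) / n
    zero_poisson = math.exp(-mean)
    return var / mean, zero_obs, zero_poisson
```

When the observed zero fraction far exceeds exp(-mean) and the ratio exceeds one, as with the 58% never-hospitalized patients here, NB or zero-adjusted extensions deserve consideration.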
Choo-Wosoba, Hyoyoung; Levy, Steven M; Datta, Somnath
2016-06-01
Community water fluoridation is an important public health measure to prevent dental caries, but it continues to be somewhat controversial. The Iowa Fluoride Study (IFS) is a longitudinal study on a cohort of Iowa children that began in 1991. The main purposes of this study (http://www.dentistry.uiowa.edu/preventive-fluoride-study) were to quantify fluoride exposures from both dietary and nondietary sources and to associate longitudinal fluoride exposures with dental fluorosis (spots on teeth) and dental caries (cavities). We analyze a subset of the IFS data by a marginal regression model with a zero-inflated version of the Conway-Maxwell-Poisson (CMP) distribution for count data exhibiting excessive zeros and a wide range of dispersion patterns. In general, we introduce two estimation methods for fitting a zero-inflated CMP (ZICMP) marginal regression model. Finite sample behaviors of the estimators and the resulting confidence intervals are studied using extensive simulation studies. We apply our methodologies to the dental caries data. Our novel modeling incorporating zero inflation, clustering, and overdispersion sheds some new light on the effect of community water fluoridation and other factors. We also include a second application of our methodology to a genomic (next-generation sequencing) dataset that exhibits underdispersion. © 2015, The International Biometric Society.
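The Conway-Maxwell-Poisson distribution underlying this abstract handles both over- and underdispersion through a single extra parameter. A sketch of its pmf with a truncated normalizer (illustrative only; the truncation point is an assumption):

```python
import math

def cmp_pmf(y, lam, nu, max_y=150):
    """Conway-Maxwell-Poisson pmf, proportional to lam**y / (y!)**nu.
    nu = 1 recovers the Poisson; nu < 1 gives overdispersion and nu > 1
    underdispersion.  The infinite-series normalizer is truncated at
    max_y and computed in log space for numerical stability."""
    log_w = [k * math.log(lam) - nu * math.lgamma(k + 1) for k in range(max_y)]
    m = max(log_w)
    z = sum(math.exp(w - m) for w in log_w)
    return math.exp(y * math.log(lam) - nu * math.lgamma(y + 1) - m) / z
```

The dispersion parameter nu is what lets the same family fit the overdispersed caries counts and the underdispersed sequencing data mentioned in the abstract.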
Structural zeroes and zero-inflated models.
He, Hua; Tang, Wan; Wang, Wenjuan; Crits-Christoph, Paul
2014-08-01
In psychosocial and behavioral studies, count outcomes recording the frequencies of the occurrence of some health or behavior outcomes (such as the number of unprotected sexual behaviors during a period of time) often contain a preponderance of zeros because of the presence of 'structural zeros' that occur when some subjects are not at risk for the behavior of interest. Unlike random zeros (responses that can be greater than zero, but are zero due to sampling variability), structural zeros are usually very different, both statistically and clinically. False interpretations of results and study findings may result if differences between the two types of zeros are ignored. However, in practice, the status of the structural zeros is often not observed, and this latent nature complicates the data analysis. In this article, we focus on one model, the zero-inflated Poisson (ZIP) regression model, that is commonly used to address zero-inflated data. We first give a brief overview of the issues of structural zeros and the ZIP model. We then give an illustration of ZIP with data from a study on HIV-risk sexual behaviors among adolescent girls. Sample code in SAS and Stata is also included to help perform and explain ZIP analyses.
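The latent structural-versus-random zero distinction discussed in this abstract has a simple quantitative face: under a fitted ZIP, one can compute the posterior probability that an observed zero is structural. A sketch under assumed parameter values (the mixing weight and rate are hypothetical):

```python
import math

def prob_structural_zero(pi0, lam):
    """Posterior probability that an observed zero is a structural zero
    (non-risk subject) under a ZIP with mixing weight pi0 and at-risk
    Poisson rate lam.  The latent status itself is never observed."""
    p_zero = pi0 + (1.0 - pi0) * math.exp(-lam)
    return pi0 / p_zero
```

When the at-risk rate lam is large, a zero is almost certainly structural; when lam is near zero, the two sources of zeros become statistically indistinguishable, which is the complication the abstract highlights.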
Lord, Dominique; Washington, Simon P; Ivan, John N
2005-01-01
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. 
A simulation experiment is then conducted to demonstrate how crash data give rise to "excess" zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not an underlying dual state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
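The low-exposure mechanism described above is easy to reproduce in a few lines: when each site's expected crash count is small, a single-state Poisson process by itself yields a preponderance of zeros, with no dual-state mechanism involved. The site means below are hypothetical, not the study's data.

```python
import math
import random

random.seed(42)

def poisson_sample(lam):
    # Knuth's algorithm for Poisson sampling (adequate for small lambda)
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Hypothetical low-exposure sites: each expected crash count is small,
# so zeros dominate even though every site follows a plain Poisson law.
site_means = [random.uniform(0.05, 0.5) for _ in range(10000)]
counts = [poisson_sample(m) for m in site_means]

zero_frac = counts.count(0) / len(counts)
print(f"fraction of zero-crash sites: {zero_frac:.2f}")
```

With mean exposures between 0.05 and 0.5, roughly three quarters of the simulated sites record zero crashes, mimicking the "excess" zeros the paper attributes to scale and exposure rather than to a perfectly-safe state.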
Zero-Inflated Poisson Modeling of Fall Risk Factors in Community-Dwelling Older Adults.
Jung, Dukyoo; Kang, Younhee; Kim, Mi Young; Ma, Rye-Won; Bhandari, Pratibha
2016-02-01
The aim of this study was to identify risk factors for falls among community-dwelling older adults. The study used a cross-sectional descriptive design. Self-report questionnaires were used to collect data from 658 community-dwelling older adults and were analyzed using logistic and zero-inflated Poisson (ZIP) regression. Perceived health status was a significant factor in the count model, and fall efficacy emerged as a significant predictor in the logistic models. The findings suggest that fall efficacy is important for predicting not only faller and nonfaller status but also fall counts in older adults who may or may not have experienced a previous fall. The fall predictors identified in this study, perceived health status and fall efficacy, indicate the need for fall-prevention programs tailored to address both the physical and psychological issues unique to older adults. © The Author(s) 2014.
A quantile count model of water depth constraints on Cape Sable seaside sparrows
Cade, B.S.; Dong, Q.
2008-01-01
1. A quantile regression model for counts of breeding Cape Sable seaside sparrows Ammodramus maritimus mirabilis (L.) as a function of water depth and previous year abundance was developed based on extensive surveys, 1992-2005, in the Florida Everglades. The quantile count model extends linear quantile regression methods to discrete response variables, providing a flexible alternative to discrete parametric distributional models, e.g. Poisson, negative binomial and their zero-inflated counterparts. 2. Estimates from our multiplicative model demonstrated that negative effects of increasing water depth in breeding habitat on sparrow numbers were dependent on recent occupation history. Upper 10th percentiles of counts (one to three sparrows) decreased with increasing water depth from 0 to 30 cm when sites were not occupied in previous years. However, upper 40th percentiles of counts (one to six sparrows) decreased with increasing water depth for sites occupied in previous years. 3. Greatest decreases (-50% to -83%) in upper quantiles of sparrow counts occurred as water depths increased from 0 to 15 cm when previous year counts were 1, but a small proportion of sites (5-10%) held at least one sparrow even as water depths increased to 20 or 30 cm. 4. A zero-inflated Poisson regression model provided estimates of conditional means that also decreased with increasing water depth but rates of change were lower and decreased with increasing previous year counts compared to the quantile count model. Quantiles computed for the zero-inflated Poisson model enhanced interpretation of this model but had greater lack-of-fit for water depths > 0 cm and previous year counts ≥ 1, conditions where the negative effect of water depth was readily apparent and was fitted better by the quantile count model.
Selecting a distributional assumption for modelling relative densities of benthic macroinvertebrates
Gray, B.R.
2005-01-01
The selection of a distributional assumption suitable for modelling macroinvertebrate density data is typically challenging. Macroinvertebrate data often exhibit substantially larger variances than expected under a standard count assumption, that of the Poisson distribution. Such overdispersion may derive from multiple sources, including heterogeneity of habitat (historically and spatially), differing life histories for organisms collected within a single collection in space and time, and autocorrelation. Taken to extreme, heterogeneity of habitat may be argued to explain the frequent large proportions of zero observations in macroinvertebrate data. Sampling locations may consist of habitats defined qualitatively as either suitable or unsuitable. The former category may yield random or stochastic zeroes and the latter structural zeroes. Heterogeneity among counts may be accommodated by treating the count mean itself as a random variable, while extra zeroes may be accommodated using zero-modified count assumptions, including zero-inflated and two-stage (or hurdle) approaches. These and linear assumptions (following log- and square root-transformations) were evaluated using 9 years of mayfly density data from a 52 km, ninth-order reach of the Upper Mississippi River (n = 959). The data exhibited substantial overdispersion relative to that expected under a Poisson assumption (i.e. variance:mean ratio = 23 ≫ 1), and 43% of the sampling locations yielded zero mayflies. Based on the Akaike Information Criterion (AIC), count models were improved most by treating the count mean as a random variable (via a Poisson-gamma distributional assumption) and secondarily by zero modification (i.e. improvements in AIC values = 9184 units and 47-48 units, respectively).
Zeroes were underestimated by the Poisson, log-transform and square root-transform models, slightly by the standard negative binomial model but not by the zero-modified models (61%, 24%, 32%, 7%, and 0%, respectively). However, the zero-modified Poisson models underestimated small counts (1 ≤ y ≤ 4) and overestimated intermediate counts (7 ≤ y ≤ 23). Counts greater than zero were estimated well by zero-modified negative binomial models, while counts greater than one were also estimated well by the standard negative binomial model. Based on AIC and percent zero estimation criteria, the two-stage and zero-inflated models performed similarly. The above inferences were largely confirmed when the models were used to predict values from a separate, evaluation data set (n = 110). An exception was that, using the evaluation data set, the standard negative binomial model appeared superior to its zero-modified counterparts using the AIC (but not percent zero criteria). This and other evidence suggest that a negative binomial distributional assumption should be routinely considered when modelling benthic macroinvertebrate data from low flow environments. Whether negative binomial models should themselves be routinely examined for extra zeroes requires, from a statistical perspective, more investigation. However, this question may best be answered by ecological arguments that may be specific to the sampled species and locations. © 2004 Elsevier B.V. All rights reserved.
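The zero-underestimation pattern reported above follows directly from the Poisson assumption: the fitted model predicts P(0) = exp(-mean), which collapses when the data are overdispersed with many zeros. A minimal sketch, using an illustrative sample (not the mayfly data) with a 43% zero fraction:

```python
import math

# Illustrative overdispersed counts: 43% zeros plus a long right tail.
counts = [0] * 43 + [1] * 15 + [2] * 10 + [3] * 8 + [5] * 8 + [9] * 6 + [20] * 6 + [40] * 4

n = len(counts)
mean = sum(counts) / n

# Under a Poisson assumption, the predicted zero probability is exp(-mean);
# with a mean pulled up by the tail, this badly underestimates observed zeros.
p0_poisson = math.exp(-mean)
p0_observed = counts.count(0) / n
print(f"observed P(0) = {p0_observed:.2f}, Poisson-predicted P(0) = {p0_poisson:.4f}")
```

Here the sample mean is about 4.3, so the Poisson model predicts barely 1% zeros against 43% observed, which is exactly the kind of lack of fit that motivates Poisson-gamma mixing and zero modification.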
Disease Mapping of Zero-excessive Mesothelioma Data in Flanders
Neyens, Thomas; Lawson, Andrew B.; Kirby, Russell S.; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S.; Faes, Christel
2016-01-01
Purpose To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. Methods The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero-inflation and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in literature. Results The results indicate that hurdle models with a random effects term accounting for extra-variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra-variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra-variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Conclusions Models taking into account zero-inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. PMID:27908590
Classifying next-generation sequencing data using a zero-inflated Poisson model.
Zhou, Yan; Wan, Xiang; Zhang, Baoxue; Tong, Tiejun
2018-04-15
With the development of high-throughput techniques, RNA-sequencing (RNA-seq) is becoming increasingly popular as an alternative for gene expression analysis, such as RNA profiling and classification. Identifying which type of disease a new patient has from RNA-seq data has been recognized as a vital problem in medical research. As RNA-seq data are discrete, statistical methods developed for classifying microarray data cannot be readily applied for RNA-seq data classification. Witten proposed a Poisson linear discriminant analysis (PLDA) to classify the RNA-seq data in 2011. Note, however, that count datasets are frequently characterized by excess zeros in real RNA-seq or microRNA sequence data (e.g., when sequencing depth is insufficient, or for small RNAs 18-30 nucleotides in length). Therefore, it is desired to develop a new model to analyze RNA-seq data with an excess of zeros. In this paper, we propose a Zero-Inflated Poisson Logistic Discriminant Analysis (ZIPLDA) for RNA-seq data with an excess of zeros. The new method assumes that the data are from a mixture of two distributions: one is a point mass at zero, and the other follows a Poisson distribution. We then consider a logistic relation between the probability of observing zeros and the mean of the genes and the sequencing depth in the model. Simulation studies show that the proposed method performs better than, or at least as well as, the existing methods in a wide range of settings. Two real datasets including a breast cancer RNA-seq dataset and a microRNA-seq dataset are also analyzed, and the results coincide with the simulations in showing that our proposed method outperforms the existing competitors. The software is available at http://www.math.hkbu.edu.hk/∼tongt. xwan@comp.hkbu.edu.hk or tongt@hkbu.edu.hk. Supplementary data are available at Bioinformatics online.
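The two-distribution mixture the abstract describes, a point mass at zero combined with a Poisson component, has a simple probability mass function. A minimal sketch (parameter values are illustrative, not fitted to any dataset):

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson pmf: a point mass at zero with weight pi,
    mixed with a Poisson(lam) component of weight (1 - pi)."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi * (k == 0) + (1 - pi) * poisson

lam, pi = 3.0, 0.4  # hypothetical gene mean and zero-inflation probability

# Zero inflation lifts P(Y = 0) well above the pure Poisson value exp(-3).
print(zip_pmf(0, lam, pi))
print(sum(zip_pmf(k, lam, pi) for k in range(80)))  # pmf sums to 1
```

In ZIPLDA itself, pi is additionally tied to the gene mean and sequencing depth through a logistic link, so genes with low expected counts are allowed higher zero-inflation probabilities.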
Poisson Mixture Regression Models for Heart Disease Prediction.
Mufudza, Chipo; Erol, Hamza
2016-01-01
Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- and low-risk categories and predicts the rate of heart disease component-wise within the identified clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks component-wise using a Poisson mixture regression model.
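The component-wise idea above rests on a finite Poisson mixture: each risk cluster contributes its own Poisson component, weighted by the cluster probability. A minimal sketch with hypothetical low-risk and high-risk components (weights and rates are made up for illustration):

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def mixture_pmf(k, weights, lams):
    """Finite Poisson mixture pmf: each cluster contributes a Poisson
    component weighted by its cluster probability."""
    return sum(w * poisson_pmf(k, lam) for w, lam in zip(weights, lams))

# Hypothetical two-component mixture: 70% low-risk (rate 0.5),
# 30% high-risk (rate 4.0).
weights, lams = [0.7, 0.3], [0.5, 4.0]
print(mixture_pmf(0, weights, lams))
print(sum(mixture_pmf(k, weights, lams) for k in range(80)))  # sums to 1
```

In the concomitant-variable version studied in the paper, the weights themselves depend on covariates, so individuals are softly assigned to risk clusters before the component rates are applied.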
Khan, Asaduzzaman; Western, Mark
The purpose of this study was to explore factors that facilitate or hinder effective use of computers in Australian general medical practice. This study is based on data extracted from a national telephone survey of 480 general practitioners (GPs) across Australia. Clinical functions performed by GPs using computers were examined using zero-inflated Poisson (ZIP) regression modelling. About 17% of GPs were not using a computer for any clinical function, while 18% reported using computers for all clinical functions. The ZIP model showed that computer anxiety was negatively associated with effective computer use, while practitioners' belief about usefulness of computers was positively associated with effective computer use. Being a female GP or working in partnership or group practice increased the odds of effectively using computers for clinical functions. To fully capitalise on the benefits of computer technology, GPs need to be convinced that this technology is useful and can make a difference.
Bonate, Peter L; Sung, Crystal; Welch, Karen; Richards, Susan
2009-10-01
Patients that are exposed to biotechnology-derived therapeutics often develop antibodies to the therapeutic, the magnitude of which is assessed by measuring antibody titers. A statistical approach for analyzing antibody titer data conditional on seroconversion is presented. The proposed method is to first transform the antibody titer data based on a geometric series using a common ratio of 2 and a scale factor of 50 and then analyze the exponent using a zero-inflated or hurdle model assuming a Poisson or negative binomial distribution with random effects to account for patient heterogeneity. Patient specific covariates can be used to model the probability of developing an antibody response, i.e., seroconversion, as well as the magnitude of the antibody titer itself. The method was illustrated using antibody titer data from 87 male seroconverted Fabry patients receiving Fabrazyme. Titers from five clinical trials were collected over 276 weeks of therapy with anti-Fabrazyme IgG titers ranging from 100 to 409,600 after exclusion of seronegative patients. The best model to explain seroconversion was a zero-inflated Poisson (ZIP) model where cumulative dose (under a constant dose regimen of dosing every 2 weeks) influenced the probability of seroconversion. There was an 80% chance of seroconversion when the cumulative dose reached 210 mg (90% confidence interval: 194-226 mg). No difference in antibody titers was noted between Japanese or Western patients. Once seroconverted, antibody titers did not remain constant but decreased in an exponential manner from an initial magnitude to a new lower steady-state value. The expected titer after the new steady-state titer had been achieved was 870 (90% CI: 630-1109). The half-life to the new steady-state value after seroconversion was 44 weeks (90% CI: 17-70 weeks). Time to seroconversion did not appear to be correlated with titer at the time of seroconversion. The method can be adequately used to model antibody titer data.
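The geometric transformation described above maps each titer on the series scale × ratio^n back to its exponent n, yielding a count-like quantity suitable for ZIP or hurdle modeling. A minimal sketch (the function name is mine; the scale of 50 and common ratio of 2 are from the abstract):

```python
import math

def titer_exponent(titer, scale=50, ratio=2):
    """Map an antibody titer on the geometric series scale * ratio**n
    back to its exponent n (scale 50 and ratio 2, per the abstract)."""
    return round(math.log(titer / scale, ratio))

# The reported titer range 100 to 409,600 maps to exponents 1 to 13.
print(titer_exponent(100), titer_exponent(409600))
```

Working on the exponent scale turns a 12-doubling span of titers into small integers, which is what makes the zero-inflated and hurdle count machinery applicable to serology data in the first place.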
Some findings on zero-inflated and hurdle Poisson models for disease mapping.
Corpas-Burgos, Francisca; García-Donato, Gonzalo; Martinez-Beneito, Miguel A
2018-05-27
Zero excess in the study of geographically referenced mortality data sets has been the focus of considerable attention in the literature, with zero-inflation being the most common procedure to handle this lack of fit. Although hurdle models have also been used in disease mapping studies, their use is more rare. We show in this paper that models using particular treatments of zero excesses are often required for achieving appropriate fits in regular mortality studies since, otherwise, geographical units with low expected counts are oversmoothed. However, as also shown, an indiscriminate treatment of zero excess may be unnecessary and has a problematic implementation. In this regard, we find that naive zero-inflation and hurdle models, without an explicit modeling of the probabilities of zeroes, do not fix zero excesses problems well enough and are clearly unsatisfactory. Results sharply suggest the need for an explicit modeling of the probabilities that should vary across areal units. Unfortunately, these more flexible modeling strategies can easily lead to improper posterior distributions as we prove in several theoretical results. Those procedures have been repeatedly used in the disease mapping literature, and one should bear these issues in mind in order to propose valid models. We finally propose several valid modeling alternatives according to the results mentioned that are suitable for fitting zero excesses. We show that those proposals fix zero excesses problems and correct the mentioned oversmoothing of risks in low populated units depicting geographic patterns more suited to the data. Copyright © 2018 John Wiley & Sons, Ltd.
Analyzing Propensity Matched Zero-Inflated Count Outcomes in Observational Studies
DeSantis, Stacia M.; Lazaridis, Christos; Ji, Shuang; Spinale, Francis G.
2013-01-01
Determining the effectiveness of different treatments from observational data, which are characterized by imbalance between groups due to lack of randomization, is challenging. Propensity matching is often used to rectify imbalances among prognostic variables. However, there are no guidelines on how appropriately to analyze group matched data when the outcome is a zero inflated count. In addition, there is debate over whether to account for correlation of responses induced by matching, and/or whether to adjust for variables used in generating the propensity score in the final analysis. The aim of this research is to compare covariate unadjusted and adjusted zero-inflated Poisson models that do and do not account for the correlation. A simulation study is conducted, demonstrating that it is necessary to adjust for potential residual confounding, but that accounting for correlation is less important. The methods are applied to a biomedical research data set. PMID:24298197
Kassahun, Wondwosen; Neyens, Thomas; Molenberghs, Geert; Faes, Christel; Verbeke, Geert
2014-11-10
Count data are collected repeatedly over time in many applications, such as biology, epidemiology, and public health. Such data are often characterized by the following three features. First, correlation due to the repeated measures is usually accounted for using subject-specific random effects, which are assumed to be normally distributed. Second, the sample variance may exceed the mean, and hence, the theoretical mean-variance relationship is violated, leading to overdispersion. This is usually allowed for based on a hierarchical approach, combining a Poisson model with gamma distributed random effects. Third, an excess of zeros beyond what standard count distributions can predict is often handled by either the hurdle or the zero-inflated model. A zero-inflated model assumes two processes as sources of zeros and combines a count distribution with a discrete point mass as a mixture, while the hurdle model separately handles zero observations and positive counts, where then a truncated-at-zero count distribution is used for the non-zero state. In practice, however, all these three features can appear simultaneously. Hence, a modeling framework that incorporates all three is necessary, and this presents challenges for the data analysis. Such models, when conditionally specified, will naturally have a subject-specific interpretation. However, adopting their purposefully modified marginalized versions leads to a direct marginal or population-averaged interpretation for parameter estimates of covariate effects, which is the primary interest in many applications. In this paper, we present a marginalized hurdle model and a marginalized zero-inflated model for correlated and overdispersed count data with excess zero observations and then illustrate these further with two case studies. 
The first dataset focuses on the Anopheles mosquito density around a hydroelectric dam, while adolescents' involvement in work, to earn money and support their families or themselves, is studied in the second example. Sub-models, which result from omitting zero-inflation and/or overdispersion features, are also considered for comparison purposes. Analysis of the two datasets showed that accounting for the correlation, overdispersion, and excess zeros simultaneously resulted in a better fit to the data and, more importantly, that omission of any of them leads to incorrect marginal inference and erroneous conclusions about covariate effects. Copyright © 2014 John Wiley & Sons, Ltd.
Statistical Models for the Analysis of Zero-Inflated Pain Intensity Numeric Rating Scale Data.
Goulet, Joseph L; Buta, Eugenia; Bathulapalli, Harini; Gueorguieva, Ralitza; Brandt, Cynthia A
2017-03-01
Pain intensity is often measured in clinical and research settings using the 0 to 10 numeric rating scale (NRS). NRS scores are recorded as discrete values, and in some samples they may display a high proportion of zeroes and a right-skewed distribution. Despite this, statistical methods for normally distributed data are frequently used in the analysis of NRS data. We present results from an observational cross-sectional study examining the association of NRS scores with patient characteristics using data collected from a large cohort of 18,935 veterans in Department of Veterans Affairs care diagnosed with a potentially painful musculoskeletal disorder. The mean (variance) NRS pain was 3.0 (7.5), and 34% of patients reported no pain (NRS = 0). We compared the following statistical models for analyzing NRS scores: linear regression, generalized linear models (Poisson and negative binomial), zero-inflated and hurdle models for data with an excess of zeroes, and a cumulative logit model for ordinal data. We examined model fit, interpretability of results, and whether conclusions about the predictor effects changed across models. In this study, models that accommodate zero inflation provided a better fit than the other models. These models should be considered for the analysis of NRS data with a large proportion of zeroes. We examined and analyzed pain data from a large cohort of veterans with musculoskeletal disorders. We found that many reported no current pain on the NRS on the diagnosis date. We present several alternative statistical methods for the analysis of pain intensity data with a large proportion of zeroes. Published by Elsevier Inc.
Zipkin, Elise F.; Leirness, Jeffery B.; Kinlan, Brian P.; O'Connell, Allan F.; Silverman, Emily D.
2014-01-01
Determining appropriate statistical distributions for modeling animal count data is important for accurate estimation of abundance, distribution, and trends. In the case of sea ducks along the U.S. Atlantic coast, managers want to estimate local and regional abundance to detect and track population declines, to define areas of high and low use, and to predict the impact of future habitat change on populations. In this paper, we used a modified marked point process to model survey data that recorded flock sizes of Common eiders, Long-tailed ducks, and Black, Surf, and White-winged scoters. The data come from an experimental aerial survey, conducted by the United States Fish & Wildlife Service (USFWS) Division of Migratory Bird Management, during which east-west transects were flown along the Atlantic Coast from Maine to Florida during the winters of 2009–2011. To model the number of flocks per transect (the points), we compared the fit of four statistical distributions (zero-inflated Poisson, zero-inflated geometric, zero-inflated negative binomial and negative binomial) to data on the number of species-specific sea duck flocks that were recorded for each transect flown. To model the flock sizes (the marks), we compared the fit of flock size data for each species to seven statistical distributions: positive Poisson, positive negative binomial, positive geometric, logarithmic, discretized lognormal, zeta and Yule–Simon. Akaike’s Information Criterion and Vuong’s closeness tests indicated that the negative binomial and discretized lognormal were the best distributions for all species for the points and marks, respectively. 
These findings have important implications for estimating sea duck abundances as the discretized lognormal is a more skewed distribution than the Poisson and negative binomial, which are frequently used to model avian counts; the lognormal is also less heavy-tailed than the power law distributions (e.g., zeta and Yule–Simon), which are becoming increasingly popular for group size modeling. Choosing appropriate statistical distributions for modeling flock size data is fundamental to accurately estimating population summaries, determining required survey effort, and assessing and propagating uncertainty through decision-making processes.
Zero-inflated spatio-temporal models for disease mapping.
Torabi, Mahmoud
2017-05-01
In this paper, our aim is to analyze geographical and temporal variability of disease incidence when spatio-temporal count data have excess zeros. To that end, we consider random effects in zero-inflated Poisson models to investigate geographical and temporal patterns of disease incidence. Spatio-temporal models that employ conditionally autoregressive smoothing across the spatial dimension and B-spline smoothing over the temporal dimension are proposed. The analysis of these complex models is computationally difficult from the frequentist perspective. On the other hand, the advent of the Markov chain Monte Carlo algorithm has made the Bayesian analysis of complex models computationally convenient. The recently developed data cloning method provides a frequentist approach to mixed models that is also computationally convenient. We propose to use data cloning, which yields maximum likelihood estimates, to conduct frequentist analysis of zero-inflated spatio-temporal modeling of disease incidence. One of the advantages of the data cloning approach is that the predictions and corresponding standard errors (or prediction intervals) of smoothed disease incidence over space and time are easily obtained. We illustrate our approach using a real dataset of monthly children asthma visits to hospital in the province of Manitoba, Canada, during the period April 2006 to March 2010. Performance of our approach is also evaluated through a simulation study. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Tobit analysis of vehicle accident rates on interstate highways.
Anastasopoulos, Panagiotis Ch; Tarko, Andrew P; Mannering, Fred L
2008-03-01
There has been an abundance of research that has used Poisson models and its variants (negative binomial and zero-inflated models) to improve our understanding of the factors that affect accident frequencies on roadway segments. This study explores the application of an alternate method, tobit regression, by viewing vehicle accident rates directly (instead of frequencies) as a continuous variable that is left-censored at zero. Using data from vehicle accidents on Indiana interstates, the estimation results show that many factors relating to pavement condition, roadway geometrics and traffic characteristics significantly affect vehicle accident rates.
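The tobit view above treats the observed accident rate as a latent normal variable left-censored at zero: censored observations contribute the probability mass below zero, uncensored ones the usual density. A minimal log-likelihood sketch (the rates, fitted means, and sigma below are hypothetical):

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def tobit_loglik(y, mu, sigma):
    """Log-likelihood for rates y left-censored at zero under a latent
    Normal(mu_i, sigma) model: a censored observation contributes
    Phi(-mu_i / sigma); an uncensored one the scaled normal density."""
    ll = 0.0
    for yi, mi in zip(y, mu):
        if yi <= 0:
            ll += math.log(norm_cdf(-mi / sigma))
        else:
            ll += math.log(norm_pdf((yi - mi) / sigma) / sigma)
    return ll

# Hypothetical segment accident rates (zeros censored) and fitted means.
y = [0.0, 0.0, 1.2, 0.4]
mu = [-0.5, 0.1, 1.0, 0.5]
print(tobit_loglik(y, mu, sigma=0.6))
```

Maximizing this likelihood over the regression coefficients behind mu gives the tobit estimates; the key contrast with the count models is that zeros here are censored continuous rates, not a separate discrete state.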
Connell, Arin M.; Dishion, Thomas J.; Deater-Deckard, Kirby
2006-01-01
This 4-year study of 698 young adolescents examined the covariates of early onset substance use from Grade 6 through Grade 9. The youth were randomly assigned to a family-centered Adolescent Transitions Program (ATP) condition. Variable-centered (zero-inflated Poisson growth model) and person-centered (latent growth mixture model) approaches were…
Determinants of The Grade A Embryos in Infertile Women; Zero-Inflated Regression Model.
Almasi-Hashiani, Amir; Ghaheri, Azadeh; Omani Samani, Reza
2017-10-01
In assisted reproductive technology, it is important to choose high quality embryos for embryo transfer. The aim of the present study was to determine the grade A embryo count and factors related to it in infertile women. This historical cohort study included 996 infertile women. The main outcome was the number of grade A embryos. Zero-Inflated Poisson (ZIP) regression and Zero-Inflated Negative Binomial (ZINB) regression were used to model the count data as it contained excessive zeros. Stata software, version 13 (Stata Corp, College Station, TX, USA) was used for all statistical analyses. After adjusting for potential confounders, results from the ZINB model show that each unit increase in the number of two-pronuclear (2PN) zygotes multiplies the expected grade A embryo count by an incidence rate ratio of 1.45 (95% confidence interval (CI): 1.23-1.69, P=0.001), and each one-day increase in cleavage day multiplies it by 0.35 (95% CI: 0.20-0.61, P=0.001). There is a significant association of both the number of 2PN zygotes and cleavage day with the number of grade A embryos in both ZINB and ZIP regression models. The estimated coefficients are more plausible than values found in earlier studies using less relevant models. Copyright© by Royan Institute. All rights reserved.
Accident prediction model for public highway-rail grade crossings.
Lu, Pan; Tolliver, Denver
2016-05-01
Considerable research has focused on roadway accident frequency analysis, but relatively little research has examined safety evaluation at highway-rail grade crossings. Highway-rail grade crossings are critical spatial locations of utmost importance for transportation safety because traffic crashes at them are often catastrophic, with serious consequences. The Poisson regression model has been employed for many years as a good starting point for analyzing vehicle accident frequency. The most commonly applied variations of the Poisson model are the negative binomial and the zero-inflated Poisson. These models deal with common crash data issues such as over-dispersion (sample variance larger than the sample mean) and a preponderance of zeros (low sample mean and small sample size). On rare occasions traffic crash data have been shown to be under-dispersed (sample variance smaller than the sample mean), and traditional distributions such as the Poisson or negative binomial cannot handle under-dispersion well. The objective of this study is to investigate and compare alternative highway-rail grade crossing accident frequency models that can handle the under-dispersion issue. The contributions of the paper are two-fold: (1) applying probability models to deal with under-dispersion issues and (2) obtaining insights regarding vehicle crashes at public highway-rail grade crossings. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Spatial Poisson Hurdle Model for Exploring Geographic Variation in Emergency Department Visits
Neelon, Brian; Ghosh, Pulak; Loebs, Patrick F.
2012-01-01
We develop a spatial Poisson hurdle model to explore geographic variation in emergency department (ED) visits while accounting for zero inflation. The model consists of two components: a Bernoulli component that models the probability of any ED use (i.e., at least one ED visit per year), and a truncated Poisson component that models the number of ED visits given use. Together, these components address both the abundance of zeros and the right-skewed nature of the nonzero counts. The model has a hierarchical structure that incorporates patient- and area-level covariates, as well as spatially correlated random effects for each areal unit. Because regions with high rates of ED use are likely to have high expected counts among users, we model the spatial random effects via a bivariate conditionally autoregressive (CAR) prior, which introduces dependence between the components and provides spatial smoothing and sharing of information across neighboring regions. Using a simulation study, we show that modeling the between-component correlation reduces bias in parameter estimates. We adopt a Bayesian estimation approach, and the model can be fit using standard Bayesian software. We apply the model to a study of patient and neighborhood factors influencing emergency department use in Durham County, North Carolina. PMID:23543242
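The two-part structure described above, a Bernoulli component for any use and a zero-truncated Poisson for the count given use, can be written directly as a probability mass function. A minimal non-spatial sketch (the full model is Bayesian with CAR random effects, which this deliberately omits; names and numbers are illustrative):

```python
import math

def hurdle_pmf(k, p_use, lam):
    """Poisson hurdle pmf: a Bernoulli gate decides whether there is
    any use (k >= 1); given use, counts follow a zero-truncated Poisson."""
    if k == 0:
        return 1.0 - p_use
    poisson_k = math.exp(-lam) * lam ** k / math.factorial(k)
    return p_use * poisson_k / (1.0 - math.exp(-lam))

# The pmf sums to one, and the mean is p_use * lam / (1 - exp(-lam)).
total = sum(hurdle_pmf(k, 0.4, 1.5) for k in range(60))
mean = sum(k * hurdle_pmf(k, 0.4, 1.5) for k in range(60))
print(round(total, 6))  # 1.0
print(round(mean, 4))   # 0.7723
```

Unlike a zero-inflated model, every zero here comes from the Bernoulli gate, which is why hurdle models attribute all zeros to a single source.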
Some considerations for excess zeroes in substance abuse research.
Bandyopadhyay, Dipankar; DeSantis, Stacia M; Korte, Jeffrey E; Brady, Kathleen T
2011-09-01
Count data collected in substance abuse research often come with an excess of "zeroes," which are typically handled using zero-inflated regression models. However, there is a need to consider the design aspects of those studies before using such a statistical model to ascertain the sources of zeroes. We sought to illustrate hurdle models as alternatives to zero-inflated models to validate a two-stage decision-making process in situations of "excess zeroes." We use data from a study of 45 cocaine-dependent subjects where the primary scientific question was to evaluate whether study participation influences drug-seeking behavior. The outcome, "the frequency (count) of cocaine use days per week," is bounded (ranging from 0 to 7). We fit and compare binomial, Poisson, negative binomial, and the hurdle version of these models to study the effect of gender, age, time, and study participation on cocaine use. The hurdle binomial model provides the best fit. Gender and time are not predictive of use. Higher odds of use versus no use are associated with age; however once use is experienced, odds of further use decrease with increase in age. Participation was associated with higher odds of no-cocaine use; once there is use, participation reduced the odds of further use. Age and study participation are significantly predictive of cocaine-use behavior. The two-stage decision process as modeled by a hurdle binomial model (appropriate for bounded count data with excess zeroes) provides interesting insights into the study of covariate effects on count responses of substance use, when all enrolled subjects are believed to be "at-risk" of use.
Arab, A.; Wildhaber, M.L.; Wikle, C.K.; Gentry, C.N.
2008-01-01
Fisheries studies often employ multiple gears that result in large percentages of zero values. We considered a zero-inflated Poisson (ZIP) model with random effects to address these excessive zeros. By employing a Bayesian ZIP model that simultaneously incorporates data from multiple gears to analyze data from the Missouri River, we were able to compare gears and make more year, segment, and macrohabitat comparisons than did the original data analysis. For channel catfish Ictalurus punctatus, our results rank (highest to lowest) the mean catch per unit area (CPUA) for gears (beach seine, benthic trawl, electrofishing, and drifting trammel net); years (1998 and 1997); macrohabitats (tributary mouth, connected secondary channel, nonconnected secondary channel, and bend); and river segment zones (channelized, inter-reservoir, and least-altered). For shovelnose sturgeon Scaphirhynchus platorynchus, the mean CPUA was significantly higher for benthic trawls and drifting trammel nets; 1998 and 1997; tributary mouths, bends, and connected secondary channels; and some channelized or least-altered inter-reservoir segments. One important advantage of our approach is the ability to reliably infer patterns of relative abundance by means of multiple gears without using gear efficiencies. © Copyright by the American Fisheries Society 2008.
Mai, H M; Irons, P C; Kabir, J; Thompson, P N
2013-09-01
Brucellosis and campylobacteriosis are economically important diseases affecting bovine reproductive efficiency in Nigeria. A questionnaire-based survey was conducted in 271 cattle herds in Adamawa, Kaduna and Kano states of northern Nigeria using multistage cluster sampling. Serum from 4745 mature animals was tested for Brucella antibodies using the Rose-Bengal plate test and positives were confirmed in series-testing protocol using competitive enzyme-linked immunosorbent assay. Preputial scrapings from 602 bulls were tested using culture and identification for Campylobacter fetus. For each disease, a herd was classified as positive if one or more animals tested positive. For each herd, information on potential managemental and environmental risk factors was collected through a questionnaire administered during an interview with the manager, owner or herdsman. Multiple logistic regression models were used to model the odds of herd infection for each disease. A zero-inflated Poisson model was used to model the count of Brucella-positive animals within herds, with the number tested as an exposure variable. The presence of small ruminants (sheep and/or goats) on the same farm, and buying-in of >3 new animals in the previous year or failure to practice quarantine were associated with increased odds of herd-level campylobacteriosis and brucellosis, as well as increased within-herd counts of Brucella-positive animals. In addition, high rainfall, initial acquisition of animals from markets, practice of gynaecological examination and failure to practice herd prophylactic measures were positively associated with the odds of C. fetus infection in the herd. Herd size of >15, pastoral management system and presence of handling facility on the farm were associated with increased odds, and gynaecological examination with reduced odds of herd-level Brucella seropositivity. 
Furthermore, the zero-inflated Poisson model showed that borrowing or sharing of bulls was associated with higher counts, and provision of mineral supplement with lower counts of Brucella-positive cattle within herds. Identification of risk factors for bovine campylobacteriosis and brucellosis can help to identify appropriate control measures, and the use of zero-inflated count models can provide more specific information on these risk factors. Copyright © 2013 Elsevier B.V. All rights reserved.
Gauran, Iris Ivy M; Park, Junyong; Lim, Johan; Park, DoHwan; Zylstra, John; Peterson, Thomas; Kann, Maricel; Spouge, John L
2017-09-22
In recent mutation studies, analyses based on protein domain positions are gaining popularity over gene-centric approaches, since the latter have limitations in considering the functional context that the position of the mutation provides. This presents a large-scale simultaneous inference problem, with hundreds of hypothesis tests to consider at the same time. This article aims to select significant mutation counts while controlling Type I error at a given level via False Discovery Rate (FDR) procedures. One main assumption is that the mutation counts follow a zero-inflated model, in order to account for both the true zeros in the count model and the excess zeros. The class of models considered is the Zero-inflated Generalized Poisson (ZIGP) distribution. Furthermore, we assume that there exists a cut-off value such that counts smaller than this value are generated from the null distribution. We present several data-dependent methods to determine the cut-off value. We also consider a two-stage procedure based on a screening process, in which only mutation counts exceeding a certain value are considered significant. Simulated and protein domain data sets are used to illustrate this procedure in estimation of the empirical null using a mixture of discrete distributions. Overall, while maintaining control of the FDR, the proposed two-stage testing procedure has superior empirical power. © 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of the International Biometric Society. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
Hosseinpour, Mehdi; Yahaya, Ahmad Shukri; Sadullah, Ahmad Farhan
2014-01-01
Head-on crashes are among the most severe collision types and of great concern to road safety authorities. This justifies greater efforts to reduce both the frequency and severity of this collision type. To this end, it is necessary to first identify factors associated with crash occurrence. This can be done by developing crash prediction models that relate crash outcomes to a set of contributing factors. This study intends to identify the factors affecting both the frequency and severity of head-on crashes that occurred on 448 segments of five federal roads in Malaysia. Data on road characteristics and crash history were collected on the study segments during a 4-year period between 2007 and 2010. The frequency of head-on crashes was modeled by developing and comparing seven count-data models including Poisson, standard negative binomial (NB), random-effect negative binomial, hurdle Poisson, hurdle negative binomial, zero-inflated Poisson, and zero-inflated negative binomial models. To model crash severity, a random-effect generalized ordered probit model (REGOPM) was used, given that a head-on crash had occurred. With respect to the crash frequency, the random-effect negative binomial (RENB) model was found to outperform the other models according to goodness of fit measures. Based on the results of the model, the variables horizontal curvature, terrain type, heavy-vehicle traffic, and access points were found to be positively related to the frequency of head-on crashes, while posted speed limit and shoulder width decreased the crash frequency. With regard to the crash severity, the results of REGOPM showed that horizontal curvature, paved shoulder width, terrain type, and side friction were associated with more severe crashes, whereas land use, access points, and presence of median reduced the probability of severe crashes. Based on the results of this study, some potential countermeasures were proposed to minimize the risk of head-on crashes. 
Copyright © 2013 Elsevier Ltd. All rights reserved.
A tutorial on count regression and zero-altered count models for longitudinal substance use data
Atkins, David C.; Baldwin, Scott A.; Zheng, Cheng; Gallop, Robert J.; Neighbors, Clayton
2012-01-01
Critical research questions in the study of addictive behaviors concern how these behaviors change over time - either as the result of intervention or in naturalistic settings. The combination of count outcomes that are often strongly skewed with many zeroes (e.g., days using, number of total drinks, number of drinking consequences) with repeated assessments (e.g., longitudinal follow-up after intervention or daily diary data) present challenges for data analyses. The current article provides a tutorial on methods for analyzing longitudinal substance use data, focusing on Poisson, zero-inflated, and hurdle mixed models, which are types of hierarchical or multilevel models. Two example datasets are used throughout, focusing on drinking-related consequences following an intervention and daily drinking over the past 30 days, respectively. Both datasets as well as R, SAS, Mplus, Stata, and SPSS code showing how to fit the models are available on a supplemental website. PMID:22905895
Xu, Qin; Zhang, Wei; Zhang, Tianyi; Zhang, Ruijie; Zhao, Yanfang; Zhang, Yuan; Guo, Yibin; Wang, Rui; Ma, Xiuqiang; He, Jia
2016-07-01
That obesity leads to gastroesophageal reflux is a widespread notion. However, scientific evidence for this association is limited, with no rigorous epidemiological approach conducted to address this question. This study examined the relationship between body mass index (BMI) and gastroesophageal reflux symptoms in a large population-representative sample from China. We performed a cross-sectional study in an age- and gender-stratified random sample of the population of five central regions in China. Participants aged 18-80 years completed a general information questionnaire and a Chinese version of the Reflux Disease Questionnaire. The zero-inflated Poisson regression model estimated the relationship between body mass index and gastroesophageal reflux symptoms. Overall, 16,091 (89.4 %) of the 18,000 eligible participants responded. 638 (3.97 %) and 1738 (10.81 %) experienced at least weekly heartburn and weekly acid regurgitation, respectively. After adjusting for potential risk factors in the zero-inflated part, the frequency [odds ratio (OR) 0.66, 95 % confidence interval (95 % CI) 0.50-0.86, p = 0.002] and severity (OR 0.66, 95 % CI 0.50-0.88, p = 0.004) of heartburn in obese participants were statistically significant compared with those in normal-weight participants. In the Poisson part, for the frequency of acid regurgitation, overweight (OR 1.10, 95 % CI 1.01-1.21, p = 0.038) and obesity (OR 1.19, 95 % CI 1.04-1.37, p = 0.013) were statistically significant. BMI was strongly and positively related to the frequency and severity of gastroesophageal reflux symptoms. Additionally, gender exerted strong specific effects on the relationship between BMI and gastroesophageal reflux symptoms. The severity and frequency of heartburn were positively correlated with obesity. This relationship was distinct in male participants only.
Identifiability in N-mixture models: a large-scale screening test with bird data.
Kéry, Marc
2018-02-01
Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
Dental Caries and Enamel Defects in Very Low Birth Weight Adolescents
Nelson, S.; Albert, J.M.; Lombardi, G.; Wishnek, S.; Asaad, G.; Kirchner, H.L.; Singer, L.T.
2011-01-01
Objectives The purpose of this study was to examine developmental enamel defects and dental caries in very low birth weight adolescents with high risk (HR-VLBW) and low risk (LR-VLBW) compared to full-term (term) adolescents. Methods The sample consisted of 224 subjects (80 HR-VLBW, 59 LR-VLBW, 85 term adolescents) recruited from an ongoing longitudinal study. Sociodemographic and medical information was available from birth. Dental examination of the adolescent at the 14-year visit included: enamel defects (opacity and hypoplasia); decayed, missing, filled teeth of incisors and molars (DMFT-IM) and of overall permanent teeth (DMFT); Simplified Oral Hygiene Index for debris/calculus on teeth, and sealant presence. A caregiver questionnaire completed simultaneously assessed dental behavior, access, insurance status and prevention factors. Hierarchical analysis utilized the zero-inflated negative binomial model and zero-inflated Poisson model. Results The zero-inflated negative binomial model controlling for sociodemographic variables indicated that the LR-VLBW group had an estimated 75% increase (p < 0.05) in number of demarcated opacities in the incisors and first molar teeth compared to the term group. Hierarchical modeling indicated that demarcated opacities were a significant predictor of DMFT-IM after control for relevant covariates. The term adolescents had significantly increased DMFT-IM and DMFT scores compared to the LR-VLBW adolescents. Conclusion LR-VLBW was a significant risk factor for increased enamel defects in the permanent incisors and first molars. Term children had increased caries compared to the LR-VLBW group. The effect of birth group and enamel defects on caries has to be investigated longitudinally from birth. PMID:20975268
Statistical procedures for analyzing mental health services data.
Elhai, Jon D; Calhoun, Patrick S; Ford, Julian D
2008-08-15
In mental health services research, analyzing service utilization data often poses serious problems, given the presence of substantially skewed data distributions. This article presents a non-technical introduction to statistical methods specifically designed to handle the complexly distributed datasets that represent mental health service use, including Poisson, negative binomial, zero-inflated, and zero-truncated regression models. A flowchart is provided to assist the investigator in selecting the most appropriate method. Finally, a dataset of mental health service use reported by medical patients is described, and a comparison of results across several different statistical methods is presented. Implications of matching data analytic techniques appropriately with the often complexly distributed datasets of mental health services utilization variables are discussed.
Negative Binomial Process Count and Mixture Modeling.
Zhou, Mingyuan; Carin, Lawrence
2015-02-01
The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
Assessment and Selection of Competing Models for Zero-Inflated Microbiome Data
Xu, Lizhen; Paterson, Andrew D.; Turpin, Williams; Xu, Wei
2015-01-01
Typical data in a microbiome study consist of the operational taxonomic unit (OTU) counts that have the characteristic of excess zeros, which are often ignored by investigators. In this paper, we compare the performance of different competing methods to model data with zero inflated features through extensive simulations and application to a microbiome study. These methods include standard parametric and non-parametric models, hurdle models, and zero inflated models. We examine varying degrees of zero inflation, with or without dispersion in the count component, as well as different magnitude and direction of the covariate effect on structural zeros and the count components. We focus on the assessment of type I error, power to detect the overall covariate effect, measures of model fit, and bias and effectiveness of parameter estimations. We also evaluate the abilities of model selection strategies using Akaike information criterion (AIC) or Vuong test to identify the correct model. The simulation studies show that hurdle and zero inflated models have well controlled type I errors, higher power, better goodness of fit measures, and are more accurate and efficient in the parameter estimation. Besides that, the hurdle models have similar goodness of fit and parameter estimation for the count component as their corresponding zero inflated models. However, the estimation and interpretation of the parameters for the zero components differs, and hurdle models are more stable when structural zeros are absent. We then discuss the model selection strategy for zero inflated data and implement it in a gut microbiome study of > 400 independent subjects. PMID:26148172
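The AIC-based comparison between a plain Poisson fit and a zero-inflated alternative, one of the selection strategies evaluated above, can be sketched on simulated data. This toy example uses a coarse grid search for the ZIP fit rather than a proper optimizer, and all names and parameter values are illustrative:

```python
import math, random

def poisson_loglik(data, lam):
    """Log-likelihood of an ordinary Poisson(lam) sample."""
    return sum(-lam + k * math.log(lam) - math.lgamma(k + 1) for k in data)

def zip_loglik(data, pi, lam):
    """Log-likelihood of a zero-inflated Poisson(pi, lam) sample."""
    ll = 0.0
    for k in data:
        if k == 0:
            ll += math.log(pi + (1.0 - pi) * math.exp(-lam))
        else:
            ll += math.log(1.0 - pi) - lam + k * math.log(lam) - math.lgamma(k + 1)
    return ll

def sample_poisson(lam, rng):
    # Knuth's multiplicative method; adequate for small lam.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
# Simulate counts with 30% structural zeros on top of Poisson(2).
data = [0 if rng.random() < 0.3 else sample_poisson(2.0, rng) for _ in range(300)]

lam_hat = sum(data) / len(data)          # Poisson MLE is the sample mean
aic_pois = 2 * 1 - 2 * poisson_loglik(data, lam_hat)
pi_hat, lam_zip = max(((p / 100, l / 10) for p in range(5, 60, 5)
                       for l in range(2, 60, 2)),
                      key=lambda t: zip_loglik(data, *t))
aic_zip = 2 * 2 - 2 * zip_loglik(data, pi_hat, lam_zip)
print(aic_zip < aic_pois)  # ZIP is preferred on this zero-inflated sample
```

Because the simulated data genuinely contain structural zeros, the two extra-parameter penalty of the ZIP model is easily outweighed by its likelihood gain, mirroring the paper's finding that zero-inflated and hurdle fits dominate when zero inflation is real.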
Hierarchical modeling of bycatch rates of sea turtles in the western North Atlantic
Gardner, B.; Sullivan, P.J.; Epperly, S.; Morreale, S.J.
2008-01-01
Previous studies indicate that the locations of the endangered loggerhead Caretta caretta and critically endangered leatherback Dermochelys coriacea sea turtles are influenced by water temperatures, and that incidental catch rates in the pelagic longline fishery vary by region. We present a Bayesian hierarchical model to examine the effects of environmental variables, including water temperature, on the number of sea turtles captured in the US pelagic longline fishery in the western North Atlantic. The modeling structure is highly flexible, utilizes a Bayesian model selection technique, and is fully implemented in the software program WinBUGS. The number of sea turtles captured is modeled as a zero-inflated Poisson distribution and the model incorporates fixed effects to examine region-specific differences in the parameter estimates. Results indicate that water temperature, region, bottom depth, and target species are all significant predictors of the number of loggerhead sea turtles captured. For leatherback sea turtles, the model with only target species had the most posterior model weight, though a re-parameterization of the model indicates that temperature influences the zero-inflation parameter. The relationship between the number of sea turtles captured and the variables of interest all varied by region. This suggests that management decisions aimed at reducing sea turtle bycatch may be more effective if they are spatially explicit. © Inter-Research 2008.
Map scale effects on estimating the number of undiscovered mineral deposits
Singer, D.A.; Menzie, W.D.
2008-01-01
Estimates of numbers of undiscovered mineral deposits, fundamental to assessing mineral resources, are affected by map scale. Where consistently defined deposits of a particular type are estimated, spatial and frequency distributions of deposits are linked in that some frequency distributions can be generated by processes randomly in space whereas others are generated by processes suggesting clustering in space. Possible spatial distributions of mineral deposits and their related frequency distributions are affected by map scale and associated inclusions of non-permissive or covered geological settings. More generalized map scales are more likely to cause inclusion of geologic settings that are not really permissive for the deposit type, or that include unreported cover over permissive areas, resulting in the appearance of deposit clustering. Thus, overly generalized map scales can cause deposits to appear clustered. We propose a model that captures the effects of map scale and the related inclusion of non-permissive geologic settings on numbers of deposits estimates, the zero-inflated Poisson distribution. Effects of map scale as represented by the zero-inflated Poisson distribution suggest that the appearance of deposit clustering should diminish as mapping becomes more detailed because the number of inflated zeros would decrease with more detailed maps. Based on observed worldwide relationships between map scale and areas permissive for deposit types, mapping at a scale with twice the detail should cut permissive area size of a porphyry copper tract to 29% and a volcanic-hosted massive sulfide tract to 50% of their original sizes. Thus some direct benefits of mapping an area at a more detailed scale are indicated by significant reductions in areas permissive for deposit types, increased deposit density and, as a consequence, reduced uncertainty in the estimate of number of undiscovered deposits. 
Exploration enterprises benefit from reduced areas requiring detailed and expensive exploration, and land-use planners benefit from reduced areas of concern. © 2008 International Association for Mathematical Geology.
Preisser, John S; Long, D Leann; Stamm, John W
2017-01-01
Marginalized zero-inflated count regression models have recently been introduced for the statistical analysis of dental caries indices and other zero-inflated count data as alternatives to traditional zero-inflated and hurdle models. Unlike the standard approaches, the marginalized models directly estimate overall exposure or treatment effects by relating covariates to the marginal mean count. This article discusses model interpretation and model class choice according to the research question being addressed in caries research. Two data sets, one consisting of fictional dmft counts in 2 groups and the other on DMFS among schoolchildren from a randomized clinical trial comparing 3 toothpaste formulations to prevent incident dental caries, are analyzed with negative binomial hurdle, zero-inflated negative binomial, and marginalized zero-inflated negative binomial models. In the first example, estimates of treatment effects vary according to the type of incidence rate ratio (IRR) estimated by the model. Estimates of IRRs in the analysis of the randomized clinical trial were similar despite their distinctive interpretations. The choice of statistical model class should match the study's purpose, while accounting for the broad decline in children's caries experience, such that dmft and DMFS indices more frequently generate zero counts. Marginalized (marginal mean) models for zero-inflated count data should be considered for direct assessment of exposure effects on the marginal mean dental caries count in the presence of high frequencies of zero counts. © 2017 S. Karger AG, Basel.
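The contrast drawn above between count-part (latent) effects and overall effects comes down to the identity E[Y] = (1 - pi) × mu in a zero-inflated model: marginalized models parameterize the left-hand side directly, while traditional ZI models parameterize the two right-hand factors. A minimal sketch with illustrative numbers:

```python
def marginal_mean(pi, mu_count):
    """Overall mean count in a zero-inflated model: E[Y] = (1 - pi) * mu."""
    return (1.0 - pi) * mu_count

def latent_mean(pi, marginal):
    """Count-part mean implied by a marginal-mean parameterization."""
    return marginal / (1.0 - pi)

# With 40% structural zeros, a count-part mean of 2 dmft corresponds
# to a marginal mean of only 1.2, so the two kinds of IRR answer
# different questions unless pi is equal across groups.
print(marginal_mean(0.4, 2.0))  # 1.2
print(latent_mean(0.4, 1.2))    # 2.0
```

This is why the article stresses matching the model class to the research question: a treatment IRR on the count-part mean and an IRR on the marginal mean coincide only when the zero-inflation probability does not differ between groups.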
QMRA for Drinking Water: 2. The Effect of Pathogen Clustering in Single-Hit Dose-Response Models.
Nilsen, Vegard; Wyller, John
2016-01-01
Spatial and/or temporal clustering of pathogens will invalidate the commonly used assumption of Poisson-distributed pathogen counts (doses) in quantitative microbial risk assessment. In this work, the theoretically predicted effect of spatial clustering in conventional "single-hit" dose-response models is investigated by employing the stuttering Poisson distribution, a very general family of count distributions that naturally models pathogen clustering and contains the Poisson and negative binomial distributions as special cases. The analysis is facilitated by formulating the dose-response models in terms of probability generating functions. It is shown formally that the theoretical single-hit risk obtained with a stuttering Poisson distribution is lower than that obtained with a Poisson distribution, assuming identical mean doses. A similar result holds for mixed Poisson distributions. Numerical examples indicate that the theoretical single-hit risk is fairly insensitive to moderate clustering, though the effect tends to be more pronounced for low mean doses. Furthermore, using Jensen's inequality, an upper bound on risk is derived that tends to better approximate the exact theoretical single-hit risk for highly overdispersed dose distributions. The bound holds with any dose distribution (characterized by its mean and zero inflation index) and any conditional dose-response model that is concave in the dose variable. Its application is exemplified with published data from Norovirus feeding trials, for which some of the administered doses were prepared from an inoculum of aggregated viruses. The potential implications of clustering for dose-response assessment as well as practical risk characterization are discussed. © 2016 Society for Risk Analysis.
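The probability-generating-function (pgf) formulation described above admits a compact numerical check. The sketch below uses illustrative parameter values (not taken from the paper); `r` denotes a hypothetical per-pathogen probability of initiating infection. It compares the theoretical single-hit risk under Poisson and negative binomial dose distributions with identical mean dose:

```python
import math

def poisson_single_hit_risk(mu: float, r: float) -> float:
    """Single-hit risk for a Poisson-distributed dose.

    Risk = 1 - G(1 - r), where G(s) = exp(mu * (s - 1)) is the
    Poisson pgf and r is the per-pathogen infection probability.
    """
    return 1.0 - math.exp(-mu * r)

def negbin_single_hit_risk(mu: float, k: float, r: float) -> float:
    """Single-hit risk for a negative binomial dose with mean mu and
    dispersion (shape) k; pgf G(s) = (1 + (mu/k) * (1 - s)) ** (-k).
    Clustering (small k) lowers the risk at a fixed mean dose.
    """
    return 1.0 - (1.0 + mu * r / k) ** (-k)

mu, r = 10.0, 0.05
p_pois = poisson_single_hit_risk(mu, r)
p_nb = negbin_single_hit_risk(mu, k=0.5, r=r)
print(p_pois, p_nb)
```

Consistent with the formal result, the clustered (negative binomial) dose yields the lower theoretical risk, and the two values coincide as the dispersion parameter grows large.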
ERIC Educational Resources Information Center
You, Jianing; Leung, Freedom
2012-01-01
This study used zero-inflated Poisson regression analysis to examine the role of depressive symptoms, family invalidation, and behavioral impulsivity in the occurrence and repetition of non-suicidal self-injury among Chinese community adolescents over a 2-year period. Participants, 4782 high school students, were assessed twice during the…
ERIC Educational Resources Information Center
Magnus, Brooke E.; Thissen, David
2017-01-01
Questionnaires that include items eliciting count responses are becoming increasingly common in psychology. This study proposes methodological techniques to overcome some of the challenges associated with analyzing multivariate item response data that exhibit zero inflation, maximum inflation, and heaping at preferred digits. The modeling…
Bayesian inference for unidirectional misclassification of a binary response trait.
Xia, Michelle; Gustafson, Paul
2018-03-15
When assessing association between a binary trait and some covariates, the binary response may be subject to unidirectional misclassification. Unidirectional misclassification can occur when revealing a particular level of the trait is associated with a type of cost, such as a social desirability or financial cost. The feasibility of addressing misclassification is commonly obscured by model identification issues. The current paper attempts to study the efficacy of inference when the binary response variable is subject to unidirectional misclassification. From a theoretical perspective, we demonstrate that the key model parameters possess identifiability, except for the case with a single binary covariate. From a practical standpoint, the logistic model with quantitative covariates can be weakly identified, in the sense that the Fisher information matrix may be near singular. This can make learning some parameters difficult under certain parameter settings, even with quite large samples. In other cases, the stronger identification enables the model to provide more effective adjustment for unidirectional misclassification. An extension to the Poisson approximation of the binomial model reveals the identifiability of the Poisson and zero-inflated Poisson models. For fully identified models, the proposed method adjusts for misclassification based on learning from data. For binary models where there is difficulty in identification, the method is useful for sensitivity analyses on the potential impact from unidirectional misclassification. Copyright © 2017 John Wiley & Sons, Ltd.
Observation weights unlock bulk RNA-seq tools for zero inflation and single-cell applications.
Van den Berge, Koen; Perraudeau, Fanny; Soneson, Charlotte; Love, Michael I; Risso, Davide; Vert, Jean-Philippe; Robinson, Mark D; Dudoit, Sandrine; Clement, Lieven
2018-02-26
Dropout events in single-cell RNA sequencing (scRNA-seq) cause many transcripts to go undetected and induce an excess of zero read counts, leading to power issues in differential expression (DE) analysis. This has triggered the development of bespoke scRNA-seq DE methods to cope with zero inflation. Recent evaluations, however, have shown that dedicated scRNA-seq tools provide no advantage compared to traditional bulk RNA-seq tools. We introduce a weighting strategy, based on a zero-inflated negative binomial model, that identifies excess zero counts and generates gene- and cell-specific weights to unlock bulk RNA-seq DE pipelines for zero-inflated data, boosting performance for scRNA-seq.
SEMIPARAMETRIC ZERO-INFLATED MODELING IN MULTI-ETHNIC STUDY OF ATHEROSCLEROSIS (MESA)
Liu, Hai; Ma, Shuangge; Kronmal, Richard; Chan, Kung-Sik
2013-01-01
We analyze the Agatston score of coronary artery calcium (CAC) from the Multi-Ethnic Study of Atherosclerosis (MESA) using a semiparametric zero-inflated modeling approach, where the observed CAC scores from this cohort consist of a high frequency of zeros and continuously distributed positive values. Both partially constrained and unconstrained models are considered to investigate the underlying biological processes of CAC development from zero to positive, and from small amounts to large amounts. Unlike existing studies, a model selection procedure based on likelihood cross-validation is adopted to identify the optimal model, which is justified by comparative Monte Carlo studies. A shrinkage version of the cubic regression spline is used for model estimation and variable selection simultaneously. When applying the proposed methods to the MESA data analysis, we show that the two biological mechanisms influencing the initiation of CAC and the magnitude of CAC when it is positive are better characterized by an unconstrained zero-inflated normal model. Our results differ significantly from those in published studies and may provide further insights into the biological mechanisms underlying CAC development in humans. This highly flexible statistical framework can be applied to zero-inflated data analyses in other areas. PMID:23805172
Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model
NASA Technical Reports Server (NTRS)
Vallejo, Jonathon; Hejduk, Matt; Stamey, James
2015-01-01
We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
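A minimal sketch of the zero-inflated Beta density underlying such a model; the Bayesian mixed-model machinery is omitted, and `pi0`, `a`, and `b` are illustrative values, not estimates from the Pc data. The distribution places a point mass at zero and a rescaled Beta density on (0, 1), so total probability is the point mass plus the integral of the continuous part:

```python
import math

def beta_pdf(y: float, a: float, b: float) -> float:
    """Beta(a, b) density, computed via log-gamma for stability."""
    log_f = (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
             + (a - 1) * math.log(y) + (b - 1) * math.log(1 - y))
    return math.exp(log_f)

def zib_density(y: float, pi0: float, a: float, b: float) -> float:
    """Zero-inflated Beta: point mass pi0 at y == 0, scaled Beta on (0, 1)."""
    if y == 0:
        return pi0
    return (1 - pi0) * beta_pdf(y, a, b)

pi0, a, b = 0.3, 2.0, 5.0
n = 100_000
grid = [(i + 0.5) / n for i in range(n)]  # midpoint rule on (0, 1)
cont = sum(zib_density(y, pi0, a, b) for y in grid) / n
total = pi0 + cont
print(total)  # ≈ 1
```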
Witte, Susan S; Aira, Toivgoo; Tsai, Laura Cordisco; Riedel, Marion; Offringa, Reid; Chang, Mingway; El-Bassel, Nabila; Ssewamala, Fred
2015-03-01
We tested whether a structural intervention combining savings-led microfinance and HIV prevention components would achieve enhanced reductions in sexual risk among women engaging in street-based sex work in Ulaanbaatar, Mongolia, compared with an HIV prevention intervention alone. Between November 2011 and August 2012, we randomized 107 eligible women who completed baseline assessments to either a 4-session HIV sexual risk reduction intervention (HIVSRR) alone (n=50) or a 34-session HIVSRR plus a savings-led microfinance intervention (n=57). At 3- and 6-month follow-up assessments, participants reported unprotected acts of vaginal intercourse with paying partners and number of paying partners with whom they engaged in sexual intercourse in the previous 90 days. Using Poisson and zero-inflated Poisson model regressions, we examined the effects of assignment to treatment versus control condition on outcomes. At 6-month follow-up, the HIVSRR plus microfinance participants reported significantly fewer paying sexual partners and were more likely to report zero unprotected vaginal sex acts with paying sexual partners. Findings advance the HIV prevention repertoire for women, demonstrating that risk reduction may be achieved through a structural intervention that relies on asset building, including savings, and alternatives to income from sex work.
Wen, L; Bowen, C R; Hartman, G L
2017-10-01
Dispersal of urediniospores by wind is the primary means of spread for Phakopsora pachyrhizi, the cause of soybean rust. Our research focused on the short-distance movement of urediniospores from within the soybean canopy and up to 61 m from field-grown rust-infected soybean plants. Environmental variables were used to develop and compare models including the least absolute shrinkage and selection operator regression, zero-inflated Poisson/regular Poisson regression, random forest, and neural network to describe deposition of urediniospores collected in passive and active traps. All four models identified distance of trap from source, humidity, temperature, wind direction, and wind speed as the five most important variables influencing short-distance movement of urediniospores. The random forest model provided the best predictions, explaining 76.1 and 86.8% of the total variation in the passive- and active-trap datasets, respectively. The prediction accuracy based on the correlation coefficient (r) between predicted values and the true values were 0.83 (P < 0.0001) and 0.94 (P < 0.0001) for the passive and active trap datasets, respectively. Overall, multiple machine learning techniques identified the most important variables to make the most accurate predictions of movement of P. pachyrhizi urediniospores short-distance.
Xiao, Yundan; Zhang, Xiongqing; Ji, Ping
2015-01-01
Forest fires can cause catastrophic damage to natural resources and bring serious economic and social impacts. Meteorological factors play a critical role in establishing conditions favorable for a forest fire. Effective prediction of forest fire occurrences could prevent or minimize losses. This paper uses count data models to analyze fire occurrence data, which are likely to be overdispersed and frequently contain an excess of zero counts (no fire occurrence). Such data have commonly been analyzed using count data models such as the Poisson model, negative binomial (NB) model, zero-inflated models, and hurdle models. The data used in this paper were collected from Qiannan autonomous prefecture of Guizhou province in China. Using the fire occurrence data from January to April (spring fire season) for the years 1996 through 2007, we introduced random effects into the count data models. The results indicated that the prediction achieved through the NB model provided a more compelling and credible inferential basis for fitting actual forest fire occurrence, and that the mixed-effects model performed better than the corresponding fixed-effects model in forest fire forecasting. Among all meteorological factors, we found that relative humidity and wind speed are highly correlated with fire occurrence.
Simulation on Poisson and negative binomial models of count road accident modeling
NASA Astrophysics Data System (ADS)
Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.
2016-11-01
Accident count data have often been shown to exhibit overdispersion; the data may also contain excess zero counts. A simulation study was conducted to create scenarios in which accidents happen at a T-junction, with the assumption that the dependent variable of the generated data follows a certain distribution, namely the Poisson or negative binomial distribution, for sample sizes ranging from n=30 to n=500. The study objective was accomplished by fitting Poisson regression, negative binomial regression, and hurdle negative binomial models to the simulated data. Model fit was compared, and the simulation results show that, for each sample size, not all models fit the data well even when the data were generated from the model's own distribution, especially when the sample size is large. Furthermore, larger sample sizes yield more zero accident counts in the dataset.
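A stdlib-only sketch of how such excess zeros can be generated in simulation, using a zero-inflated Poisson mechanism with hypothetical parameters (30% structural zeros on top of a Poisson(2) count). The observed zero fraction matches the ZIP prediction and exceeds what a plain Poisson with the same rate would give:

```python
import math
import random

random.seed(42)

def poisson_draw(lam: float) -> int:
    """Knuth's inversion sampler for a Poisson(lam) variate."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

# Hypothetical parameters: structural-zero probability pi0, Poisson rate lam.
n, pi0, lam = 50_000, 0.3, 2.0
counts = [0 if random.random() < pi0 else poisson_draw(lam) for _ in range(n)]

zero_frac = counts.count(0) / n
pois_zero = math.exp(-lam)               # P(Y = 0) under a plain Poisson(lam)
zip_zero = pi0 + (1 - pi0) * pois_zero   # P(Y = 0) under the ZIP model
print(zero_frac, zip_zero, pois_zero)
```

The ZIP mean is (1 - pi0) * lam, so the sample mean of the simulated counts should be close to 1.4 here.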
Nobre, Aline Araújo; Carvalho, Marilia Sá; Griep, Rosane Härter; Fonseca, Maria de Jesus Mendes da; Melo, Enirtes Caetano Prates; Santos, Itamar de Souza; Chor, Dora
2017-08-17
To compare two methodological approaches: the multinomial model and the zero-inflated gamma model, evaluating the factors associated with the practice and amount of time spent on leisure time physical activity. Data collected from 14,823 baseline participants in the Longitudinal Study of Adult Health (ELSA-Brasil - Estudo Longitudinal de Saúde do Adulto ) have been analysed. Regular leisure time physical activity has been measured using the leisure time physical activity module of the International Physical Activity Questionnaire. The explanatory variables considered were gender, age, education level, and annual per capita family income. The main advantage of the zero-inflated gamma model over the multinomial model is that it estimates mean time (minutes per week) spent on leisure time physical activity. For example, on average, men spent 28 minutes/week longer on leisure time physical activity than women did. The most sedentary groups were young women with low education level and income. The zero-inflated gamma model, which is rarely used in epidemiological studies, can give more appropriate answers in several situations. In our case, we have obtained important information on the main determinants of the duration of leisure time physical activity. This information can help guide efforts towards the most vulnerable groups since physical inactivity is associated with different diseases and even premature death.
Liu, Fang; Eugenio, Evercita C
2018-04-01
Beta regression is an increasingly popular statistical technique in medical research for modeling of outcomes that assume values in (0, 1), such as proportions and patient reported outcomes. When outcomes take values in the intervals [0,1), (0,1], or [0,1], zero-or-one-inflated beta (zoib) regression can be used. We provide a thorough review on beta regression and zoib regression in the modeling, inferential, and computational aspects via the likelihood-based and Bayesian approaches. We demonstrate the statistical and practical importance of correctly modeling the inflation at zero/one rather than ad hoc replacing them with values close to zero/one via simulation studies; the latter approach can lead to biased estimates and invalid inferences. We show via simulation studies that the likelihood-based approach is computationally faster in general than MCMC algorithms used in the Bayesian inferences, but runs the risk of non-convergence, large biases, and sensitivity to starting values in the optimization algorithm especially with clustered/correlated data, data with sparse inflation at zero and one, and data that warrant regularization of the likelihood. The disadvantages of the regular likelihood-based approach make the Bayesian approach an attractive alternative in these cases. Software packages and tools for fitting beta and zoib regressions in both the likelihood-based and Bayesian frameworks are also reviewed.
Cai, Qing; Lee, Jaeyoung; Eluru, Naveen; Abdel-Aty, Mohamed
2016-08-01
This study attempts to explore the viability of dual-state models (i.e., zero-inflated and hurdle models) for traffic analysis zones (TAZs) based pedestrian and bicycle crash frequency analysis. Additionally, spatial spillover effects are explored in the models by employing exogenous variables from neighboring zones. The dual-state models such as zero-inflated negative binomial and hurdle negative binomial models (with and without spatial effects) are compared with the conventional single-state model (i.e., negative binomial). The model comparison for pedestrian and bicycle crashes revealed that the models that considered observed spatial effects perform better than the models that did not consider the observed spatial effects. Across the models with spatial spillover effects, the dual-state models especially zero-inflated negative binomial model offered better performance compared to single-state models. Moreover, the model results clearly highlighted the importance of various traffic, roadway, and sociodemographic characteristics of the TAZ as well as neighboring TAZs on pedestrian and bicycle crash frequency. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Three-dimensional Polymer Scaffolding Material Exhibiting a Zero Poisson's Ratio.
Soman, Pranav; Fozdar, David Y; Lee, Jin Woo; Phadke, Ameya; Varghese, Shyni; Chen, Shaochen
2012-05-14
Poisson's ratio describes the degree to which a material contracts (expands) transversally when axially strained. A material with a zero Poisson's ratio does not transversally deform in response to an axial strain (stretching). In tissue engineering applications, scaffolding having a zero Poisson's ratio (ZPR) may be more suitable for emulating the behavior of native tissues and accommodating and transmitting forces to the host tissue site during wound healing (or tissue regrowth). For example, scaffolding with a zero Poisson's ratio may be beneficial in the engineering of cartilage, ligament, corneal, and brain tissues, which are known to possess Poisson's ratios of nearly zero. Here, we report a 3D biomaterial constructed from polyethylene glycol (PEG) exhibiting in-plane Poisson's ratios of zero for large values of axial strain. We use digital micro-mirror device projection printing (DMD-PP) to create single- and double-layer scaffolds composed of semi re-entrant pores whose arrangement and deformation mechanisms contribute to the zero Poisson's ratio. Strain experiments confirm the zero-Poisson's-ratio behavior of the scaffolds and show that the addition of layers does not change the Poisson's ratio. Human mesenchymal stem cells (hMSCs) cultured on biomaterials with zero Poisson's ratio demonstrate the feasibility of utilizing these novel materials for biological applications that require little to no transverse deformation resulting from axial strains. Techniques used in this work allow Poisson's ratio to be both scale-independent and independent of the choice of strut material for strains in the elastic regime, and therefore ZPR behavior can be imparted to a variety of photocurable biomaterials.
A Zero- and K-Inflated Mixture Model for Health Questionnaire Data
Finkelman, Matthew D.; Green, Jennifer Greif; Gruber, Michael J.; Zaslavsky, Alan M.
2011-01-01
In psychiatric assessment, Item Response Theory (IRT) is a popular tool to formalize the relation between the severity of a disorder and associated responses to questionnaire items. Practitioners of IRT sometimes make the assumption of normally distributed severities within a population; while convenient, this assumption is often violated when measuring psychiatric disorders. Specifically, there may be a sizable group of respondents whose answers place them at an extreme of the latent trait spectrum. In this article, a zero- and K-inflated mixture model is developed to account for the presence of such respondents. The model is fitted using an expectation-maximization (E-M) algorithm to estimate the percentage of the population at each end of the continuum, concurrently analyzing the remaining “graded component” via IRT. A method to perform factor analysis for only the graded component is introduced. In assessments of oppositional defiant disorder and conduct disorder, the zero- and K-inflated model exhibited better fit than the standard IRT model. PMID:21365673
Aldwin, Carolyn M.; Molitor, Nuoo-Ting; Avron, Spiro; Levenson, Michael R.; Molitor, John; Igarashi, Heidi
2011-01-01
We examined long-term patterns of stressful life events (SLE) and their impact on mortality contrasting two theoretical models: allostatic load (linear relationship) and hormesis (inverted U relationship) in 1443 NAS men (aged 41–87 in 1985; M = 60.30, SD = 7.3) with at least two reports of SLEs over 18 years (total observations = 7,634). Using a zero-inflated Poisson growth mixture model, we identified four patterns of SLE trajectories, three showing linear decreases over time with low, medium, and high intercepts, respectively, and one an inverted U, peaking at age 70. Repeating the analysis omitting two health-related SLEs yielded only the first three linear patterns. Compared to the low-stress group, both the moderate and the high-stress groups showed excess mortality, controlling for demographics and health behavior habits, HRs = 1.42 and 1.37, ps <.01 and <.05. The relationship between stress trajectories and mortality was complex and not easily explained by either theoretical model. PMID:21961066
Hüls, Anke; Frömke, Cornelia; Ickstadt, Katja; Hille, Katja; Hering, Johanna; von Münchhausen, Christiane; Hartmann, Maria; Kreienbrock, Lothar
2017-01-01
Antimicrobial resistance in livestock is a matter of general concern. To develop hygiene measures and methods for resistance prevention and control, epidemiological studies on a population level are needed to detect factors associated with antimicrobial resistance in livestock holdings. In general, regression models are used to describe these relationships between environmental factors and resistance outcome. Besides the study design, the correlation structures of the different outcomes of antibiotic resistance and structural zero measurements on the resistance outcome as well as on the exposure side are challenges for the epidemiological model-building process. The use of appropriate regression models that acknowledge these complexities is essential to assure valid epidemiological interpretations. The aims of this paper are (i) to explain the model-building process by comparing several competing models for count data (negative binomial model, quasi-Poisson model, zero-inflated model, and hurdle model) and (ii) to compare these models using data from a cross-sectional study on antibiotic resistance in animal husbandry. These goals are essential to evaluate which model is most suitable to identify potential prevention measures. The dataset used as an example in our analyses was generated initially to study the prevalence and associated factors for the appearance of cefotaxime-resistant Escherichia coli in 48 German fattening pig farms. For each farm, the outcome was the count of samples with resistant bacteria. There was almost no overdispersion and only moderate evidence of excess zeros in the data. Our analyses show that it is essential to evaluate regression models in studies analyzing the relationship between environmental factors and antibiotic resistance in livestock. After model comparison based on evaluation of model predictions, Akaike information criterion, and Pearson residuals, the hurdle model was judged to be the most appropriate model. PMID:28620609
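The hurdle structure that distinguishes these candidate models can be sketched directly: the zero probability is modeled outright, and positive counts follow a zero-truncated distribution rescaled so the pmf sums to one. A minimal hurdle-Poisson version (parameter values are illustrative, not estimates from the pig-farm data):

```python
import math

def hurdle_poisson_pmf(k: int, p0: float, lam: float) -> float:
    """Hurdle Poisson pmf: P(0) = p0 directly; positive counts follow a
    zero-truncated Poisson(lam), rescaled by (1 - p0)."""
    if k == 0:
        return p0
    trunc = math.exp(-lam) * lam ** k / math.factorial(k)
    return (1 - p0) * trunc / (1 - math.exp(-lam))

p0, lam = 0.55, 1.8
pmf = [hurdle_poisson_pmf(k, p0, lam) for k in range(60)]
print(sum(pmf))  # ≈ 1
```

Unlike a zero-inflated model, where P(0) mixes structural and sampling zeros as pi + (1 - pi) * exp(-lam), the hurdle model's zero probability is a free parameter decoupled from the count component.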
Extending the Applicability of the Generalized Likelihood Function for Zero-Inflated Data Series
NASA Astrophysics Data System (ADS)
Oliveira, Debora Y.; Chaffe, Pedro L. B.; Sá, João H. M.
2018-03-01
Proper uncertainty estimation for data series with a high proportion of zero and near zero observations has been a challenge in hydrologic studies. This technical note proposes a modification to the Generalized Likelihood function that accounts for zero inflation of the error distribution (ZI-GL). We compare the performance of the proposed ZI-GL with the original Generalized Likelihood function using the entire data series (GL) and by simply suppressing zero observations (GLy>0). These approaches were applied to two interception modeling examples characterized by data series with a significant number of zeros. The ZI-GL produced better uncertainty ranges than the GL as measured by the precision, reliability and volumetric bias metrics. The comparison between ZI-GL and GLy>0 highlights the need for further improvement in the treatment of residuals from near zero simulations when a linear heteroscedastic error model is considered. Aside from the interception modeling examples illustrated herein, the proposed ZI-GL may be useful for other hydrologic studies, such as for the modeling of the runoff generation in hillslopes and ephemeral catchments.
Kelley, Mary E.; Anderson, Stewart J.
2008-01-01
The aim of the paper is to produce a methodology that will allow users of ordinal scale data to more accurately model the distribution of ordinal outcomes in which some subjects are susceptible to exhibiting the response and some are not (i.e., the dependent variable exhibits zero inflation). This situation occurs with ordinal scales in which there is an anchor that represents the absence of the symptom or activity, such as “none”, “never” or “normal”, and is particularly common when measuring abnormal behavior, symptoms, and side effects. Due to the unusually large number of zeros, traditional statistical tests of association can be non-informative. We propose a mixture model for ordinal data with a built-in probability of non-response that allows modeling of the range (e.g., severity) of the scale, while simultaneously modeling the presence/absence of the symptom. Simulations show that the model is well behaved and a likelihood ratio test can be used to choose between the zero-inflated and the traditional proportional odds model. The model, however, does have minor restrictions on the nature of the covariates that must be satisfied in order for the model to be identifiable. The method is particularly relevant for public health research such as large epidemiological surveys where more careful documentation of the reasons for response may be difficult. PMID:18351711
Fulton, Kara A.; Liu, Danping; Haynie, Denise L.; Albert, Paul S.
2016-01-01
The NEXT Generation Health study investigates the dating violence of adolescents using a survey questionnaire. Each student is asked to affirm or deny multiple instances of violence in his/her dating relationship. There is, however, evidence suggesting that students not in a relationship responded to the survey, resulting in excessive zeros in the responses. This paper proposes likelihood-based and estimating equation approaches to analyze the zero-inflated clustered binary response data. We adopt a mixed model method to account for the cluster effect, and the model parameters are estimated using a maximum-likelihood (ML) approach that requires a Gaussian–Hermite quadrature (GHQ) approximation for implementation. Since an incorrect assumption on the random effects distribution may bias the results, we construct generalized estimating equations (GEE) that do not require the correct specification of within-cluster correlation. In a series of simulation studies, we examine the performance of ML and GEE methods in terms of their bias, efficiency and robustness. We illustrate the importance of properly accounting for this zero inflation by reanalyzing the NEXT data where this issue has previously been ignored. PMID:26937263
Novel Phenotype Issues Raised in Cross-National Epidemiological Research on Drug Dependence
Anthony, James C.
2010-01-01
Stage-transition models based on the American Diagnostic and Statistical Manual (DSM) generally are applied in epidemiology and genetics research on drug dependence syndromes associated with cannabis, cocaine, and other internationally regulated drugs (IRD). Difficulties with DSM stage-transition models have surfaced during cross-national research intended to provide a truly global perspective, such as the work of the World Mental Health Surveys (WMHS) Consortium. Alternative simpler dependence-related phenotypes are possible, including population-level count process models for steps early and before coalescence of clinical features into a coherent syndrome (e.g., zero-inflated Poisson regression). Selected findings are reviewed, based on ZIP modeling of alcohol, tobacco, and IRD count processes, with an illustration that may stimulate new research on genetic susceptibility traits. The annual National Surveys on Drug Use and Health can be readily modified for this purpose, along the lines of a truly anonymous research approach that can help make NSDUH-type cross-national epidemiological surveys more useful in the context of subsequent genome wide association (GWAS) research and post-GWAS investigations with a truly global health perspective. PMID:20201862
Tremblay, Marlène; Crim, Stacy M; Cole, Dana J; Hoekstra, Robert M; Henao, Olga L; Döpfer, Dörte
2017-10-01
The Foodborne Diseases Active Surveillance Network (FoodNet) is currently using a negative binomial (NB) regression model to estimate temporal changes in the incidence of Campylobacter infection. FoodNet active surveillance in 483 counties collected data on 40,212 Campylobacter cases between 2004 and 2011. We explored models that disaggregated these data to allow us to account for demographic, geographic, and seasonal factors when examining changes in incidence of Campylobacter infection. We hypothesized that modeling structural zeros and including demographic variables would increase the fit of FoodNet's Campylobacter incidence regression models. Five different models were compared: NB without demographic covariates, NB with demographic covariates, hurdle NB with covariates in the count component only, hurdle NB with covariates in both zero and count components, and zero-inflated NB with covariates in the count component only. Of the models evaluated, the non-zero-augmented NB model with demographic variables provided the best fit. Results suggest that even though zero inflation was not present at this level, individualizing the level of aggregation and using different model structures and predictors per site might be required to correctly distinguish between structural and observational zeros and account for risk factors that vary geographically.
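The distinction between the hurdle and zero-inflated ("zero-augmented") structures compared above can be made concrete with a small sketch. Poisson components are used instead of the paper's negative binomial purely for brevity, and all names and parameter values are illustrative:

```python
import math

def pois_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def zip_pmf(k, pi, lam):
    # Zero-inflated: structural zeros with probability pi are mixed
    # with a full Poisson(lam), which can itself produce zeros.
    extra = pi if k == 0 else 0.0
    return extra + (1 - pi) * pois_pmf(k, lam)

def hurdle_pmf(k, p0, lam):
    # Hurdle: P(0) = p0 is modeled directly; positive counts follow
    # a zero-truncated Poisson(lam), renormalized to exclude zero.
    if k == 0:
        return p0
    return (1 - p0) * pois_pmf(k, lam) / (1 - math.exp(-lam))

total_zip = sum(zip_pmf(k, 0.3, 2.0) for k in range(60))
total_hurdle = sum(hurdle_pmf(k, 0.3, 2.0) for k in range(60))
print(round(total_zip, 6), round(total_hurdle, 6))  # both 1.0
```

The practical difference: in the hurdle model every zero is "structural" (one process), while the zero-inflated model attributes zeros to two latent sources, which is what makes distinguishing structural from observational zeros delicate.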
Frequency distribution of Echinococcus multilocularis and other helminths of foxes in Kyrgyzstan
I., Ziadinov; P., Deplazes; A., Mathis; B., Mutunova; K., Abdykerimov; R., Nurgaziev; P.R, Torgerson
2010-01-01
Echinococcosis is a major emerging zoonosis in central Asia. A study of the helminth fauna of foxes from Naryn Oblast in central Kyrgyzstan was undertaken to investigate the abundance of Echinococcus multilocularis in a district where a high prevalence of this parasite had previously been detected in dogs. A total of 151 foxes (Vulpes vulpes) were investigated in a necropsy study. Of these, 96 (64%) were infected with E. multilocularis with a mean abundance of 8669 parasites per fox. This indicates that red foxes are a major definitive host of E. multilocularis in this country. This also demonstrates that the abundance and prevalence of E. multilocularis in the natural definitive host are likely to be high in geographical regions where there is a concomitant high prevalence in alternative definitive hosts such as dogs. In addition, Mesocestoides spp., Dipylidium caninum, Taenia spp., Toxocara canis, Toxascaris leonina, Capillaria and Acanthocephala spp. were found in 99 (66%), 50 (33%), 48 (32%), 46 (30%), 9 (6%), 34 (23%) and 2 (1%) of foxes, respectively. The prevalence but not the abundance of E. multilocularis decreased with age. The abundance of Dipylidium caninum also decreased with age. The frequency distribution of E. multilocularis and Mesocestoides spp. followed a zero-inflated negative binomial distribution, whilst all other helminths had a negative binomial distribution. This demonstrates that the frequency distribution of positive counts, and not just the frequency of zeros in the data set, can determine whether a zero-inflated or non-zero-inflated model is more appropriate. This is because the prevalences of E. multilocularis and Mesocestoides spp. were the highest (and hence had fewest zero counts), yet the parasite distributions nevertheless gave a better fit to the zero-inflated models. PMID:20434845
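The basic diagnostic behind such comparisons, excess zeros relative to what a plain count model implies, is easy to illustrate. A hedged sketch using Poisson/ZIP rather than the paper's negative binomial, with made-up parameter values: simulate zero-inflated counts, fit a plain Poisson by its ML estimate (the sample mean), and compare zero fractions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, pi, lam = 10_000, 0.4, 3.0

# Simulate ZIP draws: structural zeros with prob. pi, else Poisson(lam).
structural = rng.random(n) < pi
y = np.where(structural, 0, rng.poisson(lam, n))

lam_hat = y.mean()           # Poisson ML estimate of the rate
p0_pois = np.exp(-lam_hat)   # zero probability the Poisson fit implies
p0_obs = np.mean(y == 0)     # zero fraction actually observed
print(p0_obs > p0_pois)      # True: excess zeros relative to Poisson
```

As the abstract notes, the shape of the positive counts matters too, so in practice the full fitted distributions (NB vs. ZINB), not just the zero fractions, are compared.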
Narukawa, Masaki; Nohara, Katsuhito
2018-04-01
This study proposes an estimation approach to panel count data, truncated at zero, in order to apply a contingent behavior travel cost method to revealed and stated preference data collected via a web-based survey. We develop zero-truncated panel Poisson mixture models by focusing on respondents who visited a site. In addition, we introduce an inverse Gaussian distribution to unobserved individual heterogeneity as an alternative to a popular gamma distribution, making it possible to capture effectively the long tail typically observed in trip data. We apply the proposed method to estimate the impact on tourism benefits in Fukushima Prefecture as a result of the Fukushima Nuclear Power Plant No. 1 accident. Copyright © 2018 Elsevier Ltd. All rights reserved.
Estimation of inflation parameters for Perturbed Power Law model using recent CMB measurements
NASA Astrophysics Data System (ADS)
Mukherjee, Suvodip; Das, Santanu; Joy, Minu; Souradeep, Tarun
2015-01-01
The Cosmic Microwave Background (CMB) is an important probe for understanding the inflationary era of the Universe. We consider the Perturbed Power Law (PPL) model of inflation, a soft deviation from the Power Law (PL) inflationary model. This model captures the effect of higher-order derivatives of the Hubble parameter during inflation, which in turn lead to a non-zero effective mass m_eff for the inflaton field. At leading order, the higher-order derivatives of the Hubble parameter source a constant difference between the spectral indices for scalar and tensor perturbations, going beyond the PL model of inflation. The PPL model has two independent observable parameters, namely the spectral index for tensor perturbations ν_t and the change in the spectral index for scalar perturbations ν_st, to explain the observed features in the scalar and tensor power spectra of perturbations. From the recent measurements of CMB power spectra by WMAP, Planck and BICEP-2 for temperature and polarization, we estimate the feasibility of the PPL model against the standard ΛCDM model. Although BICEP-2 claimed a detection of r = 0.2, estimates of dust contamination provided by Planck have left open the possibility that only an upper bound on r will result from a joint analysis. We therefore consider different upper bounds on the value of r and show that the PPL model can explain a lower value of the tensor-to-scalar ratio (r < 0.1 or r < 0.01) for a scalar spectral index of n_s = 0.96 by having a non-zero value of the effective mass of the inflaton field, m_eff^2/H^2. The analysis with the WP + Planck likelihood shows a non-zero detection of m_eff^2/H^2 at 5.7σ and 8.1σ for r < 0.1 and r < 0.01, respectively, whereas with the BICEP-2 likelihood m_eff^2/H^2 = -0.0237 ± 0.0135, which is consistent with zero.
Wildhaber, Mark L.; Yang, Wen-Hsi; Arab, Ali
2016-01-01
A baseline assessment of the Missouri River fish community and species-specific habitat use patterns conducted from 1996 to 1998 provided the first comprehensive analysis of Missouri River benthic fish population trends and habitat use in the Missouri and Lower Yellowstone rivers, exclusive of reservoirs, and provided the foundation for the present Pallid Sturgeon Population Assessment Program (PSPAP). Data used in such studies are frequently zero inflated. To address this issue, the zero-inflated Poisson (ZIP) model was applied. This follow-up study is based on PSPAP data collected up to 15 years later along with new understanding of how habitat characteristics among and within bends affect habitat use of fish species targeted by PSPAP, including pallid sturgeon. This work demonstrated that a large-scale, large-river, PSPAP-type monitoring program can be an effective tool for assessing population trends and habitat usage of large-river fish species. Using multiple gears, PSPAP was effective in monitoring shovelnose and pallid sturgeons, sicklefin, shoal and sturgeon chubs, sand shiner, blue sucker and sauger. For all species, the relationship between environmental variables and relative abundance differed, somewhat, among river segments suggesting the importance of the overall conditions of Upper and Middle Missouri River and Lower Missouri and Kansas rivers on the habitat usage patterns exhibited. Shoal and sicklefin chubs exhibited many similar habitat usage patterns; blue sucker and shovelnose sturgeon also shared similar responses. For pallid sturgeon, the primary focus of PSPAP, relative abundance tended to increase in Upper and Middle Missouri River paralleling stocking efforts, whereas no evidence of an increasing relative abundance was found in the Lower Missouri River despite stocking.
Bianca N.I. Eskelson; Hailemariam Temesgen; Tara M. Barrett
2009-01-01
Cavity tree and snag abundance data are highly variable and contain many zero observations. We predict cavity tree and snag abundance from variables that are readily available from forest cover maps or remotely sensed data using negative binomial (NB), zero-inflated NB, and zero-altered NB (ZANB) regression models as well as nearest neighbor (NN) imputation methods....
Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data
Yang, Yan; Simpson, Douglas
2010-01-01
Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950
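For the simplest member of this class of inflated mixture models, the intercept-only zero-inflated Poisson, the EM updates mentioned above have closed forms. A minimal sketch under assumed true parameters (illustrative only; the article's framework additionally covers semi-continuous data, covariates, and correlated data via GEE):

```python
import numpy as np

rng = np.random.default_rng(7)
n, pi_true, lam_true = 50_000, 0.25, 2.0
y = np.where(rng.random(n) < pi_true, 0, rng.poisson(lam_true, n))

pi, lam = 0.5, 1.0  # crude starting values
for _ in range(200):
    # E-step: posterior probability that an observed zero is structural.
    z = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
    # M-step: closed-form updates of the mixing weight and Poisson rate.
    pi = z.mean()
    lam = np.sum((1 - z) * y) / np.sum(1 - z)

print(round(pi, 2), round(lam, 2))  # near the true values 0.25 and 2.0
```

With covariates, each M-step becomes a weighted Poisson regression plus a logistic regression, which is where the quasi-Newton alternative compared in the article comes in.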
Analysis of multinomial models with unknown index using data augmentation
Royle, J. Andrew; Dorazio, R.M.; Link, W.A.
2007-01-01
Multinomial models with unknown index ('sample size') arise in many practical settings. In practice, Bayesian analysis of such models has proved difficult because the dimension of the parameter space is not fixed, being in some cases a function of the unknown index. We describe a data augmentation approach to the analysis of this class of models that provides for a generic and efficient Bayesian implementation. Under this approach, the data are augmented with all-zero detection histories. The resulting augmented dataset is modeled as a zero-inflated version of the complete-data model where an estimable zero-inflation parameter takes the place of the unknown multinomial index. Interestingly, data augmentation can be justified as being equivalent to imposing a discrete uniform prior on the multinomial index. We provide three examples involving estimating the size of an animal population, estimating the number of diabetes cases in a population using the Rasch model, and the motivating example of estimating the number of species in an animal community with latent probabilities of species occurrence and detection.
Drake, D Andrew R; Mandrak, Nicholas E
2014-06-01
Long implicated in the invasion process, live-bait anglers are highly mobile species vectors with frequent overland transport of fishes. To test hypotheses about the role of anglers in propagule transport, we developed a social-ecological model quantifying the opportunity for species transport beyond the invaded range resulting from bycatch during commercial bait operations, incidental transport, and release to lake ecosystems by anglers. We combined a gravity model with a stochastic, agent-based simulation, representing a 1-yr iteration of live-bait angling and the dynamics of propagule transport at fine spatiotemporal scales (i.e., probability of introducing n propagules per lake per year). A baseline scenario involving round goby (Neogobius melanostomus) indicated that most angling trips were benign; irrespective of lake visitation, anglers failed to purchase and transport propagules (benign trips, median probability P = 0.99912). However, given the large number of probability trials (4.2 million live-bait angling events per year), even the rarest sequence of events (uptake, movement, and deposition of propagules) is anticipated to occur. Risky trips (modal P = 0.00088 trips per year; approximately 1 in 1136) were sufficient to introduce a substantial number of propagules (modal values, Poisson model = 3715 propagules among 1288 lakes per year; zero-inflated negative binomial model = 6722 propagules among 1292 lakes per year). Two patterns of lake-specific introduction risk emerged. Large lakes supporting substantial angling activity experienced propagule pressure likely to surpass demographic barriers to establishment (top 2.5% of lakes with modal outcomes of five to 76 propagules per year; 303 high-risk lakes with three or more propagules, per year). Small or remote lakes were less likely to receive propagules; however, most risk distributions were leptokurtic with a long right tail, indicating the rare occurrence of high propagule loads to most waterbodies. 
Infestation simulations indicated that the number of high-risk waterbodies could be as great as 1318 (zero-inflated negative binomial), whereas a 90% reduction in bycatch from baseline would reduce the modal number of high-risk lakes to zero. Results indicate that the combination of invasive bycatch and live-bait anglers warrants management concern as a species vector, but that risk is confined to a subset of individuals and recipient sites that may be effectively managed with targeted strategies.
Cox, Ronald B; Criss, Michael M; Harrist, Amanda W; Zapata-Roblyer, Martha
2017-10-01
Most studies tend to characterize peer influences as either positive or negative. In a sample of 1815 youth from 14 different schools in Caracas, Venezuela, we explored how two types of peer affiliations (i.e., deviant and drug-using peers) differentially mediated the paths from positive parenting to youth's externalizing behavior and licit and illicit drug use. We used zero-inflated Poisson models to test the probability of use and the extent of use during the past 12 months. Results suggested that peer influences are domain specific among Venezuelan youth. That is, deviant peer affiliations mediated the path from positive parenting to youth externalizing behaviors, and peer drug-using affiliations mediated the paths to the drug use outcomes. Mediation effects were partial, suggesting that parenting explained unique variance in the outcomes after accounting for both peer variables, gender, and age. We discuss implications for the development of screening tools and for prevention interventions targeting adolescents from different cultures.
Family size and old-age wellbeing: effects of the fertility transition in Mexico
DÍAZ-VENEGAS, CARLOS; SÁENZ, JOSEPH L.; WONG, REBECA
2016-01-01
The present study aims to determine how family size affects psycho-social, economic and health wellbeing in old age differently across two cohorts with declining fertility. The data are from the 2012 Mexican Health and Ageing Study (MHAS) including respondents aged 50+ (N = 13,102). Poisson (standard and zero-inflated) and logistic regressions are used to model determinants of wellbeing in old age: psycho-social (depressive symptoms), economic (consumer durables and insurance) and health (chronic conditions). In the younger cohort, having fewer children is associated with fewer depressive symptoms and chronic conditions, and better economic well-being. For the older cohort, having fewer children is associated with lower economic wellbeing and higher odds of being uninsured. Lower fertility benefited the younger cohort (born after 1937), whereas the older cohort (born in 1937 or earlier) benefited from lower fertility only in chronic conditions. Further research is needed to continue exploring the old-age effects of the fertility transition. PMID:28239210
Amponsah-Tawiah, Kwesi; Jain, Aditya; Leka, Stavroula; Hollis, David; Cox, Tom
2013-06-01
In addition to hazardous conditions that are prevalent in mines, there are various physical and psychosocial risk factors that can affect mine workers' safety and health. Without due diligence to mine safety, these risk factors can affect workers' safety experience, in terms of near misses, disabling injuries and accidents experienced or witnessed by workers. This study sets out to examine the effects of physical and psychosocial risk factors on workers' safety experience in a sample of Ghanaian miners. 307 participants from five mining companies responded to a cross sectional survey examining physical and psychosocial hazards and their implications for employees' safety experience. Zero-inflated Poisson regression models indicated that mining conditions, equipment, ambient conditions, support and security, and work demands and control are significant predictors of near misses, disabling injuries, and accidents experienced or witnessed by workers. The type of mine had important implications for workers' safety experience. Copyright © 2013 Elsevier Ltd and National Safety Council. All rights reserved.
A smooth exit from eternal inflation?
NASA Astrophysics Data System (ADS)
Hawking, S. W.; Hertog, Thomas
2018-04-01
The usual theory of inflation breaks down in eternal inflation. We derive a dual description of eternal inflation in terms of a deformed Euclidean CFT located at the threshold of eternal inflation. The partition function gives the amplitude of different geometries of the threshold surface in the no-boundary state. Its local and global behavior in dual toy models shows that the amplitude is low for surfaces which are not nearly conformal to the round three-sphere and essentially zero for surfaces with negative curvature. Based on this we conjecture that the exit from eternal inflation does not produce an infinite fractal-like multiverse, but is finite and reasonably smooth.
On the Singularity of the Vlasov-Poisson System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Jian; Qin, Hong
2013-04-26
The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.
Population heterogeneity in the salience of multiple risk factors for adolescent delinquency.
Lanza, Stephanie T; Cooper, Brittany R; Bray, Bethany C
2014-03-01
To present mixture regression analysis as an alternative to more standard regression analysis for predicting adolescent delinquency. We demonstrate how mixture regression analysis allows for the identification of population subgroups defined by the salience of multiple risk factors. We identified population subgroups (i.e., latent classes) of individuals based on their coefficients in a regression model predicting adolescent delinquency from eight previously established risk indices drawn from the community, school, family, peer, and individual levels. The study included N = 37,763 10th-grade adolescents who participated in the Communities That Care Youth Survey. Standard, zero-inflated, and mixture Poisson and negative binomial regression models were considered. Standard and mixture negative binomial regression models were selected as optimal. The five-class regression model was interpreted based on the class-specific regression coefficients, indicating that risk factors had varying salience across classes of adolescents. Standard regression showed that all risk factors were significantly associated with delinquency. Mixture regression provided more nuanced information, suggesting a unique set of risk factors that were salient for different subgroups of adolescents. Implications for the design of subgroup-specific interventions are discussed. Copyright © 2014 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Validation and Improvement of Reliability Methods for Air Force Building Systems
focusing primarily on HVAC systems. This research used contingency analysis to assess the performance of each model for HVAC systems at six Air Force... probabilistic model produced inflated reliability calculations for HVAC systems. In light of these findings, this research employed a stochastic method, a Nonhomogeneous Poisson Process (NHPP), in an attempt to produce accurate HVAC system reliability calculations. This effort ultimately concluded that
Chriqui, Jamie F; Taber, Daniel R; Slater, Sandy J; Turner, Lindsey; Lowrey, Kerri McGowan; Chaloupka, Frank J
2012-01-01
This study examined the relationship between state laws requiring minimum bussing distances, hazardous route exemptions, sidewalks, crossing guards, speed zones, and traffic control measures around schools and active travel to school (ATS) policies/practices in nationally representative samples of U.S. public elementary schools between 2007 and 2009. The state laws and school data were compiled through primary legal research and annual mail-back surveys of principals, respectively. Multivariate logistic and zero-inflated Poisson regression indicated that all state law categories (except for sidewalks) relate to ATS. These laws should be considered in addition to formal safe routes to school programs as possible influences on ATS. Copyright © 2011 Elsevier Ltd. All rights reserved.
Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun
2017-01-01
In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD) and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only has more accurate, or at least comparable, estimation but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using an open-source R package, mpath. PMID:26059498
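For the LASSO penalty, the inner coordinate descent update referenced above reduces to soft-thresholding of a univariate least-squares coefficient. A tiny sketch of that operator (illustrative only, not the mpath implementation, which embeds it in penalized weighted NB and logistic fits):

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator S(z, gamma) = sign(z) * max(|z| - gamma, 0).

    In coordinate descent for LASSO, each coefficient update applies this
    to the univariate (partial-residual) estimate z with penalty gamma.
    """
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

out = soft_threshold(np.array([-3.0, 0.2, 1.5]), 1.0)
print(out)  # [-2.   0.   0.5] -- small coefficients are set exactly to zero
```

Setting small coefficients exactly to zero is what performs the variable selection; SCAD and MCP replace this update with their own (non-convex) thresholding rules.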
Arcuti, Simona; Pollice, Alessio; Ribecco, Nunziata; D'Onghia, Gianfranco
2016-03-01
We evaluate the spatiotemporal changes in the density of a particular species of crustacean known as deep-water rose shrimp, Parapenaeus longirostris, based on biological sample data collected during trawl surveys carried out from 1995 to 2006 as part of the international project MEDITS (MEDiterranean International Trawl Surveys). As is the case for many biological variables, density data are continuous and characterized by unusually large amounts of zeros, accompanied by a skewed distribution of the remaining values. Here we analyze the normalized density data by a Bayesian delta-normal semiparametric additive model including the effects of covariates, using penalized regression with low-rank thin-plate splines for nonlinear spatial and temporal effects. Modeling the zero and nonzero values by two joint processes, as we propose in this work, allows to obtain great flexibility and easily handling of complex likelihood functions, avoiding inaccurate statistical inferences due to misclassification of the high proportion of exact zeros in the model. Bayesian model estimation is obtained by Markov chain Monte Carlo simulations, suitably specifying the complex likelihood function of the zero-inflated density data. The study highlights relevant nonlinear spatial and temporal effects and the influence of the annual Mediterranean oscillations index and of the sea surface temperature on the distribution of the deep-water rose shrimp density. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Bostan, Nilay; Güleryüz, Ömer; Nefer Şenoğuz, Vedat
2018-05-01
We discuss how the non-minimal coupling ξφ²R between the inflaton and the Ricci scalar affects the predictions of single-field inflation models where the inflaton has a non-zero vacuum expectation value (VEV) v after inflation. We show that, for inflaton values both above the VEV and below the VEV during inflation, under certain conditions the inflationary predictions become approximately the same as the predictions of the Starobinsky model. We then analyze inflation with double-well and Coleman-Weinberg potentials in detail, displaying the regions in the v-ξ plane for which the spectral index n_s and the tensor-to-scalar ratio r values are compatible with the current observations. r is always larger than 0.002 in these regions. Finally, we consider the effect of ξ on small-field inflation (hilltop) potentials.
Li, Liang; Ma, Lian; Schrieber, Sarah J; Rahman, Nam Atiqur; Deisseroth, Albert; Farrell, Ann T; Wang, Yaning; Sinha, Vikram; Marathe, Anshu
2018-02-02
The aim of the study was to evaluate the quantitative relationship between the duration of severe neutropenia (DSN, the efficacy endpoint) and the area under the effect curve of absolute neutrophil counts (ANC-AUEC, the pharmacodynamic endpoint), based on data from filgrastim products, a human granulocyte colony-stimulating factor (G-CSF). Clinical data from the filgrastim product comparator and test arms of two randomized, parallel-group, phase III studies in breast cancer patients treated with myelosuppressive chemotherapy were utilized. A zero-inflated Poisson regression model best described the negative correlation between DSN and ANC-AUEC. The models predicted that with a 10 × 10⁹ day/L increase in ANC-AUEC, the mean DSN would decrease from 1.1 days to 0.93 day in Trial 1 and from 1.2 days to 1.0 day in Trial 2. The findings of the analysis provide useful information regarding the relationship between ANC and DSN that can be used for dose selection and optimization of clinical trial design for G-CSF. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.
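Under a ZIP regression of this kind, the marginal mean outcome is (1 − π)·λ with log λ linear in the predictor, so a negative slope makes the predicted mean DSN fall as ANC-AUEC rises. A sketch with hypothetical coefficients (NOT the fitted values from the trials):

```python
import numpy as np

# Hypothetical ZIP regression coefficients, for illustration only:
# structural-zero probability pi, and log-linear Poisson mean in AUEC.
beta0, beta1, pi = 0.3, -0.02, 0.15

def mean_dsn(auec):
    lam = np.exp(beta0 + beta1 * auec)  # Poisson mean at this AUEC
    return (1 - pi) * lam               # marginal ZIP mean: (1 - pi) * lambda

low, high = mean_dsn(10.0), mean_dsn(20.0)
print(low > high)  # True: mean DSN decreases as ANC-AUEC rises
```

In the actual analysis both π and λ could depend on ANC-AUEC; the marginal-mean formula is the same either way.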
2013-01-01
Background High-throughput RNA sequencing (RNA-seq) offers unprecedented power to capture the real dynamics of gene expression. Experimental designs with extensive biological replication present a unique opportunity to exploit this feature and distinguish expression profiles with higher resolution. RNA-seq data analysis methods so far have been mostly applied to data sets with few replicates and their default settings try to provide the best performance under this constraint. These methods are based on two well-known count data distributions: the Poisson and the negative binomial. The way to properly calibrate them with large RNA-seq data sets is not trivial for the non-expert bioinformatics user. Results Here we show that expression profiles produced by extensively-replicated RNA-seq experiments lead to a rich diversity of count data distributions beyond the Poisson and the negative binomial, such as Poisson-Inverse Gaussian or Pólya-Aeppli, which can be captured by a more general family of count data distributions called the Poisson-Tweedie. The flexibility of the Poisson-Tweedie family enables a direct fitting of emerging features of large expression profiles, such as heavy-tails or zero-inflation, without the need to alter a single configuration parameter. We provide a software package for R called tweeDEseq implementing a new test for differential expression based on the Poisson-Tweedie family. Using simulations on synthetic and real RNA-seq data we show that tweeDEseq yields P-values that are equally or more accurate than competing methods under different configuration parameters. By surveying the tiny fraction of sex-specific gene expression changes in human lymphoblastoid cell lines, we also show that tweeDEseq accurately detects differentially expressed genes in a real large RNA-seq data set with improved performance and reproducibility over the previously compared methodologies. 
Finally, we compared the results with those obtained from microarrays in order to check for reproducibility. Conclusions RNA-seq data with many replicates leads to a handful of count data distributions which can be accurately estimated with the statistical model illustrated in this paper. This method provides a better fit to the underlying biological variability; this may be critical when comparing groups of RNA-seq samples with markedly different count data distributions. The tweeDEseq package forms part of the Bioconductor project and it is available for download at http://www.bioconductor.org. PMID:23965047
The Effect of Reactive Oxygen Species on Embryo Quality in IVF.
Siristatidis, Charalampos; Vogiatzi, Paraskevi; Varounis, Christos; Askoxylaki, Marily; Chrelias, Charalampos; Papantoniou, Nikolaos
2016-01-01
BACKGROUND/AIM: Reactive oxygen species (ROS) are involved in critical biological processes in human reproduction. The aim of this study was to evaluate the association of embryo quality following in vitro fertilization (IVF) with ROS levels in the serum and follicular fluid (FF). Eighty-five participants underwent ovarian stimulation and IVF; ROS levels were measured in blood samples on the day of oocyte retrieval and in the FF from follicular aspirates using enzyme-linked immunosorbent assay. These values were associated with the quality of the embryos generated. A univariable zero-inflated Poisson model revealed that ROS levels at both oocyte retrieval and in FF were not associated with the number of grade I, II, III and IV embryos (p>0.05). Age, body mass index, stimulation protocol and smoking status were not associated with the number of embryos of any grade (p>0.05). Neither ROS levels in serum nor in FF are associated with the quality of embryos produced following IVF. Copyright © 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
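The univariable zero-inflated Poisson model used above mixes a point mass at zero with an ordinary Poisson component. Its probability mass function is simple to write down; a short sketch (the parameter values are illustrative, not fitted to the IVF data):

```python
import math

def zip_pmf(k, lam, pi):
    """P(K=k) under a zero-inflated Poisson: a point mass at zero with
    probability pi, mixed with a Poisson(lam) with probability 1 - pi."""
    pois = math.exp(-lam) * lam**k / math.factorial(k)
    return (pi if k == 0 else 0.0) + (1 - pi) * pois

# With lam=2, pi=0.3 the zero probability is inflated well above Poisson's e^-2
print(round(zip_pmf(0, 2.0, 0.3), 4))  # 0.3 + 0.7*e^-2 = 0.3947
print(round(sum(zip_pmf(k, 2.0, 0.3) for k in range(60)), 6))  # 1.0
```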
Zero-truncated negative binomial - Erlang distribution
NASA Astrophysics Data System (ADS)
Bodhisuwan, Winai; Pudprommarat, Chookait; Bodhisuwan, Rujira; Saothayanun, Luckhana
2017-11-01
The zero-truncated negative binomial-Erlang distribution is introduced. It is developed from the negative binomial-Erlang distribution. In this work, the probability mass function is derived and some properties are included. The parameters of the zero-truncated negative binomial-Erlang distribution are estimated by maximum likelihood estimation. Finally, the proposed distribution is applied to real data on methamphetamine cases in Bangkok, Thailand. The results show that the zero-truncated negative binomial-Erlang distribution provided a better fit than the zero-truncated Poisson, zero-truncated negative binomial, zero-truncated generalized negative binomial and zero-truncated Poisson-Lindley distributions for these data.
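Zero truncation works in the opposite direction from zero inflation: the distribution is renormalised onto the positive integers. The Poisson case, the simplest of the competitors listed above, makes the construction concrete (lam = 1.5 is an arbitrary illustrative value):

```python
import math

def pois_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def zt_pois_pmf(k, lam):
    """Zero-truncated Poisson: Poisson renormalised onto k >= 1."""
    if k < 1:
        return 0.0
    return pois_pmf(k, lam) / (1.0 - math.exp(-lam))

# probabilities again sum to 1 over k >= 1
total = sum(zt_pois_pmf(k, 1.5) for k in range(1, 60))
print(round(total, 6))  # 1.0
# the truncated mean exceeds lam: it equals lam / (1 - e^-lam)
mean = sum(k * zt_pois_pmf(k, 1.5) for k in range(1, 60))
print(round(mean, 4))   # 1.5/(1-e^-1.5) = 1.9308
```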
Predictors for the Number of Warning Information Sources During Tornadoes.
Cong, Zhen; Luo, Jianjun; Liang, Daan; Nejat, Ali
2017-04-01
People may receive tornado warnings from multiple information sources, but little is known about factors that affect the number of warning information sources (WISs). This study examined predictors for the number of WISs with a telephone survey on randomly sampled residents in Tuscaloosa, Alabama, and Joplin, Missouri, approximately 1 year after both cities were struck by violent tornadoes (EF4 and EF5) in 2011. The survey included 1006 finished interviews and the working sample included 903 respondents. Poisson regression and Zero-Inflated Poisson regression showed that older age and having an emergency plan predicted more WISs in both cities. Education, marital status, and gender affected the possibilities of receiving warnings and the number of WISs either in Joplin or in Tuscaloosa. The findings suggest that social disparity affects the access to warnings not only with respect to the likelihood of receiving any warnings but also with respect to the number of WISs. In addition, historical and social contexts are important for examining predictors for the number of WISs. We recommend that the number of WISs should be regarded as an important measure to evaluate access to warnings in addition to the likelihood of receiving warnings. (Disaster Med Public Health Preparedness. 2017;11:168-172).
NASA Astrophysics Data System (ADS)
Musenge, Eustasius; Chirwa, Tobias Freeman; Kahn, Kathleen; Vounatsou, Penelope
2013-06-01
Longitudinal mortality data with few deaths usually have problems of zero-inflation. This paper presents and applies two Bayesian models which cater for zero-inflation, spatial and temporal random effects. To reduce the computational burden experienced when a large number of geo-locations are treated as a Gaussian field (GF), we transformed the field to a Gaussian Markov Random Field (GMRF) by triangulation. We then modelled the spatial random effects using Stochastic Partial Differential Equations (SPDEs). Inference was done using a computationally efficient alternative to Markov chain Monte Carlo (MCMC) called Integrated Nested Laplace Approximation (INLA), suited for GMRFs. The models were applied to data from 71,057 children aged 0 to under 10 years from rural north-east South Africa, living in 15,703 households over the years 1992-2010. We found protective effects on HIV/TB mortality due to greater birth weight, older age and more antenatal clinic visits during pregnancy (adjusted RR (95% CI): 0.73 (0.53; 0.99), 0.18 (0.14; 0.22) and 0.96 (0.94; 0.97), respectively). Therefore childhood HIV/TB mortality could be reduced if mothers are better catered for during pregnancy, as this can reduce mother-to-child transmissions and contribute to improved birth weights. The INLA and SPDE approaches are computationally good alternatives for modelling large multilevel spatiotemporal GMRF data structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gim, Yongwan; Kim, Wontae, E-mail: yongwan89@sogang.ac.kr, E-mail: wtkim@sogang.ac.kr
In warm inflation scenarios, radiation always exists, so that the radiation energy density is also assumed to be finite when inflation starts. To find out the origin of the non-vanishing initial radiation energy density, we revisit thermodynamic analysis for a warm inflation model and then derive an effective Stefan-Boltzmann law which is commensurate with the temperature-dependent effective potential by taking into account the non-vanishing trace of the total energy-momentum tensors. The effective Stefan-Boltzmann law shows that the zero energy density for radiation at the Grand Unification epoch increases until the inflation starts and it becomes eventually finite at the initial stage of warm inflation. By using the above effective Stefan-Boltzmann law, we also study the cosmological scalar perturbation, and obtain the sufficient radiation energy density in order for GUT baryogenesis at the end of inflation.
Coronary artery calcium distributions in older persons in the AGES-Reykjavik study
Gudmundsson, Elias Freyr; Gudnason, Vilmundur; Sigurdsson, Sigurdur; Launer, Lenore J.; Harris, Tamara B.; Aspelund, Thor
2013-01-01
Coronary Artery Calcium (CAC) is a sign of advanced atherosclerosis and an independent risk factor for cardiac events. Here, we describe CAC distributions in an unselected aged population and compare modelling methods to characterize the CAC distribution. CAC is difficult to model because it has a skewed and zero-inflated distribution with over-dispersion. Data are from the AGES-Reykjavik sample, a large population-based study (2002-2006) in Iceland of 5,764 persons aged 66-96 years. Linear regressions using logarithmic and Box-Cox transformations on CAC+1, quantile regression and a Zero-Inflated Negative Binomial model (ZINB) were applied. Methods were compared visually and with the PRESS statistic, R2 and the number of detected associations with concurrently measured variables. There were pronounced differences in CAC according to sex, age, history of coronary events and presence of plaque in the carotid artery. Associations with conventional coronary artery disease (CAD) risk factors varied between the sexes. The ZINB model provided the best results with respect to the PRESS statistic, R2, and predicted proportion of zero scores. The ZINB model detected similar numbers of associations as the linear regression on ln(CAC+1), and usually with the same risk factors. PMID:22990371
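The PRESS statistic used to compare the models above is the sum of squared leave-one-out prediction errors. For linear least squares it can be computed without refitting n times, via the leverage identity e_(i) = e_i/(1 - h_ii). A sketch for simple linear regression on made-up data (not the AGES-Reykjavik data):

```python
def press_statistic(x, y):
    """PRESS for simple linear regression: sum of squared leave-one-out
    prediction errors, computed with the leverage shortcut e_i/(1 - h_ii)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    press = 0.0
    for xi, yi in zip(x, y):
        e = yi - (a + b * xi)                      # ordinary residual
        h = 1.0 / n + (xi - xbar) ** 2 / sxx       # leverage of this point
        press += (e / (1.0 - h)) ** 2
    return press

x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]
print(round(press_statistic(x, y), 4))
```

Lower PRESS means better out-of-sample prediction, which is why it serves as a model comparison criterion here.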
Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun
2015-09-01
In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only has more accurate or at least comparable estimation, but also is more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
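The EM idea behind the proposed algorithm can be seen in its simplest, intercept-only zero-inflated Poisson form: the E-step computes the posterior probability that each observed zero is a structural zero, and the M-step updates the mixing weight and the Poisson mean in closed form. This sketch omits covariates, the negative binomial dispersion, and the penalties, so it illustrates only the EM skeleton, not the paper's method:

```python
import math
import random

def rpois(lam):
    """Poisson sampler (Knuth's product method)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def fit_zip_em(counts, iters=300):
    """EM for an intercept-only zero-inflated Poisson.
    E-step: w = posterior probability that a zero is a structural zero.
    M-step: closed-form updates of pi (mixing weight) and lam (Poisson mean)."""
    n = len(counts)
    n0 = counts.count(0)
    total = sum(counts)
    pi, lam = 0.5 * n0 / n, total / n + 0.5   # crude starting values
    for _ in range(iters):
        w = pi / (pi + (1 - pi) * math.exp(-lam))  # E-step (zeros only)
        pi = n0 * w / n                            # M-step for pi
        lam = total / (n - n0 * w)                 # M-step for lam
    return pi, lam

random.seed(1)
truth_pi, truth_lam = 0.3, 4.0
sample = [0 if random.random() < truth_pi else rpois(truth_lam)
          for _ in range(20000)]
pi_hat, lam_hat = fit_zip_em(sample)
print(round(pi_hat, 3), round(lam_hat, 3))  # near 0.3 and 4.0
```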
Genericness of inflation in isotropic loop quantum cosmology.
Date, Ghanashyam; Hossain, Golam Mortuza
2005-01-14
Nonperturbative corrections from loop quantum cosmology (LQC) to the scalar matter sector are already known to imply inflation. We prove that the LQC modified scalar field generates exponential inflation in the small scale factor regime, for all positive definite potentials, independent of initial conditions and independent of ambiguity parameters. For positive semidefinite potentials it is always possible to choose, without fine-tuning, a value of one of the ambiguity parameters such that exponential inflation results, provided zeros of the potential are approached at most as a power law in the scale factor. In conjunction with the generic occurrence of bounce at small volumes, particle horizon is absent, thus eliminating the horizon problem of the standard big bang model.
Chen, Li; Reeve, James; Zhang, Lujun; Huang, Shengbing; Wang, Xuefeng; Chen, Jun
2018-01-01
Normalization is the first critical step in microbiome sequencing data analysis, used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose the geometric mean of pairwise ratios, a simple but effective normalization method, for zero-inflated sequencing data such as microbiome data. Simulation studies and real dataset analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.
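The proposed normalization can be sketched in a few lines: the size factor of each sample is the geometric mean, over the other samples, of the median count ratio computed on taxa observed (nonzero) in both samples, so zeros never enter a ratio. A toy illustration (data invented for the example; the published method adds filtering details not shown here):

```python
import math
import statistics

def gmpr_size_factors(counts):
    """counts: list of samples, each a list of taxa counts of equal length.
    Size factor of sample i = geometric mean over other samples j of the
    median count ratio over taxa that are nonzero in both samples."""
    n = len(counts)
    factors = []
    for i in range(n):
        log_ratios = []
        for j in range(n):
            if i == j:
                continue
            shared = [ci / cj for ci, cj in zip(counts[i], counts[j])
                      if ci > 0 and cj > 0]
            if shared:
                log_ratios.append(math.log(statistics.median(shared)))
        factors.append(math.exp(statistics.fmean(log_ratios)))
    return factors

# Sample 1 is sample 0 sequenced twice as deep; zeros stay zero.
data = [[10, 0, 4, 7, 0, 3],
        [20, 0, 8, 14, 0, 6],
        [10, 5, 4, 0, 2, 3]]
print([round(f, 3) for f in gmpr_size_factors(data)])  # [0.707, 2.0, 0.707]
```

Dividing each sample's counts by its size factor puts the samples on a comparable scale without letting the excess zeros distort the ratios.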
Wildhaber, M.L.; Gladish, D.W.; Arab, A.
2011-01-01
Past and present Missouri River management practices have resulted in native fishes being identified as in jeopardy. In 1995, the Missouri River Benthic Fishes Study was initiated to provide improved information on Missouri River fish populations and how alterations might affect them. The study produced a baseline against which to evaluate future changes in Missouri River operating criteria. The objective was to evaluate population structure and habitat use of benthic fishes along the entire mainstem Missouri River, exclusive of reservoirs. Here we use the data from this study to provide a recent-past baseline for on-going Missouri River fish population monitoring programmes along with a more powerful method for analysing data containing large percentages of zero values. This is carried out by describing the distribution and habitat use of 21 species of Missouri River benthic fishes based on catch-per-unit area data from multiple gears. We employ a Bayesian zero-inflated Poisson model expanded to include continuous measures of habitat quality (i.e. substrate composition, depth, velocity, temperature, turbidity and conductivity). Along with presenting the method, we provide a relatively complete picture of the Missouri River benthic fish community and the relationship between their relative population numbers and habitat conditions. We demonstrate that our single model provides all the information that is often obtained by a myriad of analytical techniques. An important advantage of the present approach is reliable inference for patterns of relative abundance using multiple gears without using gear efficiencies.
Hosseinpour, Mehdi; Pour, Mehdi Hossein; Prasetijo, Joewono; Yahaya, Ahmad Shukri; Ghadiri, Seyed Mohammad Reza
2013-01-01
The objective of this study was to examine the effects of various roadway characteristics on the incidence of pedestrian-vehicle crashes by developing a set of crash prediction models on 543 km of Malaysia federal roads over a 4-year time span between 2007 and 2010. Four count models including the Poisson, negative binomial (NB), hurdle Poisson (HP), and hurdle negative binomial (HNB) models were developed and compared to model the number of pedestrian crashes. The results indicated the presence of overdispersion in the pedestrian crashes (PCs) and showed that it is due to excess zero rather than variability in the crash data. To handle the issue, the hurdle Poisson model was found to be the best model among the considered models in terms of comparative measures. Moreover, the variables average daily traffic, heavy vehicle traffic, speed limit, land use, and area type were significantly associated with PCs.
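A hurdle model, unlike a zero-inflated one, generates all zeros from a single binary component and all positive counts from a zero-truncated distribution, so it can accommodate either too many or too few zeros. The hurdle Poisson used above looks like this (parameter values are arbitrary illustrations):

```python
import math

def hurdle_pois_pmf(k, p_zero, lam):
    """Hurdle Poisson: zeros occur with probability p_zero; positive counts
    come from a zero-truncated Poisson(lam) scaled by 1 - p_zero."""
    if k == 0:
        return p_zero
    trunc = math.exp(-lam) * lam**k / math.factorial(k) / (1 - math.exp(-lam))
    return (1 - p_zero) * trunc

# The hurdle puts ALL zeros in one component, so p_zero is read off directly.
total = sum(hurdle_pois_pmf(k, 0.55, 2.0) for k in range(60))
print(round(total, 6))                            # 1.0
print(round(hurdle_pois_pmf(0, 0.55, 2.0), 2))    # 0.55
```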
Peeters, Margot; Wiers, Reinout W; Monshouwer, Karin; van de Schoot, Rens; Janssen, Tim; Vollebergh, Wilma A M
2012-11-01
This study examined the association between automatic processes and drinking behavior in relation to individual differences in response inhibition in young adolescents who had just started drinking. It was hypothesized that strong automatic behavioral tendencies toward alcohol-related stimuli (alcohol-approach bias) were associated with higher levels of alcohol use, especially amongst adolescents with relatively weak inhibition skills. To test this hypothesis, structural equation modelling (SEM) analyses were performed using a zero-inflated Poisson (ZIP) model. A well-known problem in studying risk behavior is the low incidence rate, resulting in a zero-dominated distribution. A ZIP model accounts for non-normality of the data. Adolescents were selected from secondary Special Education schools (a risk group for the development of substance use problems). Participants were 374 adolescents (mean age 13.6 years). Adolescents completed the alcohol approach avoidance task (a-AAT), the Stroop colour naming task (Stroop) and a questionnaire that assessed alcohol use. The ZIP model established stronger alcohol-approach tendencies for adolescent drinkers (P < 0.01) and the interaction revealed a stronger effect of alcohol-approach tendencies on alcohol use in the absence of good inhibition skills (P < 0.05). Automatically-activated cognitive processes are associated with the drinking behavior of young, at-risk adolescents. It appears that alcohol-approach tendencies are formed shortly after the initiation of drinking and particularly affect the drinking behavior of adolescents with relatively weak inhibition skills. Implications for the prevention of problem drinking in adolescents are discussed. © 2012 The Authors. Addiction © 2012 Society for the Study of Addiction.
Ulissi, Zachary W; Govind Rajan, Ananth; Strano, Michael S
2016-08-23
Entropic surfaces represented by fluctuating two-dimensional (2D) membranes are predicted to have desirable mechanical properties when unstressed, including a negative Poisson's ratio ("auxetic" behavior). Herein, we present calculations of the strain-dependent Poisson ratio of self-avoiding 2D membranes demonstrating desirable auxetic properties over a range of mechanical strain. Finite-size membranes with unclamped boundary conditions have positive Poisson's ratio due to spontaneous non-zero mean curvature, which can be suppressed with an explicit bending rigidity in agreement with prior findings. Applying longitudinal strain along a singular axis to this system suppresses this mean curvature and the entropic out-of-plane fluctuations, resulting in a molecular-scale mechanism for realizing a negative Poisson's ratio above a critical strain, with values significantly more negative than the previously observed zero-strain limit for infinite sheets. We find that auxetic behavior persists over surprisingly high strains of more than 20% for the smallest surfaces, with desirable finite-size scaling producing surfaces with negative Poisson's ratio over a wide range of strains. These results promise the design of surfaces and composite materials with tunable Poisson's ratio by prestressing platelet inclusions or controlling the surface rigidity of a matrix of 2D materials.
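For reference, the Poisson ratio itself is just minus the ratio of transverse to axial strain, so the auxetic behavior discussed above corresponds to a sign flip (the strain values below are invented illustrations, not from the simulations):

```python
def poisson_ratio(axial_strain, transverse_strain):
    """nu = -transverse/axial; a negative nu (auxetic material) means the
    sample widens, rather than contracts, when stretched."""
    return -transverse_strain / axial_strain

# conventional material: contracts sideways under tension -> positive nu
print(round(poisson_ratio(0.02, -0.006), 3))  # 0.3
# auxetic membrane: expands sideways under tension -> negative nu
print(round(poisson_ratio(0.02, 0.004), 3))   # -0.2
```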
NASA Astrophysics Data System (ADS)
Sarfatti, Jack; Levit, Creon
2009-06-01
We present a model for the origin of gravity, dark energy and dark matter: dark energy and dark matter are residual pre-inflation false vacuum random zero point energy (w = -1) of large-scale negative and short-scale positive pressure, respectively, corresponding to the "zero point" (incoherent) component of a superfluid (supersolid) ground state. Gravity, in contrast, arises from the 2nd order topological defects in the post-inflation virtual "condensate" (coherent) component. We predict, as a consequence, that the LHC will never detect exotic real on-mass-shell particles that can explain dark matter Ω_DM ≈ 0.23. We also point out that the future holographic dark energy de Sitter horizon is a total absorber (in the sense of retro-causal Wheeler-Feynman action-at-a-distance electrodynamics) because it is an infinite redshift surface for static detectors. Therefore, the advanced Hawking-Unruh thermal radiation from the future de Sitter horizon is a candidate for the negative pressure dark vacuum energy.
Inflation and dark energy from f(R) gravity
NASA Astrophysics Data System (ADS)
Artymowski, Michał; Lalak, Zygmunt
2014-09-01
The standard Starobinsky inflation has been extended to the R + α R^n - β R^(2-n) model to obtain a stable minimum of the Einstein frame scalar potential of the auxiliary field. As a result we obtain a scalar potential with a non-zero residual vacuum energy, which may be a source of Dark Energy. Our results can easily be made consistent with PLANCK or BICEP2 data for appropriate choices of the value of n.
Data driven CAN node reliability assessment for manufacturing system
NASA Astrophysics Data System (ADS)
Zhang, Leiming; Yuan, Yong; Lei, Yong
2017-01-01
The reliability of the Controller Area Network (CAN) is critical to the performance and safety of the system. However, direct bus-off time assessment tools are lacking in practice due to inaccessibility of the node information and the complexity of the node interactions upon errors. In order to measure the mean time to bus-off (MTTB) of all the nodes, a novel data driven node bus-off time assessment method for CAN networks is proposed by directly using network error information. First, the corresponding network error event sequence for each node is constructed using multiple-layer network error information. Then, a generalized zero-inflated Poisson process (GZIP) model is established for each node based on the error event sequence. Finally, a stochastic model is constructed to predict the MTTB of each node. Accelerated case studies with different error injection rates are conducted on a laboratory network to demonstrate the proposed method, where the network errors are generated by a computer-controlled error injection system. Experiment results show that the MTTB of nodes predicted by the proposed method agrees well with observations in the case studies. The proposed data driven node time to bus-off assessment method for CAN networks can successfully predict the MTTB of nodes by directly using network error event data.
Neelon, Brian; O'Malley, A James; Smith, Valerie A
2016-11-30
Health services data often contain a high proportion of zeros. In studies examining patient hospitalization rates, for instance, many patients will have no hospitalizations, resulting in a count of zero. When the number of zeros is greater or less than expected under a standard count model, the data are said to be zero modified relative to the standard model. A similar phenomenon arises with semicontinuous data, which are characterized by a spike at zero followed by a continuous distribution with positive support. When analyzing zero-modified count and semicontinuous data, flexible mixture distributions are often needed to accommodate both the excess zeros and the typically skewed distribution of nonzero values. Various models have been introduced over the past three decades to accommodate such data, including hurdle models, zero-inflated models, and two-part semicontinuous models. This tutorial describes recent modeling strategies for zero-modified count and semicontinuous data and highlights their role in health services research studies. Part 1 of the tutorial, presented here, provides a general overview of the topic. Part 2, appearing as a companion piece in this issue of Statistics in Medicine, discusses three case studies illustrating applications of the methods to health services research. Copyright © 2016 John Wiley & Sons, Ltd.
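A quick first check for zero modification of count data is to compare the observed fraction of zeros with the fraction a Poisson model with the same mean would imply, e^(-mean). A sketch on invented hospitalization counts:

```python
import math

def zero_modification(counts):
    """Observed zero fraction minus the Poisson-implied one, e^(-mean).
    Positive => zero-inflated relative to Poisson; negative => zero-deflated."""
    n = len(counts)
    observed = counts.count(0) / n
    expected = math.exp(-sum(counts) / n)
    return observed - expected

# 70% zeros but a mean of 0.75, which a Poisson would pair with ~47% zeros
hospitalizations = [0] * 70 + [1] * 10 + [2] * 10 + [3] * 5 + [6] * 5
print(round(zero_modification(hospitalizations), 3))  # ~0.23: excess zeros
```

A formal analysis would follow this diagnostic with one of the mixture models the tutorial describes.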
Screening Adolescents in the Emergency Department for Weapon Carriage
Cunningham, Rebecca M.; Resko, Stella M.; Harrison, Stephanie Roahen; Zimmerman, Marc; Stanley, Rachel; Chermack, Stephen T.; Walton, Maureen A.
2010-01-01
Objective To describe the prevalence and correlates of past-year weapon involvement among adolescents seeking care in an inner-city ED. Methods This cross-sectional study administered a computerized survey to all eligible adolescents (ages 14-18) seeking care in an inner-city Level 1 ED, seven days a week over an 18-month period. Validated measures were administered, including measures of demographics, sexual activity, substance use, injury, violent behavior and weapon carriage/use. Results Adolescents (N=2069, 86% response rate) completed the computerized survey; 55% were female and 56.5% were African American. In the past year, 20% of adolescents reported knife/razor carriage, 7% reported gun carriage, and 6% pulled a knife/gun on someone. Zero-inflated Poisson (ZIP) regression models were used to identify correlates of the occurrence and past-year frequency of these weapon variables. Although gun carriage was more frequent among males, females were as likely to carry a knife or pull a weapon in the past year. Conclusions One fifth of all adolescents seeking care in this inner-city ED have carried a weapon. Understanding weapon carriage among teens seeking ED care is a critical first step to future ED-based injury prevention initiatives. PMID:20370746
Prescription Drug Misuse and Sexual Behavior Among Young Adults.
Wells, Brooke E; Kelly, Brian C; Rendina, H Jonathon; Parsons, Jeffrey T
2015-01-01
Though research indicates a complex link between substance use and sexual risk behavior, there is limited research on the association between sexual risk behavior and prescription drug misuse. In light of alarming increases in prescription drug misuse and the role of demographic characteristics in sexual risk behavior and outcomes, the current study examined demographic differences (gender, sexual identity, age, relationship status, parental class background, and race/ethnicity) in sexual risk behavior, sexual behavior under the influence of prescription drugs, and sexual risk behavior under the influence of prescription drugs in a sample of 402 young adults (ages 18 to 29) who misused prescription drugs. Nearly half of the sexually active young adult prescription drug misusers in this sample reported recent sex under the influence of prescription drugs; more than three-quarters reported recent sex without a condom; and more than one-third reported recent sex without a condom after using prescription drugs. Zero-inflated Poisson regression models indicated that White race, younger age, higher parental class, and being a heterosexual man were all associated with sexual risk behavior, sex under the influence of prescription drugs, and sexual risk under the influence of prescription drugs. Findings have implications for the targeting of prevention and intervention efforts.
Derefinko, Karen J.; Eisenlohr-Moul, Tory A.; Peters, Jessica R.; Roberts, Walter; Walsh, Erin C.; Milich, Richard; Lynam, Donald R.
2017-01-01
Background Physiological responses to reward and extinction are believed to represent the Behavioral Activation System (BAS) and Behavioral Inhibition System (BIS) constructs of Reinforcement Sensitivity Theory and underlie externalizing behaviors, including substance use. However, little research has examined these relations directly. Methods We assessed individuals’ cardiac pre-ejection periods (PEP) and electrodermal responses (EDR) during reward and extinction trials through the “Number Elimination Game” paradigm. Responses represented BAS and BIS, respectively. We then examined whether these responses provided incremental utility in the prediction of future alcohol, marijuana, and cigarette use. Results Zero-inflated Poisson (ZIP) regression models were used to examine the predictive utility of physiological BAS and BIS responses above and beyond previous substance use. Physiological responses accounted for incremental variance over previous use. Low BAS responses during reward predicted frequency of alcohol use at year 3. Low BAS responses during reward and extinction and high BIS responses during extinction predicted frequency of marijuana use at year 3. For cigarette use, low BAS response during extinction predicted use at year 3. Conclusions These findings suggest that the constructs of Reinforcement Sensitivity Theory, as assessed through physiology, contribute to the longitudinal maintenance of substance use. PMID:27306728
Congdon, Peter
2014-04-01
Health data may be collected across one spatial framework (e.g. health provider agencies), but contrasts in health over another spatial framework (neighbourhoods) may be of policy interest. In the UK, population prevalence totals for chronic diseases are provided for populations served by general practitioner practices, but not for neighbourhoods (small areas of circa 1500 people), raising the question whether data for one framework can be used to provide spatially interpolated estimates of disease prevalence for the other. A discrete process convolution is applied to this end and has advantages when there are a relatively large number of area units in one or other framework. Additionally, the interpolation is modified to take account of the observed neighbourhood indicators (e.g. hospitalisation rates) of neighbourhood disease prevalence. These are reflective indicators of neighbourhood prevalence viewed as a latent construct. An illustrative application is to prevalence of psychosis in northeast London, containing 190 general practitioner practices and 562 neighbourhoods, including an assessment of sensitivity to kernel choice (e.g. normal vs exponential). This application illustrates how a zero-inflated Poisson can be used as the likelihood model for a reflective indicator.
The association between commuter cycling and sickness absence.
Hendriksen, Ingrid J M; Simons, Monique; Garre, Francisca Galindo; Hildebrandt, Vincent H
2010-08-01
To study the association between commuter cycling and all-cause sickness absence, and the possible dose-response relationship between absenteeism and the distance, frequency and speed of commuter cycling. Cross-sectional data about cycling in 1236 Dutch employees were collected using a self-report questionnaire. Company absenteeism records were checked over a one-year period (May 2007-April 2008). Propensity scores were used to make groups comparable and to adjust for confounders. Zero-inflated Poisson models were used to assess differences in absenteeism between cyclists and non-cyclists. The mean total duration of absenteeism over the study year was more than 1 day shorter in cyclists than in non-cyclists. This can be explained by the higher proportion of people with no absenteeism in the cycling group. A dose-response relationship was observed between the speed and distance of cycling and absenteeism. Compared to people who cycle a short distance (
Universality of the Volume Bound in Slow-Roll Eternal Inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubovsky, Sergei; Senatore, Leonardo; Villadoro, Giovanni
2012-03-28
It has recently been shown that in single field slow-roll inflation the total volume cannot grow by a factor larger than e^(S_dS/2) without becoming infinite. The bound is saturated exactly at the phase transition to eternal inflation where the probability to produce infinite volume becomes non-zero. We show that the bound holds sharply also in any space-time dimensions, when arbitrary higher-dimensional operators are included and in the multi-field inflationary case. The relation with the entropy of de Sitter and the universality of the bound strengthen the case for a deeper holographic interpretation. As a spin-off we provide the formalism to compute the probability distribution of the volume after inflation for generic multi-field models, which might help to address questions about the population of vacua of the landscape during slow-roll inflation.
Moineddin, Rahim; Meaney, Christopher; Agha, Mohammad; Zagorski, Brandon; Glazier, Richard Henry
2011-08-19
Emergency departments are medical treatment facilities, designed to provide episodic care to patients suffering from acute injuries and illnesses as well as patients who are experiencing sporadic flare-ups of underlying chronic medical conditions which require immediate attention. Supply and demand for emergency department services varies across geographic regions and time. Some persons do not rely on the service at all whereas; others use the service on repeated occasions. Issues regarding increased wait times for services and crowding illustrate the need to investigate which factors are associated with increased frequency of emergency department utilization. The evidence from this study can help inform policy makers on the appropriate mix of supply and demand targeted health care policies necessary to ensure that patients receive appropriate health care delivery in an efficient and cost-effective manner. The purpose of this report is to assess those factors resulting in increased demand for emergency department services in Ontario. We assess how utilization rates vary according to the severity of patient presentation in the emergency department. We are specifically interested in the impact that access to primary care physicians has on the demand for emergency department services. Additionally, we wish to investigate these trends using a series of novel regression models for count outcomes which have yet to be employed in the domain of emergency medical research. Data regarding the frequency of emergency department visits for the respondents of Canadian Community Health Survey (CCHS) during our study interval (2003-2005) are obtained from the National Ambulatory Care Reporting System (NACRS). 
Patients' emergency department utilizations were linked with information from the Canadian Community Health Survey (CCHS) which provides individual level medical, socio-demographic, psychological and behavioral information for investigating predictors of increased emergency department utilization. Six different multiple regression models for count data were fitted to assess the influence of predictors on demand for emergency department services, including: Poisson, Negative Binomial, Zero-Inflated Poisson, Zero-Inflated Negative Binomial, Hurdle Poisson, and Hurdle Negative Binomial. Comparison of competing models was assessed by the Vuong test statistic. The CCHS cycle 2.1 respondents were a roughly equal mix of males (50.4%) and females (49.6%). The majority (86.2%) were young-middle aged adults between the ages of 20-64, living in predominantly urban environments (85.9%), with mid-high household incomes (92.2%) and well-educated, receiving at least a high-school diploma (84.1%). Many participants reported no chronic disease (51.9%), fell into a small number (0-5) of ambulatory diagnostic groups (62.3%), and perceived their health status as good/excellent (88.1%); however, were projected to have high Resource Utilization Band levels of health resource utilization (68.2%). These factors were largely stable for CCHS cycle 3.1 respondents. Factors influencing demand for emergency department services varied according to the severity of triage scores at initial presentation. For example, although a non-significant predictor of the odds of emergency department utilization in high severity cases, access to a primary care physician was a statistically significant predictor of the likelihood of emergency department utilization (OR: 0.69; 95% CI OR: 0.63-0.75) and the rate of emergency department utilization (RR: 0.57; 95% CI RR: 0.50-0.66) in low severity cases. 
Using a theoretically appropriate hurdle negative binomial regression model, this study illustrates that access to a primary care physician is an important predictor of both the odds and the rate of emergency department utilization in Ontario. Restructuring primary care services with the aim of increasing access for undersupplied populations may reduce emergency department utilization rates by approximately 43% for low-severity triage level cases.
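The structural difference between the zero-inflated and hurdle specifications compared in this study can be sketched in a few lines. This is a minimal illustration with hypothetical parameters, using the Poisson variants for simplicity: the ZIP mixes a point mass at zero into a Poisson, while the hurdle model treats zeros and positives as separate processes.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: mix a point mass at zero (weight pi)
    with an ordinary Poisson(lam)."""
    if k == 0:
        return pi + (1 - pi) * poisson_pmf(0, lam)
    return (1 - pi) * poisson_pmf(k, lam)

def hurdle_pmf(k, lam, p0):
    """Hurdle Poisson: zeros come only from the hurdle (prob p0);
    positives follow a zero-truncated Poisson(lam)."""
    if k == 0:
        return p0
    return (1 - p0) * poisson_pmf(k, lam) / (1 - math.exp(-lam))

# With p0 chosen to equal the ZIP's total zero probability, the two
# pmfs coincide; in regression the models differ in how covariates
# enter the zero and count parts.
lam, pi = 2.0, 0.3
p0 = zip_pmf(0, lam, pi)
```

The equivalence of the fitted distributions (for matched zero probabilities) is why model choice between ZIP and hurdle forms is usually argued on interpretability, as in the "theoretically appropriate" hurdle choice above.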
Inflation without inflaton: A model for dark energy
NASA Astrophysics Data System (ADS)
Falomir, H.; Gamboa, J.; Méndez, F.; Gondolo, P.
2017-10-01
The interaction between two initially causally disconnected regions of the Universe is studied using analogies with noncommutative quantum mechanics and the deformation of Poisson manifolds. These causally disconnected regions are governed by two independent Friedmann-Lemaître-Robertson-Walker (FLRW) metrics with scale factors a and b and cosmological constants Λa and Λb, respectively. Causality is turned on by positing a nontrivial Poisson bracket [Pα, Pβ] = ɛαβ κ/G, where G is Newton's gravitational constant and κ is a dimensionless parameter. The posited deformed Poisson bracket has an interpretation in terms of 3-cocycles, anomalies, and Poissonian manifolds. The modified FLRW equations acquire an energy-momentum tensor from which we explicitly obtain the equation-of-state parameter. The modified FLRW equations are solved numerically, and the solutions are inflationary or oscillating depending on the value of κ. In this model, the accelerating and decelerating regimes may be periodic. The analysis of the equation of state clearly shows the presence of dark energy. For completeness, the perturbative solution for κ ≪ 1 is also studied.
Stefanoff, Pawel; Rubikowska, Barbara; Bratkowski, Jakub; Ustrnul, Zbigniew; Vanwambeke, Sophie O; Rosinska, Magdalena
2018-04-04
During 1999–2012, 77% of the cases of tick-borne encephalitis (TBE) were recorded in two out of 16 Polish provinces. However, historical data, mostly from national serosurveys, suggest that the disease could be undetected in many areas. The aim of this study was to identify which routinely-measured meteorological, environmental, and socio-economic factors are associated with TBE human risk across Poland, with a particular focus on areas reporting few cases, but where serosurveys suggest higher incidence. We fitted a zero-inflated Poisson model using data on TBE incidence recorded in 108 NUTS-5 administrative units in high-risk areas over the period 1999–2012. Subsequently, we applied the best-fitting model to all Polish municipalities. Keeping the remaining variables constant, the predicted rate increased with increasing air temperature over the previous 10–20 days, precipitation over the previous 20–30 days, forestation, forest edge density, forest road density, and unemployment. The predicted rate decreased with increasing distance from forests. The map of predicted rates was consistent with the established risk areas. It predicted, however, high rates in provinces considered TBE-free. We recommend raising awareness among physicians working in the predicted high-risk areas and considering routine use of household animal surveys for risk mapping. PMID:29617333
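A rough illustration of why a zero-inflated specification suits such surveillance data: the following stdlib-only sketch (with hypothetical parameters, not the study's estimates) simulates ZIP counts and compares the observed zero fraction with what a plain Poisson of the same mean would predict.

```python
import math, random

random.seed(42)

def simulate_zip(n, lam, pi):
    """Draw n counts from a zero-inflated Poisson(lam) with
    structural-zero probability pi (illustrative parameters only)."""
    out = []
    for _ in range(n):
        if random.random() < pi:
            out.append(0)  # structural zero, e.g. an area where cases go unreported
        else:
            # Poisson draw by inversion of the CDF
            u, k, p, cdf = random.random(), 0, math.exp(-lam), math.exp(-lam)
            while u > cdf:
                k += 1
                p *= lam / k
                cdf += p
            out.append(k)
    return out

counts = simulate_zip(10_000, lam=3.0, pi=0.4)
mean = sum(counts) / len(counts)
zero_frac = counts.count(0) / len(counts)
# A plain Poisson with the same mean would predict far fewer zeros:
poisson_zero = math.exp(-mean)
```

The large gap between `zero_frac` and `poisson_zero` is the kind of excess-zero pattern that motivates fitting a ZIP rather than a plain Poisson to area-level incidence counts.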
DEsingle for detecting three types of differential expression in single-cell RNA-seq data.
Miao, Zhun; Deng, Ke; Wang, Xiaowo; Zhang, Xuegong
2018-04-24
The excess zeros in single-cell RNA-seq data include "real" zeros, due to the on-off nature of gene transcription in single cells, and "dropout" zeros, due to technical reasons. Existing differential expression (DE) analysis methods cannot distinguish these two types of zeros. We developed an R package, DEsingle, which employs a zero-inflated negative binomial model to estimate the proportion of real and dropout zeros and to define and detect 3 types of DE genes in single-cell RNA-seq data with higher accuracy. The R package DEsingle is freely available at https://github.com/miaozhun/DEsingle and is currently under consideration at Bioconductor. zhangxg@tsinghua.edu.cn. Supplementary data are available at Bioinformatics online.
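The zero decomposition idea can be sketched with the ZINB mixture itself: given the mixture weight, Bayes' rule gives the share of observed zeros attributable to the structural component versus zeros the negative binomial would produce anyway. This is a schematic with made-up parameters, not DEsingle's actual estimator.

```python
import math

def nb_pmf(k, r, p):
    """Negative binomial pmf: k failures before the r-th success,
    success probability p (mean r*(1-p)/p)."""
    return math.comb(k + r - 1, k) * p ** r * (1 - p) ** k

def structural_zero_share(pi, r, p):
    """P(zero is structural | an observed zero) under a ZINB mixture
    with structural-zero weight pi."""
    z_nb = nb_pmf(0, r, p)  # zeros the NB component produces on its own
    return pi / (pi + (1 - pi) * z_nb)

# With pi = 0.3 and an NB that itself puts 0.25 mass at zero,
# roughly 63% of observed zeros come from the structural component.
share = structural_zero_share(0.3, 2, 0.5)
```

Estimating `pi` per gene and splitting the observed zeros this way is what lets a ZINB-based method treat the two zero types differently in downstream tests.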
Neelon, Brian; O'Malley, A James; Smith, Valerie A
2016-11-30
This article is the second installment of a two-part tutorial on the analysis of zero-modified count and semicontinuous data. Part 1, which appears as a companion piece in this issue of Statistics in Medicine, provides a general background and overview of the topic, with particular emphasis on applications to health services research. Here, we present three case studies highlighting various approaches for the analysis of zero-modified data. The first case study describes methods for analyzing zero-inflated longitudinal count data. Case study 2 considers the use of hurdle models for the analysis of spatiotemporal count data. The third case study discusses an application of marginalized two-part models to the analysis of semicontinuous health expenditure data. Copyright © 2016 John Wiley & Sons, Ltd.
Li, Haocheng; Staudenmayer, John; Wang, Tianying; Keadle, Sarah Kozey; Carroll, Raymond J
2018-02-20
We take a functional data approach to longitudinal studies with complex bivariate outcomes. This work is motivated by data from a physical activity study that measured 2 responses over time in 5-minute intervals. One response is the proportion of time active in each interval, a continuous proportion with excess zeros and ones. The other response, energy expenditure rate in the interval, is a continuous variable with excess zeros and skewness. This outcome is complex because there are 3 possible activity patterns in each interval (inactive, partially active, and completely active), and those patterns, which are observed, induce both nonrandom and random associations between the responses. More specifically, the inactive pattern requires a zero value in both the proportion for active behavior and the energy expenditure rate; a partially active pattern means that the proportion of activity is strictly between zero and one and that the energy expenditure rate is greater than zero and likely to be moderate; and the completely active pattern means that the proportion of activity is exactly one and the energy expenditure rate is greater than zero and likely to be higher. To address these challenges, we propose a 3-part functional data joint modeling approach. The first part is a continuation-ratio model for the 3 ordinal activity patterns. The second part models the proportions when they are in the interval (0,1). The last component specifies the skewed continuous energy expenditure rate with Box-Cox transformations when it is greater than zero. In this 3-part model, the regression structures are specified as smooth curves measured at various time points with random effects that have a correlation structure. The smoothed random curves for each variable are summarized using a few important principal components, and the association of the 3 longitudinal components is modeled through the association of the principal component scores. 
The difficulties in handling the ordinal and proportional variables are addressed using a quasi-likelihood type approximation. We develop an efficient algorithm to fit the model that also involves the selection of the number of principal components. The method is applied to physical activity data and is evaluated empirically by a simulation study. Copyright © 2017 John Wiley & Sons, Ltd.
Toyinbo, Oluyemi; Matilainen, Markus; Turunen, Mari; Putus, Tuula; Shaughnessy, Richard; Haverinen-Shaughnessy, Ulla
2016-03-30
The aim of this paper was to examine associations between school building characteristics, indoor environmental quality (IEQ), and health responses using questionnaire data from both school principals and students. From 334 randomly sampled schools, 4248 sixth grade students from 297 schools responded to a questionnaire. From these schools, 134 principals returned questionnaires covering 51 IEQ-related questions about their school. Generalized linear mixed models (GLMM) were used to study the associations between IEQ indicators and the existence of self-reported upper respiratory symptoms, while hierarchical zero-inflated Poisson (ZIP) models were used to model the number of symptoms. Significant associations were established between the existence of upper respiratory symptoms and unsatisfactory classroom temperature during the heating season (ORs 1.45 for too hot and cold, and 1.27 for too cold, as compared to satisfactory temperature) and dampness or moisture damage during the year 2006-2007 (OR: 1.80 as compared to no moisture damage). The number of upper respiratory symptoms was significantly associated with inadequate ventilation and dampness or moisture damage. A higher number of missed school days due to respiratory infections was reported in schools with inadequate ventilation (RR: 1.16). The school-level IEQ indicator variables described in this paper could explain a relatively large part of the school-level variation observed in the self-reported upper respiratory symptoms and missed school days due to respiratory infections among students. PMID:27043595
Poisson's ratio from polarization of acoustic zero-group velocity Lamb mode.
Baggens, Oskar; Ryden, Nils
2015-07-01
Poisson's ratio of an isotropic and free elastic plate is estimated from the polarization of the first symmetric acoustic zero-group velocity Lamb mode. This polarization is interpreted as the ratio of the absolute amplitudes of the surface normal and surface in-plane components of the acoustic mode. Results from the evaluation of simulated datasets indicate that the presented relation, which links the polarization and Poisson's ratio, can be extended to incorporate plates with material damping. Furthermore, the proposed application of the polarization is demonstrated in a practical field case, where an increased accuracy of estimated nominal thickness is obtained.
Goetzel, Ron Z; Gibson, Teresa B; Short, Meghan E; Chu, Bong-Chul; Waddell, Jessica; Bowen, Jennie; Lemon, Stephenie C; Fernandez, Isabel Diana; Ozminkowski, Ronald J; Wilson, Mark G; DeJoy, David M
2010-01-01
The relationships between worker health and productivity are becoming clearer. However, few large scale studies have measured the direct and indirect cost burden of overweight and obesity among employees using actual biometric values. The objective of this study was to quantify the direct medical and indirect (absence and productivity) cost burden of overweight and obesity in workers. A cross-sectional study of 10,026 employees in multiple professions and worksites across the United States was conducted. The main outcomes were five self-reported measures of workers' annual health care use and productivity: doctor visits, emergency department visits, hospitalizations, absenteeism (days absent from work), and presenteeism (percent on-the-job productivity losses). Multivariate count and continuous data models (Poisson, negative binomial, and zero-inflated Poisson) were estimated. After adjusting for covariates, obese employees had 20% higher doctor visits than normal weight employees (confidence interval [CI] 16%, 24%, P < 0.01) and 26% higher emergency department visits (CI 11%, 42%, P < 0.01). Rates of doctor and emergency department visits for overweight employees were no different than those of normal weight employees. Compared to normal weight employees, presenteeism rates were 10% and 12% higher for overweight and obese employees, respectively (CI 5%, 15% and 5%, 19%, all P < 0.01). Taken together, compared to normal weight employees, obese and overweight workers were estimated to cost employers $644 and $201 more per employee per year, respectively. This study provides evidence that employers face a financial burden imposed by obesity. Implementation of effective workplace programs for the prevention and management of excess weight will benefit employers and their workers.
Hughes, Kristen; Budke, Christine M.; Ward, Michael P.; Kerry, Ruth; Ingram, Ben
2017-01-01
The population density of wildlife reservoirs contributes to disease transmission risk for domestic animals. The objective of this study was to model the African buffalo distribution in the Kruger National Park. A secondary objective was to collect field data to evaluate the models and determine environmental predictors of buffalo detection. Spatial distribution models were created using buffalo census information and archived data from previous research. Field data were collected during the dry (August 2012) and wet (January 2013) seasons using a random walk design. The fit of the prediction models was assessed descriptively and formally by calculating the root mean square error (rMSE) of deviations from field observations. Logistic regression was used to estimate the effects of environmental variables on the detection of buffalo herds, and linear regression was used to identify predictors of larger herd sizes. A zero-inflated Poisson model produced distributions that were most consistent with expected buffalo behavior. Field data confirmed that environmental factors including season (P = 0.008), vegetation type (P = 0.002), and vegetation density (P = 0.010) were significant predictors of buffalo detection. Bachelor herds were more likely to be detected in dense vegetation (P = 0.005) and during the wet season (P = 0.022) compared to the larger mixed-sex herds. Static distribution models for African buffalo can produce biologically reasonable results, but environmental factors have significant effects and could therefore be used to improve model performance. Accurate distribution models are critical for the evaluation of disease risk and to model disease transmission. PMID:28902858
An application of a zero-inflated lifetime distribution with multiple and incomplete data sources
Hamada, M. S.; Margevicius, K. J.
2016-02-11
In this study, we analyze data sampled from a population of parts in which an associated anomaly can occur at assembly or after assembly. Using a zero-inflated lifetime distribution to fit left-censored and right-censored data as well as data from a supplementary sample, we make predictions about the proportion of the population with anomalies today and in the future. Goodness-of-fit is also addressed.
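The likelihood for such data can be sketched by attaching a point mass at zero lifetime (an anomaly at assembly) to a parametric lifetime law, with each censoring type contributing its own term. The exponential distribution here is an illustrative stand-in, since the abstract does not specify the lifetime distribution used.

```python
import math

def loglik(pi, rate, exact, right_cens, left_cens):
    """Log-likelihood for a zero-inflated exponential lifetime model.
    pi: probability of an anomaly at assembly (a 'zero' lifetime);
    rate: exponential hazard for post-assembly anomalies;
    exact: observed anomaly times;
    right_cens: times c with no anomaly by c;
    left_cens: times c by which an anomaly is known to have occurred."""
    ll = 0.0
    for t in exact:       # density of the non-zero component
        ll += math.log((1 - pi) * rate * math.exp(-rate * t))
    for c in right_cens:  # no assembly anomaly and survived past c
        ll += math.log((1 - pi) * math.exp(-rate * c))
    for c in left_cens:   # anomaly at assembly, or failure before c
        ll += math.log(pi + (1 - pi) * (1 - math.exp(-rate * c)))
    return ll
```

Maximizing this over `pi` and `rate` (e.g. with a numerical optimizer) would yield the anomaly-proportion estimates the study reports; the key point is that left-censored observations mix the zero and lifetime components inside one log term.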
Quasi-neutral limit of Euler–Poisson system of compressible fluids coupled to a magnetic field
NASA Astrophysics Data System (ADS)
Yang, Jianwei
2018-06-01
In this paper, we consider the quasi-neutral limit of a three-dimensional Euler-Poisson system of compressible fluids coupled to a magnetic field. We prove that, as the Debye length tends to zero, periodic initial-value problems of the model have unique smooth solutions existing in the time interval where the ideal incompressible magnetohydrodynamic equations have a smooth solution. Meanwhile, it is proved that the smooth solutions converge to solutions of the incompressible magnetohydrodynamic equations with a sharp convergence rate in the quasi-neutral limit.
Guillén, Montserrat; Crimmins, Eileen M.
2013-01-01
Differences in health care utilization of immigrants 50 years of age and older relative to the native-born populations in eleven European countries are investigated. Negative binomial and zero-inflated Poisson regression are used to examine differences between immigrants and native-borns in number of doctor visits, visits to general practitioners, and hospital stays using the 2004 Survey of Health, Ageing, and Retirement in Europe database. In the pooled European sample and in some individual countries, older immigrants use from 13 to 20% more health services than native-borns after demographic characteristics are controlled. After controlling for the need for health care, differences between immigrants and native-borns in the use of physicians, but not hospitals, are reduced by about half. These are not changed much with the incorporation of indicators of socioeconomic status and extra insurance coverage. Higher country-level relative expenditures on health, paying physicians a fee-for-service, and physician density are associated with higher usage of physician services among immigrants. PMID:21660564
NASA Astrophysics Data System (ADS)
García-Bellido, Juan; Garriga, Jaume; Montes, Xavier
1998-04-01
We show that a large class of two-field models of single-bubble open inflation does not lead to infinite open universes, as was previously thought, but to an ensemble of very large but finite inflating "islands." The reason is that the quantum tunneling responsible for the nucleation of the bubble does not occur simultaneously along both field directions and equal-time hypersurfaces in the open universe are not synchronized with equal-density or fixed-field hypersurfaces. The most probable tunneling trajectory corresponds to a zero value of the inflaton field; large values, necessary for the second period of inflation inside the bubble, only arise as localized fluctuations. The interior of each nucleated bubble will contain an infinite number of such inflating regions of comoving size of order γ⁻¹, where γ is the supercurvature eigenvalue, which depends on the parameters of the model. Each one of these islands will be a quasi-open universe. Since the volume of the hyperboloid is infinite, inflating islands with all possible values of the field at their center will be realized inside of a single bubble. We may happen to live in one of those patches of comoving size d ≲ γ⁻¹, where the universe appears to be open. In particular, we consider the "supernatural" model proposed by Linde and Mezhlumian. There, an approximate U(1) symmetry is broken by a tunneling field in a first order phase transition, and slow-roll inflation inside the nucleated bubble is driven by the pseudo Goldstone field. We find that the excitations of the pseudo Goldstone field produced by the nucleation and subsequent expansion of the bubble place severe constraints on this model. We also discuss the coupled and uncoupled two-field models.
Wu, Yingpeng; Yi, Ningbo; Huang, Lu; Zhang, Tengfei; Fang, Shaoli; Chang, Huicong; Li, Na; Oh, Jiyoung; Lee, Jae Ah; Kozlov, Mikhail; Chipara, Alin C; Terrones, Humberto; Xiao, Peishuang; Long, Guankui; Huang, Yi; Zhang, Fan; Zhang, Long; Lepró, Xavier; Haines, Carter; Lima, Márcio Dias; Lopez, Nestor Perea; Rajukumar, Lakshmy P; Elias, Ana L; Feng, Simin; Kim, Seon Jeong; Narayanan, N T; Ajayan, Pulickel M; Terrones, Mauricio; Aliev, Ali; Chu, Pengfei; Zhang, Zhong; Baughman, Ray H; Chen, Yongsheng
2015-01-20
It is a challenge to fabricate graphene bulk materials with properties arising from the nature of individual graphene sheets, and which assemble into monolithic three-dimensional structures. Here we report the scalable self-assembly of randomly oriented graphene sheets into additive-free, essentially homogenous graphene sponge materials that provide a combination of both cork-like and rubber-like properties. These graphene sponges, with densities similar to air, display Poisson's ratios in all directions that are near-zero and largely strain-independent during reversible compression to giant strains. And at the same time, they function as enthalpic rubbers, which can recover up to 98% compression in air and 90% in liquids, and operate between -196 and 900 °C. Furthermore, these sponges provide reversible liquid absorption for hundreds of cycles and then discharge it within seconds, while still providing an effective near-zero Poisson's ratio.
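For context, Poisson's ratio is simply the negative transverse-to-axial strain ratio; a near-zero value, as reported for these sponges, means the lateral dimensions barely change under compression. A two-line illustration with made-up strain values:

```python
def poissons_ratio(axial_strain, transverse_strain):
    """Poisson's ratio: nu = -epsilon_transverse / epsilon_axial."""
    return -transverse_strain / axial_strain

# Cork-like: 10% axial compression, almost no lateral change -> nu near 0
nu_cork_like = poissons_ratio(-0.10, 0.001)
# Rubber-like (nearly incompressible): lateral bulge half the axial strain -> nu near 0.5
nu_rubber_like = poissons_ratio(-0.10, 0.05)
```

The sponges combine both regimes: near-zero Poisson's ratio (cork-like) while recovering large compressive strains (rubber-like).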
Yes, the GIGP Really Does Work--And Is Workable!
ERIC Educational Resources Information Center
Burrell, Quentin L.; Fenton, Michael R.
1993-01-01
Discusses the generalized inverse Gaussian-Poisson (GIGP) process for informetric modeling. Negative binomial distribution is discussed, construction of the GIGP process is explained, zero-truncated GIGP is considered, and applications of the process with journals, library circulation statistics, and database index terms are described. (50…
Yates, Tuppett M.; Luthar, Suniya S.; Tracy, Allison J.
2015-01-01
This investigation examined process-level pathways to nonsuicidal self-injury (NSSI; e.g., self-cutting, -burning, -hitting) in 2 cohorts of suburban, upper-middle-class youths: a cross-sectional sample of 9th–12th graders (n = 1,036, 51.9% girls) on the West Coast and a longitudinal sample followed annually from the 6th through 12th grades (n = 245, 53.1% girls) on the East Coast. High rates of NSSI were found in both the cross-sectional (37.2%) and the longitudinal (26.1%) samples. Zero-inflated Poisson regression models estimated process-level pathways from perceived parental criticism to NSSI via youth-reported alienation toward parents. Pathways toward the initiation of NSSI were distinct from those accounting for its frequency. Parental criticism was associated with increased NSSI, and youth alienation toward parents emerged as a relevant process underlying this pathway, particularly for boys. The specificity of these pathways was explored by examining separate trajectories toward delinquent outcomes. The findings illustrate the prominence of NSSI among “privileged” youths, the salience of the caregiving environment in NSSI, the importance of parental alienation in explaining these relations, and the value of incorporating multiple systems in treatment approaches for adolescents who self-injure. PMID:18229983
Screening adolescents in the emergency department for weapon carriage.
Cunningham, Rebecca M; Resko, Stella M; Harrison, Stephanie Roahen; Zimmerman, Marc; Stanley, Rachel; Chermack, Stephen T; Walton, Maureen A
2010-02-01
The objective was to describe the prevalence and correlates of past-year weapon involvement among adolescents seeking care in an inner-city emergency department (ED). This cross-sectional study administered a computerized survey to all eligible adolescents (age 14-18 years), 7 days a week, who were seeking care over an 18-month period at an inner-city Level 1 ED. Validated measures were administered, including measures of demographics, sexual activity, substance use, injury, violent behavior, weapon carriage, and/or weapon use. Zero-inflated Poisson (ZIP) regression models were used to identify correlates of the occurrence and past-year frequency of these weapons variables. Adolescents (n = 2069, 86% response rate) completed the computerized survey. Fifty-five percent were female; 56.5% were African American. In the past year, 20% of adolescents reported knife or razor carriage, 7% reported gun carriage, and 6% pulled a knife or gun on someone. Although gun carriage was more frequent among males, females were as likely to carry a knife or pull a weapon in the past year. One-fifth of all adolescents seeking care in this inner-city ED have carried a weapon. Understanding weapon carriage among teens seeking ED care is a critical first step to future ED-based injury prevention initiatives. (c) 2010 by the Society for Academic Emergency Medicine.
Guillas, Serge; Day, Simon J; McGuire, B
2010-05-28
We present statistical evidence for a temporal link between variations in the El Niño-Southern Oscillation (ENSO) and the occurrence of earthquakes on the East Pacific Rise (EPR). We adopt a zero-inflated Poisson regression model to represent the relationship between the number of earthquakes in the Easter microplate on the EPR and ENSO (expressed using the southern oscillation index (SOI) for east Pacific sea-level pressure anomalies) from February 1973 to February 2009. We also examine the relationship between the numbers of earthquakes and sea levels, as retrieved by Topex/Poseidon from October 1992 to July 2002. We observe a significant (95% confidence level) positive influence of SOI on seismicity: positive SOI values trigger more earthquakes over the following 2 to 6 months than negative SOI values. There is a significant negative influence of absolute sea levels on seismicity (at 6 months lag). We propose that increased seismicity is associated with ENSO-driven sea-surface gradients (rising from east to west) in the equatorial Pacific, leading to a reduction in ocean-bottom pressure over the EPR by a few kilopascals. This relationship is opposite to reservoir-triggered seismicity and suggests that EPR fault activity may be triggered by plate flexure associated with the reduced pressure.
Alcock, Ian; White, Mathew P; Taylor, Tim; Coldwell, Deborah F; Gribble, Matthew O; Evans, Karl L; Corner, Adam; Vardoulakis, Sotiris; Fleming, Lora E
2017-01-01
The rise in greenhouse gas emissions from air travel could be reduced by individuals voluntarily abstaining from, or reducing, flights for leisure and recreational purposes. In theory, we might expect that people with pro-environmental value orientations and concerns about the risks of climate change, and those who engage in more pro-environmental household behaviours, would also be more likely to abstain from such voluntary air travel, or at least to fly less far. Analysis of two large datasets from the United Kingdom, weighted to be representative of the whole population, tested these associations. Using zero-inflated Poisson regression models, we found that, after accounting for potential confounders, there was no association between individuals' environmental attitudes, concern over climate change, or their routine pro-environmental household behaviours, and either their propensity to take non-work related flights, or the distances flown by those who do so. These findings contrasted with those for pro-environmental household behaviours, where associations with environmental attitudes and concern were observed. Our results offer little encouragement for policies aiming to reduce discretionary air travel through pro-environmental advocacy, or through 'spill-over' from interventions to improve environmental impacts of household routines.
Prescription Drug Misuse and Sexual Behavior among Young Adults
Wells, Brooke E.; Kelly, Brian C.; Rendina, H. Jonathon; Parsons, Jeffrey T.
2015-01-01
Though research indicates a complex link between substance use and sexual risk behavior, there is limited research on the association between sexual risk behavior and prescription drug misuse. In light of the alarming increases in prescription drug misuse and the role of demographic characteristics in sexual risk behavior and outcomes, the current study examines demographic differences (gender, sexual identity, age, relationship status, parental class background, and race/ethnicity) in sexual risk behavior, sexual behavior under the influence of prescription drugs, and sexual risk behavior under the influence of prescription drugs in a sample of 402 young adults (18–29) who misuse prescription drugs. Nearly half of the sexually active young adult prescription drug misusers in this sample reported recent sex under the influence of prescription drugs, more than three quarters reported recent sex without a condom, and more than one-third reported recent sex without a condom after using prescription drugs. Zero-inflated Poisson regression models indicated that white race, younger age, higher parental class, and being a heterosexual man were all associated with sexual risk behavior, sex under the influence of prescription drugs, and sexual risk under the influence of prescription drugs. Findings have implications for the targeting of prevention and intervention efforts. PMID:25569204
Yates, Tuppett M; Tracy, Allison J; Luthar, Suniya S
2008-02-01
This investigation examined process-level pathways to nonsuicidal self-injury (NSSI; e.g., self-cutting, -burning, -hitting) in 2 cohorts of suburban, upper-middle-class youths: a cross-sectional sample of 9th-12th graders (n = 1,036, 51.9% girls) on the West Coast and a longitudinal sample followed annually from the 6th through 12th grades (n = 245, 53.1% girls) on the East Coast. High rates of NSSI were found in both the cross-sectional (37.2%) and the longitudinal (26.1%) samples. Zero-inflated Poisson regression models estimated process-level pathways from perceived parental criticism to NSSI via youth-reported alienation toward parents. Pathways toward the initiation of NSSI were distinct from those accounting for its frequency. Parental criticism was associated with increased NSSI, and youth alienation toward parents emerged as a relevant process underlying this pathway, particularly for boys. The specificity of these pathways was explored by examining separate trajectories toward delinquent outcomes. The findings illustrate the prominence of NSSI among "privileged" youths, the salience of the caregiving environment in NSSI, the importance of parental alienation in explaining these relations, and the value of incorporating multiple systems in treatment approaches for adolescents who self-injure.
Inflatable Re-Entry Vehicle Experiment (IRVE) Design Overview
NASA Technical Reports Server (NTRS)
Hughes, Stephen J.; Dillman, Robert A.; Starr, Brett R.; Stephan, Ryan A.; Lindell, Michael C.; Player, Charles J.; Cheatwood, F. McNeil
2005-01-01
Inflatable aeroshells offer several advantages over traditional rigid aeroshells for atmospheric entry. Inflatables offer an increased payload volume fraction of the launch vehicle shroud and the possibility of delivering more payload mass to the surface for equivalent trajectory constraints. An inflatable's diameter is not constrained by the launch vehicle shroud. The resultant larger drag area can provide deceleration equivalent to a rigid system at higher atmospheric altitudes, thus offering access to higher-elevation landing sites. When stowed for launch and cruise, inflatable aeroshells allow access to the payload after the vehicle is integrated for launch and offer direct access to vehicle structure for structural attachment to the launch vehicle. They also offer an opportunity to eliminate system duplication between the cruise stage and entry vehicle. There are, however, several potential technical challenges for inflatable aeroshells. First and foremost is the fact that they are flexible structures. That flexibility could lead to unpredictable drag performance or an aerostructural dynamic instability. In addition, the durability of large inflatable structures may limit their application. They are susceptible to puncture, a potentially catastrophic insult, from many possible sources. Finally, aerothermal heating during planetary entry poses a significant challenge to a thin membrane. NASA Langley Research Center and NASA's Wallops Flight Facility are jointly developing inflatable aeroshell technology for use on future NASA missions. The technology will be demonstrated in the Inflatable Re-entry Vehicle Experiment (IRVE). This paper details the development of the initial IRVE inflatable system to be launched on a Terrier/Orion sounding rocket in the fourth quarter of CY2005.
The experiment will demonstrate the achievable packaging efficiency of the inflatable aeroshell for launch; inflation and leak performance of the inflatable system throughout the flight regime; structural integrity when exposed to a relevant dynamic pressure; and aerodynamic stability of the inflatable system. Structural integrity and structural response of the inflatable will be verified with photogrammetric measurements of the back side of the aeroshell in flight. Aerodynamic stability as well as drag performance will be verified with onboard inertial measurements and radar tracking from multiple ground stations. The experiment will yield valuable information about zero-g vacuum deployment dynamics of the flexible inflatable structure with both inertial and photographic measurements. In addition to demonstrating inflatable technology, IRVE will validate structural, aerothermal, and trajectory modeling techniques for the inflatable: structural response determined from photogrammetrics will validate structural models; skin temperature measurements and additional in-depth temperature measurements will validate material thermal performance models; and onboard inertial measurements along with radar tracking will validate trajectory simulation models.
Personality disorder risk factors for suicide attempts over 10 years of follow-up.
Ansell, Emily B; Wright, Aidan G C; Markowitz, John C; Sanislow, Charles A; Hopwood, Christopher J; Zanarini, Mary C; Yen, Shirley; Pinto, Anthony; McGlashan, Thomas H; Grilo, Carlos M
2015-04-01
Identifying personality disorder (PD) risk factors for suicide attempts is an important consideration for research and clinical care alike. However, most prior research has focused on single PDs or categorical PD diagnoses without considering unique influences of different PDs or of severity (sum) of PD criteria on the risk for suicide-related outcomes. This has usually been done with cross-sectional or retrospective assessment methods. Rarely are dimensional models of PDs examined in longitudinal, naturalistic prospective designs. In addition, it is important to consider divergent risk factors in predicting the risk of ever making a suicide attempt versus the risk of making an increasing number of attempts within the same model. This study examined 431 participants who were followed for 10 years in the Collaborative Longitudinal Personality Disorders Study. Baseline assessments of personality disorder criteria were summed as dimensional counts of personality pathology and examined as predictors of suicide attempts reported at annual interviews throughout the 10-year follow-up period. We used univariate and multivariate zero-inflated Poisson regression models to simultaneously evaluate PD risk factors for ever attempting suicide and for increasing numbers of attempts among attempters. Consistent with prior research, borderline PD was uniquely associated with ever attempting. However, only narcissistic PD was uniquely associated with an increasing number of attempts. These findings highlight the relevance of both borderline and narcissistic personality pathology as unique contributors to suicide-related outcomes. (c) 2015 APA, all rights reserved).
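The two-process structure exploited above — one set of predictors for whether a person ever attempts, another for how many attempts occur among attempters — is visible directly in the ZIP likelihood. A small illustrative sketch (the counts and parameter values are made up, not CLPS data):

```python
import math

def zip_loglik(counts, pi, lam):
    """ZIP log-likelihood: a zero arises either structurally (probability pi,
    the 'never attempts' process) or from the Poisson count process."""
    ll = 0.0
    for k in counts:
        if k == 0:
            ll += math.log(pi + (1 - pi) * math.exp(-lam))
        else:
            ll += math.log(1 - pi) - lam + k * math.log(lam) - math.lgamma(k + 1)
    return ll

# Hypothetical attempt counts: pi drives the zero vs. nonzero split,
# while lam drives the number of events among those at risk.
counts = [0, 0, 0, 0, 1, 2, 0, 3, 0, 1]
ll = zip_loglik(counts, pi=0.4, lam=1.5)
```

Fitting maximizes this function over (pi, lam); in the regression setting each parameter gets its own linear predictor, which is why the two sets of risk factors can differ.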
A big data approach to the development of mixed-effects models for seizure count data.
Tharayil, Joseph J; Chiang, Sharon; Moss, Robert; Stern, John M; Theodore, William H; Goldenholz, Daniel M
2017-05-01
Our objective was to develop a generalized linear mixed model for predicting seizure count that is useful in the design and analysis of clinical trials. This model also may benefit the design and interpretation of seizure-recording paradigms. Most existing seizure count models do not include children, and there is currently no consensus regarding the most suitable model that can be applied to children and adults. Therefore, an additional objective was to develop a model that accounts for both adult and pediatric epilepsy. Using data from SeizureTracker.com, a patient-reported seizure diary tool with >1.2 million recorded seizures across 8 years, we evaluated the appropriateness of Poisson, negative binomial, zero-inflated negative binomial, and modified negative binomial models for seizure count data based on minimization of the Bayesian information criterion. Generalized linear mixed-effects models were used to account for demographic and etiologic covariates and for autocorrelation structure. Holdout cross-validation was used to evaluate predictive accuracy in simulating seizure frequencies. For both adults and children, we found that a negative binomial model with autocorrelation over 1 day was optimal. Using holdout cross-validation, the proposed model was found to provide accurate simulation of seizure counts for patients with up to four seizures per day. The optimal model can be used to generate more realistic simulated patient data with very few input parameters. The availability of a parsimonious, realistic virtual patient model can be of great utility in simulations of phase II/III clinical trials, epilepsy monitoring units, outpatient biosensors, and mobile Health (mHealth) applications. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
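Model selection by minimizing the Bayesian information criterion, as done here, trades fit against parameter count: BIC = k·ln(n) − 2·ln L̂. A stdlib-only sketch for the simplest case, an intercept-only Poisson model (the counts are illustrative, not SeizureTracker data):

```python
import math

def poisson_bic(counts):
    """BIC for an intercept-only Poisson model: k * ln(n) - 2 * loglik,
    with the single parameter lam estimated by the sample mean.
    Assumes at least one nonzero count so that lam > 0."""
    n = len(counts)
    lam = sum(counts) / n
    loglik = sum(-lam + y * math.log(lam) - math.lgamma(y + 1) for y in counts)
    return 1 * math.log(n) - 2 * loglik

bic = poisson_bic([1, 2, 0, 3, 1, 2])  # hypothetical daily seizure counts
```

Richer models (negative binomial, ZINB) add parameters, so they win under BIC only if the gain in log-likelihood outweighs the k·ln(n) penalty.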
Dong, Chunjiao; Clarke, David B; Yan, Xuedong; Khattak, Asad; Huang, Baoshan
2014-09-01
Crash data are collected through police reports and integrated with road inventory data for further analysis. Integrated police reports and inventory data yield correlated multivariate data for roadway entities (e.g., segments or intersections). Analysis of such data reveals important relationships that can help focus attention on high-risk situations and develop safety countermeasures. To understand the relationships between crash frequencies and associated variables, while taking full advantage of the available data, multivariate random-parameters models are appropriate because they can simultaneously consider the correlation among specific crash types and account for unobserved heterogeneity. However, a key issue with correlated multivariate data is that the number of crash-free samples increases as crash counts are disaggregated into many categories. In this paper, we describe a multivariate random-parameters zero-inflated negative binomial (MRZINB) regression model for jointly modeling crash counts. The full Bayesian method is employed to estimate the model parameters. Crash frequencies at urban signalized intersections in Tennessee are analyzed. The paper investigates the performance of multivariate zero-inflated negative binomial (MZINB) and MRZINB regression models in establishing the relationship between crash frequencies, pavement conditions, traffic factors, and geometric design features of roadway intersections. Compared to the MZINB model, the MRZINB model identifies additional statistically significant factors and provides better goodness of fit in developing the relationships. The empirical results show that the MRZINB model possesses most of the desirable statistical properties in terms of its ability to accommodate unobserved heterogeneity and excess zero counts in correlated data. Notably, in the MRZINB model the estimated parameters vary significantly across intersections for different crash types. Copyright © 2014 Elsevier Ltd. All rights reserved.
Parks, Michael J; Kingsbury, John H; Boyle, Raymond G; Evered, Sharrilyn
2018-01-01
This study addresses the dearth of population-based research on how comprehensive household smoke-free rules (ie, in the home and car) relate to tobacco use and secondhand smoke (SHS) exposure among adolescents. Analysis of 2014 Minnesota Youth Tobacco Survey. Representative sample of Minnesota youth. A total of 1287 youth who lived with a smoker. Measures included household smoke-free rules (no rules, partial rules-home or car, but not both-and comprehensive rules), lifetime and 30-day cigarette use, 30-day cigarette and other product use, and SHS exposure in past 7 days in home and car. Weighted multivariate logistic, zero-inflated Poisson, and zero-inflated negative binomial regressions were used. Compared to comprehensive rules, partial and no smoke-free rules were significantly and positively related to lifetime cigarette use (respectively, adjusted odds ratio [AOR] = 1.80, 95% confidence interval [CI] = 1.24-2.61; AOR = 2.87, 95% CI = 1.93-4.25), and a similar significant pattern was found for 30-day cigarette use (respectively, AOR = 2.20, 95% CI = 1.21-4.02; AOR = 2.45, 95% CI = 1.34-4.50). No smoke-free rules significantly predicted using cigarettes and other tobacco products compared to comprehensive rules. In both descriptive and regression analyses, we found SHS exposure rates in both the home and car were significantly lower among youth whose household implemented comprehensive smoke-free rules. Comprehensive smoke-free rules protect youth from the harms of caregiver tobacco use. Relative to both partial and no smoke-free rules, comprehensive smoke-free rules have a marked impact on tobacco use and SHS exposure among youth who live with a smoker. Health promotion efforts should promote comprehensive smoke-free rules among all households and particularly households with children and adolescents.
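The adjusted odds ratios above come from multivariate logistic models, but the unadjusted odds ratio underlying such estimates is just a cross-product of a 2×2 table. An illustrative sketch with hypothetical cell counts (not the Minnesota survey data):

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Unadjusted odds ratio from a 2x2 table: (a/b) / (c/d) = a*d / (b*c)."""
    return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

# Hypothetical: 30 of 100 youth under no smoke-free rules ever smoked,
# versus 15 of 100 under comprehensive rules.
or_no_rules = odds_ratio(30, 70, 15, 85)  # odds of smoking, no rules vs. comprehensive
```

Adjusted odds ratios (AORs) like those reported differ because the logistic model conditions on covariates, but they are read the same way: values above 1 mean higher odds of the outcome in the exposed group.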
NASA Astrophysics Data System (ADS)
Raghunath, Ganesh
Iron-gallium alloy (Galfenol) is a magnetostrictive smart material (λ_sat ≈ 400 ppm) with potential for robust transduction owing to good magneto-mechanical coupling and useful mechanical properties. In addition, Galfenol exhibits a highly negative Poisson's ratio (denoted ν) along the crystallographic directions on {100} planes, with ν values as low as -0.7 under tensile loads. Consequently, samples become wider when elongated and narrower when compressed (auxeticity). This is an anisotropic, in-plane, volume-conserving phenomenon with compensating contractions and expansions in the third (out-of-plane) direction. Since there is good magneto-elastic coupling in Galfenol, a negative Poisson's ratio is expected to be observed under applied magnetic fields even under zero-stress conditions. This work systematically studies the magneto-elastic contributions in Galfenol samples between 12 and 33 atomic percent Ga as a non-synthetic (no artificial linkages, unlike foams) 'structural auxetic' material capable of bearing loads. The investigation addresses the profound gap in understanding this atypical behavior using empirical data supported by analytical modeling from first principles to predict the Poisson's ratio at magnetic saturation, multi-physics finite element simulations to determine the trends in the strains along the {100} directions, and magnetic domain imaging to explain the mechanical response from a magnetic domain perspective. The outcome of this effort will help comprehend the association between anisotropic magnetic and mechanical energies, and hence the magnetic contributions to the atomic-level interactions that are the origins of this magneto-auxetic characteristic. It is also well established that a number of mechanical properties, such as shear resistance and toughness, depend on the value of Poisson's ratio.
These properties increase only slightly for modestly negative ν, but in the highly auxetic regime (ν < -0.5) they increase by orders of magnitude. Hence, the possibility of ν approaching -1.0 under applied magnetic fields at zero stress is extremely intriguing, as these properties can then be much larger than is possible in conventional materials. This has potential for several novel applications in which the value of Poisson's ratio can be magnetically tuned to stay near -1 under applied stresses.
Computational prediction of new auxetic materials.
Dagdelen, John; Montoya, Joseph; de Jong, Maarten; Persson, Kristin
2017-08-22
Auxetics comprise a rare family of materials that manifest a negative Poisson's ratio, which causes an expansion instead of a contraction under tension. Most known homogeneously auxetic materials are porous foams or artificial macrostructures, and there are few examples of inorganic materials that exhibit this behavior as polycrystalline solids. It is now possible to accelerate the discovery of materials with target properties, such as auxetics, using high-throughput computations, open databases, and efficient search algorithms. Candidates exhibiting features correlating with auxetic behavior were chosen from the set of more than 67 000 materials in the Materials Project database. Poisson's ratios were derived from the calculated elastic tensor of each material in this reduced set of compounds. We report that this strategy results in the prediction of three previously unidentified homogeneously auxetic materials as well as a number of compounds with a near-zero homogeneous Poisson's ratio, which are here denoted "anepirretic materials". There are very few inorganic materials with an auxetic homogeneous Poisson's ratio in polycrystalline form. Here the authors develop an approach to screening materials databases for target properties such as negative Poisson's ratio by using stability and structural motifs, predicting new instances of homogeneous auxetic behavior as well as a number of materials with near-zero Poisson's ratio.
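For an isotropic solid, the homogeneous Poisson's ratio screened for here follows directly from the bulk modulus K and shear modulus G, turning negative (auxetic) when G exceeds 1.5K. A minimal sketch (the moduli values are illustrative, not Materials Project entries):

```python
def poisson_ratio(K, G):
    """Isotropic Poisson's ratio from bulk (K) and shear (G) moduli.
    Negative (auxetic) when G > 1.5 * K; near zero when G is close to 1.5 * K."""
    return (3 * K - 2 * G) / (2 * (3 * K + G))

conventional = poisson_ratio(10, 5)  # G < 1.5 * K  -> positive ratio
auxetic = poisson_ratio(1, 2)        # G > 1.5 * K  -> negative ratio
```

This ratio of elastic moduli is one of the features such a database screen can compute cheaply for every entry before running more expensive checks.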
Sheu, Mei-Ling; Hu, Teh-Wei; Keeler, Theodore E; Ong, Michael; Sung, Hai-Yen
2004-08-01
The objective of this paper is to determine the price sensitivity of smokers in their consumption of cigarettes, using evidence from a major increase in California cigarette prices due to Proposition 10 and the Tobacco Settlement. The study sample consists of individual survey data from Behavioral Risk Factor Survey (BRFS) and price data from the Bureau of Labor Statistics between 1996 and 1999. A zero-inflated negative binomial (ZINB) regression model was applied for the statistical analysis. The statistical model showed that price did not have an effect on reducing the estimated prevalence of smoking. However, it indicated that among smokers the price elasticity was at the level of -0.46 and statistically significant. Since smoking prevalence is significantly lower than it was a decade ago, price increases are becoming less effective as an inducement for hard-core smokers to quit, although they may respond by decreasing consumption. For those who only smoke occasionally (many of them being young adults) price increases alone may not be an effective inducement to quit smoking. Additional underlying behavioral factors need to be identified so that more effective anti-smoking strategies can be developed.
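Under a constant-elasticity demand model, the reported smoker elasticity of −0.46 converts a price change into a consumption change via Q2/Q1 = (P2/P1)^ε. A quick sketch of that arithmetic:

```python
def consumption_change(price_ratio, elasticity=-0.46):
    """Relative change in cigarette consumption among continuing smokers
    when price is multiplied by price_ratio, assuming constant elasticity.
    The -0.46 default is the elasticity estimate reported above."""
    return price_ratio ** elasticity - 1

# A 10% price increase would cut consumption among smokers by roughly 4.3%.
drop = consumption_change(1.10)
```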
Two dimensional analytical model for a reconfigurable field effect transistor
NASA Astrophysics Data System (ADS)
Ranjith, R.; Jayachandran, Remya; Suja, K. J.; Komaragiri, Rama S.
2018-02-01
This paper presents two-dimensional potential and current models for a reconfigurable field effect transistor (RFET). Two potential models, describing the subthreshold and above-threshold channel potentials, are developed by solving the two-dimensional (2D) Poisson's equation. In the first potential model, the 2D Poisson's equation is solved by considering a constant/zero charge density in the channel region of the device to obtain the subthreshold potential characteristics. In the second model, the accumulation charge density is considered to obtain the above-threshold potential characteristics of the device. The proposed models are applicable to devices having a lightly doped or intrinsic channel. To obtain the mathematical model, the whole body area is divided into two regions: a gated region and an un-gated region. The analytical models are compared with technology computer-aided design (TCAD) simulation results and are in complete agreement for different lengths of the gated regions as well as at various supply voltage levels.
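The paper's potential models solve the 2D Poisson equation analytically; the same equation can be sketched numerically with a Jacobi finite-difference iteration on a unit square. This is a toy analogue with uniform Dirichlet boundaries and a constant source term, not the RFET geometry:

```python
def solve_poisson_2d(n, rhs, boundary, iters=2000):
    """Jacobi iteration for -(d2u/dx2 + d2u/dy2) = rhs on an n x n grid
    over the unit square, with a fixed value on all four boundaries."""
    h = 1.0 / (n - 1)
    u = [[boundary] * n for _ in range(n)]
    for _ in range(iters):
        nxt = [row[:] for row in u]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                nxt[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                    + u[i][j - 1] + u[i][j + 1] + h * h * rhs)
        u = nxt
    return u

# Zero charge density (the Laplace limit of the first model): the potential
# stays flat at the boundary value. A nonzero source bows the interior upward.
flat = solve_poisson_2d(9, 0.0, 1.0, iters=200)
bowed = solve_poisson_2d(9, 10.0, 0.0, iters=2000)
```

The zero-charge case mirrors the subthreshold assumption above: with no source term, the channel potential is set entirely by the boundary (gate/contact) conditions.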
On the Dequantization of Fedosov's Deformation Quantization
NASA Astrophysics Data System (ADS)
Karabegov, Alexander V.
2003-08-01
To each natural deformation quantization on a Poisson manifold M we associate a Poisson morphism from the formal neighborhood of the zero section of the cotangent bundle to M to the formal neighborhood of the diagonal of the product M x M~, where M~ is a copy of M with the opposite Poisson structure. We call it dequantization of the natural deformation quantization. Then we "dequantize" Fedosov's quantization.
Predicting stem borer density in maize using RapidEye data and generalized linear models
NASA Astrophysics Data System (ADS)
Abdel-Rahman, Elfatih M.; Landmann, Tobias; Kyalo, Richard; Ong'amo, George; Mwalusepo, Sizah; Sulieman, Saad; Ru, Bruno Le
2017-05-01
Average maize yield in eastern Africa is 2.03 t ha-1, compared with a global average of 6.06 t ha-1, due to biotic and abiotic constraints. Among the biotic production constraints in Africa, stem borers are the most injurious. In eastern Africa, maize yield losses due to stem borers are currently estimated at between 12% and 21% of total production. The objective of the present study was to explore the potential of RapidEye spectral data for assessing stem borer larva densities in maize fields at two study sites in Kenya. RapidEye images were acquired for the Bomet (western Kenya) site on 9 December 2014 and 27 January 2015, and for Machakos (eastern Kenya) on 3 January 2015. Five RapidEye spectral bands as well as 30 spectral vegetation indices (SVIs) were utilized to predict per-field maize stem borer larva densities using generalized linear models (GLMs), assuming Poisson ('Po') and negative binomial ('NB') distributions. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were used to assess model performance in a leave-one-out cross-validation approach. The zero-inflated NB ('ZINB') models outperformed the 'NB' models, and stem borer larva densities could only be predicted during the mid growing season, in December and early January, at the two study sites (RMSE = 0.69-1.06 and RPD = 8.25-19.57). Overall, all models performed similarly whether all 30 SVIs (non-nested) or only the significant (nested) SVIs were used. The models developed could improve decision making regarding the control of maize stem borers within integrated pest management (IPM) interventions.
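Leave-one-out cross-validation, used above to compute RMSE, scores each held-out field with a model refit on the rest. A stdlib sketch for the degenerate "predict the mean" model (the counts are illustrative, not the Kenyan field data):

```python
import math

def loocv_rmse(y):
    """Leave-one-out cross-validation RMSE for the simplest count model:
    predict each held-out value by the mean of the remaining observations."""
    n = len(y)
    total = sum(y)
    sq_err = 0.0
    for yi in y:
        pred = (total - yi) / (n - 1)  # mean of the other n - 1 counts
        sq_err += (yi - pred) ** 2
    return math.sqrt(sq_err / n)

rmse = loocv_rmse([1, 2, 3])  # hypothetical per-field larva counts
```

A real GLM version would refit the regression on the n − 1 remaining fields at each step instead of recomputing a mean, but the scoring loop is the same.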
Dark energy from gravitoelectromagnetic inflation?
NASA Astrophysics Data System (ADS)
Membiela, F. A.; Bellini, M.
2008-02-01
Gravitoelectromagnetic inflation (GI) was introduced to describe, in a unified manner, electromagnetic, gravitational, and inflaton fields from a 5D vacuum state. On the other hand, the primordial origin and evolution of dark energy are today unknown. In this letter we show, using GI, that the zero modes of some redefined vector fields B_i = A_i/a produced during inflation could be the source of dark energy in the universe.
Deformation of a flexible disk bonded to an elastic half space-application to the lung.
Lai-Fook, S J; Hajji, M A; Wilson, T A
1980-08-01
An analysis is presented of the deformation of a homogeneous, isotropic, elastic half space subjected to a constant radial strain in a circular area on the boundary. Explicit analytic expressions for the normal and radial displacements and the shear stress on the boundary are used to interpret experiments performed on inflated pig lungs. The boundary strain was induced by inflating or deflating the lung after bonding a flexible disk to the lung surface. The prediction that the surface bulges outward for positive boundary strain and inward for negative strain was observed in the experiments. Poisson's ratio at two transpulmonary pressures was measured, by use of the normal displacement equation evaluated at the surface. A direct estimate of Poisson's ratio was possible because the normal displacement of the surface depended uniquely on the compressibility of the material. Qualitative comparisons between theory and experiment support the use of continuum analyses in evaluating the behavior of the lung parenchyma when subjected to small local distortions.
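The direct estimate referred to here rests on the defining ratio ν = −ε_lateral/ε_axial. A trivial sketch (the strain values are hypothetical, not the pig-lung measurements):

```python
def poisson_ratio_from_strains(axial, lateral):
    """Poisson's ratio as the negative lateral-to-axial strain ratio;
    for an isotropic material it lies in (-1, 0.5], with 0.5 meaning
    incompressible."""
    return -lateral / axial

# e.g. a 2% axial stretch with a 0.8% lateral contraction gives nu = 0.4
nu = poisson_ratio_from_strains(0.02, -0.008)
```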
Alencar, Jeronimo; Morone, Fernanda; De Mello, Cecília Ferreira; Dégallier, Nicolas; Lucio, Paulo Sérgio; de Serra-Freire, Nicolau Maués; Guimarães, Anthony Erico
2013-07-01
In this study, the oviposition behavior of mosquito species exhibiting acrodendrophilic habits was investigated. The study was conducted near the Simplicio Hydroelectric Reservoir (SHR), located on the border of the states of Minas Gerais and Rio de Janeiro, Brazil. Samples were collected using oviposition traps installed in forest vegetation cover between 1.70 and 4.30 m above ground level during the months of April, June, August, October, and December of 2011. Haemagogus janthinomys (Dyar), Haemagogus leucocelaenus (Dyar and Shannon), Aedes albopictus (Skuse), and Aedes terrens (Walker) specimens were present among the collected samples; the first two are proven vectors of sylvatic yellow fever (SYF) in Brazil, and Ae. albopictus is a vector of dengue in mainland Asia. As the data set was zero-inflated, a specific Poisson-based model was used for the statistical analysis. When all four species were considered in the model, only the heights used for egg laying and the months of sampling explained the distribution. However, grouping the species under the genera Haemagogus Williston and Aedes Meigen revealed a significant preference of the former for higher traps. Considering that the local working population of the SHR is very large, fluctuating, and potentially exposed to SYF, and that this virus occurs in almost all Brazilian states, monitoring of Culicidae in Brazil is essential for assessing the risk of transmission of this arbovirus.
Testing anti-smoking messages for Air Force trainees
Popova, Lucy; Linde, Brittany D.; Bursac, Zoran; Talcott, G. Wayne; Modayil, Mary V.; Little, Melissa A.; Ling, Pamela M.; Glantz, Stanton A.; Klesges, Robert C.
2015-01-01
Introduction Young adults in the military are aggressively targeted by tobacco companies and are at high risk of tobacco use. Existing anti-smoking advertisements developed for the general population might be effective in educating young adults in the military. This study evaluated the effects of different themes of existing anti-smoking advertisements on perceived harm and intentions to use cigarettes and other tobacco products among Air Force trainees. Methods In a pretest-posttest experiment, 782 Airmen were randomized to view anti-smoking advertisements in one of six conditions: anti-industry, health effects+anti-industry, sexual health, secondhand smoke, environment+anti-industry, or control. We assessed the effect of different conditions on changes in perceived harm and intentions to use cigarettes, electronic cigarettes (e-cigarettes), smokeless tobacco, hookah and cigarillos from pretest to posttest with multivariable linear regression models (perceived harm) and zero-inflated Poisson regression model (intentions). Results Anti-smoking advertisements increased perceived harm of various tobacco products and reduced intentions to use. Advertisements featuring negative effects of tobacco on health and sexual performance coupled with revealing tobacco industry manipulations had the most consistent pattern of effects on perceived harm and intentions. Conclusion Anti-smoking advertisements produced for the general public might also be effective with a young adult military population and could have spillover effects on perceptions of harm and intentions to use other tobacco products besides cigarettes. Existing anti-smoking advertising may be a cost-effective tool to educate young adults in the military. PMID:26482786
Paltto, Heidi; Nordberg, Anna; Nordén, Björn; Snäll, Tord
2011-01-01
Wooded pastures with ancient trees were formerly abundant throughout Europe, but during the last century, grazing has largely been abandoned often resulting in dense forests. Ancient trees constitute habitat for many declining and threatened species, but the effects of secondary woodland on the biodiversity associated with these trees are largely unknown. We tested for difference in species richness, occurrence, and abundance of a set of nationally and regionally red-listed epiphytic lichens between ancient oaks located in secondary woodland and ancient oaks located in open conditions. We refined the test of the effect of secondary woodland by also including other explanatory variables. Species occurrence and abundance were modelled jointly using overdispersed zero-inflated Poisson models. The richness of the red-listed lichens on ancient oaks in secondary woodland was half of that compared with oaks growing in open conditions. The species-level analyses revealed that this was mainly the result of lower occupancy of two of the study species. The tree-level abundance of one species was also lower in secondary woodland. Potential explanations for this pattern are that the study lichens are adapted to desiccating conditions enhancing their population persistence by low competition or that open, windy conditions enhance their colonisation rate. This means that the development of secondary woodland is a threat to red-listed epiphytic lichens. We therefore suggest that woody vegetation is cleared and grazing resumed in abandoned oak pastures. Importantly, this will also benefit the vitality of the oaks. PMID:21961041
Selecting exposure measures in crash rate prediction for two-lane highway segments.
Qin, Xiao; Ivan, John N; Ravishanker, Nalini
2004-03-01
A critical part of any risk assessment is identifying how to represent exposure to the risk involved. Recent research shows that the relationship between crash count and traffic volume is non-linear; consequently, a simple crash rate computed as the ratio of crash count to volume is not appropriate for comparing the safety of sites with different traffic volumes. To solve this problem, we describe a new approach for relating traffic volume and crash incidence. Specifically, we disaggregate crashes into four types: (1) single-vehicle, (2) multi-vehicle same direction, (3) multi-vehicle opposite direction, and (4) multi-vehicle intersecting, and define candidate exposure measures for each that we hypothesize will be linear with respect to each crash type. This paper describes an initial investigation using crash and physical characteristics data for highway segments in Michigan from the Highway Safety Information System (HSIS). We use zero-inflated Poisson (ZIP) modeling to estimate models for predicting counts for each of the above crash types as a function of the daily volume, segment length, speed limit, and roadway width. We found that the relationship between crashes and the daily volume (AADT) is non-linear and varies by crash type, and is significantly different from the relationship between crashes and segment length for all crash types. Our research will provide information to improve the accuracy of crash predictions and thus facilitate more meaningful comparison of the safety record of seemingly similar highway locations.
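The nonlinearity described above is why a naive rate (crashes ÷ volume) misleads: if expected crashes grow sublinearly with AADT, busier segments look "safer" under the simple rate even when the model says they are not. An illustrative sketch with an assumed exponent of 0.8 (hypothetical, not estimated from the HSIS data):

```python
def naive_rate(expected_crashes, aadt):
    """Simple crash rate: crash count divided by traffic volume."""
    return expected_crashes / aadt

# Assume E[crashes] = AADT ** 0.8 (a hypothetical sublinear relationship).
low_volume = naive_rate(1_000 ** 0.8, 1_000)
high_volume = naive_rate(10_000 ** 0.8, 10_000)
# The busier segment shows a smaller naive rate despite identical "safety"
# under the assumed model -- the motivation for crash-type-specific exposure.
```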
Anisotropic inflation with a non-minimally coupled electromagnetic field to gravity
NASA Astrophysics Data System (ADS)
Adak, Muzaffer; Akarsu, Özgür; Dereli, Tekin; Sert, Özcan
2017-11-01
We consider the non-minimal model of gravity of the Y(R)F² form. We investigate a particular case of the model in which the higher-order derivatives are eliminated but the scalar curvature R is kept dynamical via the constraint Y_R F_mn F^mn = -2/κ². The effective fluid obtained can be represented by an interacting electromagnetic field and vacuum depending on Y(R); namely, the energy density of the vacuum tracks R, while the energy density of the conventional electromagnetic field is dynamically scaled by the factor Y(R)/2. We give exact solutions for anisotropic inflation by assuming that the volume scale factor of the Universe exhibits power-law expansion. The directional scale factors do not necessarily exhibit power-law expansion, which would give rise to a constant expansion anisotropy, but expand non-trivially and give rise to a non-monotonically evolving expansion anisotropy that eventually converges to a non-zero constant. Relying on this fact, we discuss the anisotropic e-folding during inflation by considering the observed scale invariance in the CMB and demanding that the Universe undergo the same number of e-folds in all directions. We calculate the residual expansion anisotropy at the end of inflation, though, as a result of the non-monotonic behaviour of the expansion anisotropy, all the axes of the Universe undergo the same number of e-folds by the end of inflation. We also discuss the generation of the modified electromagnetic field during the first few e-folds of inflation and its persistence against the vacuum until the end of inflation.
Spatio-temporal patterns of gun violence in Syracuse, New York 2009-2015.
Larsen, David A; Lane, Sandra; Jennings-Bey, Timothy; Haygood-El, Arnett; Brundage, Kim; Rubinstein, Robert A
2017-01-01
Gun violence in the United States of America is a large public health problem that disproportionately affects urban areas. The epidemiology of gun violence reflects various aspects of an infectious disease including spatial and temporal clustering. We examined the spatial and temporal trends of gun violence in Syracuse, New York, a city of 145,000. We used a spatial scan statistic to reveal spatio-temporal clusters of gunshots investigated and corroborated by Syracuse City Police Department for the years 2009-2015. We also examined predictors of areas with increased gun violence using a multi-level zero-inflated Poisson regression with data from the 2010 census. Two space-time clusters of gun violence were revealed in the city. Higher rates of segregation, poverty and the summer months were all associated with increased risk of gun violence. Previous gunshots in the area were associated with a 26.8% increase in the risk of gun violence. Gun violence in Syracuse, NY is both spatially and temporally stable, with some neighborhoods of the city greatly afflicted.
Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine
2017-09-01
According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
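The ratio-reweighting idea described above can be illustrated with a small simulation. The gene-flow auxiliary variable, the field layout and all numbers below are hypothetical stand-ins for the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical field of N grain-sampling locations. `aux` plays the role of the
# auxiliary variable (a gene-flow model's predicted cross-pollination rate);
# the true local transgene presence rate `p` is correlated with it but noisy.
N = 10_000
aux = rng.gamma(shape=0.5, scale=0.01, size=N)             # model prediction per location
p = np.clip(aux * rng.lognormal(0.0, 0.3, size=N), 0, 1)   # true local rate
true_rate = p.mean()

def srs_estimate(sample_idx):
    """Plain random-sampling estimate: sample mean of observed rates."""
    return p[sample_idx].mean()

def ratio_estimate(sample_idx):
    """Ratio reweighting: scale the sampled ratio y/x by the known population mean of aux."""
    s = sample_idx
    return p[s].sum() / aux[s].sum() * aux.mean()

# Compare the two estimators over repeated samples of n = 100 locations.
n, reps = 100, 2000
srs = np.array([srs_estimate(rng.choice(N, n, replace=False)) for _ in range(reps)])
rat = np.array([ratio_estimate(rng.choice(N, n, replace=False)) for _ in range(reps)])
```

Because the auxiliary variable tracks the true local rate, dividing out its sampled total and rescaling by its known population mean removes much of the between-location variability, which is why the reweighted estimator can achieve the same accuracy from substantially smaller samples.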
Racial differences in parenting style typologies and heavy episodic drinking trajectories.
Clark, Trenette T; Yang, Chongming; McClernon, F Joseph; Fuemmeler, Bernard F
2015-07-01
This study examines racial differences between Whites and Blacks in the association of parenting style typologies with changes in heavy episodic drinking from adolescence to young adulthood. The analytic sample consists of 9,942 adolescents drawn from the National Longitudinal Study of Adolescent Health, which followed respondents from ages 12 to 31 years. Confirmatory factor analysis and factor mixture modeling are used to classify parenting style typologies based on measures of parental acceptance and control. Heavy Episodic Drinking (HED) trajectories are evaluated using a zero-inflated Poisson multigroup latent growth curve modeling approach. The mixture model identified 4 heterogeneous groups that differed based on the 2 latent variables (parental acceptance and control): balanced (65.8% of the sample), authoritarian (12.2%), permissive (19.4%), and uninvolved or neglectful (2.7%). Regardless of race, we found that at age 12 years, children of authoritarian parents have a higher probability of not engaging in HED than children of parents with balanced, permissive, or neglectful parenting styles. However, among Black youth who reported HED at age 12, authoritarian parenting was associated with greater level of HED at age 12 but a less steep increase in level of HED as age increased yearly as compared with balanced parenting. For White adolescents, uninvolved, permissive, and authoritarian parenting were not associated with a greater level of HED as age increased yearly as compared with adolescents exposed to balanced parenting. The influence of parenting styles on HED during adolescence persists into young adulthood and differs by race for youth engaging in HED. (c) 2015 APA, all rights reserved.
Mental illness in bariatric surgery: A cohort study from the PORTAL network.
Fisher, David; Coleman, Karen J; Arterburn, David E; Fischer, Heidi; Yamamoto, Ayae; Young, Deborah R; Sherwood, Nancy E; Trinacty, Connie Mah; Lewis, Kristina H
2017-05-01
To compare bariatric surgery outcomes according to preoperative mental illness category. Electronic health record data from several US healthcare systems were used to compare outcomes of four groups of patients who underwent bariatric surgery in 2012 and 2013. These included the following: people with (1) no mental illness, (2) mild-to-moderate depression or anxiety, (3) severe depression or anxiety, and (4) bipolar, psychosis, or schizophrenia spectrum disorders. Groups were compared on weight loss trajectory using generalized estimating equations with B-spline bases, and on all-cause emergency department visits and hospital days using zero-inflated Poisson and negative binomial regression up to 2 years after surgery. Models were adjusted for demographic and health covariates, including baseline healthcare use. Among 8,192 patients, mean age was 44.3 (10.7) years, 79.9% were female, and 45.6% were white. Fifty-seven percent had preoperative mental illness. There were no differences between groups for weight loss, but patients with preoperative severe depression or anxiety or bipolar, psychosis, or schizophrenia spectrum disorders had higher follow-up levels of emergency department visits and hospital days compared to those with no mental illness. In this multicenter study, mental illness was not associated with differential weight loss after bariatric surgery, but additional research could focus on reducing acute care use among these patients. © 2017 The Obesity Society.
Lee, Chien-Ti; Clark, Trenette T; Kollins, Scott H; McClernon, F Joseph; Fuemmeler, Bernard F
2015-03-01
This study examined the influence of Attention Deficit Hyperactivity Disorder (ADHD) symptom severity and directionality (hyperactive-impulsive symptoms relative to inattentive symptoms) on trajectories of the probability of current (past month) smoking and the number of cigarettes smoked from age 13 to 32. Racial and gender differences in the relationship between ADHD symptoms and smoking trajectories were also assessed. A subsample of 9719 youth (54.5% female) was drawn from the National Longitudinal Study of Adolescent to Adult Health (Add Health). A cohort sequential design and zero-inflated Poisson (ZIP) latent growth modeling were used to estimate the relationship between ADHD directionality and severity and smoking development. The effect of ADHD severity on the likelihood of ever having smoked cigarettes at the intercept (age 13) was greater for White males than for other groups. ADHD severity also had a stronger influence on the initial number of cigarettes smoked at age 13 among Hispanic participants. The relationship between ADHD directionality (hyperactive-impulsive symptoms relative to inattentive symptoms) and a higher number of cigarettes smoked at the intercept was stronger among Hispanic males than among others. Gender differences manifested only among Whites. ADHD severity and directionality had unique effects on smoking trajectories. Our results also highlight that the risk of ADHD symptoms may differ by race and gender. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Farstad, Sarah M; von Ranson, Kristin M; Hodgins, David C; El-Guebaly, Nady; Casey, David M; Schopflocher, Don P
2015-09-01
This study investigated the degree to which facets of impulsiveness predicted future binge eating and problem gambling, 2 theorized forms of behavioral addiction. Participants were 596 women and 406 men from 4 age cohorts randomly recruited from a Canadian province. Participants completed self-report measures of 3 facets of impulsiveness (negative urgency, sensation seeking, lack of persistence), binge-eating frequency, and problem-gambling symptoms. Impulsiveness was assessed at baseline, and binge eating and problem gambling were assessed again 3 years later. Weighted data were analyzed using zero-inflated negative binomial and Poisson regression models. We found evidence of transdiagnostic and disorder-specific predictors of binge eating and problem gambling. Negative urgency emerged as a common predictor of binge eating and problem gambling among women and men. There were disorder-specific personality traits identified among men only: high lack-of-persistence scores predicted binge eating and high sensation-seeking scores predicted problem gambling. Among women, younger age predicted binge eating and older age predicted problem gambling. Thus, there are gender differences in the facets of impulsiveness that longitudinally predict binge eating and problem gambling, suggesting that treatments for these behaviors should consider gender-specific personality and demographic traits in addition to the common personality trait of negative urgency. (c) 2015 APA, all rights reserved.
Association of Pediatric Abusive Head Trauma Rates With Macroeconomic Indicators.
Wood, Joanne N; French, Benjamin; Fromkin, Janet; Fakeye, Oludolapo; Scribano, Philip V; Letson, Megan M; Makoroff, Kathi L; Feldman, Kenneth W; Fabio, Anthony; Berger, Rachel
2016-04-01
We aimed to examine abusive head trauma (AHT) incidence before, during and after the recession of 2007-2009 in 3 US regions and assess the association of economic measures with AHT incidence. Data for children <5 years old diagnosed with AHT between January 1, 2004, and December 31, 2012, in 3 regions were linked to county-level economic data using an ecologic time series analysis. Associations between county-level AHT rates and recession period as well as employment growth, mortgage delinquency, and foreclosure rates were examined using zero-inflated Poisson regression models. During the 9-year period, 712 children were diagnosed with AHT. The mean rate of AHT per 100,000 child-years increased from 9.8 before the recession to 15.6 during the recession before decreasing to 12.8 after the recession. The AHT rates after the recession were higher than the rates before the recession (incidence rate ratio 1.31, P = .004) but lower than rates during the recession (incidence rate ratio 0.78, P = .005). There was no association between the AHT rate and employment growth, mortgage delinquency rates, or foreclosure rates. In the period after the recession, AHT rate was lower than during the recession period yet higher than the level before the recession, suggesting a lingering effect of the economic stress of the recession on maltreatment risk. Copyright © 2016 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
Non-linear properties of metallic cellular materials with a negative Poisson's ratio
NASA Technical Reports Server (NTRS)
Choi, J. B.; Lakes, R. S.
1992-01-01
Negative Poisson's ratio copper foam was prepared and characterized experimentally. The transformation into re-entrant foam was accomplished by applying sequential permanent compressions above the yield point to achieve a triaxial compression. The Poisson's ratio of the re-entrant foam depended on strain and attained a relative minimum at strains near zero. Poisson's ratio as small as -0.8 was achieved. The strain dependence of properties occurred over a narrower range of strain than in the polymer foams studied earlier. Annealing of the foam resulted in a slightly greater magnitude of negative Poisson's ratio and greater toughness at the expense of a decrease in the Young's modulus.
Xia, Yinglin; Morrison-Beedy, Dianne; Ma, Jingming; Feng, Changyong; Cross, Wendi; Tu, Xin
2012-01-01
Modeling count data from sexual behavioral outcomes involves many challenges, especially when the data exhibit a preponderance of zeros and overdispersion. In particular, the popular Poisson log-linear model is not appropriate for modeling such outcomes. Although alternatives exist for addressing both issues, they are not widely and effectively used in sexual health research, especially in HIV prevention intervention and related studies. In this paper, we discuss how to analyze count outcomes with an excess of zeros and overdispersion and introduce appropriate model-fit indices for comparing the performance of competing models, using data from a real study on HIV prevention intervention. This in-depth look at common issues arising from studies involving behavioral outcomes will promote sound statistical analyses and facilitate research in this and other related areas. PMID:22536496
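A minimal sketch of the kind of model comparison the paper advocates: fit Poisson, negative binomial and zero-inflated Poisson models to the same zero-heavy, overdispersed counts and compare them with a fit index such as AIC. The data here are synthetic, and a real analysis would include covariates:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, nbinom

rng = np.random.default_rng(2)
# Synthetic counts with both excess zeros and overdispersion:
# 40% structural zeros mixed with Poisson(3) draws.
y = np.where(rng.random(2000) < 0.4, 0, rng.poisson(3.0, 2000))

# 1) Plain Poisson: MLE of the rate is the sample mean.
ll_pois = poisson.logpmf(y, y.mean()).sum()
aic_pois = 2 * 1 - 2 * ll_pois

# 2) Negative binomial: handles overdispersion but not a structural zero spike.
nb_fit = minimize(lambda t: -nbinom.logpmf(y, t[0], t[1]).sum(),
                  x0=[1.5, 0.45], bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
aic_nb = 2 * 2 + 2 * nb_fit.fun

# 3) Zero-inflated Poisson: mixes a point mass at zero with a Poisson component.
def zip_nll(t):
    pi, lam = t
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    return -np.sum(np.where(y == 0, ll_zero,
                            np.log(1 - pi) + poisson.logpmf(y, lam)))

zip_fit = minimize(zip_nll, x0=[0.3, 1.0],
                   bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
aic_zip = 2 * 2 + 2 * zip_fit.fun
```

On data with a genuine structural-zero component, the ZIP model attains the lowest AIC here; the negative binomial improves on the Poisson by capturing overdispersion but cannot reproduce the zero spike.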
The zero inflation of standing dead tree carbon stocks
Christopher W. Woodall; David W. MacFarlane
2012-01-01
Given the importance of standing dead trees in numerous forest ecosystem attributes/processes such as carbon (C) stocks, the USDA Forest Service's Forest Inventory and Analysis (FIA) program began consistent nationwide sampling of standing dead trees in 1999. Modeled estimates of standing dead tree C stocks are currently used as the official C stock estimates for the...
Effectiveness on Early Childhood Caries of an Oral Health Promotion Program for Medical Providers
Widmer-Racich, Katina; Sevick, Carter; Starzyk, Erin J.; Mauritson, Katya; Hambidge, Simon J.
2017-01-01
Objectives. To assess the impact on early childhood caries (ECC) of an oral health promotion (OHP) intervention for medical providers. Methods. We implemented a quasi-experimental OHP intervention in 8 federally qualified health centers that trained medical providers in ECC risk assessment, oral examination and instruction, dental referral, and fluoride varnish applications (FVAs). We measured OHP delivery by FVA count at medical visits. We measured the intervention's impact on ECC in 3 unique cohorts of children aged 3 to 4 years in 2009 (preintervention; n = 202), 2011 (midintervention; n = 420), and 2015 (≥ 4 FVAs; n = 153). We compared numbers of decayed, missing, and filled tooth surfaces using adjusted zero-inflated negative binomial models. Results. Across the 3 cohorts, the mean (range) FVA count was 0.0 (0), 1.1 (0–7), and 4.5 (4–7) in 2009, 2011, and 2015, respectively. In the adjusted zero-inflated negative binomial analyses, children in the 2015 cohort had significantly fewer decayed, missing, and filled tooth surfaces than did children in previous cohorts. Conclusions. An OHP intervention targeting medical providers reduced ECC when children received 4 or more FVAs at medical visits by age 3 years. PMID:28661802
Two-Part and Related Regression Models for Longitudinal Data
Farewell, V.T.; Long, D.L.; Tom, B.D.M.; Yiu, S.; Su, L.
2017-01-01
Statistical models that involve a two-part mixture distribution are applicable in a variety of situations. Frequently, the two parts are a model for the binary response variable and a model for the outcome variable that is conditioned on the binary response. Two common examples are zero-inflated or hurdle models for count data and two-part models for semicontinuous data. Recently, there has been particular interest in the use of these models for the analysis of repeated measures of an outcome variable over time. The aim of this review is to consider motivations for the use of such models in this context and to highlight the central issues that arise with their use. We examine two-part models for semicontinuous and zero-heavy count data, and we also consider models for count data with a two-part random effects distribution. PMID:28890906
General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.
Marmarelis, V Z; Berger, T W
2005-07-01
This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data-records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their P-W counterparts that depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.
31 CFR 356.20 - How does the Treasury determine auction awards?
Code of Federal Regulations, 2010 CFR
2010-07-01
... and bond issues. We set the interest rate at a 1/8 of one percent increment. If a Treasury inflation-protected securities auction results in a negative or zero yield, the interest rate will be set at zero, and...
Buckland, Stephen T.; King, Ruth; Toms, Mike P.
2015-01-01
The development of methods for dealing with continuous data with a spike at zero has lagged behind those for overdispersed or zero‐inflated count data. We consider longitudinal ecological data corresponding to an annual average of 26 weekly maximum counts of birds, and are hence effectively continuous, bounded below by zero but also with a discrete mass at zero. We develop a Bayesian hierarchical Tweedie regression model that can directly accommodate the excess number of zeros common to this type of data, whilst accounting for both spatial and temporal correlation. Implementation of the model is conducted in a Markov chain Monte Carlo (MCMC) framework, using reversible jump MCMC to explore uncertainty across both parameter and model spaces. This regression modelling framework is very flexible and removes the need to make strong assumptions about mean‐variance relationships a priori. It can also directly account for the spike at zero, whilst being easily applicable to other types of data and other model formulations. Whilst a correlative study such as this cannot prove causation, our results suggest that an increase in an avian predator may have led to an overall decrease in the number of one of its prey species visiting garden feeding stations in the United Kingdom. This may reflect a change in behaviour of house sparrows to avoid feeding stations frequented by sparrowhawks, or a reduction in house sparrow population size as a result of sparrowhawk increase. PMID:25737026
Computation of solar perturbations with Poisson series
NASA Technical Reports Server (NTRS)
Broucke, R.
1974-01-01
Description of a project for computing first-order perturbations of natural or artificial satellites by integrating the equations of motion on a computer with automatic Poisson series expansions. A basic feature of the method of solution is that the classical variation-of-parameters formulation is used rather than rectangular coordinates. However, the variation-of-parameters formulation uses the three rectangular components of the disturbing force rather than the classical disturbing function, so that there is no problem in expanding the disturbing function in series. Another characteristic of the variation-of-parameters formulation employed is that six rather unusual variables are used in order to avoid singularities at zero eccentricity and zero (or 90 deg) inclination. The integration process starts by assuming that all the orbit elements present on the right-hand sides of the equations of motion are constants. These right-hand sides are then simple Poisson series which can be obtained with the use of the Bessel expansions of the two-body problem in conjunction with certain iteration methods. These Poisson series can then be integrated term by term, and a first-order solution is obtained.
Inflation from Minkowski space
Pirtskhalava, David; Santoni, Luca; Trincherini, Enrico; ...
2014-12-23
Here, we propose a class of scalar models that, once coupled to gravity, lead to cosmologies that smoothly and stably connect an inflationary quasi-de Sitter universe to a low, or even zero-curvature, maximally symmetric spacetime in the asymptotic past, strongly violating the null energy condition (Ḣ ≫ H²) at intermediate times. The models are deformations of the conformal galileon lagrangian and are therefore based on symmetries, both exact and approximate, that ensure the quantum robustness of the whole picture. The resulting cosmological backgrounds can be viewed as regularized extensions of the galilean genesis scenario, or, equivalently, as ‘early-time-complete’ realizations of inflation. The late-time inflationary dynamics possesses phenomenologically interesting properties: it can produce a large tensor-to-scalar ratio within the regime of validity of the effective field theory and can lead to sizeable equilateral nongaussianities.
The effect of starspots on the radii of low-mass pre-main-sequence stars
NASA Astrophysics Data System (ADS)
Jackson, R. J.; Jeffries, R. D.
2014-07-01
A polytropic model is used to investigate the effects of dark photospheric spots on the evolution and radii of magnetically active, low-mass (M < 0.5 M⊙), pre-main-sequence (PMS) stars. Spots slow the contraction along Hayashi tracks and inflate the radii of PMS stars by a factor of (1 - β)^(-N) compared to unspotted stars of the same luminosity, where β is the equivalent covering fraction of dark starspots and N ≃ 0.45 ± 0.05. This is a much stronger inflation than predicted by Spruit & Weiss for main-sequence stars with the same β, where N ~ 0.2-0.3. These models have been compared to radii determined for very magnetically active K- and M-dwarfs in the young Pleiades and NGC 2516 clusters, and the radii of tidally locked, low-mass eclipsing binary components. The binary components and zero-age main-sequence K-dwarfs have radii inflated by ~10 per cent compared to an empirical radius-luminosity relation that is defined by magnetically inactive field dwarfs with interferometrically measured radii; low-mass M-type PMS stars that are still on their Hayashi tracks are inflated by up to ~40 per cent. If this were attributable to starspots alone, we estimate that an effective spot coverage of 0.35 < β < 0.51 is required. Alternatively, global inhibition of convective flux transport by dynamo-generated fields may play a role. However, we find greater consistency with the starspot models when comparing the loci of active young stars and inactive field stars in colour-magnitude diagrams, particularly for the highly inflated PMS stars, where the large, uniform temperature reduction required in globally inhibited convection models would cause the stars to be much redder than observed.
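The quoted inflation percentages follow directly from the scaling relation in the abstract; a short check, taking N = 0.45 from the abstract:

```python
import numpy as np

# Radius inflation factor for a spotted pre-main-sequence star relative to an
# unspotted star of the same luminosity: R_spot / R = (1 - beta)**(-N),
# with spot covering fraction beta and exponent N ~= 0.45 (from the abstract).
def inflation_factor(beta, N=0.45):
    return (1.0 - beta) ** (-N)

# The up-to-~40 per cent PMS inflation quoted in the abstract corresponds to
# the upper end of the inferred range 0.35 < beta < 0.51:
low = inflation_factor(0.35)   # ~1.21, i.e. ~21 per cent inflation
high = inflation_factor(0.51)  # ~1.38, i.e. ~38 per cent inflation
```

So the inferred spot-coverage range maps onto roughly 20-40 per cent radius inflation, consistent with the values quoted for the Hayashi-track stars.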
Martin, Summer L; Stohs, Stephen M; Moore, Jeffrey E
2015-03-01
Fisheries bycatch is a global threat to marine megafauna. Environmental laws require bycatch assessment for protected species, but this is difficult when bycatch is rare. Low bycatch rates, combined with low observer coverage, may lead to biased, imprecise estimates when using standard ratio estimators. Bayesian model-based approaches incorporate uncertainty, produce less volatile estimates, and enable probabilistic evaluation of estimates relative to management thresholds. Here, we demonstrate a pragmatic decision-making process that uses Bayesian model-based inferences to estimate the probability of exceeding management thresholds for bycatch in fisheries with < 100% observer coverage. Using the California drift gillnet fishery as a case study, we (1) model rates of rare-event bycatch and mortality using Bayesian Markov chain Monte Carlo estimation methods and 20 years of observer data; (2) predict unobserved counts of bycatch and mortality; (3) infer expected annual mortality; (4) determine probabilities of mortality exceeding regulatory thresholds; and (5) classify the fishery as having low, medium, or high bycatch impact using those probabilities. We focused on leatherback sea turtles (Dermochelys coriacea) and humpback whales (Megaptera novaeangliae). Candidate models included Poisson or zero-inflated Poisson likelihood, fishing effort, and a bycatch rate that varied with area, time, or regulatory regime. Regulatory regime had the strongest effect on leatherback bycatch, with the highest levels occurring prior to a regulatory change. Area had the strongest effect on humpback bycatch. Cumulative bycatch estimates for the 20-year period were 104-242 leatherbacks (52-153 deaths) and 6-50 humpbacks (0-21 deaths). The probability of exceeding a regulatory threshold under the U.S. Marine Mammal Protection Act (Potential Biological Removal, PBR) of 0.113 humpback deaths was 0.58, warranting a "medium bycatch impact" classification of the fishery. 
No PBR thresholds exist for leatherbacks, but the probability of exceeding an anticipated level of two deaths per year, stated as part of a U.S. Endangered Species Act assessment process, was 0.0007. The approach demonstrated here would allow managers to objectively and probabilistically classify fisheries with respect to bycatch impacts on species that have population-relevant mortality reference points, and declare with a stipulated level of certainty that bycatch did or did not exceed estimated upper bounds.
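Steps (3)-(5) of the decision process above can be illustrated with a deliberately simplified conjugate model in place of the paper's MCMC-fitted ZIP models: a gamma prior on a Poisson bycatch-mortality rate gives a closed-form posterior, from which the probability of exceeding a PBR-style threshold follows. All counts, effort figures and the threshold below are hypothetical:

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(4)

# Observed deaths are Poisson with rate (deaths per thousand sets) x effort;
# a Gamma(a0, b0) prior on the rate gives a Gamma(a0 + deaths, b0 + effort)
# posterior. All numbers are hypothetical, not the fishery's actual data.
deaths_observed = 1          # deaths seen in observed sets
effort_observed = 4.0        # observed effort (thousands of sets)
effort_total = 10.0          # total annual fleet effort (thousands of sets)
a0, b0 = 0.5, 1.0            # weakly informative prior

# Posterior draws of the rate, scaled up to expected annual mortality.
post_rate = rng.gamma(a0 + deaths_observed, 1.0 / (b0 + effort_observed), 200_000)
annual_deaths = post_rate * effort_total

pbr = 2.0                                    # hypothetical management threshold
p_exceed = np.mean(annual_deaths > pbr)      # Monte Carlo estimate

# The same probability in closed form, as a consistency check.
p_exceed_exact = gamma.sf(pbr, a0 + deaths_observed,
                          scale=effort_total / (b0 + effort_observed))
```

A fishery would then be classified low, medium or high bycatch impact according to where p_exceed falls relative to chosen probability cut-offs, as in the paper's humpback example.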
NASA Astrophysics Data System (ADS)
Che Awang, Aznida; Azah Samat, Nor
2017-09-01
Leptospirosis is a disease caused by infection with pathogenic species of the genus Leptospira. Humans can be infected through direct or indirect exposure to the urine of infected animals. Urine excreted by animal hosts carrying pathogenic Leptospira contaminates soil and water, so people can become infected when contaminated soil or water reaches a cut or open wound on the skin. The bacteria can also enter the body through mucous membranes such as the nose, eyes and mouth, for example when contaminated water or urine splashes into the eyes or contaminated water or food is swallowed. Currently there is no vaccine available for the prevention of leptospirosis, but the disease can be treated if it is diagnosed early enough to avoid complications. Disease risk mapping is important for disease control and prevention, and a good choice of statistical model produces a good disease risk map. Therefore, the aim of this study is to estimate the relative risk of leptospirosis, initially using the most common statistic in disease mapping, the Standardized Morbidity Ratio (SMR), and then a Poisson-gamma model. This paper begins with a review of the SMR method and the Poisson-gamma model, which we then apply to leptospirosis data from Kelantan, Malaysia. The results are displayed and compared using graphs, tables and maps. They show that the Poisson-gamma model produces better relative risk estimates than the SMR method, because it overcomes the drawback of the SMR that the estimated relative risk becomes zero whenever a region has no observed leptospirosis cases. However, the Poisson-gamma model has its own limitations: covariate adjustment is difficult, and there is no possibility of allowing for spatial correlation between risks in neighbouring areas.
These limitations have motivated many researchers to introduce alternative methods for estimating the risk.
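The contrast between the two estimators can be sketched numerically. This is a minimal illustration, not the authors' code: it assumes a Gamma(alpha, beta) prior on the relative risk, whose posterior mean is (alpha + observed)/(beta + expected), and the regional counts below are hypothetical.

```python
def smr(observed, expected):
    """Standardized Morbidity Ratio per region: observed / expected cases."""
    return [o / e for o, e in zip(observed, expected)]

def poisson_gamma_rr(observed, expected, alpha=1.0, beta=1.0):
    """Posterior mean relative risk under a Gamma(alpha, beta) prior.

    With O_i ~ Poisson(E_i * theta_i) and theta_i ~ Gamma(alpha, beta),
    the posterior mean of theta_i is (alpha + O_i) / (beta + E_i).
    """
    return [(alpha + o) / (beta + e) for o, e in zip(observed, expected)]

# Hypothetical counts for three regions; region 0 has no observed cases.
obs, exp_ = [0, 3, 7], [2.0, 2.5, 4.0]
print(smr(obs, exp_))               # first estimate collapses to 0.0
print(poisson_gamma_rr(obs, exp_))  # first estimate stays positive
```

The shrinkage toward the prior mean is exactly what keeps the zero-count region's estimate away from zero, the SMR drawback the abstract describes.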
Trajectories of suicidal ideation in depressed older adults undergoing antidepressant treatment.
Kasckow, John; Youk, Ada; Anderson, Stewart J; Dew, Mary Amanda; Butters, Meryl A; Marron, Megan M; Begley, Amy E; Szanto, Katalin; Dombrovski, Alexander Y; Mulsant, Benoit H; Lenze, Eric J; Reynolds, Charles F
2016-02-01
Suicide is a public health concern in older adults. Recent cross-sectional studies suggest that impairments in executive functioning, memory and attention are associated with suicidal ideation in older adults. It is unknown whether these neuropsychological features predict persistent suicidal ideation. We analyzed data from 468 individuals ≥ age 60 with major depression who received venlafaxine XR monotherapy for up to 16 weeks. We used latent class growth modeling to classify groups of individuals based on trajectories of suicidal ideation. We also examined whether cognitive dysfunction predicted suicidal ideation while controlling for time-dependent variables including depression severity, age, and education. The optimal model, using a zero-inflated Poisson link, classified individuals into four groups, each with a distinct temporal trajectory of suicidal ideation: those with 'minimal suicidal ideation' across time points; those with 'low suicidal ideation'; those with 'rapidly decreasing suicidal ideation'; and those with 'high and persistent suicidal ideation'. Participants in the 'high and persistent suicidal ideation' group had worse scores relative to those in the 'rapidly decreasing suicidal ideation' group on the Color-Word 'inhibition/switching' subtest from the Delis-Kaplan Executive Function Scale, worse attention index scores on the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) and worse total RBANS index scores. These findings suggest that individuals with a poorer ability to switch between inhibitory and non-inhibitory responses, as well as worse attention and worse overall cognitive status, are more likely to have persistently higher levels of suicidal ideation. Clinical trial registration: NCT00892047. Published by Elsevier Ltd.
Freisthler, Bridget; Gruenewald, Paul J.
2012-01-01
Background Despite well-known associations between heavy drinking and child physical abuse, little is known about specific risks related to drinking different amounts of alcohol in different drinking venues. This study uses a context-specific dose-response model to examine how drinking in various venues (e.g., at bars or parties) is related to physically abusive parenting practices while controlling for individual and psychosocial characteristics. Methods Data were collected via a telephone survey of parents in 50 cities in California, resulting in 2,163 respondents who reported drinking in the past year. Child physical abuse and corporal punishment were measured using the Conflict Tactics Scale, Parent-Child version. Drinking behaviors were measured using continued drinking measures. Data were analyzed using zero-inflated Poisson models. Results Drinking at homes, parties or bars more frequently was related to greater frequencies of physically abusive parenting practices. The use of greater amounts of alcohol in association with drinking at bars appeared to increase risks for corporal punishment, a dose-response effect. Dose-response relationships were not found for drinking at homes or parties, or for drinking at bars, for physical abuse, nor for drinking at home and parties for corporal punishment. Conclusion Frequencies of using drinking venues, particularly bars and home or parties, are associated with greater use of abusive parenting practices. These findings suggest that a parent’s routine drinking activities place children at different risks for being physically abused. They also suggest that interventions that take into account parents’ alcohol use at drinking venues are an important avenue for secondary prevention efforts. PMID:23316780
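Zero-inflated Poisson models such as the one used here are typically motivated by counts with more zeros than a Poisson fit implies. A quick moment-style check of that excess can be sketched as follows; this is an illustrative diagnostic, not the study's analysis, and the sample below is made up.

```python
import math

def zero_excess(counts):
    """Observed zero fraction minus the Poisson-implied fraction exp(-mean).

    A clearly positive value suggests zero inflation relative to a
    Poisson distribution with the same mean.
    """
    n = len(counts)
    mean = sum(counts) / n
    return counts.count(0) / n - math.exp(-mean)

# Made-up abuse-incident counts: half the sample reports no incidents.
sample = [0] * 50 + [1] * 25 + [2] * 25
print(round(zero_excess(sample), 3))  # → 0.028
```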
Testing antismoking messages for Air Force trainees.
Popova, Lucy; Linde, Brittany D; Bursac, Zoran; Talcott, G Wayne; Modayil, Mary V; Little, Melissa A; Ling, Pamela M; Glantz, Stanton A; Klesges, Robert C
2016-11-01
Young adults in the military are aggressively targeted by tobacco companies and are at high risk of tobacco use. Existing antismoking advertisements developed for the general population might be effective in educating young adults in the military. This study evaluated the effects of different themes of existing antismoking advertisements on perceived harm and intentions to use cigarettes and other tobacco products among Air Force trainees. In a pretest-post-test experiment, 782 Airmen were randomised to view antismoking advertisements in 1 of 6 conditions: anti-industry, health effects+anti-industry, sexual health, secondhand smoke, environment+anti-industry or control. We assessed the effect of different conditions on changes in perceived harm and intentions to use cigarettes, electronic cigarettes, smokeless tobacco, hookah and cigarillos from pretest to post-test with multivariable linear regression models (perceived harm) and zero-inflated Poisson regression model (intentions). Antismoking advertisements increased perceived harm of various tobacco products and reduced intentions to use. Advertisements featuring negative effects of tobacco on health and sexual performance coupled with revealing tobacco industry manipulations had the most consistent pattern of effects on perceived harm and intentions. Antismoking advertisements produced for the general public might also be effective with a young adult military population and could have spillover effects on perceptions of harm and intentions to use other tobacco products besides cigarettes. Existing antismoking advertising may be a cost-effective tool to educate young adults in the military. Published by the BMJ Publishing Group Limited.
Cranford, James A.; Zucker, Robert A.; Jester, Jennifer M.; Puttler, Leon I.; Fitzgerald, Hiram E.
2010-01-01
Current models of adolescent drinking behavior hypothesize that alcohol expectancies mediate the effects of other proximal and distal risk factors. This longitudinal study tested the hypothesis that the effects of parental alcohol involvement on their children’s drinking behavior in mid-adolescence are mediated by the children’s alcohol expectancies in early adolescence. A sample of 148 initially 9–11 year old boys and their parents from a high-risk population and a contrast group of community families completed measures of drinking behavior and alcohol expectancies over a 6-year interval. We analyzed data from middle childhood (M age = 10.4 years), early adolescence (M age = 13.5 years), and mid-adolescence (M age = 16.5 years). The sample was restricted only to adolescents who had begun to drink by mid-adolescence. Results from zero-inflated Poisson regression analyses showed that 1) maternal drinking during their children’s middle childhood predicted number of drinking days in middle adolescence; 2) negative and positive alcohol expectancies in early adolescence predicted odds of any intoxication in middle adolescence; and 3) paternal alcoholism during their children’s middle childhood and adolescents’ alcohol expectancies in early adolescence predicted frequency of intoxication in middle adolescence. Contrary to predictions, child alcohol expectancies did not mediate the effects of parental alcohol involvement in this high-risk sample. Different aspects of parental alcohol involvement, along with early adolescent alcohol expectancies, independently predicted adolescent drinking behavior in middle adolescence. Alternative pathways for the influence of maternal and paternal alcohol involvement and implications for expectancy models of adolescent drinking behavior were discussed. PMID:20853923
Lee, JuHee; Park, Chang Gi; Choi, Moonki
2016-05-01
This study was conducted to identify risk factors that influence regular exercise among patients with Parkinson's disease in Korea. Parkinson's disease is prevalent in the elderly, and may lead to a sedentary lifestyle. Exercise can enhance physical and psychological health. However, patients with Parkinson's disease are less likely to exercise than are other populations due to physical disability. A secondary data analysis and cross-sectional descriptive study were conducted. A convenience sample of 106 patients with Parkinson's disease was recruited at an outpatient neurology clinic of a tertiary hospital in Korea. Demographic characteristics, disease-related characteristics (including disease duration and motor symptoms), self-efficacy for exercise, balance, and exercise level were investigated. Negative binomial regression and zero-inflated negative binomial regression for exercise count data were utilized to determine factors involved in exercise. The mean age of participants was 65.85 ± 8.77 years, and the mean duration of Parkinson's disease was 7.23 ± 6.02 years. Most participants indicated that they engaged in regular exercise (80.19%). Approximately half of participants exercised at least 5 days per week for 30 min, as recommended (51.9%). Motor symptoms were a significant predictor of exercise in the count model, and self-efficacy for exercise was a significant predictor of exercise in the zero model. Severity of motor symptoms was related to frequency of exercise. Self-efficacy contributed to the probability of exercise. Symptom management and improvement of self-efficacy for exercise are important to encourage regular exercise in patients with Parkinson's disease. Copyright © 2015 Elsevier Inc. All rights reserved.
Bauer, Austin A; Clayton, Murray K; Brunet, Johanne
2017-05-01
The ability to attract pollinators is crucial to plants that rely on insects for pollination. We contrasted the roles of floral display size and flower color in attracting three bee species and determined the relationships between plant attractiveness (number of pollinator visits) and seed set for each bee species. We recorded pollinator visits to plants, measured plant traits, and quantified plant reproductive success. A zero-inflated Poisson regression model indicated plant traits associated with pollinator attraction. It identified traits that increased the number of bee visits and traits that increased the probability of a plant not receiving any visits. Different components of floral display size were examined and two models of flower color contrasted. Relationships between plant attractiveness and seed set were determined using regression analyses. Plants with more racemes received more bee visits from all three bee species. Plants with few racemes were more likely not to receive any bee visits. The role of flower color varied with bee species and was influenced by the choice of the flower color model. Increasing bee visits increased seed set for all three bee species, with the steepest slope for leafcutting bees, followed by bumble bees, and finally honey bees. Floral display size influenced pollinator attraction more consistently than flower color. The same plant traits affected the probability of not being visited and the number of pollinator visits received. The impact of plant attractiveness on female reproductive success varied, together with pollinator effectiveness, by pollinator species. © 2017 Bauer et al. Published by the Botanical Society of America. This work is licensed under a Creative Commons public domain license (CC0 1.0).
Bronner, Anne; Hénaux, Viviane; Vergne, Timothée; Vinard, Jean-Luc; Morignat, Eric; Hendrikx, Pascal; Calavas, Didier; Gay, Emilie
2013-01-01
The mandatory bovine abortion notification system in France aims to detect as soon as possible any resurgence of bovine brucellosis. However, under-reporting seems to be a major limitation of this system. We used a unilist capture-recapture approach to assess the sensitivity, i.e. the proportion of farmers who reported at least one abortion among those who detected such events, and representativeness of the system during 2006-2011. We implemented a zero-inflated Poisson model to estimate the proportion of farmers who detected at least one abortion, and among them, the proportion of farmers not reporting. We also applied a hurdle model to evaluate the effect of factors influencing the notification process. We found that the overall surveillance sensitivity was about 34%, and was higher in beef than dairy cattle farms. The observed increase in the proportion of notifying farmers from 2007 to 2009 resulted from an increase in the surveillance sensitivity in 2007/2008 and an increase in the proportion of farmers who detected at least one abortion in 2008/2009. These patterns suggest a raise in farmers' awareness in 2007/2008 when the Bluetongue Virus (BTV) was detected in France, followed by an increase in the number of abortions in 2008/2009 as BTV spread across the country. Our study indicated a lack of sensitivity of the mandatory bovine abortion notification system, raising concerns about the ability to detect brucellosis outbreaks early. With the increasing need to survey the zoonotic Rift Valley Fever and Q fever diseases that may also cause bovine abortions, our approach is of primary interest for animal health stakeholders to develop information programs to increase abortion notifications. Our framework combining hurdle and ZIP models may also be applied to estimate the completeness of other clinical surveillance systems.
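The zero-inflation mechanism behind such estimates can be sketched with the ZIP probability mass function, in which a zero count arises either structurally (with probability pi, e.g. a farmer who detects abortions but never notifies) or from the Poisson part. This is an illustrative sketch, not the authors' fitted model, and the parameter values are hypothetical.

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson pmf: a structural zero with probability pi,
    otherwise an ordinary Poisson(lam) count."""
    poisson_k = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1 - pi) * poisson_k
    return (1 - pi) * poisson_k

lam, pi = 2.0, 0.3
print(zip_pmf(0, lam, pi))  # well above the plain Poisson zero prob. exp(-2)
print(sum(zip_pmf(k, lam, pi) for k in range(60)))  # pmf sums to 1
```

Fitting pi and lam to the observed notification counts is what lets a unilist capture-recapture analysis apportion the zeros between "nothing detected" and "detected but not reported".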
Oscillatory Reduction in Option Pricing Formula Using Shifted Poisson and Linear Approximation
NASA Astrophysics Data System (ADS)
Nur Rachmawati, Ro'fah; Irene; Budiharto, Widodo
2014-03-01
Options are derivative instruments that can help investors improve their expected return and minimize risk. However, the Black-Scholes formula generally used to determine option prices does not involve a skewness factor, and it is difficult to apply computationally because it produces oscillations when the skewness value is close to zero. In this paper, we construct an option pricing formula that involves skewness by modifying the Black-Scholes formula with a Shifted Poisson model, and we transform it into the form of a Linear Approximation in the complete market to reduce the oscillation. The resulting Linear Approximation formula predicts option prices very accurately and successfully reduces the oscillations in the calculation process.
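For reference, the baseline (skewness-free) Black-Scholes call price that the paper modifies can be sketched as follows; the parameter values in the example are hypothetical.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """European call under Black-Scholes: S spot, K strike, T maturity
    in years, r risk-free rate, sigma volatility."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

print(round(black_scholes_call(100.0, 100.0, 1.0, 0.05, 0.2), 4))  # ≈ 10.4506
```

The paper's contribution replaces the lognormal assumption behind d1 and d2 with a Shifted Poisson model so that skewness enters the price; that modification is not reproduced here.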
Brian S. Cade; Barry R. Noon; Rick D. Scherer; John J. Keane
2017-01-01
Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical...
What Do Test Score Really Mean? A Latent Class Analysis of Danish Test Score Performance
ERIC Educational Resources Information Center
McIntosh, James; Munk, Martin D.
2014-01-01
Latent class Poisson count models are used to analyse a sample of Danish test score results from a cohort of individuals born in 1954-1955, tested in 1968, and followed until 2011. The procedure takes account of unobservable effects as well as excessive zeros in the data. We show that the test scores measure manifest or measured ability as it has…
Klapilová, Kateřina; Cobey, Kelly D; Wells, Timothy; Roberts, S Craig; Weiss, Petr; Havlíček, Jan
2014-01-10
Data from 1155 Czech women (493 using oral contraception, 662 non-users), obtained from the Czech National Survey of Sexual Behavior, were used to investigate evolutionary-based hypotheses concerning the predictive value of current oral contraceptive (OC) use on extra-pair and dyadic (in-pair) sexual behavior of coupled women. Specifically, the aim was to determine whether current OC use was associated with lower extra-pair and higher in-pair sexual interest and behavior, because OC use suppresses cyclical shifts in mating psychology that occur in normally cycling women. Zero-inflated Poisson (ZIP) regression and negative binomial models were used to test associations between OC use and these sexual measures, controlling for other relevant predictors (e.g., age, parity, in-pair sexual satisfaction, relationship length). The overall incidence of having had an extra-pair partner or one-night stand in the previous year was not related to current OC use (the majority of the sample had not). However, among the women who had engaged in extra-pair sexual behavior, OC users had fewer one-night stands, and tended to have fewer partners, than non-users. OC users also had more frequent dyadic intercourse than non-users, potentially indicating higher commitment to their current relationship. These results suggest that suppression of fertility through OC use may alter important aspects of female sexual behavior, with potential implications for relationship functioning and stability.
The Effects of a Park Awareness Campaign on Rural Park Use and Physical Activity.
Banda, Jorge A; Hooker, Steven P; Wilcox, Sara; Colabianchi, Natalie; Kaczynski, Andrew T; Hussey, James
To examine the effects of a park awareness campaign on park use in 6 community parks. One-group pretest-posttest design. Six community parks located in a South Carolina county. Children, adolescents, and adults observed in community parks. A 1-month awareness campaign that culminated in single 1.5-hour events at 6 parks in April 2011 and May 2011. The System for Observing Play and Recreation in Communities was used to objectively measure park use in May 2010 (baseline) and May 2011 (postcampaign). Zero-inflated Poisson models tested whether the number of total park users and the number of park users engaged in sedentary, walking, and vigorous activities differed by observation date. Park use was significantly greater at baseline than postcampaign (97 vs 84 users, respectively; χ² = 4.69, P = .03). There were no significant differences in the number of park users engaged in sedentary (χ² = 2.45, P = .12), walking (χ² = 0.29, P = .59), and vigorous (χ² = 0.20, P = .65) activities between baseline and postcampaign. Although only 97 and 84 people were observed across all parks at baseline and postcampaign, a total of 629 people were observed during the 6 separate 1.5-hour campaign park events. This suggests that there is potential for greater park utilization in these communities, and important questions remain on how to conduct effective awareness campaigns and how to harness interest in park events for the purpose of contributing to future community-wide physical activity and health promotion efforts.
Family composition and children's dental health behavior: evidence from Germany.
Listl, Stefan
2011-01-01
To assess whether children's dental health behavior differs between family compositions of either natural parents or birth mothers together with stepfathers. We use data from the German Health Interview and Examination Survey Children and Adolescents (KiGGS) public use file. This is the first nationally representative sample on child health in Germany and particularly contains variables for dental attendance, tooth care, and eating behavior of 13,904 children below 14 years of age. A series of zero-inflated Poisson, ordinary least squares, binary, and ordered logistic regression models was set up in order to identify whether family composition is a significant explanatory variable for children's dental health behavior. Family composition turned out as a significant parameter for some aspects of children's dental health behavior. Specifically, children who grow up in families with a birth mother and a stepfather have only half the probability to access dental services but, once seeking treatment, the number of visits is significantly higher in comparison with children raised by their natural parents. Moreover, children growing up in such a patchwork family setting consume a higher amount of sugary foods and drinks. This appears mainly attributable to differential consumption habits for juices, cookies, and chocolate. Children who grow up in settings other than the nuclear family may develop different dental health behaviors than children who grow up with both natural parents, albeit more research is needed to identify the extent to which such behavioral changes lead to variations in caries occurrence.
Lau, Erica Y; Lau, Patrick W C; Cai, Bo; Archer, Edward
2015-01-01
This study examined the effects of text message content (generic vs. culturally tailored) on the login rate of an Internet physical activity program in Hong Kong Chinese adolescent school children. A convenience sample of 252 Hong Kong secondary school adolescents (51% female, 49% male; M age = 13.17 years, SD = 1.28 years) were assigned to one of 3 treatments for 8 weeks. The control group consisted of an Internet physical activity program. The Internet plus generic text message group consisted of the same Internet physical activity program and included daily generic text messages. The Internet plus culturally tailored text message group consisted of the Internet physical activity program and included daily culturally tailored text messages. Zero-inflated Poisson mixed models showed that the overall effect of the treatment group on the login rates varied significantly across individuals. The login rates over time were significantly higher in the Internet plus culturally tailored text message group than the control group (β = 46.06, 95% CI 13.60, 156.02; p = .002) and the Internet plus generic text message group (β = 15.80, 95% CI 4.81, 51.9; p = .021) after adjusting for covariates. These findings suggest that culturally tailored text messages may be more advantageous than generic text messages on improving adolescents' website login rate, but effects varied significantly across individuals. Our results support the inclusion of culturally tailored messaging in future online physical activity interventions.
Galerkin methods for Boltzmann-Poisson transport with reflection conditions on rough boundaries
NASA Astrophysics Data System (ADS)
Morales Escalante, José A.; Gamba, Irene M.
2018-06-01
We consider in this paper the mathematical and numerical modeling of reflective boundary conditions (BC) associated to Boltzmann-Poisson systems, including diffusive reflection in addition to specularity, in the context of electron transport in semiconductor device modeling at the nanoscale, and their implementation in Discontinuous Galerkin (DG) schemes. We study these BC on the physical boundaries of the device and develop a numerical approximation to model an insulating boundary condition, or equivalently, a pointwise zero-flux mathematical condition for the electron transport equation. Such a condition balances the incident and reflective momentum flux at the microscopic level, pointwise at the boundary, in the case of a more general mixed reflection with momentum-dependent specularity probability p(k⃗). We compare the computational prediction of physical observables given by the numerical implementation of these different reflection conditions in our DG scheme for BP models, and observe that the diffusive condition influences the kinetic moments over the whole domain in position space.
Jekauc, Darko; Völkle, Manuel; Wagner, Matthias O.; Mess, Filip; Reiner, Miriam; Renner, Britta
2015-01-01
In the process of physical activity (PA) maintenance, specific predictors are effective that differ from those in other stages of PA development. Recently, Physical Activity Maintenance Theory (PAMT) was developed specifically for the prediction of PA maintenance. The aim of the present study was to evaluate how well the PAMT predicts future behavior and to compare it with the Theory of Planned Behavior (TPB) and Social Cognitive Theory (SCT). Participation rate in a fitness center was observed for 101 college students (53 female) aged between 19 and 32 years (M = 23.6; SD = 2.9) over 20 weeks using a magnetic card. TPB, SCT and PAMT were used to predict the pattern of participation. A latent class zero-inflated Poisson growth curve analysis identified two participation patterns: regular attenders and intermittent exercisers. SCT showed the highest predictive power, followed by PAMT and TPB. Impeding aspects such as life stress and barriers were the strongest predictors, suggesting that overcoming barriers might be an important aspect of working out on a regular basis. Self-efficacy, perceived behavioral control, and social support could also significantly differentiate between the participation patterns. PMID:25717313
Efficiency optimization of a fast Poisson solver in beam dynamics simulation
NASA Astrophysics Data System (ADS)
Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula
2016-01-01
Calculating the solution of Poisson's equation for the space charge force is still the major time cost in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver in beam dynamics simulations: the integrated Green's function method. We introduce three optimization steps for the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of the discrete Fourier transform for the Green's function; and using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy. This provides a fast routine for high-performance calculation of the space charge effect in accelerators.
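The zero-padding step the authors optimize away can be illustrated directly: padding both sequences to the full linear length makes circular convolution reproduce linear convolution. A minimal pure-Python sketch (real FFT-based solvers replace the O(n²) loops below with transforms, and the sequences here are arbitrary stand-ins for a sampled Green's function and charge density):

```python
def linear_convolve(a, b):
    """Direct linear convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def circular_convolve(a, b):
    """Circular convolution of two equal-length sequences."""
    n = len(a)
    return [sum(a[j] * b[(i - j) % n] for j in range(n)) for i in range(n)]

g = [1.0, 2.0, 3.0]        # stand-in for a sampled Green's function
rho = [4.0, 5.0]           # stand-in for a charge density

m = len(g) + len(rho) - 1  # zero-pad both to the full linear length
padded = circular_convolve(g + [0.0] * (m - len(g)),
                           rho + [0.0] * (m - len(rho)))
print(padded == linear_convolve(g, rho))  # True
```

Without the padding, the circular result would wrap around and corrupt the open-boundary (free-space) solution, which is why the explicit zero-padded convolution is the baseline the paper's fast routine improves on.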
Enhanced polarization of the cosmic microwave background radiation from thermal gravitational waves.
Bhattacharya, Kaushik; Mohanty, Subhendra; Nautiyal, Akhilesh
2006-12-22
If inflation was preceded by a radiation era, then at the time of inflation there will exist a decoupled thermal distribution of gravitons. Gravitational waves generated during inflation will be amplified by the process of stimulated emission into the existing thermal distribution of gravitons. Consequently, the usual zero temperature scale invariant tensor spectrum is modified by a temperature dependent factor. This thermal correction factor amplifies the B-mode polarization of the cosmic microwave background radiation by an order of magnitude at large angles, which may now be in the range of observability of the Wilkinson Microwave Anisotropy Probe.
Fast-food exposure around schools in urban Adelaide.
Coffee, Neil T; Kennedy, Hannah P; Niyonsenga, Theo
2016-12-01
To assess whether exposure to fast-food outlets around schools differed depending on socio-economic status (SES). Binary logistic regression was used to investigate the presence and zero-inflated Poisson regression was used for the count (due to the excess of zeroes) of fast food within 1000 m and 1500 m road network buffers around schools. The low and middle SES tertiles were combined due to a lack of significant variation as the 'disadvantaged' group and compared with the high SES tertile as the 'advantaged' group. School SES was expressed using the 2011 Australian Bureau of Statistics, socio-economic indices for areas, index of relative socio-economic disadvantage. Fast-food data included independent takeaway food outlets and major fast-food chains. Metropolitan Adelaide, South Australia. A total of 459 schools were geocoded to the street address and 1000 m and 1500 m road network distance buffers calculated. There was a 1·6 times greater risk of exposure to fast food within 1000 m (OR=1·634; 95 % CI 1·017, 2·625) and a 9·5 times greater risk of exposure to fast food within 1500 m (OR=9·524; 95 % CI 3·497, 25·641) around disadvantaged schools compared with advantaged schools. Disadvantaged schools were exposed to more fast food, with more than twice the number of disadvantaged schools exposed to fast food. The higher exposure to fast food near more disadvantaged schools may reflect lower commercial land cost in low-SES areas, potentially creating more financially desirable investments for fast-food developers.
Football goal distributions and extremal statistics
NASA Astrophysics Data System (ADS)
Greenhough, J.; Birch, P. C.; Chapman, S. C.; Rowlands, G.
2002-12-01
We analyse the distributions of the number of goals scored by home teams, away teams, and the total scored in the match, in domestic football games from 169 countries between 1999 and 2001. The probability density functions (PDFs) of goals scored are too heavy-tailed to be fitted over their entire ranges by Poisson or negative binomial distributions, which would be expected for uncorrelated processes. Log-normal distributions cannot include zero scores, and here we find that the PDFs are consistent with those arising from extremal statistics. In addition, we show that it is sufficient to model English top division and FA Cup matches in the seasons 1970/71-2000/01 with Poisson or negative binomial distributions, as reported in analyses of earlier seasons, and that these are not consistent with extremal statistics.
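A minimal illustration of why overdispersed goal counts pull fits away from the Poisson toward the negative binomial: matching both mean and variance gives the negative binomial a visibly heavier tail. The moments used below are illustrative, not the paper's data:

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def negbin_pmf(k, r, p):
    # Negative binomial with mean r(1-p)/p and variance r(1-p)/p**2,
    # so its variance always exceeds its mean.
    return math.exp(math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
                    + r * math.log(p) + k * math.log(1 - p))

# Illustrative goals-per-match moments with variance > mean (overdispersion).
mean, var = 1.5, 2.4
lam = mean                        # Poisson can only match the mean
p = mean / var                    # moment-matched negative binomial
r = mean ** 2 / (var - mean)

tail_pois = 1 - sum(poisson_pmf(k, lam) for k in range(6))
tail_nb = 1 - sum(negbin_pmf(k, r, p) for k in range(6))
print(f"P(goals >= 6): Poisson {tail_pois:.4f}, negative binomial {tail_nb:.4f}")
```

Even the moment-matched negative binomial tail can fall short of empirical tails, which is the paper's motivation for turning to extremal statistics.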
Using Response Surface Methods to Correlate the Modal Test of an Inflatable Test Article
NASA Technical Reports Server (NTRS)
Gupta, Anju
2013-01-01
This paper presents a practical application of response surface methods (RSM) to correlate a finite element model of a structural modal test. The test article is a quasi-cylindrical inflatable structure that primarily consists of a fabric weave, with an internal bladder and metallic bulkheads on either end. To limit model size, the fabric weave was simplified by representing it with shell elements; the task at hand was then to represent the material behavior of the weave. The success of the model correlation is measured by comparing the four major modal frequencies of the analysis model to the four major modal frequencies of the test article. Given that only individual strap material properties were provided and material properties of the overall weave were not available, defining the material properties of the finite element model became very complex. First, it was necessary to determine which material properties (modulus of elasticity in the hoop and longitudinal directions, shear modulus, Poisson's ratio, etc.) affected the modal frequencies. Then, a Latin hypercube sample of the parameter space was created to form an efficiently distributed finite case set. Each case was then analyzed and the results were input into the RSM. In the resulting response surface, it was possible to see how each material parameter affected the modal frequencies of the analysis model. If the modal frequencies of the analysis model and its corresponding parameters match the test with acceptable accuracy, it can be said that the model correlation is successful.
Terminal Duct Lobular Unit Involution of the Normal Breast: Implications for Breast Cancer Etiology
Pfeiffer, Ruth M.; Patel, Deesha A.; Linville, Laura; Brinton, Louise A.; Gierach, Gretchen L.; Yang, Xiaohong R.; Papathomas, Daphne; Visscher, Daniel; Mies, Carolyn; Degnim, Amy C.; Anderson, William F.; Hewitt, Stephen; Khodr, Zeina G.; Clare, Susan E.; Storniolo, Anna Maria; Sherman, Mark E.
2014-01-01
Background Greater degrees of terminal duct lobular unit (TDLU) involution have been linked to lower breast cancer risk; however, factors that influence this process are poorly characterized. Methods To study this question, we developed three reproducible measures that are inversely associated with TDLU involution: TDLU counts, median TDLU span, and median acini counts/TDLU. We determined factors associated with TDLU involution using normal breast tissues from 1938 participants (1369 premenopausal and 569 postmenopausal) ages 18 to 75 years in the Susan G. Komen Tissue Bank at the Indiana University Simon Cancer Center. Multivariable zero-inflated Poisson models were used to estimate relative risks (RRs) and 95% confidence intervals (95% CIs) for factors associated with TDLU counts, and multivariable ordinal logistic regression models were used to estimate odds ratios (ORs) and 95% CIs for factors associated with categories of median TDLU span and acini counts/TDLU. Results All TDLU measures started declining in the third age decade (all measures, two-sided P trend ≤ .001); and all metrics were statistically significantly lower among postmenopausal women. Nulliparous women demonstrated lower TDLU counts compared with uniparous women (among premenopausal women, RR = 0.79, 95% CI = 0.73 to 0.85; among postmenopausal, RR = 0.67, 95% CI = 0.56 to 0.79); however, rates of age-related TDLU decline were faster among parous women. Other factors were related to specific measures of TDLU involution. Conclusion Morphometric analysis of TDLU involution warrants further evaluation to understand the pathogenesis of breast cancer and to assess its role as a progression marker for women with benign biopsies or as an intermediate endpoint in prevention studies. PMID:25274491
Health Care Utilization and Expenditures Attributable to Cigar Smoking Among US Adults, 2000-2015.
Wang, Yingning; Sung, Hai-Yen; Yao, Tingting; Lightwood, James; Max, Wendy
Cigar use in the United States is a growing public health concern because of its increasing popularity. We estimated health care utilization and expenditures attributable to cigar smoking among US adults aged ≥35 years. We analyzed data on 84 178 adults using the 2000, 2005, 2010, and 2015 National Health Interview Surveys. We estimated zero-inflated Poisson (ZIP) regression models on hospital nights, emergency department (ED) visits, physician visits, and home-care visits as a function of tobacco use status and other covariates. Tobacco use status was categorized as current sole cigar smokers (ie, smoke cigars only), current poly cigar smokers (smoke cigars and smoke cigarettes or use smokeless tobacco), former sole cigar smokers (used to smoke cigars only), former poly cigar smokers (used to smoke cigars and smoke cigarettes or use smokeless tobacco), other tobacco users (ever smoked cigarettes and used smokeless tobacco but not cigars), and never tobacco users (never smoked cigars, smoked cigarettes, or used smokeless tobacco). We calculated health care utilization attributable to current and former sole cigar smoking based on the estimated ZIP models, and then we calculated total health care expenditures attributable to cigar smoking. Current and former sole cigar smoking was associated with excess annual utilization of 72 137 hospital nights, 32 748 ED visits, and 420 118 home-care visits. Annual health care expenditures attributable to sole cigar smoking were $284 million ($625 per sole cigar smoker), and total annual health care expenditures attributable to sole and poly cigar smoking were $1.75 billion. Comprehensive tobacco control policies and interventions are needed to reduce cigar smoking and the associated health care burden.
Barton, Christine M.; Zirkle, Keith W.; Greene, Caitlin F.; Newman, Kara B.
2018-01-01
Collisions with glass are a serious threat to avian life and are estimated to kill hundreds of millions of birds per year in the United States. We monitored 22 buildings at the Virginia Tech Corporate Research Center (VTCRC) in Blacksburg, Virginia, for collision fatalities from October 2013 through May 2015 and explored possible effects exerted by glass area and surrounding land cover on avian mortality. We documented 240 individuals representing 55 identifiable species that died due to collisions with windows at the VTCRC. The relative risk of fatal collisions at all buildings over the study period was estimated using a Bayesian hierarchical zero-inflated Poisson model adjusting for percentage of tree and lawn cover within 50 m of buildings, as well as for glass area. We found significant relationships between fatalities and surrounding lawn area (relative risk: 0.96, 95% credible interval: 0.93, 0.98) as well as glass area on buildings (RR: 1.30, 95% CI [1.05–1.65]). The model also found a moderately significant relationship between fatal collisions and the percent land cover of ornamental trees surrounding buildings (RR = 1.02, 95% CI [1.00–1.05]). Every building surveyed had at least one recorded collision death. Our findings indicate that birds collide with VTCRC windows during the summer breeding season in addition to spring and fall migration. The Ruby-throated Hummingbird (Archilochus colubris) was the most common window collision species and accounted for 10% of deaths. Though research has identified various correlates of fatal bird-window collisions, such studies rarely culminate in mitigation. We hope our study brings attention, and ultimately action, to address this significant threat to birds at the VTCRC and elsewhere. PMID:29637021
Rusby, Julie C; Westling, Erika; Crowley, Ryann; Light, John M
2018-02-01
Studies investigating the impact of medical marijuana legalization have found no significant changes in adolescent use. In one of the few studies focused on recreational marijuana, we investigated how recreational marijuana legalization and community sales policy influenced factors that likely impact youth use (youth willingness and intent to use, parent use) as well as youth use. Legalization of recreational marijuana in Oregon coincided with our study on adolescent substance use. Cohort 1 transitioned from 8th to 9th grade prior to legalization and Cohort 2 made this transition during legalization (N = 444; 53% female). Communities were allowed to opt out of sales. Multivariate linear regression models estimated the impact of legalization and community sales policy on changes in attitudes and parent use (2 time points 1 year apart). Zero-inflated Poisson growth curve models estimated the effects on initial levels and rate of change from 8th through 9th grade (4 time points). In communities opting out of sales, the prior-to-legalization cohort was less likely to increase their willingness and intent to use marijuana, and the legalization cohort was more likely to increase intent to use. For youth who used marijuana, legalization was associated with increased use, and those in communities opting out of sales had greater growth in marijuana use. Community policy appears to impact youth attitudes toward, and use of, marijuana. Results suggest that legalization of recreational marijuana did not increase marijuana use for youth who did not use marijuana but did increase use in youth who were already using. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Risk factors related to Toxoplasma gondii seroprevalence in indoor-housed Dutch dairy goats.
Deng, Huifang; Dam-Deisz, Cecile; Luttikholt, Saskia; Maas, Miriam; Nielen, Mirjam; Swart, Arno; Vellema, Piet; van der Giessen, Joke; Opsteegh, Marieke
2016-02-01
Toxoplasma gondii can cause disease in goats, but also has impact on human health through food-borne transmission. Our aims were to determine the seroprevalence of T. gondii infection in indoor-housed Dutch dairy goats and to identify the risk factors related to T. gondii seroprevalence. Fifty-two of the ninety farmers with indoor-kept goats who were approached (58%) participated by answering a standardized questionnaire and contributing 32 goat blood samples each. Serum samples were tested for T. gondii SAG1 antibodies by ELISA, and results showed that the frequency distribution of the log10-transformed OD-values fitted well with a binary mixture of a shifted gamma and a shifted reflected gamma distribution. The overall animal seroprevalence was 13.3% (95% CI: 11.7–14.9%), and at least one seropositive animal was found on 61.5% (95% CI: 48.3–74.7%) of the farms. To evaluate potential risk factors on herd level, three modeling strategies (Poisson, negative binomial and zero-inflated) were compared. The negative binomial model fitted the data best, with the number of cats (1–4 cats: IR: 2.6, 95% CI: 1.1–6.5; ≥5 cats: IR: 14.2, 95% CI: 3.9–51.1) and mean animal age (IR: 1.5, 95% CI: 1.1–2.1) related to herd positivity. In conclusion, the ELISA test was 100% sensitive and specific based on binary mixture analysis. T. gondii infection is prevalent in indoor-housed Dutch dairy goats but at a lower overall animal-level seroprevalence than outdoor-farmed goats in other European countries, and cat exposure is an important risk factor.
Fuemmeler, Bernard; Lee, Chien-Ti; Ranby, Krista W; Clark, Trenette; McClernon, F Joseph; Yang, Chongming; Kollins, Scott H
2013-09-01
Characterizing smoking behavior is important for informing etiologic models and targeting prevention efforts. This study explored the effects of both individual- and community-level variables in predicting cigarette use vs. non-use and level of use among adolescents as they transition into adulthood. Data on 14,779 youths (53% female) were drawn from the National Longitudinal Study of Adolescent Health (Add Health); a nationally representative longitudinal cohort. A cohort sequential design allowed for examining trajectories of smoking typologies from age 13 to 32 years. Smoking trajectories were evaluated by using a zero-inflated Poisson (ZIP) latent growth analysis and latent class growth analysis modeling approach. Significant relationships emerged between both individual- and community-level variables and smoking outcomes. Maternal and peer smoking predicted increases in smoking over development and were associated with a greater likelihood of belonging to any of the four identified smoking groups versus Non-Users. Conduct problems and depressive symptoms during adolescence were related to cigarette use versus non-use. State-level prevalence of adolescent smoking was related to greater cigarette use during adolescence. Individual- and community-level variables that distinguish smoking patterns within the population aid in understanding cigarette use versus non-use and the quantity of cigarette use into adulthood. Our findings suggest that efforts to prevent cigarette use would benefit from attention to both parental and peer smoking and individual well-being. Future work is needed to better understand the role of variables in the context of multiple levels (individual and community-level) on smoking trajectories. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Grunblatt, Samuel K.; Huber, Daniel; Gaidos, Eric; Lopez, Eric D.; Howard, Andrew W.; Isaacson, Howard T.; Sinukoff, Evan; Vanderburg, Andrew; Nofi, Larissa; Yu, Jie; North, Thomas S. H.; Chaplin, William; Foreman-Mackey, Daniel; Petigura, Erik; Ansdell, Megan; Weiss, Lauren; Fulton, Benjamin; Lin, Douglas N. C.
2017-12-01
More than 20 years after the discovery of the first gas giant planet with an anomalously large radius, the mechanism for planet inflation remains unknown. Here, we report the discovery of K2-132b, an inflated gas giant planet found with the NASA K2 Mission, and a revised mass for another inflated planet, K2-97b. These planets orbit on ≈9-day orbits around host stars that recently evolved into red giants. We constrain the irradiation history of these planets using models constrained by asteroseismology and Keck/High Resolution Echelle Spectrometer spectroscopy and radial velocity measurements. We measure planet radii of 1.31 ± 0.11 R_J and 1.30 ± 0.07 R_J, respectively. These radii are typical for planets receiving the current irradiation, but not the former, zero-age main-sequence irradiation of these planets. This suggests that the current sizes of these planets are directly correlated with their current irradiation. Our precise constraints of the masses and radii of the stars and planets in these systems allow us to constrain the planetary heating efficiency of both systems as 0.03% (+0.03%, −0.02%). These results are consistent with a planet re-inflation scenario, but suggest that the efficiency of planet re-inflation may be lower than previously theorized. Finally, we discuss the agreement within 10% of the stellar masses and radii, and the planet masses, radii, and orbital periods of both systems, and speculate that this may be due to selection bias in searching for planets around evolved stars.
Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.
Böhning, Dankmar; Kuhnert, Ronny
2006-12-01
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
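The capture-recapture use of zero-truncated Poisson models described above can be sketched as follows; the capture counts are illustrative assumptions. The truncated mean λ/(1 − e^(−λ)) is inverted numerically, and the Horvitz-Thompson estimator scales the observed sample by the estimated capture probability:

```python
import math

def fit_truncated_poisson_rate(mean_obs, lo=1e-6, hi=50.0, tol=1e-10):
    # The zero-truncated Poisson mean is lam / (1 - exp(-lam)), which is
    # increasing in lam, so the sample mean can be inverted by bisection.
    def gap(lam):
        return lam / (1 - math.exp(-lam)) - mean_obs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if gap(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Illustrative capture counts for observed individuals (all >= 1 by design).
counts = [1] * 60 + [2] * 25 + [3] * 10 + [4] * 5
n = len(counts)
lam_hat = fit_truncated_poisson_rate(sum(counts) / n)

# Horvitz-Thompson estimate of total population size, inflating the observed
# sample by the estimated probability of being captured at least once.
N_hat = n / (1 - math.exp(-lam_hat))
print(f"lambda_hat = {lam_hat:.3f}, estimated population size = {N_hat:.1f}")
```

This is the homogeneous (single-rate) special case; the article's contribution concerns the mixture generalization, where the same Horvitz-Thompson form is applied with the estimated mixing distribution.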
Treatment of singularities in cracked bodies
NASA Technical Reports Server (NTRS)
Shivakumar, K. N.; Raju, I. S.
1990-01-01
Three-dimensional finite-element analyses of middle-crack tension (M-T) and bend specimens subjected to mode I loadings were performed to study the stress singularity along the crack front. The specimen was modeled using 20-node isoparametric elements. The displacements and stresses from the analysis were used to estimate the power of singularities using a log-log regression analysis along the crack front. The analyses showed that finite-sized cracked bodies have two singular stress fields of the form ρ = C_o(θ, z) r^(-1/2) + D_o(θ, φ) R^(λ_ρ). The first term is the cylindrical singularity with the power -1/2 and is dominant over the middle 96% (for Poisson's ratio = 0.3) of the crack front and becomes nearly zero at the free surface. The second singularity is a vertex singularity with the vertex point located at the intersection of the crack front and the free surface. The second term is dominant at the free surface and becomes nearly zero away from the boundary layer. The thickness of the boundary layer depends on Poisson's ratio of the material and is independent of the specimen type. The thickness of the boundary layer varied from 0% to about 5% of the total specimen thickness as Poisson's ratio varied from 0.0 to 0.45. Because there are two singular stress fields near the free surface, the strain energy release rate (G) is an appropriate parameter to measure the severity of the crack.
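The log-log regression step used to estimate the singularity power can be sketched with synthetic data; the constant, sampling radii, and exact recovery of the -1/2 power are illustrative, not results from the analysis:

```python
import math

# Synthetic near-tip stress samples following sigma = C * r**p with p = -1/2;
# C and the sampling radii are illustrative assumptions.
C, p_true = 3.0, -0.5
radii = [10.0 ** (-e) for e in range(1, 7)]      # r = 1e-1 down to 1e-6
stress = [C * r ** p_true for r in radii]

# Ordinary least squares on (log r, log sigma); the fitted slope is the
# estimated power of the singularity.
xs = [math.log(r) for r in radii]
ys = [math.log(s) for s in stress]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
print(f"estimated singularity power: {slope:.4f}")
```

With real finite-element output the fitted slope varies along the crack front, which is how the shift from the cylindrical to the vertex singularity near the free surface shows up.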
Lou, Ping; Lee, Jin Yong
2009-04-14
For a simple modified Poisson-Boltzmann (SMPB) theory, taking into account the finite ionic size, we have derived the exact analytic expression for the contact values of the difference profile of the counterion and co-ion, as well as of the sum (density) and product profiles, near a charged planar electrode that is immersed in a binary symmetric electrolyte. In the zero ionic size or dilute limit, these contact values reduce to the contact values of the Poisson-Boltzmann (PB) theory. The analytic results of the SMPB theory for the difference, sum, and product profiles were compared with the results of Monte Carlo (MC) simulations [Bhuiyan, L. B.; Outhwaite, C. W.; Henderson, D. J. Electroanal. Chem. 2007, 607, 54; Bhuiyan, L. B.; Henderson, D. J. Chem. Phys. 2008, 128, 117101], as well as of the PB theory. In general, the analytic expression of the SMPB theory gives better agreement with the MC data than the PB theory does. For the difference profile, as the electrode charge increases, the result of the PB theory departs from the MC data, but the SMPB theory still reproduces the MC data quite well, which indicates the importance of including steric effects in modeling diffuse layer properties. As for the product profile, (i) it drops to zero as the electrode charge approaches infinity; (ii) the speed of the drop increases with the ionic size, and these behaviors are in contrast with the predictions of the PB theory, where the product is identically 1.
Heid, Allison R; Pruchno, Rachel; Cartwright, Francine P; Wilson-Genderson, Maureen
2017-07-01
Older adults exposed to natural disasters are at risk for negative psychological outcomes such as post-traumatic stress disorder (PTSD). Neighborhood social capital can act as a resource that supports individual-level coping with stressors. This study explores the ability of perceived neighborhood collective efficacy, a form of social capital, to moderate the association between exposure to Hurricane Sandy and PTSD symptoms in older adults. Data from 2205 older individuals aged 54-80 residing in New Jersey who self-reported exposure to Hurricane Sandy in October of 2012 were identified and extracted from the ORANJ BOWL™ research panel. Participants completed baseline assessments of demographic and individual-level characteristics in 2006-2008 and follow-up assessments about storm exposure, perceived neighborhood collective efficacy (social cohesion and social control), and PTSD symptoms 8-33 months following the storm. Zero-inflated Poisson regression models were tested to examine the association between exposure, neighborhood collective efficacy, and PTSD symptoms. After accounting for known demographic and individual-level covariates, greater storm exposure was linked to higher levels of PTSD symptoms. Social cohesion, but not social control, was linked to lower reports of PTSD symptoms and moderated the association between exposure and PTSD. The impact of storm exposure on PTSD symptoms was less for individuals reporting higher levels of social cohesion. Mental health service providers and disaster preparedness and response teams should consider the larger social network of individuals served. Building social connections in older adults' neighborhoods that promote cohesion can reduce the negative psychological impact of a disaster.
Predictive validity of cannabis consumption measures: Results from a national longitudinal study.
Buu, Anne; Hu, Yi-Han; Pampati, Sanjana; Arterberry, Brooke J; Lin, Hsien-Chang
2017-10-01
Validating the utility of cannabis consumption measures for predicting later cannabis related symptomatology or progression to cannabis use disorder (CUD) is crucial for prevention and intervention work that may use consumption measures for quick screening. This study examined whether cannabis use quantity and frequency predicted CUD symptom counts, progression to onset of CUD, and persistence of CUD. Data from the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) at Wave 1 (2001-2002) and Wave 2 (2004-2005) were used to identify three risk samples: (1) current cannabis users at Wave 1 who were at risk for having CUD symptoms at Wave 2; (2) current users without lifetime CUD who were at risk for incident CUD; and (3) current users with past-year CUD who were at risk for persistent CUD. Logistic regression and zero-inflated Poisson models were used to examine the longitudinal effect of cannabis consumption on CUD outcomes. Higher frequency of cannabis use predicted lower likelihood of being symptom-free but it did not predict the severity of CUD symptomatology. Higher frequency of cannabis use also predicted higher likelihood of progression to onset of CUD and persistence of CUD. Cannabis use quantity, however, did not predict any of the developmental stages of CUD symptomatology examined in this study. This study has provided a new piece of evidence to support the predictive validity of cannabis use frequency based on national longitudinal data. The result supports the common practice of including frequency items in cannabis screening tools. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dixon, Ramsay W; Youssef, George J; Hasking, Penelope; Yücel, Murat; Jackson, Alun C; Dowling, Nicki A
2016-07-01
Several factors are associated with an increased risk of adolescent problem gambling, including positive gambling attitudes, higher levels of gambling involvement, ineffective coping strategies and unhelpful parenting practices. It is less clear, however, how these factors interact or influence each other in the development of problem gambling behavior during adolescence. The aim of the current study was to simultaneously explore these predictors, with a particular focus on the extent to which coping skills and parenting styles may moderate the expected association between gambling involvement and gambling problems. Participants were 612 high school students. The data were analyzed using a zero-inflated Poisson (ZIP) regression model, controlling for gender. Although several variables predicted the number of symptoms associated with problem gambling, none of them predicted the probability of displaying any problem gambling. Gambling involvement fully mediated the relationship between positive gambling attitudes and gambling problem severity. There was a significant relationship between gambling involvement and problems at any level of problem-focused coping, reference to others and inconsistent discipline. However, adaptive coping styles employed by adolescents and consistent disciplinary practices by parents were buffers of gambling problems at low levels of adolescent gambling involvement, but failed to protect adolescents when their gambling involvement was high. These findings indicate that further research exploring the development of gambling problems is required and imply that coping and parenting interventions may have particular utility for adolescents who are at risk of developing gambling problems but who are not gambling frequently. Copyright © 2016 Elsevier Ltd. All rights reserved.
Logistic regression for dichotomized counts.
Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W
2016-12-01
Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.
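A hurdle model of the kind described above factors the count distribution into a zero part and a zero-truncated count part. A minimal sketch of the resulting probability mass function, with illustrative parameter values:

```python
import math

def hurdle_pmf(k, pi0, lam):
    # Two-part hurdle model: a point mass pi0 at zero, plus a zero-truncated
    # Poisson(lam) rescaled to the remaining probability 1 - pi0.
    if k == 0:
        return pi0
    truncated = (math.exp(-lam) * lam ** k
                 / (math.factorial(k) * (1 - math.exp(-lam))))
    return (1 - pi0) * truncated

# Illustrative parameters: the logistic part of the model targets
# P(count > 0) = 1 - pi0, the count part governs the positive values.
pi0, lam = 0.4, 1.8
total = sum(hurdle_pmf(k, pi0, lam) for k in range(60))
p_positive = 1 - hurdle_pmf(0, pi0, lam)
print(f"pmf total = {total:.6f}, P(count > 0) = {p_positive}")
```

The factorization is what lets the logistic part carry the marginal odds-ratio effects of interest while the truncated count part absorbs the nuisance structure of the positive counts.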
Sousa, Renata M; Ferri, Cleusa P; Acosta, Daisy; Albanese, Emiliano; Guerra, Mariella; Huang, Yueqin; Jacob, K S; Jotheeswaran, A T; Rodriguez, Juan J Llibre; Pichardo, Guillermina Rodriguez; Rodriguez, Marina Calvo; Salas, Aquiles; Sosa, Ana Luisa; Williams, Joseph; Zuniga, Tirso; Prince, Martin
2009-11-28
Disability in elderly people in countries with low and middle incomes is little studied; according to Global Burden of Disease estimates, visual impairment is the leading contributor to years lived with disability in this population. We aimed to assess the contribution of physical, mental, and cognitive chronic diseases to disability, and the extent to which sociodemographic and health characteristics account for geographical variation in disability. We undertook cross-sectional surveys of residents aged older than 65 years (n=15 022) in 11 sites in seven countries with low and middle incomes (China, India, Cuba, Dominican Republic, Venezuela, Mexico, and Peru). Disability was assessed with the 12-item WHO disability assessment schedule 2.0. Dementia, depression, hypertension, and chronic obstructive pulmonary disease were ascertained by clinical assessment; diabetes, stroke, and heart disease by self-reported diagnosis; and sensory, gastrointestinal, skin, limb, and arthritic disorders by self-reported impairment. Independent contributions to disability scores were assessed by zero-inflated negative binomial regression and Poisson regression to generate population-attributable prevalence fractions (PAPF). In regions other than rural India and Venezuela, dementia made the largest contribution to disability (median PAPF 25.1% [IQR 19.2-43.6]). Other substantial contributors were stroke (11.4% [1.8-21.4]), limb impairment (10.5% [5.7-33.8]), arthritis (9.9% [3.2-34.8]), depression (8.3% [0.5-23.0]), eyesight problems (6.8% [1.7-17.6]), and gastrointestinal impairments (6.5% [0.3-23.1]). Associations with chronic diseases accounted for around two-thirds of prevalent disability. When zero inflation was taken into account, between-site differences in disability scores were largely attributable to compositional differences in health and sociodemographic characteristics. 
On the basis of empirical research, dementia, not blindness, is overwhelmingly the most important independent contributor to disability for elderly people in countries with low and middle incomes. Chronic diseases of the brain and mind deserve increased prioritisation. Besides disability, they lead to dependency and present stressful, complex, long-term challenges to carers. Societal costs are enormous. Wellcome Trust; WHO; US Alzheimer's Association; Fondo Nacional de Ciencia Y Tecnologia, Consejo de Desarrollo Cientifico Y Humanistico, Universidad Central de Venezuela.
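A population-attributable fraction of the kind reported above quantifies how much prevalent disability would be removed if a given condition were absent. The study derived its PAPF values from the zero-inflated negative binomial and Poisson regressions; as a rough illustration of the idea only, Levin's classic formula can be sketched. The prevalence and risk-ratio inputs below are hypothetical, not the study's estimates.

```python
def attributable_fraction(p, rr):
    """Levin's formula: PAF = p*(RR - 1) / (1 + p*(RR - 1))."""
    excess = p * (rr - 1.0)
    return excess / (1.0 + excess)

# Hypothetical inputs: 10% exposure prevalence, risk ratio of 4
paf = attributable_fraction(p=0.10, rr=4.0)
print(round(paf, 3))  # 0.231
```

With these illustrative numbers, about 23% of the outcome burden would be attributable to the exposure.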
Breakdown in the organ donation process and its effect on organ availability.
Razdan, Manik; Degenholtz, Howard B; Kahn, Jeremy M; Driessen, Julia
2015-01-01
Background. This study examines the effect of breakdown in the organ donation process on the availability of transplantable organs. A process breakdown is defined as a deviation from the organ donation protocol that may jeopardize organ recovery. Methods. A retrospective analysis of donation-eligible decedents was conducted using data from an independent organ procurement organization. The adjusted effect of process breakdown on organs transplanted from an eligible decedent was examined using multivariable zero-inflated Poisson regression. Results. An eligible decedent is four times more likely to become an organ donor when there is no process breakdown (adjusted OR: 4.01; 95% CI: 1.6838, 9.6414; P < 0.01), even after controlling for the decedent's age, gender, race, and whether or not the decedent had joined the state donor registry. However, once an eligible decedent becomes a donor, whether or not there was a process breakdown does not affect the number of transplantable organs yielded. Overall, for every process breakdown occurring in the care of an eligible decedent, one fewer organ is available for transplant. The decedent's age is a strong predictor of the likelihood of donation and of the number of organs transplanted from a donor. Conclusion. Eliminating breakdowns in the donation process can potentially increase the number of organs available for transplant, but some organs will still be lost.
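The zero-inflated Poisson regression used in this study treats each count as coming from a mixture of a structural-zero process and a Poisson process. A minimal simulation sketch of that mixture (illustrative parameters, not the study's data) shows how structural zeros inflate the zero fraction beyond what a plain Poisson model predicts:

```python
import numpy as np

rng = np.random.default_rng(0)
pi, lam, n = 0.4, 2.0, 100_000   # illustrative mixing weight and Poisson mean

# With probability pi an observation is a structural zero,
# otherwise it is drawn from Poisson(lam).
structural_zero = rng.random(n) < pi
counts = np.where(structural_zero, 0, rng.poisson(lam, n))

observed_zero_frac = (counts == 0).mean()
zip_zero_prob = pi + (1 - pi) * np.exp(-lam)   # P(Y=0) under the ZIP mixture
poisson_zero_prob = np.exp(-lam)               # P(Y=0) under a plain Poisson

print(round(zip_zero_prob, 3))   # 0.481 -- far above the plain-Poisson 0.135
```

In a regression setting, pi and lam would each be modeled on covariates (logit and log links, respectively), which is what the multivariable ZIP model above does.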
A bridge between unified cosmic history by f(R)-gravity and BIonic system
NASA Astrophysics Data System (ADS)
Sepehri, Alireza; Capozziello, Salvatore; Setare, Mohammad Reza
2016-04-01
Recently, the cosmological deceleration-acceleration transition redshift in f(R) gravity has been considered in order to address consistently the problem of cosmic evolution. It is possible to show that the deceleration parameter changes sign at a given redshift according to observational data. Furthermore, an f(R) gravity cosmological model can be constructed in a brane-antibrane system starting from the very early universe and accounting for the cosmological redshift at all phases of cosmic history, from inflation to late-time acceleration. Here we propose an f(R) model where transition redshifts correspond to the inflation-deceleration and deceleration-late-time acceleration transitions, starting from a BIon system. At the point where the universe was born, due to the transition of k black fundamental strings to the BIon configuration, the redshift is approximately infinite and decreases with reducing temperature (z ∼ T^2). The BIon is a configuration in flat space of a universe-brane and a parallel anti-universe-brane connected by a wormhole. This wormhole is a channel for flowing energy from extra dimensions into our universe, occurring at inflation and decreasing with redshift as z ∼ T^{4+1/7}. These dynamics are consistent with the fact that the wormhole loses its energy and vanishes as soon as inflation ends and deceleration begins. As the two universe-branes approach each other, a tachyon originates, grows and causes the formation of a wormhole. We show that, in the framework of f(R) gravity, the cosmological redshift depends on the tachyonic potential and has a significant decrease at the deceleration-late-time acceleration transition point (z ∼ T^{2/3}). As soon as today's acceleration approaches, the redshift tends to zero and the cosmological model reduces to the standard ΛCDM cosmology.
Bhattacharya, Kaushik; Mohanty, Subhendra; Rangarajan, Raghavan
2006-03-31
If the initial state of the inflaton field is taken to have a thermal distribution instead of the conventional zero-particle vacuum state, then the curvature power spectrum gets modified by a temperature-dependent factor such that the fluctuation spectrum of the microwave background radiation is enhanced at larger angles. We compare this modified cosmic microwave background spectrum with Wilkinson Microwave Anisotropy Probe (WMAP) data to obtain an upper bound on the temperature of the inflaton at the time our current horizon scale crossed the horizon during inflation. We further conclude that there must be additional e-foldings of inflation beyond what is needed to solve the horizon problem.
NASA Astrophysics Data System (ADS)
Uematsu, Yuki; Netz, Roland R.; Bonthuis, Douwe Jan
2018-02-01
Using a box profile approximation for the non-electrostatic surface adsorption potentials of anions and cations, we calculate the differential capacitance of aqueous electrolyte interfaces from a numerical solution of the Poisson-Boltzmann equation, including steric interactions between the ions and an inhomogeneous dielectric profile. Preferential adsorption of the positive (negative) ion shifts the minimum of the differential capacitance to positive (negative) surface potential values. The trends are similar for the potential of zero charge; however, the potential of zero charge does not correspond to the minimum of the differential capacitance in the case of asymmetric ion adsorption, contrary to the assumption commonly used to determine the potential of zero charge. Our model can be used to obtain more accurate estimates of ion adsorption properties from differential capacitance or electrocapillary measurements. Asymmetric ion adsorption also affects the relative heights of the characteristic maxima in the differential capacitance curves as a function of the surface potential, but even for strong adsorption potentials the effect is small, making it difficult to reliably determine the adsorption properties from the peak heights.
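The full model in this paper couples the Poisson-Boltzmann equation with steric interactions, box adsorption potentials, and an inhomogeneous dielectric profile; none of that is reproduced here. As a minimal numerical sketch of the underlying machinery only, the linearized (Debye-Hückel) Poisson-Boltzmann equation φ'' = κ²φ can be solved by finite differences and checked against its exponential decay solution. All parameter values below are illustrative, in dimensionless units.

```python
import numpy as np

# Solve phi'' = kappa^2 * phi on [0, L] with phi(0) = phi0, phi(L) = 0.
kappa, phi0, L, n = 1.0, 0.05, 20.0, 2000
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Tridiagonal system for the interior nodes:
# (phi[i-1] - 2*phi[i] + phi[i+1]) / h^2 - kappa^2 * phi[i] = 0
main = -(2.0 / h**2 + kappa**2) * np.ones(n - 2)
off = (1.0 / h**2) * np.ones(n - 3)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
b = np.zeros(n - 2)
b[0] = -phi0 / h**2            # boundary value phi(0) = phi0 moved to the RHS
phi = np.concatenate(([phi0], np.linalg.solve(A, b), [0.0]))

# For kappa*L >> 1 the solution approaches phi0 * exp(-kappa * x)
max_err = np.abs(phi - phi0 * np.exp(-kappa * x)).max()
```

The same finite-difference scaffolding extends to the nonlinear equation (sinh source term) via Newton iteration, which is the kind of numerical solution the abstract refers to.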
Borchers, D L; Langrock, R
2015-12-01
We develop maximum likelihood methods for line transect surveys in which animals go undetected at distance zero, either because they are stochastically unavailable while within view or because they are missed when they are available. These incorporate a Markov-modulated Poisson process model for animal availability, allowing more clustered availability events than is possible with Poisson availability models. They include a mark-recapture component arising from the independent-observer survey, leading to more accurate estimation of detection probability given availability. We develop models for situations in which (a) multiple detections of the same individual are possible and (b) some or all of the availability process parameters are estimated from the line transect survey itself, rather than from independent data. We investigate estimator performance by simulation, and compare the multiple-detection estimators with estimators that use only initial detections of individuals, and with a single-observer estimator. Simultaneous estimation of detection function parameters and availability model parameters is shown to be feasible from the line transect survey alone with multiple detections and double-observer data but not with single-observer data. Recording multiple detections of individuals improves estimator precision substantially when estimating the availability model parameters from survey data, and we recommend that these data be gathered. We apply the methods to estimate detection probability from a double-observer survey of North Atlantic minke whales, and find that double-observer data greatly improve estimator precision here too. © 2015 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
The transverse Poisson's ratio of composites.
NASA Technical Reports Server (NTRS)
Foye, R. L.
1972-01-01
An expression is developed that makes possible the prediction of Poisson's ratio for unidirectional composites with reference to any pair of orthogonal axes that are normal to the direction of the reinforcing fibers. This prediction appears to be a reasonable one in that it follows the trends of the finite element analysis and the bounding estimates, and has the correct limiting value for zero fiber content. It can only be expected to apply to composites containing stiff, circular, isotropic fibers bonded to a soft matrix material.
What is the cause of confidence inflation in the Life Events Inventory (LEI) paradigm?
Von Glahn, Nicholas R; Otani, Hajime; Migita, Mai; Langford, Sara J; Hillard, Erin E
2012-01-01
Briefly imagining, paraphrasing, or explaining an event causes people to increase their confidence that this event occurred during childhood-the imagination inflation effect. The mechanisms responsible for the effect were investigated with a new paradigm. In Experiment 1, event familiarity (defined as processing fluency) was varied by asking participants to rate each event once, three times, or five times. No inflation was found, indicating that familiarity does not account for the effect. In Experiment 2, richness of memory representation was manipulated by asking participants to generate zero, three, or six details. Confidence increased from the initial to the final rating in the three- and six-detail conditions, indicating that the effect is based on reality-monitoring errors. However, greater inflation in the three-detail condition than in the six-detail condition indicated that there is a boundary condition. These results were also consistent with an alternative hypothesis, the mental workload hypothesis.
The BRST complex of homological Poisson reduction
NASA Astrophysics Data System (ADS)
Müller-Lennert, Martin
2017-02-01
BRST complexes are differential graded Poisson algebras. They are associated with a coisotropic ideal J of a Poisson algebra P and provide a description of the Poisson algebra (P/J)^J as their cohomology in degree zero. Using the notion of stable equivalence introduced in Felder and Kazhdan (Contemporary Mathematics 610, Perspectives in representation theory, 2014), we prove that any two BRST complexes associated with the same coisotropic ideal are quasi-isomorphic in the case P = R[V] where V is a finite-dimensional symplectic vector space and the bracket on P is induced by the symplectic structure on V. As a corollary, the cohomology of the BRST complexes is canonically associated with the coisotropic ideal J in the symplectic case. We do not require any regularity assumptions on the constraints generating the ideal J. We finally quantize the BRST complex rigorously in the presence of infinitely many ghost variables and discuss the uniqueness of the quantization procedure.
Crits-Christoph, Paul; Gallop, Robert; Sadicario, Jaclyn S; Markell, Hannah M; Calsyn, Donald A; Tang, Wan; He, Hua; Tu, Xin; Woody, George
2014-01-16
The objective of the current study was to examine predictors and moderators of response to two HIV sexual risk interventions of different content and duration for individuals in substance abuse treatment programs. Participants were recruited from community drug treatment programs participating in the National Institute on Drug Abuse Clinical Trials Network (CTN). Data were pooled from two parallel randomized controlled CTN studies (one with men and one with women), each examining the impact of a multi-session motivational and skills training program, in comparison to a single-session HIV education intervention, on the degree of reduction in unprotected sex from baseline to 3- and 6-month follow-ups. The findings were analyzed using a zero-inflated negative binomial (ZINB) model. Severity of drug use (p < .01), gender (p < .001), and age (p < .001) were significant main effect predictors of the number of unprotected sexual occasions (USOs) at follow-up in the non-zero portion of the ZINB model (men, younger participants, and those with greater severity of drug/alcohol abuse had more USOs). Monogamous relationship status (p < .001) and race/ethnicity (p < .001) were significant predictors of having at least one USO vs. none (monogamous individuals and African Americans were more likely to have at least one USO). Significant moderators of intervention effectiveness included recent sex under the influence of drugs/alcohol (p < .01 in the non-zero portion of the model), duration of abuse of primary drug (p < .05 in the non-zero portion of the model), and Hispanic ethnicity (p < .01 in the zero portion, p < .05 in the non-zero portion of the model). These predictor and moderator findings point to ways in which patients may be selected for the different HIV sexual risk reduction interventions and suggest potential avenues for further development of the interventions for increasing their effectiveness within certain subgroups.
Dynamic curvature sensing employing ionic-polymer-metal composite sensors
NASA Astrophysics Data System (ADS)
Bahramzadeh, Yousef; Shahinpoor, Mohsen
2011-09-01
A dynamic curvature sensor based on ionic-polymer-metal composites (IPMCs) is presented for curvature monitoring of deployable/inflatable dynamic space structures. Monitoring curvature variation is of high importance in various engineering structures, including shape monitoring of deployable/inflatable space structures in which the structural boundaries undergo a dynamic deployment process. The high sensitivity of IPMCs to applied deformations, as well as their flexibility, makes IPMCs a promising candidate for sensing dynamic curvature changes. Herein, we explore the dynamic response of an IPMC sensor strip subjected to controlled curvature deformations with different forms of input functions. Using a specially designed experimental setup, the voltage recovery effect, phase delay, and rate dependency of the output voltage signal of an IPMC curvature sensor are analyzed. Experimental results show that the IPMC sensor maintains the linearity, sensitivity, and repeatability required for curvature sensing. In addition, in order to describe dynamic phenomena such as the rate dependency of the IPMC sensor, a chemo-electro-mechanical model based on the Poisson-Nernst-Planck (PNP) equation for the kinetics of ion diffusion is presented. By solving the governing partial differential equations, the frequency response of the IPMC sensor is derived. The physical model is able to describe the dynamic properties of the IPMC sensor and the dependency of the signal on the rate of excitation.
Degravitation, inflation and the cosmological constant as an afterglow
NASA Astrophysics Data System (ADS)
Patil, Subodh P.
2009-01-01
In this report, we adopt the phenomenological approach of taking the degravitation paradigm seriously as a consistent modification of gravity in the IR, and investigate its consequences for various cosmological situations. We motivate degravitation, where Newton's constant is promoted to a scale-dependent filter function, as arising from either a small (resonant) mass for the graviton, or as an effect in semi-classical gravity. After addressing how the Bianchi identities are to be satisfied in such a setup, we turn our attention towards the cosmological consequences of degravitation. By considering the example filter function corresponding to a resonantly massive graviton (with a filter scale larger than the present horizon scale), we show that slow-roll inflation, hybrid inflation and old inflation remain quantitatively unchanged. We also find that the degravitation mechanism inherits a memory of past energy densities in the present epoch in a way that is likely significant for present cosmological evolution. For example, if the universe underwent inflation in the past due to having tunneled out of some false vacuum, we find that degravitation implies a remnant `afterglow' cosmological constant, whose scale immediately afterwards is parametrically suppressed by the filter scale (L) in Planck units, Λ ∼ l_pl^2/L^2. We discuss circumstances through which this scenario reasonably yields the presently observed value Λ ∼ O(10^-120). We also find that in a universe still currently trapped in some false vacuum state, resonance graviton models of degravitation only degravitate initially Planck- or GUT-scale energy densities down to the presently observed value over timescales comparable to the filter scale. We argue that different functional forms for the filter function will yield similar conclusions.
In this way, we argue that although the degravitation models we study have the potential to explain why the cosmological constant is not large in addition to why it is not zero, they do not satisfactorily address the coincidence problem without additional tuning.
STS-45 crewmembers during zero gravity activities onboard KC-135 NASA 930
1991-08-21
S91-44453 (21 Aug 1991) --- The crew of STS-45 is already training for its March 1992 mission, including stints on the KC-135 zero-gravity-simulating aircraft. Shown with an inflatable globe are, clockwise from the top, C. Michael Foale, mission specialist; Dirk Frimout, payload specialist; Brian Duffy, pilot; Charles R. (Rick) Chappell, backup payload specialist; Charles F. Bolden, mission commander; Byron K. Lichtenberg, payload specialist; and Kathryn D. Sullivan, payload commander.
Physical activity and asthma: A longitudinal and multi-country study.
Russell, Melissa A; Janson, Christer; Real, Francisco Gómez; Johannessen, Ane; Waatevik, Marie; Benediktsdóttir, Bryndis; Holm, Mathias; Lindberg, Eva; Schlünssen, Vivi; Raza, Wasif; Dharmage, Shyamali C; Svanes, Cecilie
2017-11-01
To investigate the impact of physical activity on asthma in middle-aged adults, in one longitudinal analysis and one multi-centre cross-sectional analysis. The Respiratory Health in Northern Europe (RHINE) study is a population-based postal questionnaire cohort study. Physical activity, height and weight were self-reported in Bergen, Norway, at RHINE II (1999-2001) and in all centres at RHINE III (2010-2012). A longitudinal analysis of Bergen data investigated the association of baseline physical activity with follow-up asthma, incident asthma and symptoms, using logistic and zero-inflated Poisson regression (n = 1782). A cross-sectional analysis of all RHINE III centres investigated the association of physical activity with concurrent asthma and symptoms (n = 13,542) using mixed-effects models. Body mass index (BMI) was categorised (<20, 20-24.99, 25-29.99, 30+ kg/m^2) and physical activity grouped by amount and frequency of lighter (no sweating/heavy breathing) and vigorous (sweating/heavy breathing) activity. In the Bergen longitudinal analysis, undertaking light activity 3+ times/week at baseline was associated with less follow-up asthma (odds ratio [OR] 0.44, 95% confidence interval [CI] 0.22, 0.89), whilst an effect from undertaking vigorous activity 3+ times/week was not detected (OR 1.22, 95% CI 0.44, 2.76). The associations were attenuated with BMI adjustment. In the all-centre cross-sectional analysis an interaction was found, with the association between physical activity and asthma varying across BMI categories. These findings suggest potential longer-term benefit from lighter physical activity, whilst improvement in asthma outcomes from increasing activity intensity was not evident. Additionally, it appears the benefit from physical activity may differ according to BMI.
Repercussions of mild diabetes on pregnancy in Wistar rats and on the fetal development
2010-01-01
Background: Experimental models are necessary to elucidate diabetes pathophysiological mechanisms not yet understood in humans. Objective: To evaluate the repercussions of mild diabetes, considering two induction methodologies, on the pregnancy of Wistar rats and on the development of their offspring. Methods: In the 1st induction, female offspring were distributed into two experimental groups: Group streptozotocin (STZ, n = 67), which received the β-cytotoxic agent (100 mg STZ/kg body weight, sc) on the 1st day of life; and the Non-diabetic Group (ND, n = 14), which received the vehicle in a similar time period. In adult life, the animals were mated. After a positive diagnosis of pregnancy (day 0), female rats from group STZ presenting glycemia lower than 120 mg/dL received an additional 20 mg STZ/kg (ip) on day 7 of pregnancy (2nd induction). Female rats with glycemia higher than 120 mg/dL were discarded because they reproduced results already found in the literature. On the mornings of days 0, 7, 14 and 21 of pregnancy, glycemia was determined. On day 21 of pregnancy (at term), the female rats were anesthetized and killed for maternal reproductive performance and fetal development analysis. The data were analyzed using Student-Newman-Keuls, chi-square and zero-inflated Poisson (ZIP) tests (p < 0.05). Results: STZ rats presented increased rates of pre-implantation (STZ = 22.0%; ND = 5.1%) and post-implantation losses (STZ = 26.1%; ND = 5.7%), reduced rates of fetuses with appropriate weight for gestational age (STZ = 66%; ND = 93%) and a reduced degree of development (ossification sites). Conclusion: Mild diabetes had a negative impact on maternal reproductive performance and caused intrauterine growth restriction and impaired fetal development. PMID:20416073
Identification of burden hotspots and risk factors for cholera in India: An observational study
Sen Gupta, Sanjukta; Arora, Nisha; Khasnobis, Pradeep; Venkatesh, Srinivas; Sur, Dipika; Nair, Gopinath B.; Sack, David A.; Ganguly, Nirmal K.
2017-01-01
Background: Even though cholera has existed for centuries and many parts of the country have sporadic, endemic and epidemic cholera, it is still an under-recognized health problem in India. A Cholera Expert Group was established in the country to gather evidence and to prepare a road map for control of cholera in India. This paper identifies cholera burden hotspots and factors associated with an increased risk of the disease. Methodology/Principal findings: We acquired district-level data on cholera case reports for 2010-2015 from the Integrated Disease Surveillance Program. Socioeconomic characteristics and coverage of water and sanitation were obtained from the 2011 census. Spatial analysis was performed to identify cholera hotspots, and zero-inflated Poisson regression was employed to identify the factors associated with cholera and the predicted case count in each district. 27,615 cholera cases were reported during the 6-year period. Twenty-four of the 36 states of India reported cholera during these years, and 13 states were classified as endemic. Of 641 districts, 78 districts in 15 states were identified as "hotspots" based on the reported cases. On the other hand, 111 districts in nine states were identified as "hotspots" from the model-based predicted number of cases. The risk for cholera in a district was negatively associated with the coverage of literate persons and of households using a treated water source and owning a mobile telephone, and positively associated with the coverage of poor sanitation and drainage conditions and the urbanization level in the district. Conclusions/Significance: The study reaffirms that cholera continues to occur throughout a large part of India and identifies the burden hotspots and risk factors. Policymakers may use the findings of this article to develop a roadmap for prevention and control of cholera in India. PMID:28837645
Healthcare costs attributable to secondhand smoke exposure at home for U.S. adults.
Yao, Tingting; Sung, Hai-Yen; Wang, Yingning; Lightwood, James; Max, Wendy
2018-03-01
To estimate healthcare costs attributable to secondhand smoke (SHS) exposure at home among nonsmoking adults (18+) in the U.S. We analyzed data on nonsmoking adults (N=67,735) from the 2000, 2005, and 2010 (the latest available data on SHS exposure at home) U.S. National Health Interview Surveys. This study was conducted from 2015 to 2017. We examined hospital nights, home care visits, doctor visits, and emergency room (ER) visits. For each, we analyzed the association of SHS exposure at home with healthcare utilization with a Zero-Inflated Poisson regression model controlling for socio-demographic and other risk characteristics. Excess healthcare utilization attributable to SHS exposure at home was determined and multiplied by unit costs derived from the 2014 Medical Expenditures Panel Survey to determine annual SHS-attributable healthcare costs. SHS exposure at home was positively associated with hospital nights and ER visits, but was not statistically associated with home care visits and doctor visits. Exposed adults had 1.28 times more hospital nights and 1.16 times more ER visits than non-exposed adults. Annual SHS-attributable healthcare costs totaled $4.6 billion (including $3.8 billion for hospital nights and $0.8 billion for ER visits, 2014 dollars) in 2000, $2.1 billion (including $1.8 billion for hospital nights and $0.3 billion for ER visits) in 2005, and $1.9 billion (including $1.6 billion for hospital nights and $0.4 billion for ER visits) in 2010. SHS-attributable costs remain high, but have fallen over time. Tobacco control efforts are needed to further reduce SHS exposure at home and associated healthcare costs. Copyright © 2017. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Ng, Chris Fook Sheng; Ueda, Kayo; Ono, Masaji; Nitta, Hiroshi; Takami, Akinori
2014-07-01
Despite rising concern on the impact of heat on human health, the risk of high summer temperature on heatstroke-related emergency dispatches is not well understood in Japan. A time-series study was conducted to examine the association between apparent temperature and daily heatstroke-related ambulance dispatches (HSAD) within the Kanto area of Japan. A total of 12,907 HSAD occurring from 2000 to 2009 in five major cities—Saitama, Chiba, Tokyo, Kawasaki, and Yokohama—were analyzed. Generalized additive models and zero-inflated Poisson regressions were used to estimate the effects of daily maximum three-hour apparent temperature (AT) on dispatch frequency from May to September, with adjustment for seasonality, long-term trend, weekends, and public holidays. Linear and non-linear exposure effects were considered. Effects on days when AT first exceeded its summer median were also investigated. City-specific estimates were combined using random effects meta-analyses. Exposure-response relationship was found to be fairly linear. Significant risk increase began from 21 °C with a combined relative risk (RR) of 1.22 (95 % confidence interval, 1.03-1.44), increasing to 1.49 (1.42-1.57) at peak AT. When linear exposure was assumed, combined RR was 1.43 (1.37-1.50) per degree Celsius increment. Overall association was significant the first few times when median AT was initially exceeded in a particular warm season. More than two-thirds of these initial hot days were in June, implying the harmful effect of initial warming as the season changed. Risk increase that began early at the fairly mild perceived temperature implies the need for early precaution.
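The reported combined RR of 1.43 per degree Celsius corresponds, under a log-linear Poisson model log E[Y] = a + b·AT, to a coefficient b = ln(1.43) on apparent temperature, with risk compounding multiplicatively over larger increments. A small arithmetic sketch, using the paper's point estimate but otherwise illustrative:

```python
import math

rr_per_degree = 1.43            # combined linear-exposure estimate reported above
b = math.log(rr_per_degree)     # Poisson log-linear coefficient per degree C

# Risks compound multiplicatively: effect of a 5 degree rise in apparent temperature
rr_5deg = math.exp(5 * b)       # identical to 1.43 ** 5
print(round(rr_5deg, 2))        # 5.98
```

The same back-transformation exp(b) is how the per-degree RR and its confidence limits are read off the fitted Poisson coefficient.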
Spatial and temporal patterns of dengue infections in Timor-Leste, 2005-2013.
Wangdi, Kinley; Clements, Archie C A; Du, Tai; Nery, Susana Vaz
2018-01-04
Dengue remains an important public health problem in Timor-Leste, with several major epidemics occurring over the last 10 years. The aim of this study was to identify dengue clusters at high geographical resolution and to determine the association between local environmental characteristics and the distribution and transmission of the disease. Notifications of dengue cases that occurred from January 2005 to December 2013 were obtained from the Ministry of Health, Timor-Leste. The population of each suco (the third-level administrative subdivision) was obtained from the Population and Housing Census 2010. Spatial autocorrelation in dengue incidence was explored using Moran's I statistic, Local Indicators of Spatial Association (LISA), and the Getis-Ord statistics. A multivariate zero-inflated Poisson (ZIP) regression model was developed with a conditional autoregressive (CAR) prior structure, and with posterior parameters estimated using Bayesian Markov chain Monte Carlo (MCMC) simulation with Gibbs sampling. The analysis used data from 3206 cases. Dengue incidence was highly seasonal with a large peak in January. Patients ≥ 14 years were found to be 74% [95% credible interval (CrI): 72-76%] less likely to be infected than those < 14 years, and females were 12% (95% CrI: 4-21%) more likely to suffer from dengue as compared to males. Dengue incidence increased by 0.7% (95% CrI: 0.6-0.8%) for a 1 °C increase in mean temperature, and 47% (95% CrI: 29-59%) for a 1 mm increase in precipitation. There was no significant residual spatial clustering after accounting for climate and demographic variables. Dengue incidence was highly seasonal and spatially clustered, with positive associations with temperature, precipitation and demographic factors. These factors explained the observed spatial heterogeneity of infection.
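Moran's I, the global spatial autocorrelation statistic used here, is I = (n/S0)·zᵀWz / zᵀz, where z is the mean-centred attribute vector, W a spatial weights matrix, and S0 the sum of all weights. A self-contained sketch with a hypothetical four-region contiguity matrix (not the Timor-Leste suco data):

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I: (n/S0) * z'Wz / z'z for mean-centred z."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    s0 = w.sum()
    n = len(x)
    return (n / s0) * (z @ w @ z) / (z @ z)

# Four regions in a line; neighbours share an edge (binary contiguity weights)
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
incidence = [10.0, 12.0, 30.0, 28.0]   # spatially clustered toy values
print(round(morans_i(incidence, w), 3))  # 0.325, i.e. positive clustering
```

Values near +1 indicate clustering of similar rates, near -1 a checkerboard pattern, and near -1/(n-1) no spatial autocorrelation; significance is usually assessed by permutation.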
Buhi, Eric R.; Baldwin, Julie; Chen, Henian; Johnson, Ayesha; Lynn, Vickie; Glueckauf, Robert
2014-01-01
Abstract Introduction: Expanded access to efficacious interventions is needed for women living with human immunodeficiency virus (WLH) in the United States. Availability of “prevention with (human immunodeficiency virus [HIV]) positives” interventions in rural/remote and low HIV prevalence areas remains limited, leaving WLH in these communities few options for receiving effective behavioral interventions such as Healthy Relationships (HR). Offering such programs via videoconferencing groups (VGs) may expand access. This analysis tests the effectiveness of HR-VG (versus wait-list control) for reducing sexual risk behavior among WLH and explores intervention satisfaction. Subjects and Methods: In this randomized controlled trial, unprotected vaginal/anal sex occasions over the prior 3 months reported at the 6-month follow-up were compared across randomization groups through zero-inflated Poisson regression modeling, controlling for unprotected sex at baseline. Seventy-one WLH were randomized and completed the baseline assessment (n=36 intervention and n=35 control); 59 (83% in each group) had follow-up data. Results: Among those who engaged in unprotected sex at 6-month follow-up, intervention participants had approximately seven fewer unprotected occasions than control participants (95% confidence interval 5.43–7.43). Intervention participants reported high levels of satisfaction with HR-VG; 84% reported being “very satisfied” overall. Conclusions: This study found promising evidence for effective dissemination of HIV risk reduction interventions via VGs. Important next steps will be to determine whether VGs are effective with other subpopulations of people living with HIV (i.e., men and non-English speakers) and to assess cost-effectiveness. Possibilities for using VGs to expand access to other psychosocial and behavioral interventions and reduce stigma are discussed. PMID:24237482
Horvitz-Lennon, Marcela; Zhou, Dongli; Normand, Sharon-Lise T.; Alegría, Margarita; Thompson, Wes K.
2013-01-01
Objective Case management–based interventions aimed at improving quality of care have the potential to narrow racial and ethnic disparities among people with chronic illnesses. The aim of this study was to assess the equity effects of assertive community treatment (ACT), an evidence-based case management intervention, among homeless adults with severe mental illness. Methods This study used baseline, three-, and 12-month data for 6,829 black, Latino, and white adults who received ACT services through the ACCESS study (Access to Community Care and Effective Services and Support). Zero-inflated Poisson random regression models were used to estimate the adjusted probability of use of outpatient psychiatric services and, among service users, the intensity of use. Odds ratios and rate ratios (RRs) were computed to assess disparities at baseline and over time. Results No disparities were found in probability of use at baseline or over time. Compared with white users, baseline intensity of use was lower for black users (RR=.89; 95% confidence interval [CI]=.83–.96) and Latino users (RR=.65; CI=.52–.81). Intensity did not change over time for whites, but it did for black and Latino users. Intensity increased for blacks between baseline and three months (RR=1.11, CI=1.06–1.17) and baseline and 12 months (RR=1.17, CI=1.11–1.22). Intensity of use dropped for Latinos between baseline and three months (RR=.83, CI=.70–.98). Conclusions Receipt of ACT was associated with a reduction in service use disparities for blacks but not for Latinos. Findings suggest that ACT’s equity effects differ depending on race-ethnicity. PMID:21632726
Guan, Weihua; Clay, Sandra J; Sloan, Gloria J; Pretlow, Lester G
2018-06-24
Several studies worldwide have demonstrated significant relationships between meteorological parameters and stroke events. However, authors often reported discordant effects of both barometric pressure and air temperature on stroke occurrence. The present study investigated whether there was an association between weather parameters (barometric pressure and temperature) and ischemic stroke hospitalization. The aim of the study was to find out whether daily barometric pressure may be used as a prognostic variable to evaluate the workload change of a neurological intensive care unit. We conducted a retrospective review study in which we collected the independent (barometric pressure and temperature) and dependent variables (stroke hospitalization) every 24 h for the period 10/1/2016-4/30/2017 at Augusta University Medical Center of Augusta, GA. We analyzed the data with a zero-inflated Poisson model to assess the relationship between barometric pressure, temperature, and daily stroke hospitalization. The results showed that there was a significant correlation between daily barometric pressure variation and daily stroke hospitalization, especially in older male patients (≥ 65 years). Stroke events were more likely to occur in the patients with risk factors than in those without risk factors when exposed to barometric pressure and temperature changes. Decreased barometric pressure and increased temperature were associated with increased daily stroke hospitalization. Furthermore, there was a potential delayed effect of increased stroke events after cold temperature exposure. Barometric pressure and temperature changes over the preceding 24 h are associated with daily stroke hospitalization. These findings may enhance our understanding of the relationship between stroke and weather and may be used in the development of public health strategies to minimize the weather-related stroke risk.
Paramedic-Initiated Home Care Referrals and Use of Home Care and Emergency Medical Services.
Verma, Amol A; Klich, John; Thurston, Adam; Scantlebury, Jordan; Kiss, Alex; Seddon, Gayle; Sinha, Samir K
2018-01-01
We examined the association between paramedic-initiated home care referrals and utilization of home care, 9-1-1, and Emergency Department (ED) services. This was a retrospective cohort study of individuals who received a paramedic-initiated home care referral after a 9-1-1 call between January 1, 2011 and December 31, 2012 in Toronto, Ontario, Canada. Home care, 9-1-1, and ED utilization were compared in the 6 months before and after home care referral. Nonparametric longitudinal regression was performed to assess changes in hours of home care service use and zero-inflated Poisson regression was performed to assess changes in the number of 9-1-1 calls and ambulance transports to ED. During the 24-month study period, 2,382 individuals received a paramedic-initiated home care referral. After excluding individuals who died, were hospitalized, or were admitted to a nursing home, the final study cohort was 1,851. The proportion of the study population receiving home care services increased from 18.2% to 42.5% after referral, representing 450 additional people receiving services. In longitudinal regression analysis, there was an increase of 17.4 hours in total services per person in the six months after referral (95% CI: 1.7-33.1, p = 0.03). The mean number of 9-1-1 calls per person was 1.44 (SD 9.58) before home care referral and 1.20 (SD 7.04) after home care referral in the overall study cohort. This represented a 10% reduction in 9-1-1 calls (95% CI: 7-13%, p < 0.001) in Poisson regression analysis. The mean number of ambulance transports to ED per person was 0.91 (SD 8.90) before home care referral and 0.79 (SD 6.27) after home care referral, representing a 7% reduction (95% CI: 3-11%, p < 0.001) in Poisson regression analysis. When only the participants with complete paramedic and home care records were included in the analysis, the reductions in 9-1-1 calls and ambulance transports to ED were attenuated but remained statistically significant. 
Paramedic-initiated home care referrals in Toronto were associated with improved access to and use of home care services and may have been associated with reduced 9-1-1 calls and ambulance transports to ED.
Kim, Dae-Hwan; Ramjan, Lucie M; Mak, Kwok-Kei
2016-01-01
Traffic safety is a significant public health challenge, and vehicle crashes account for the majority of injuries. This study aims to identify whether drivers' characteristics and past traffic violations may predict vehicle crashes in Korea. A total of 500,000 drivers were randomly selected from the 11.6 million driver records of the Ministry of Land, Transport and Maritime Affairs in Korea. Records of traffic crashes were obtained from the archives of the Korea Insurance Development Institute. After matching the past violation history for the period 2004-2005 with the number of crashes in year 2006, a total of 488,139 observations were used for the analysis. A zero-inflated negative binomial model was used to determine the incident risk ratio (IRR) of vehicle crashes by past violations of individual drivers. The included covariates were driver's age, gender, district of residence, vehicle choice, and driving experience. Drivers violating (1) a hit-and-run or drunk driving regulation at least once and (2) a signal, central line, or speed regulation more than once had a higher risk of a vehicle crash, with respective IRRs of 1.06 and 1.15. Furthermore, female gender, a younger age, fewer years of driving experience, and middle-sized vehicles were all significantly associated with a higher likelihood of vehicle crashes. Drivers' demographic characteristics and past traffic violations could predict vehicle crashes in Korea. Greater resources should be assigned to the provision of traffic safety education programs for the high-risk driver groups.
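In count models like this one, a reported IRR is simply the exponential of the corresponding count-part coefficient. A tiny illustration, using hypothetical coefficient values back-derived from the IRRs quoted above (these are not the study's actual estimates):

```python
import numpy as np

# hypothetical log-scale count-model coefficients, chosen so that
# exp(coef) reproduces the IRRs reported in the abstract (1.06 and 1.15)
coefs = {
    "hit_and_run_or_DUI_at_least_once": 0.0583,
    "repeat_signal_centerline_speed": 0.1398,
}

# IRR = exp(beta): multiplicative change in expected crash count
irr = {name: float(np.exp(beta)) for name, beta in coefs.items()}
print(irr)
```

The same transformation applies to confidence limits: exponentiating the endpoints of a coefficient's CI gives the CI for the IRR.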
Ghost instabilities of cosmological models with vector fields nonminimally coupled to the curvature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Himmetoglu, Burak; Peloso, Marco; Contaldi, Carlo R.
2009-12-15
We prove that many cosmological models characterized by vectors nonminimally coupled to the curvature (such as the Turner-Widrow mechanism for the production of magnetic fields during inflation, and models of vector inflation or vector curvaton) contain ghosts. The ghosts are associated with the longitudinal vector polarization present in these models and are found from studying the sign of the eigenvalues of the kinetic matrix for the physical perturbations. Ghosts introduce two main problems: (1) they make the theories ill defined at the quantum level in the high energy/subhorizon regime (and create serious problems for finding a well-behaved UV completion), and (2) they create an instability already at the linearized level. This happens because the eigenvalue corresponding to the ghost crosses zero during the cosmological evolution. At this point the linearized equations for the perturbations become singular (we show that this happens for all the models mentioned above). We explicitly solve the equations in the simplest cases of a vector without a vacuum expectation value in a Friedmann-Robertson-Walker geometry, and of a vector with a vacuum expectation value plus a cosmological constant, and we show that indeed the solutions of the linearized equations diverge when these equations become singular.
Dental erosion prevalence and associated risk indicators among preschool children in Athens, Greece.
Mantonanaki, Magdalini; Koletsi-Kounari, Haroula; Mamai-Homata, Eleni; Papaioannou, William
2013-03-01
The aims of the study were to investigate dental erosion prevalence, distribution and severity in Greek preschool children attending public kindergartens in the prefecture of Attica, Greece and to determine the effect of dental caries, oral hygiene level, socio-economic factors, dental behavior, erosion related medication and chronic illness. A random and stratified sample of 605 Greek preschool children was clinically examined for dental erosion using the Basic Erosive Wear Examination Index (BEWE). Dental caries (dmfs) and the Simplified Debris Index were also recorded. The data concerning possible risk indicators were derived from a questionnaire. A zero-inflated Poisson regression model was fitted to test the predictive effects of the independent variables on dental erosion. The prevalence of dental erosion was 78.8 %, and the mean ± SE of the BEWE index was 3.64 ± 0.15. High monthly family income was positively related to BEWE cumulative scores [RR = 1.204 (1.016-1.427)], while high maternal education level [RR = 0.872 (0.771-0.986)] and poor oral hygiene level [DI-s, RR = 0.584 (0.450-0.756)] showed a negative association. Dental erosion is a common oral disease in Greek preschool children in Attica, related to oral hygiene and socio-economic factors. Programs aimed at erosion prevention should begin at an early age for all children.
THE ROLE OF IMMIGRATION AGE ON ALCOHOL AND DRUG USE AMONG BORDER AND NON-BORDER MEXICAN AMERICANS
Reingle, Jennifer M.; Caetano, Raul; Mills, Britain A.; Vaeth, Patrice A. C.
2014-01-01
Background To determine the age of immigration at which the marked increase in risk for alcohol- and drug use problems in adulthood is observed among Mexican American adults residing in two distinct contexts: the U.S.-Mexico border, and cities not proximal to the border. Methods We used two samples of Mexican American adults; specifically, 1,307 who resided along the U.S.-Mexico border, and 1,288 non-border adults who were interviewed as a part of the 2006 Hispanic Americans Baseline Alcohol Survey study. Survey logistic and Zero-Inflated Poisson methods were used to examine how immigration age during adolescence is related to alcohol and drug use behavior in adulthood. Results We found that participants who immigrate to the U.S. prior to age 12 have qualitatively different alcohol- and drug-related outcomes compared to those who immigrate later in life. Adults who immigrated at younger ages have alcohol and drug use patterns similar to those who were U.S.-born. Similarly, adults who immigrated at younger ages and live along the U.S.-Mexico border are at greater risk for alcohol and drug use than those who live in non-border contexts. Conclusions Immigration from Mexico to the U.S. before age 12 results in alcohol and drug-related behavior that mirrors the behavior of U.S.-born residents. PMID:24846850
An episode of reinflation of the Long Valley Caldera, eastern California: 1989-1991
Langbein, J.; Hill, D.P.; Parker, T.N.; Wilkinson, S.K.
1993-01-01
Following the episodes of inflation of the resurgent dome associated with the May 1980 earthquake sequence (four M 6 earthquakes) and the January 1983 earthquake swarm (two M 5.2 events), 7 years of frequently repeated two-color geodimeter measurements spanning the Long Valley caldera document gradually decreasing extensional strain rates from 5 ppm/yr in mid-1983, when the measurements began, to near zero in mid-1989. Early October 1989 marked a change in activity when measurements of the two-color geodimeter network showed a significant increase in extensional strain rate (9 ppm/yr) across the caldera. The seismic activity began exceeding 10 events of M ≥ 1.2 per week in early December 1989 and rapidly increased to a sustained level of tens of M ≥ 1.2 events per week with bursts having hundreds of events per day. The episode of inflation can be modeled by a single Mogi point source located about 7 km beneath the center of the resurgent dome. -from Authors
NASA Astrophysics Data System (ADS)
Frey, Elaine F.
Even though environmental policy can greatly affect the path of technology diffusion, the economics literature contains limited empirical evidence of this relationship. My research will contribute to the available evidence by providing insight into the technology adoption decisions of electric generating firms. Since policies are often evaluated based on the incentives they provide to promote adoption of new technologies, it is important that policy makers understand the relationship between technological diffusion and regulation structure to make informed decisions. Lessons learned from this study can be used to guide future policies such as those directed to mitigate climate change. I first explore the diffusion of scrubbers, a sulfur dioxide (SO2) abatement technology, in response to federal market-based regulations and state command-and-control regulations. I develop a simple theoretical model to describe the adoption decisions of scrubbers and use a survival model to empirically test the theoretical model. I find that power plants with strict command-and-control regulations have a high probability of installing a scrubber. These findings suggest that although market-based regulations have encouraged diffusion, many scrubbers have been installed because of state regulatory pressure. Although tradable permit systems are thought to give firms more flexibility in choosing abatement technologies, I show that interactions between a permit system and pre-existing command-and-control regulations can limit that flexibility. In a separate analysis, I explore the diffusion of combined cycle (CC) generating units, which are natural gas-fired generating units that are cleaner and more efficient than alternative generating units. I model the decision to consider adoption of a CC generating unit and the extent to which the technology is adopted in response to environmental regulations imposed on new sources of pollutants.
To accomplish this, I use a zero-inflated Poisson model and focus on both the decision to adopt a CC unit at an existing power plant as well as the firm-level decision to adopt a CC unit in either a new or an existing power plant. Evidence from this empirical investigation shows that environmental regulation has a significant effect on both the decision to consider adoption as well as the extent of adoption.
NASA Astrophysics Data System (ADS)
Hamilton, Andrew J. S.
2017-10-01
Numerical evidence is presented that the Poisson-Israel mass inflation instability at the inner horizon of an accreting, rotating black hole is generically followed by Belinskii-Khalatnikov-Lifshitz oscillatory collapse to a spacelike singularity. The computation involves following all 6 degrees of freedom of the gravitational field. To simplify the problem, the computation takes as initial conditions the conformally separable solutions of Andrew J. S. Hamilton and Gavin Polhemus [Interior structure of rotating black holes. I. Concise derivation, Phys. Rev. D 84, 124055 (2011), 10.1103/PhysRevD.84.124055] and Andrew J. S. Hamilton [Interior structure of rotating black holes. II. Uncharged black holes, Phys. Rev. D 84, 124056 (2011), 10.1103/PhysRevD.84.124056] just above the inner horizon of a slowly accreting, rotating black hole and integrates the equations inward along single latitudes.
Density of wild prey modulates lynx kill rates on free-ranging domestic sheep.
Odden, John; Nilsen, Erlend B; Linnell, John D C
2013-01-01
Understanding the factors shaping the dynamics of carnivore-livestock conflicts is vital to facilitate large carnivore conservation in multi-use landscapes. We investigated how the density of their main wild prey, roe deer Capreolus capreolus, modulates individual Eurasian lynx Lynx lynx kill rates on free-ranging domestic sheep Ovis aries across a range of sheep and roe deer densities. Lynx kill rates on free-ranging domestic sheep were collected in south-eastern Norway from 1995 to 2011 along a gradient of different livestock and wild prey densities using VHF and GPS telemetry. We used zero-inflated negative binomial (ZINB) models including lynx sex, sheep density and an index of roe deer density as explanatory variables to model observed kill rates on sheep, and ranked the models based on their AICc values. The model including the effects of lynx sex and sheep density in the zero-inflation model and the effect of lynx sex and roe deer density in the negative binomial part received most support. Irrespective of sheep density and sex, we found the lowest sheep kill rates in areas with high densities of roe deer. As roe deer density decreased, males killed sheep at higher rates, and this pattern held for both high and low sheep densities. Similarly, females killed sheep at higher rates in areas with high densities of sheep and low densities of roe deer. However, when sheep densities were low females rarely killed sheep irrespective of roe deer density. Our quantification of depredation rates can be the first step towards establishing fairer compensation systems based on more accurate and area specific estimation of losses. This study demonstrates how we can use ecological theory to predict where losses of sheep will be greatest, and can be used to identify areas where mitigation measures are most likely to be needed.
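Ranking candidate models by AICc, as done here, applies a small-sample correction to the ordinary AIC: AICc = AIC + 2k(k+1)/(n-k-1), where k is the number of parameters and n the sample size. A minimal sketch with made-up candidate models and log-likelihoods (the model names echo the abstract's structure, but the numbers are illustrative, not the study's fits):

```python
def aicc(log_lik: float, k: int, n: int) -> float:
    """Small-sample corrected AIC: AIC + 2k(k+1)/(n - k - 1)."""
    aic = -2.0 * log_lik + 2.0 * k
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)

# hypothetical ZINB candidates: (zero-inflation part | count part, logLik, k)
n_obs = 120
candidates = [
    ("sex+sheep | sex+roe", -310.2, 7),   # the abstract's best structure
    ("sex only | sex only", -318.9, 5),
    ("full | full",         -309.8, 9),   # better fit, but more parameters
]

# lower AICc is better; the correction penalizes the 9-parameter model
ranked = sorted(candidates, key=lambda m: aicc(m[1], m[2], n_obs))
print(ranked[0][0])
```

Note how the richest model can have the highest log-likelihood yet lose the ranking once the parameter penalty is applied.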
Are star formation rates of galaxies bimodal?
NASA Astrophysics Data System (ADS)
Feldmann, Robert
2017-09-01
Star formation rate (SFR) distributions of galaxies are often assumed to be bimodal with modes corresponding to star-forming and quiescent galaxies, respectively. Both classes of galaxies are typically studied separately, and SFR distributions of star-forming galaxies are commonly modelled as lognormals. Using both observational data and results from numerical simulations, I argue that this division into star-forming and quiescent galaxies is unnecessary from a theoretical point of view and that the SFR distributions of the whole population can be well fitted by zero-inflated negative binomial distributions. This family of distributions has three parameters that determine the average SFR of the galaxies in the sample, the scatter relative to the star-forming sequence and the fraction of galaxies with zero SFRs, respectively. The proposed distributions naturally account for (I) the discrete nature of star formation, (II) the presence of 'dead' galaxies with zero SFRs and (III) asymmetric scatter. Excluding 'dead' galaxies, the distribution of log SFR is unimodal with a peak at the star-forming sequence and an extended tail towards low SFRs. However, uncertainties and biases in the SFR measurements can create the appearance of a bimodal distribution.
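A zero-inflated negative binomial of the kind proposed here can be simulated directly: with some probability emit a structural zero (a 'dead' galaxy), otherwise draw from a negative binomial with a given mean and dispersion. A sketch with hypothetical parameter values (the mean, dispersion, and dead fraction below are illustrative, not fitted to any survey):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_zinb(pi_zero, mean, alpha, size, rng):
    """Zero-inflated negative binomial draws: with probability pi_zero emit a
    structural zero, else draw NB with the given mean and dispersion alpha
    (variance = mean + alpha * mean**2)."""
    r = 1.0 / alpha            # NB shape parameter
    p = r / (r + mean)         # NB success probability
    counts = rng.negative_binomial(r, p, size=size)
    dead = rng.uniform(size=size) < pi_zero
    return np.where(dead, 0, counts)

# hypothetical: 25% 'dead' galaxies, mean SFR-proxy count of 5, alpha = 1
sfr = sample_zinb(pi_zero=0.25, mean=5.0, alpha=1.0, size=200_000, rng=rng)
print(sfr.mean())          # ≈ (1 - 0.25) * 5 = 3.75
print((sfr == 0).mean())   # exceeds 0.25: structural zeros plus NB zeros
```

This makes the paper's three-parameter description concrete: one parameter sets the mean, one the scatter about the star-forming sequence, and one the zero-SFR fraction.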
The limits of hamiltonian structures in three-dimensional elasticity, shells, and rods
NASA Astrophysics Data System (ADS)
Ge, Z.; Kruse, H. P.; Marsden, J. E.
1996-01-01
This paper uses Hamiltonian structures to study the problem of the limit of three-dimensional (3D) elastic models to shell and rod models. In the case of shells, we show that the Hamiltonian structure for a three-dimensional elastic body converges, in a sense made precise, to that for a shell model described by a one-director Cosserat surface as the thickness goes to zero. We study limiting procedures that give rise to unconstrained as well as constrained Cosserat director models. The case of a rod is also considered and similar convergence results are established, with the limiting model being a geometrically exact director rod model (in the framework developed by Antman, Simo, and coworkers). The resulting model may or may not have constraints, depending on the nature of the constitutive relations and their behavior under the limiting procedure. The closeness of Hamiltonian structures is measured by the closeness of Poisson brackets on certain classes of functions, as well as the Hamiltonians. This provides one way of justifying the dynamic one-director model for shells. Another way of stating the convergence result is that there is an almost-Poisson embedding from the phase space of the shell to the phase space of the 3D elastic body, which implies that, in the sense of Hamiltonian structures, the dynamics of the elastic body is close to that of the shell. The constitutive equations of the 3D model and their behavior as the thickness tends to zero dictates whether the limiting 2D model is a constrained or an unconstrained director model. We apply our theory in the specific case of a 3D Saint Venant-Kirchhoff material and derive the corresponding limiting shell and rod theories. The limiting shell model is an interesting Kirchhoff-like shell model in which the stored energy function is explicitly derived in terms of the shell curvature. 
For rods, one gets (with an additional inextensibility constraint) a one-director Kirchhoff elastic rod model, which reduces to the well-known Euler elastica if one adds an additional single constraint that the director lines up with the Frenet frame.
Peebles, P. J. E.
1998-01-01
It is argued that within the standard Big Bang cosmological model the bulk of the mass of the luminous parts of the large galaxies likely had been assembled by redshift z ∼ 10. Galaxy assembly this early would be difficult to fit in the widely discussed adiabatic cold dark matter model for structure formation, but it could agree with an isocurvature version in which the cold dark matter is the remnant of a massive scalar field frozen (or squeezed) from quantum fluctuations during inflation. The squeezed field fluctuations would be Gaussian with zero mean, and the distribution of the field mass therefore would be the square of a random Gaussian process. This offers a possibly interesting new direction for the numerical exploration of models for cosmic structure formation. PMID:9419326
Inflation and dark energy from the Brans-Dicke theory
NASA Astrophysics Data System (ADS)
Artymowski, Michał; Lalak, Zygmunt; Lewicki, Marek
2015-06-01
We consider the Brans-Dicke theory motivated by the f(R) = R + αR^n - βR^(2-n) model to obtain a stable minimum of the Einstein frame scalar potential of the Brans-Dicke field. As a result we have obtained an inflationary scalar potential with a non-zero value of residual vacuum energy, which may be a source of dark energy. In addition we discuss the probability of quantum tunnelling from the minimum of the potential. Our results can easily be made consistent with Planck or BICEP2 data for appropriate choices of the values of n and ω.
Rowe, Christopher; Santos, Glenn-Milo; Vittinghoff, Eric; Wheeler, Eliza; Davidson, Peter; Coffin, Philip O
2015-08-01
To describe characteristics of participants and overdose reversals associated with a community-based naloxone distribution program and identify predictors of obtaining naloxone refills and using naloxone for overdose reversal. Bivariate statistical tests were used to compare characteristics of participants who obtained refills and reported overdose reversals versus those who did not. We fitted multiple logistic regression models to identify predictors of refills and reversals; zero-inflated multiple Poisson regression models were used to identify predictors of number of refills and reversals. San Francisco, California, USA. Naloxone program participants registered and reversals reported from 2010 to 2013. Baseline characteristics of participants and reported characteristics of reversals. A total of 2500 participants were registered and 702 reversals were reported from 2010 to 2013. Participants who had witnessed an overdose [adjusted odds ratio (AOR) = 2.02, 95% confidence interval (CI) = 1.53-2.66; AOR = 2.73, 95% CI = 1.73-4.30] or used heroin (AOR = 1.85, 95% CI = 1.44-2.37; AOR = 2.19, 95% CI = 1.54-3.13) or methamphetamine (AOR = 1.71, 95% CI = 1.37-2.15; AOR = 1.61, 95% CI = 1.18-2.19) had higher odds of obtaining a refill and reporting a reversal, respectively. African American (AOR = 0.63, 95% CI = 0.45-0.88) and Latino (AOR = 0.65, 95% CI = 0.43-1.00) participants had lower odds of obtaining a naloxone refill, whereas Latino participants who obtained at least one refill reported a higher number of refills [incidence rate ratio (IRR) = 1.33 (1.05-1.69)]. Community naloxone distribution programs are capable of reaching sizeable populations of high-risk individuals and facilitating large numbers of overdose reversals. Community members most likely to engage with a naloxone program and use naloxone to reverse an overdose are active drug users. © 2015 Society for the Study of Addiction.
Sousa, Renata M; Ferri, Cleusa P; Acosta, Daisy; Albanese, Emiliano; Guerra, Mariella; Huang, Yueqin; Jacob, KS; Jotheeswaran, AT; Rodriguez, Juan J Llibre; Pichardo, Guillermina Rodriguez; Rodriguez, Marina Calvo; Salas, Aquiles; Sosa, Ana Luisa; Williams, Joseph; Zuniga, Tirso; Prince, Martin
2009-01-01
Summary Background Disability in elderly people in countries with low and middle incomes is little studied; according to Global Burden of Disease estimates, visual impairment is the leading contributor to years lived with disability in this population. We aimed to assess the contribution of physical, mental, and cognitive chronic diseases to disability, and the extent to which sociodemographic and health characteristics account for geographical variation in disability. Methods We undertook cross-sectional surveys of residents aged older than 65 years (n=15 022) in 11 sites in seven countries with low and middle incomes (China, India, Cuba, Dominican Republic, Venezuela, Mexico, and Peru). Disability was assessed with the 12-item WHO disability assessment schedule 2.0. Dementia, depression, hypertension, and chronic obstructive pulmonary disease were ascertained by clinical assessment; diabetes, stroke, and heart disease by self-reported diagnosis; and sensory, gastrointestinal, skin, limb, and arthritic disorders by self-reported impairment. Independent contributions to disability scores were assessed by zero-inflated negative binomial regression and Poisson regression to generate population-attributable prevalence fractions (PAPF). Findings In regions other than rural India and Venezuela, dementia made the largest contribution to disability (median PAPF 25·1% [IQR 19·2–43·6]). Other substantial contributors were stroke (11·4% [1·8–21·4]), limb impairment (10·5% [5·7–33·8]), arthritis (9·9% [3·2–34·8]), depression (8·3% [0·5–23·0]), eyesight problems (6·8% [1·7–17·6]), and gastrointestinal impairments (6·5% [0·3–23·1]). Associations with chronic diseases accounted for around two-thirds of prevalent disability. When zero inflation was taken into account, between-site differences in disability scores were largely attributable to compositional differences in health and sociodemographic characteristics. 
Interpretation On the basis of empirical research, dementia, not blindness, is overwhelmingly the most important independent contributor to disability for elderly people in countries with low and middle incomes. Chronic diseases of the brain and mind deserve increased prioritisation. Besides disability, they lead to dependency and present stressful, complex, long-term challenges to carers. Societal costs are enormous. Funding Wellcome Trust; WHO; US Alzheimer's Association; Fondo Nacional de Ciencia Y Tecnologia, Consejo de Desarrollo Cientifico Y Humanistico, Universidad Central de Venezuela. PMID:19944863
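The population-attributable prevalence fractions reported above can be illustrated with Levin's classic attributable-fraction formula. This is a generic sketch only; the study derived its PAPFs from the fitted regression models, not necessarily from this closed form:

```python
def papf(exposure_prevalence, rate_ratio):
    """Levin's population-attributable fraction: the share of prevalent
    disability that would be removed if an exposure (e.g. dementia)
    were absent, given its prevalence and adjusted rate ratio."""
    excess = exposure_prevalence * (rate_ratio - 1.0)
    return excess / (1.0 + excess)

# e.g. a condition affecting 20% of the population with a rate ratio of 3
share = papf(0.20, 3.0)
```

A condition can thus dominate the attributable burden either through high prevalence or through a strong rate ratio, which is why dementia outranks the more common but weaker sensory impairments here.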
Poisson-Like Spiking in Circuits with Probabilistic Synapses
Moreno-Bote, Rubén
2014-01-01
Neuronal activity in cortex is variable both spontaneously and during stimulation, and it has the remarkable property that it is Poisson-like over broad ranges of firing rates covering from virtually zero to hundreds of spikes per second. The mechanisms underlying cortical-like spiking variability over such a broad continuum of rates are currently unknown. We show that neuronal networks endowed with probabilistic synaptic transmission, a well-documented source of variability in cortex, robustly generate Poisson-like variability over several orders of magnitude in their firing rate without fine-tuning of the network parameters. Other sources of variability, such as random synaptic delays or spike generation jittering, do not lead to Poisson-like variability at high rates because they cannot be sufficiently amplified by recurrent neuronal networks. We also show that probabilistic synapses predict Fano factor constancy of synaptic conductances. Our results suggest that synaptic noise is a robust and sufficient mechanism for the type of variability found in cortex. PMID:25032705
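The Fano-factor behaviour described above can be checked with a toy simulation: binomial thinning of Poisson input (a crude stand-in for probabilistic synaptic release; the release probability of 0.25 and the rates below are arbitrary assumptions, not values from the paper) preserves a Fano factor of 1 across firing rates:

```python
import numpy as np

rng = np.random.default_rng(3)
fano = {}
for rate in (2.0, 20.0, 200.0):             # input spikes per counting window
    pre = rng.poisson(rate, size=50_000)    # presynaptic Poisson spike counts
    post = rng.binomial(pre, 0.25)          # each spike transmitted w.p. 0.25
    # A binomially thinned Poisson process is again Poisson, so Fano ≈ 1
    fano[rate] = post.var() / post.mean()
```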
Inverse Jacobi multiplier as a link between conservative systems and Poisson structures
NASA Astrophysics Data System (ADS)
García, Isaac A.; Hernández-Bermejo, Benito
2017-08-01
Some aspects of the relationship between conservativeness of a dynamical system (namely the preservation of a finite measure) and the existence of a Poisson structure for that system are analyzed. From the local point of view, due to the flow-box theorem we restrict ourselves to neighborhoods of singularities. In this sense, we characterize Poisson structures around the typical zero-Hopf singularity in dimension 3 under the assumption of having a local analytic first integral with non-vanishing first jet by connecting with the classical Poincaré center problem. From the global point of view, we connect the property of being strictly conservative (the invariant measure must be positive) with the existence of a Poisson structure depending on the phase space dimension. Finally, weak conservativeness in dimension two is introduced by the extension of inverse Jacobi multipliers as weak solutions of its defining partial differential equation and some of its applications are developed. Examples including Lotka-Volterra systems, quadratic isochronous centers, and non-smooth oscillators are provided.
NASA Technical Reports Server (NTRS)
Kolb, Edward W.
1991-01-01
In the original proposal, inflation occurred in the process of a strongly first-order phase transition. This model was soon demonstrated to be fatally flawed. Subsequent models for inflation involved phase transitions that were second-order, or perhaps weakly first-order; some even involved no phase transition at all. Recently the possibility of inflation during a strongly first-order phase transition has been revived. In this talk I will discuss some models for first-order inflation, and emphasize unique signatures that result if inflation is realized in a first-order transition. Before discussing first-order inflation, I will briefly review some of the history of inflation to demonstrate how first-order inflation differs from other models.
Species abundance in a forest community in South China: A case of poisson lognormal distribution
Yin, Z.-Y.; Ren, H.; Zhang, Q.-M.; Peng, S.-L.; Guo, Q.-F.; Zhou, G.-Y.
2005-01-01
Case studies on Poisson lognormal distribution of species abundance have been rare, especially in forest communities. We propose a numerical method to fit the Poisson lognormal to the species abundance data at an evergreen mixed forest in the Dinghushan Biosphere Reserve, South China. Plants in the tree, shrub and herb layers in 25 quadrats of 20 m × 20 m, 5 m × 5 m, and 1 m × 1 m were surveyed. Results indicated that: (i) for each layer, the observed species abundance with a similarly small median, mode, and a variance larger than the mean was reverse J-shaped and followed well the zero-truncated Poisson lognormal; (ii) the coefficient of variation, skewness and kurtosis of abundance, and the two Poisson lognormal parameters (μ and σ) for the shrub layer were closer to those for the herb layer than to those for the tree layer; and (iii) from the tree to the shrub to the herb layer, σ and the coefficient of variation decreased, whereas diversity increased. We suggest that: (i) the species abundance distributions in the three layers reflect the overall community characteristics; (ii) the Poisson lognormal can describe the species abundance distribution in diverse communities with a few abundant species but many rare species; and (iii) 1/σ should be an alternative measure of diversity.
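A zero-truncated Poisson lognormal pmf can be evaluated numerically by mixing a Poisson over a lognormal rate and conditioning on a positive count. A minimal sketch (the names `mu` and `sigma` follow the usual lognormal convention and are not tied to the paper's notation, and the paper's actual fitting routine may differ):

```python
import numpy as np
from scipy import integrate, stats

def pln_pmf(k, mu, sigma):
    # P(K = k) = ∫ Poisson(k; lam) * LogNormal(lam; mu, sigma) d lam
    integrand = lambda lam: stats.poisson.pmf(k, lam) * \
        stats.lognorm.pdf(lam, s=sigma, scale=np.exp(mu))
    value, _ = integrate.quad(integrand, 0.0, np.inf)
    return value

def ztpln_pmf(k, mu, sigma):
    # Zero-truncated version: condition on at least one individual observed
    return pln_pmf(k, mu, sigma) / (1.0 - pln_pmf(0, mu, sigma))
```

Maximizing the summed log of `ztpln_pmf` over the observed abundances would then give the fitted (μ, σ) for each layer.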
NASA Astrophysics Data System (ADS)
Fairbairn, Malcolm; Markkanen, Tommi; Rodriguez Roman, David
2018-04-01
We consider the effect of the Gibbons-Hawking radiation on the inflaton in the situation where it is coupled to a large number of spectator fields. We argue that this will lead to two important effects - a thermal contribution to the potential and a gradual change in parameters in the Lagrangian which results from thermodynamic and energy conservation arguments. We present a scenario of hilltop inflation where the field starts trapped at the origin before slowly experiencing a phase transition during which the field extremely slowly moves towards its zero temperature expectation value. We show that it is possible to obtain enough e-folds of expansion as well as the correct spectrum of perturbations without hugely fine-tuned parameters in the potential (albeit with many spectator fields). We also comment on how initial conditions for inflation can arise naturally in this situation.
Earthquake number forecasts testing
NASA Astrophysics Data System (ADS)
Kagan, Yan Y.
2017-10-01
We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially required to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for the catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study upper statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for the smaller magnitude threshold and increase with even greater intensity for small temporal subdivision of catalogues. 
The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness and kurtosis both tend to zero for large earthquake rates: for the Gaussian law, these values are identically zero. A calculation of the NBD skewness and kurtosis levels based on the values of the first two statistical moments of the distribution, shows rapid increase of these upper moments levels. However, the observed catalogue values of skewness and kurtosis are rising even faster. This means that for small time intervals, the earthquake number distribution is even more heavy-tailed than the NBD predicts. Therefore for small time intervals, we propose using empirical number distributions appropriately smoothed for testing forecasted earthquake numbers.
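The moment comparison described here is easy to reproduce: a Poisson law has skewness λ^(-1/2) and excess kurtosis 1/λ, both vanishing at large rates, while a negative binomial matched to the same mean keeps heavy tails. A sketch with illustrative parameter values (the rate and dispersion below are arbitrary, not catalogue estimates):

```python
from scipy import stats

lam = 100.0                                   # large event rate
p_skew, p_kurt = stats.poisson.stats(lam, moments='sk')   # 0.1 and 0.01

# NBD with the same mean: mean = n*(1-p)/p; pick dispersion n = 5 (illustrative)
n = 5.0
p = n / (n + lam)
nb_skew, nb_kurt = stats.nbinom.stats(n, p, moments='sk')
# Poisson moments tend to zero; the NBD remains markedly skewed and heavy-tailed
```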
Geist, Eric L.
2014-01-01
Temporal clustering of tsunami sources is examined in terms of a branching process model. It previously was observed that there are more short interevent times between consecutive tsunami sources than expected from a stationary Poisson process. The epidemic‐type aftershock sequence (ETAS) branching process model is fitted to tsunami catalog events, using the earthquake magnitude of the causative event from the Centennial and Global Centroid Moment Tensor (CMT) catalogs and tsunami sizes above a completeness level as a mark to indicate that a tsunami was generated. The ETAS parameters are estimated using the maximum‐likelihood method. The interevent distribution associated with the ETAS model provides a better fit to the data than the Poisson model or other temporal clustering models. When tsunamigenic conditions (magnitude threshold, submarine location, dip‐slip mechanism) are applied to the Global CMT catalog, ETAS parameters are obtained that are consistent with those estimated from the tsunami catalog. In particular, the dip‐slip condition appears to result in a near zero magnitude effect for triggered tsunami sources. The overall consistency between results from the tsunami catalog and that from the earthquake catalog under tsunamigenic conditions indicates that ETAS models based on seismicity can provide the structure for understanding patterns of tsunami source occurrence. The fractional rate of triggered tsunami sources on a global basis is approximately 14%.
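The ETAS model referred to above is defined through its conditional intensity, the instantaneous event rate given the history of past events. A minimal temporal version (the parameter values used in any call are illustrative assumptions, not the paper's maximum-likelihood estimates):

```python
import numpy as np

def etas_intensity(t, event_times, magnitudes, mu, K, alpha, c, p, M0):
    """Temporal ETAS conditional intensity:
    lambda(t) = mu + sum over past events i of
                K * exp(alpha * (m_i - M0)) * (t - t_i + c)**(-p),
    i.e. a background rate plus Omori-law aftershock triggering scaled
    by the magnitude of each past event above the threshold M0."""
    past = event_times < t
    dt = t - event_times[past]
    triggered = K * np.exp(alpha * (magnitudes[past] - M0)) * (dt + c) ** (-p)
    return mu + triggered.sum()
```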
Dark zone in the centre of the Arago-Poisson diffraction spot of a helical laser beam
NASA Astrophysics Data System (ADS)
Emile, O.; Voisin, A.; Niemiec, R.; Viaris de Lesegno, B.; Pruvost, L.; Ropars, G.; Emile, J.; Brousseau, C.
2013-03-01
We report on the diffraction of non-zero Laguerre Gaussian laser beams by an opaque disk. We observe a tiny circular dark zone at the centre of the usual Arago-Poisson diffraction bright spot. For such non-diffracting dark hollow beams, we have measured diameters as small as 20 μm on distances of the order of ten metres, without focalization. Diameters depend on the diffracting object size and on the topological charge of the input Laguerre Gaussian beam. These results are in good agreement with theoretical considerations. Potential applications are then discussed.
DQE as detection probability of the radiation detectors
NASA Astrophysics Data System (ADS)
Zanella, Giovanni
2008-02-01
In this paper it is shown that the detective quantum efficiency (DQE), as commonly defined for imaging detectors, can be extended to all radiation detectors with the meaning of detection probability, if Poisson statistics applies. This unified approach is possible in the time domain at zero spatial frequency.
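The DQE-as-detection-probability reading can be seen in a Monte Carlo sketch: binomial selection of Poisson-distributed quanta with detection probability ε gives DQE = (SNR_out/SNR_in)² ≈ ε (the input mean of 1000 quanta and ε = 0.3 below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n_mean, eps = 1000.0, 0.3                 # incident quanta per exposure; detection prob.
incident = rng.poisson(n_mean, size=100_000)
detected = rng.binomial(incident, eps)    # each quantum detected independently w.p. eps

snr_in = incident.mean() / incident.std()
snr_out = detected.mean() / detected.std()
dqe = (snr_out / snr_in) ** 2             # ≈ eps: DQE equals the detection probability
```

The identity holds because a binomially selected Poisson stream is again Poisson with mean εN, so both SNRs are square roots of their means.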
NASA Astrophysics Data System (ADS)
Melia, F.; López-Corredoira, M.
2018-03-01
Aim. The lack of large-angle correlations in the fluctuations of the cosmic microwave background (CMB) conflicts with predictions of slow-roll inflation. But while probabilities (≲0.24%) for the missing correlations disfavour the conventional picture at ≳3σ, factors not associated with the model itself may be contributing to the tension. Here we aim to show that the absence of large-angle correlations is best explained with the introduction of a non-zero minimum wave number kmin for the fluctuation power spectrum P(k). Methods: We assumed that quantum fluctuations were generated in the early Universe with a well-defined power spectrum P(k), although with a cut-off kmin ≠ 0. We then re-calculated the angular correlation function of the CMB and compared it with Planck observations. Results: The Planck 2013 data rule out a zero kmin at a confidence level exceeding 8σ. Whereas purely slow-roll inflation would have stretched all fluctuations beyond the horizon, producing a P(k) with kmin = 0 - and therefore strong correlations at all angles - a kmin ≠ 0 would signal the presence of a maximum wavelength at the time (tdec) of decoupling. This argues against the basic inflationary paradigm, and perhaps even suggests non-inflationary alternatives, for the origin and growth of perturbations in the early Universe. In at least one competing cosmology, the Rh = ct universe, the inferred kmin corresponds to the gravitational radius at tdec.
First-order inflation [in cosmology]
NASA Technical Reports Server (NTRS)
Kolb, Edward W.
1991-01-01
In the original proposal, inflation occurred in the process of a strongly first-order phase transition. This model was soon demonstrated to be fatally flawed. Subsequent models for inflation involved phase transitions that were second-order, or perhaps weakly first-order; some even involved no phase transition at all. Recently the possibility of inflation during a strongly first-order phase transition has been revived. In this paper, some models for first-order inflation are discussed, and unique signatures that result if inflation is realized in a first-order transition are emphasized. Some of the history of inflation is reviewed to demonstrate how first-order inflation differs from other models.
Cosmological models constructed by van der Waals fluid approximation and volumetric expansion
NASA Astrophysics Data System (ADS)
Samanta, G. C.; Myrzakulov, R.
The universe is modeled with a van der Waals fluid approximation, where the van der Waals equation of state contains a single parameter ωv. Analytical solutions to Einstein's field equations are obtained by assuming that the mean scale factor of the metric follows volumetric exponential and power-law expansions. The model describes a rapid expansion in which the acceleration grows exponentially and the van der Waals fluid drives an inflationary phase in the initial epoch of the universe. The model also shows that, as time passes, the acceleration remains positive but decreases to zero, so the van der Waals fluid approximation behaves like the present accelerated phase of the universe. Finally, it is observed that the model contains a type-III future singularity for the volumetric power-law expansion.
2009-02-20
arrears, and foreign currency for essential imports, particularly fuel, is in extremely short supply. The IMF suggests that the inflation rate will not... devalue the official exchange rate. Instead, in June 2006, Gono devalued the country's currency, the Zimbabwe dollar, removing three zeros in an effort to... The IMF and the World Bank
Understanding the Determinants of Debt Burden among College Graduates
ERIC Educational Resources Information Center
Chen, Rong; Wiederspan, Mark
2014-01-01
This article examines debt burden among college graduates and contributes to previous research by incorporating institutional and state characteristics. Utilizing a combination of national datasets and zero-one inflated beta regression, we find several major themes. First, family income and college experiences are strongly associated with the…
Is there scale-dependent bias in single-field inflation?
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Putter, Roland; Doré, Olivier; Green, Daniel, E-mail: rdputter@caltech.edu, E-mail: Olivier.P.Dore@jpl.nasa.gov, E-mail: drgreen@cita.utoronto.ca
2015-10-01
Scale-dependent halo bias due to local primordial non-Gaussianity provides a strong test of single-field inflation. While it is universally understood that single-field inflation predicts negligible scale-dependent bias compared to current observational uncertainties, there is still disagreement on the exact level of scale-dependent bias at a level that could strongly impact inferences made from future surveys. In this paper, we clarify this confusion and derive in various ways that there is exactly zero scale-dependent bias in single-field inflation. Much of the current confusion follows from the fact that single-field inflation does predict a mode coupling of matter perturbations at the level of f_NL^local ≈ −5/3, which naively would lead to scale-dependent bias. However, we show explicitly that this mode coupling cancels out when perturbations are evaluated at a fixed physical scale rather than fixed coordinate scale. Furthermore, we show how the absence of scale-dependent bias can be derived easily in any gauge. This result can then be incorporated into a complete description of the observed galaxy clustering, including the previously studied general relativistic terms, which are important at the same level as scale-dependent bias of order f_NL^local ∼ 1. This description will allow us to draw unbiased conclusions about inflation from future galaxy clustering data.
Access to Transportation and Health Care Visits for Medicaid Enrollees With Diabetes.
Thomas, Leela V; Wedel, Kenneth R; Christopher, Jan E
2018-03-01
Diabetes is a chronic condition that requires frequent health care visits for its management. Individuals without nonemergency medical transportation often miss appointments and do not receive optimal care. This study aims to evaluate the association between Medicaid-provided nonemergency medical transportation and diabetes care visits. A retrospective analysis was conducted of demographic and claims data obtained from the Oklahoma Medicaid program. Participants consisted of Medicaid enrollees with diabetes who made at least 1 visit for diabetes care in a year. The sample was predominantly female and white, with an average age of 46.38 years. Two zero-truncated Poisson regression models were estimated to assess the independent effect of transportation use on number of diabetes care visits. Use of nonemergency medical transportation is a significant predictor of diabetes care visits. Zero-truncated Poisson regression coefficients showed a positive association between the use of transportation and number of visits (0.6563, P < .001). Age, gender, race/ethnicity, area of residence, and presence of additional chronic conditions had independent associations with number of visits. Older enrollees were likely to make more visits than younger enrollees with diabetes (0.02382); controlling for all other factors in the model, rural residents made more visits than urban; women made fewer visits than men (-0.09312; P < .001); and minorities made fewer visits than whites, with pronounced differences for Hispanics and Asians compared to whites. Findings underscore the importance of ensuring transportation to Medicaid populations with diabetes, particularly in the rural areas where the prevalence of diabetes and complications are higher and the availability of medical resources lower than in the urban areas. © 2017 National Rural Health Association.
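Zero-truncated Poisson models like the ones estimated here condition the Poisson likelihood on a count of at least one (every sampled enrollee made at least one visit). An intercept-only maximum-likelihood sketch; the study's regressions additionally link the rate to covariates such as transportation use, and the rate of 2.0 below is a simulated value, not an estimate from the data:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

def ztp_negloglik(lam, counts):
    # log P(K = k | K >= 1) = k*log(lam) - lam - log(k!) - log(1 - exp(-lam))
    return -np.sum(counts * np.log(lam) - lam
                   - gammaln(counts + 1) - np.log1p(-np.exp(-lam)))

rng = np.random.default_rng(1)
draws = rng.poisson(2.0, size=20_000)
counts = draws[draws > 0]                   # only units with >= 1 visit are observed
fit = minimize_scalar(ztp_negloglik, bounds=(1e-6, 20.0),
                      args=(counts,), method='bounded')
lam_hat = fit.x                             # recovers the underlying Poisson rate
```

Note that a naive (untruncated) Poisson fit to `counts` would overestimate the rate, since the missing zeros are ignored; the `log1p` correction term is what removes that bias.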
Toward inflation models compatible with the no-boundary proposal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, Dong-il; Yeom, Dong-han, E-mail: dongil.j.hwang@gmail.com, E-mail: innocent.yeom@gmail.com
2014-06-01
In this paper, we investigate various inflation models in the context of the no-boundary proposal. We propose that a good inflation model should satisfy three conditions: observational constraints, plausible initial conditions, and naturalness of the model. For various inflation models, we assign the probability to each initial condition using the no-boundary proposal and define a quantitative standard, typicality, to check whether the model satisfies the observational constraints with probable initial conditions. There are three possible ways to satisfy the typicality criterion: there was pre-inflation near the high energy scale, the potential is finely tuned or the inflationary field space is unbounded, or there is a sufficient number of fields that contribute to inflation. The no-boundary proposal rejects some naive inflation models, explains some traditional doubts on inflation, and, possibly, can have observational consequences.
Can a supersonically expanding Bose-Einstein condensate be used to study cosmological inflation?
NASA Astrophysics Data System (ADS)
Banik, Swarnav; Eckel, Stephen; Kumar, Avinash; Jacobson, Ted; Spielman, Ian; Campbell, Gretchen
2017-04-01
The massive scale of the universe makes the experimental study of cosmological inflation difficult. This has led to an interest in developing analogous systems using table top experiments. Here, we present the basic features of an expanding universe by drawing parallels with an expanding toroidal Bose Einstein Condensate (BEC) of 23Na atoms. The toroidal BEC serves as the background vacuum and phonons are the analogue to photons in the expanding universe. We study the dynamics of phonons in both non-expanding and expanding condensates and measure dissipation using the structure factor. We demonstrate red shifting of phonons and quasi-particle production similar to pre-heating after the inflation of universe. At the end of expansion, we also observe spontaneous non-zero winding numbers in the ring. Using Monte-Carlo simulations, we predict the widths of the resulting winding number distribution, which agree well with our experimental findings.
Microwave background anisotropies in quasiopen inflation
NASA Astrophysics Data System (ADS)
García-Bellido, Juan; Garriga, Jaume; Montes, Xavier
1999-10-01
Quasiopenness seems to be generic to multifield models of single-bubble open inflation. Instead of producing infinite open universes, these models actually produce an ensemble of very large but finite inflating islands. In this paper we study the possible constraints from CMB anisotropies on existing models of open inflation. The effect of supercurvature anisotropies combined with the quasiopenness of the inflating regions makes some models incompatible with observations, and severely reduces the parameter space of others. Supernatural open inflation and the uncoupled two-field model seem to be ruled out due to these constraints for values of Ω0 ≲ 0.98. Others, such as the open hybrid inflation model with suitable parameters for the slow-roll potential, can be made compatible with observations.
NASA Astrophysics Data System (ADS)
Sofyan, Hizir; Maulia, Eva; Miftahuddin
2017-11-01
A country has several important parameters for achieving economic prosperity, such as tax revenue and the inflation rate. One of the largest revenues in the State Budget of Indonesia comes from the tax sector. Meanwhile, the rate of inflation occurring in a country can be used as an indicator to measure the economic problems faced by the country. Given the importance of tax revenue and inflation rate control in achieving economic prosperity, it is necessary to analyze the structure of the relationship between tax revenue and the inflation rate. This study aims to produce the best VECM (Vector Error Correction Model) with optimal lag using various alpha levels, and to perform structural analysis using the Impulse Response Function (IRF) of the VECM models to examine the relationship between tax revenue and inflation in Banda Aceh. The results showed that the best model for the tax revenue and inflation rate data in Banda Aceh City using alpha 0.01 is VECM with optimal lag 2, while the best model using alpha 0.05 and 0.1 is VECM with optimal lag 3. However, the VECM model with alpha 0.01 yielded four significant models: the income tax model and the models for the overall, health, and education inflation rates in Banda Aceh, whereas the VECM model with alpha 0.05 and 0.1 yielded one significant model, the income tax model. Based on these VECM models, two structural IRF analyses were then formed to examine the relationship between tax revenue and inflation in Banda Aceh: the IRF with VECM(2) and the IRF with VECM(3).
Lipscomb, Hester J; Schoenfisch, Ashley; Cameron, Wilfrid
2013-07-01
We evaluated work-related injuries involving a hand or fingers and associated costs among a cohort of 24,830 carpenters between 1989 and 2008. Injury rates and rate ratios were calculated by using Poisson regression to explore higher risk on the basis of age, sex, time in the union, predominant work, and calendar time. Negative binomial regression was used to model dollars paid per claim after adjustment for inflation and discounting. Hand injuries accounted for 21.1% of reported injuries and 9.5% of paid lost time injuries. Older carpenters had proportionately more amputations, fractures, and multiple injuries, but their rates of these more severe injuries were not higher. Costs exceeded $21 million, a cost burden of $0.11 per hour worked. Older carpenters' higher proportion of serious injuries in the absence of higher rates likely reflects age-related reporting differences.
Chen, Da; Zheng, Xiaoyu
2018-06-14
Nature has evolved with a recurring strategy to achieve unusual mechanical properties by coupling variable elastic moduli, from a few GPa to below a kPa, within a single tissue. The ability to produce multi-material, three-dimensional (3D) micro-architectures with high fidelity incorporating dissimilar components has been a major challenge in man-made materials. Here we show multi-modulus metamaterials whose architectural element is comprised of encoded elasticity ranging from rigid to soft. We found that, in contrast to ordinary architected materials whose negative Poisson's ratio is dictated by their geometry, this type of metamaterial is capable of displaying Poisson's ratios from extreme negative to zero, independent of its 3D micro-architecture. The resulting low-density metamaterials are capable of achieving functionally graded, distributed strain amplification within uniform micro-architectures. Simultaneous tuning of Poisson's ratio and moduli within the 3D multi-materials could open up a broad array of material-by-design applications, ranging from flexible armor and artificial muscles to actuators and bio-mimetic materials.
Adiabatic elimination for systems with inertia driven by compound Poisson colored noise.
Li, Tiejun; Min, Bin; Wang, Zhiming
2014-02-01
We consider the dynamics of systems driven by compound Poisson colored noise in the presence of inertia. We study the limit when the frictional relaxation time and the noise autocorrelation time both tend to zero. We show that the Itô and Marcus stochastic calculuses naturally arise depending on these two time scales, and an extra intermediate type occurs when the two time scales are comparable. This leads to three different limiting regimes which are supported by numerical simulations. Furthermore, we establish that when the resulting compound Poisson process tends to the Wiener process in the frequent jump limit the Itô and Marcus calculuses, respectively, tend to the classical Itô and Stratonovich calculuses for Gaussian white noise, and the crossover type calculus tends to a crossover between the Itô and Stratonovich calculuses. Our results would be very helpful for understanding relevant experiments when jump type noise is involved.
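A compound Poisson driving process of the kind discussed above is a Poisson number of jumps with i.i.d. amplitudes. A simulation sketch of its value at time T (the rate, horizon, and unit-variance Gaussian jump amplitudes are illustrative choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(2)
rate, T, n_paths = 5.0, 10.0, 20_000

n_jumps = rng.poisson(rate * T, size=n_paths)   # jump count on [0, T] per path
# The sum of n i.i.d. N(0,1) jump amplitudes is N(0, n), so sample totals directly
x_T = rng.normal(0.0, np.sqrt(n_jumps))

# Theory: E[X(T)] = rate*T*E[jump] = 0 and Var[X(T)] = rate*T*E[jump^2] = 50
```

In the frequent-jump limit mentioned in the abstract (rate → ∞ with jump variance scaled as 1/rate), such totals converge to those of a Wiener process by the central limit theorem.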
Biomechanical remodeling of obstructed guinea pig jejunum
Zhao, Jingbo; Liao, Donghua; Yang, Jian; Gregersen, Hans
2010-01-01
Data on morphological and biomechanical remodeling are needed to understand the mechanisms behind intestinal obstruction. The effect of partial obstruction on mechanical properties with reference to the zero-stress state and on the histomorphological properties of the guinea pig small intestine was determined in this study. Partial obstruction and sham operation were surgically created in the mid-jejunum of guinea pigs. The animals survived 2, 4, 7, and 14 days, respectively. Age-matched guinea pigs that were not operated on served as normal controls. The segment proximal to the obstruction site was used for histological analysis, no-load state and zero-stress state data, and a distension test. The segment for distension was immersed in an organ bath and inflated to 10 cmH2O. The outer diameter change during the inflation was monitored using a microscope with a CCD camera. Circumferential stresses and strains were computed from the diameter, pressure, and zero-stress state data. The opening angle and absolute value of residual strain decreased (P<0.01 and P<0.001), whereas the wall thickness, wall cross-sectional area, and wall stiffness increased after 7 days of obstruction (P<0.05, P<0.01). Histologically, the muscle and submucosa layers, especially the circumferential muscle layer, increased in thickness after obstruction. The opening angle and residual strain mainly depended on the thickness of the muscle layer, whereas the wall stiffness mainly depended on the thickness of the submucosa layer. In conclusion, the histomorphological and biomechanical properties of the small intestine (referenced for the first time to the zero-stress state) remodel proximal to the obstruction site in a time-dependent manner. PMID:20189575
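The stress and strain computations referred to above can be sketched in simplified form: a thin-wall (Laplace) estimate of circumferential stress from pressure, radius, and wall thickness, and a Green strain referenced to the zero-stress circumference. This is a minimal illustration only; the study's full morphometric analysis accounts for the opening-angle geometry of the zero-stress state:

```python
def circumferential_stress_kpa(pressure_cmh2o, inner_radius_mm, wall_thickness_mm):
    """Thin-wall (Laplace) estimate of circumferential stress, sigma = P*r/h.
    Conversion: 1 cmH2O ≈ 0.0980665 kPa."""
    p_kpa = pressure_cmh2o * 0.0980665
    return p_kpa * inner_radius_mm / wall_thickness_mm

def green_strain(circumference, zero_stress_circumference):
    """Green strain referenced to the zero-stress state:
    E = (L^2 - L0^2) / (2 * L0^2)."""
    ratio = circumference / zero_stress_circumference
    return (ratio ** 2 - 1.0) / 2.0
```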
Vergne, Timothée; Calavas, Didier; Cazeau, Géraldine; Durand, Benoît; Dufour, Barbara; Grosbois, Vladimir
2012-06-01
Capture-recapture (CR) methods are used to study populations that are monitored with imperfect observation processes. They have recently been applied to the monitoring of animal diseases to evaluate the number of infected units that remain undetected by the surveillance system. This paper proposes three bayesian models to estimate the total number of scrapie-infected holdings in France from CR count data obtained from the French classical scrapie surveillance programme. We fitted two zero-truncated Poisson (ZTP) models (with and without holding size as a covariate) and a zero-truncated negative binomial (ZTNB) model to the 2006 national surveillance count dataset. We detected a large amount of heterogeneity in the count data, making the use of the simple ZTP model inappropriate. However, including holding size as a covariate did not bring any significant improvement over the simple ZTP model. The ZTNB model proved to be the best model, giving an estimation of 535 (CI(95%) 401-796) infected and detectable sheep holdings in 2006, although only 141 were effectively detected, resulting in a holding-level prevalence of 4.4‰ (CI(95%) 3.2-6.3) and a sensitivity of holding-level surveillance of 26% (CI(95%) 18-35). The main limitation of the present study was the small amount of data collected during the surveillance programme. It was therefore not possible to build complex models that would allow depicting more accurately the epidemiological and detection processes that generate the surveillance data. We discuss the perspectives of capture-recapture count models in the context of animal disease surveillance. Copyright © 2012 Elsevier B.V. All rights reserved.
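The estimation idea behind these count models: fit a zero-truncated distribution to the detected units, then scale up by the implied probability of at least one detection. A ZTP sketch (the simple model the paper found inadequate for its heterogeneous data, shown only to illustrate the mechanics; the simulated rate and population size are arbitrary):

```python
import numpy as np
from scipy.optimize import brentq

def ztp_population_estimate(counts):
    """Fit a zero-truncated Poisson by solving the mean equation
    E[K | K >= 1] = lam / (1 - exp(-lam)) = mean(counts), then estimate
    the total population, including never-detected units, as
    N = n_observed / (1 - exp(-lam))."""
    kbar = np.mean(counts)
    lam = brentq(lambda l: l / (1.0 - np.exp(-l)) - kbar, 1e-8, 50.0)
    n_obs = len(counts)
    return lam, n_obs / (1.0 - np.exp(-lam))

rng = np.random.default_rng(4)
detections = rng.poisson(1.5, size=1000)    # 1000 true units; Poisson detection counts
observed = detections[detections > 0]       # units with zero detections are never seen
lam_hat, n_total = ztp_population_estimate(observed)
```

A ZTNB analogue replaces the mean equation with a two-parameter likelihood, which is what allows it to absorb the overdispersion the paper reports.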
Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.
Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique
2015-05-01
The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.
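The hyper-Poisson pmf can be written with the confluent hypergeometric function as P(X = k) = λ^k / ((γ)_k · ₁F₁(1; γ; λ)), where (γ)_k is the Pochhammer symbol; γ = 1 recovers the ordinary Poisson law, γ > 1 gives overdispersion, and γ < 1 underdispersion. A sketch of the distribution itself (the GLM layer linking λ and γ to covariates is omitted):

```python
from scipy.special import hyp1f1, poch

def hyper_poisson_pmf(k, lam, gamma):
    # P(X = k) = lam^k / ( (gamma)_k * 1F1(1; gamma; lam) );
    # the 1F1 factor is exactly the normalizing constant of the series
    return lam ** k / (poch(gamma, k) * hyp1f1(1.0, gamma, lam))
```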
A New Zero-Inflated Negative Binomial Methodology for Latent Category Identification
ERIC Educational Resources Information Center
Blanchard, Simon J.; DeSarbo, Wayne S.
2013-01-01
We introduce a new statistical procedure for the identification of unobserved categories that vary between individuals and in which objects may span multiple categories. This procedure can be used to analyze data from a proposed sorting task in which individuals may simultaneously assign objects to multiple piles. The results of a synthetic…
Parallel Demand-Withdraw Processes in Family Therapy for Adolescent Drug Abuse
Rynes, Kristina N.; Rohrbaugh, Michael J.; Lebensohn-Chialvo, Florencia; Shoham, Varda
2013-01-01
Isomorphism, or parallel process, occurs in family therapy when patterns of therapist-client interaction replicate problematic interaction patterns within the family. This study investigated parallel demand-withdraw processes in Brief Strategic Family Therapy (BSFT) for adolescent drug abuse, hypothesizing that therapist-demand/adolescent-withdraw interaction (TD/AW) cycles observed early in treatment would predict poor adolescent outcomes at follow-up for families who exhibited entrenched parent-demand/adolescent-withdraw interaction (PD/AW) before treatment began. Participants were 91 families who received at least 4 sessions of BSFT in a multi-site clinical trial on adolescent drug abuse (Robbins et al., 2011). Prior to receiving therapy, families completed videotaped family interaction tasks from which trained observers coded PD/AW. Another team of raters coded TD/AW during two early BSFT sessions. The main dependent variable was the number of drug use days that adolescents reported in Timeline Follow-Back interviews 7 to 12 months after family therapy began. Zero-inflated Poisson (ZIP) regression analyses supported the main hypothesis, showing that PD/AW and TD/AW interacted to predict adolescent drug use at follow-up. For adolescents in high PD/AW families, higher levels of TD/AW predicted significant increases in drug use at follow-up, whereas for low PD/AW families, TD/AW and follow-up drug use were unrelated. Results suggest that attending to parallel demand-withdraw processes in parent/adolescent and therapist/adolescent dyads may be useful in family therapy for substance-using adolescents. PMID:23438248
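The ZIP model used in this analysis mixes a point mass at zero (probability π, e.g. adolescents reporting no drug use days) with a Poisson count. A minimal sketch of its pmf and log-likelihood, with hypothetical names and fixed rather than covariate-linked parameters:

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: P(0) = pi + (1 - pi)e^-lam; otherwise (1 - pi) times Poisson."""
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    return (pi if k == 0 else 0.0) + (1.0 - pi) * poisson

def zip_loglik(counts, lam, pi):
    """Log-likelihood of an i.i.d. ZIP sample; in a ZIP regression, lam and pi
    would instead be tied to covariates via log and logit links."""
    return sum(math.log(zip_pmf(k, lam, pi)) for k in counts)
```

On zero-heavy data the likelihood favours π > 0 over a plain Poisson fit, which is the excess-zeros effect these ZIP regressions are built to capture.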
Murray, Regan L.; Chermack, Stephen T.; Walton, Maureen A.; Winters, Jamie; Booth, Brenda M.; Blow, Frederic C.
2008-01-01
Objective: This study focused on the prevalence and predictors of psychological aggression, physical aggression, and injury rates in nonintimate partner relationships in a substance-use disorder treatment sample. Method: The sample included 489 (76% men, 24% women) participants who completed screening measures for inclusion in a randomized controlled trial for an aggression-prevention treatment. Primary outcome measures included rates of past-year psychological aggression, physical aggression, and injury (both from the participant to nonpartners and from nonpartners to the participant). Potential predictors included individual factors (e.g., age, gender), developmental factors (e.g., family history of drug use, childhood physical abuse), and recent factors (e.g., depression, cocaine use). Results: Rates of participant-to-nonpartner psychological aggression (83%), physical aggression (61%), and injury (47%) were high, as were rates of nonpartner-to-participant aggression. Bivariate analyses revealed significant relationships between the aggression outcomes and most of the individual, developmental, and recent factors. However, multivariate analyses (zero-inflated Poisson regression) revealed that age, treatment status, current symptoms of depression, heavy periods of drinking, and cocaine use were related most frequently to the occurrence of aggression to and from nonpartners. Conclusions: Nonpartner aggression may be as common within a substance-use disorder sample as partner aggression, and it is associated with heavy drinking episodes, cocaine use, and depressive symptoms. The findings highlight the need for the development of effective violence interventions addressing violence in nonpartner relationship types. PMID:18925348
Pseudosmooth tribrid inflation
NASA Astrophysics Data System (ADS)
Antusch, Stefan; Nolde, David; Rehman, Mansoor Ur
2012-08-01
We explore a new class of supersymmetric models of inflation where the inflaton is realised as a combination of a Higgs field and (gauge non-singlet) matter fields, using a ``tribrid'' structure of the superpotential. Inflation is associated with a phase transition around GUT scale energies. The inflationary trajectory already preselects the later vacuum after inflation, which has the advantage of automatically avoiding the production of dangerous topological defects at the end of inflation. While at first sight the models look similar to smooth inflation, they feature a waterfall and are therefore only pseudosmooth. The new class of models offers novel possibilities for realising inflation in close contact with particle physics, for instance with supersymmetric GUTs or with supersymmetric flavour models based on family symmetries.
STS-45 crewmembers during zero gravity activities onboard KC-135 NASA 930
NASA Technical Reports Server (NTRS)
1991-01-01
STS-45 Atlantis, Orbiter Vehicle (OV) 104, crewmembers and backup payload specialist participate in zero gravity activities onboard KC-135 NASA 930. The crewmembers, wearing flight suits, float and tumble around an inflated globe during the few seconds of microgravity created by parabolic flight. With his hand on the fuselage ceiling is Payload Specialist Dirk D. Frimout. Clockwise from his position are Mission Specialist (MS) C. Michael Foale, Pilot Brian Duffy, backup Payload Specialist Charles R. Chappell, MS and Payload Commander (PLC) Kathryn D. Sullivan (with eye glasses), Commander Charles F. Bolden, and Payload Specialist Byron K. Lichtenberg.
Comments on SUSY Inflation Models on the Brane
NASA Astrophysics Data System (ADS)
Lee, Lu-Yun; Cheung, Kingman; Lin, Chia-Min
In this paper we consider a class of inflation models on the brane where the dominant part of the inflaton scalar potential does not depend on the inflaton field value during inflation. In particular, we consider supernatural inflation, its hilltop version, A-term inflation, and supersymmetric (SUSY) D- and F-term hybrid inflation on the brane. We show that the parameter space can be broadened, the inflation scale generally can be lowered, and it is still possible to obtain the spectral index ns = 0.96.
Seven lessons from manyfield inflation in random potentials
NASA Astrophysics Data System (ADS)
Dias, Mafalda; Frazer, Jonathan; Marsh, M. C. David
2018-01-01
We study inflation in models with many interacting fields subject to randomly generated scalar potentials. We use methods from non-equilibrium random matrix theory to construct the potentials and an adaption of the `transport method' to evolve the two-point correlators during inflation. This construction allows, for the first time, for an explicit study of models with up to 100 interacting fields supporting a period of `approximately saddle-point' inflation. We determine the statistical predictions for observables by generating over 30,000 models with 2–100 fields supporting at least 60 efolds of inflation. These studies lead us to seven lessons: i) Manyfield inflation is not single-field inflation, ii) The larger the number of fields, the simpler and sharper the predictions, iii) Planck compatibility is not rare, but future experiments may rule out this class of models, iv) The smoother the potentials, the sharper the predictions, v) Hyperparameters can transition from stiff to sloppy, vi) Despite tachyons, isocurvature can decay, vii) Eigenvalue repulsion drives the predictions. We conclude that many of the `generic predictions' of single-field inflation can be emergent features of complex inflation models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zhijie; Tartakovsky, Alexandre M.
This work presents a hierarchical model for solute transport in bounded layered porous media with random permeability. The model generalizes the Taylor-Aris dispersion theory to stochastic transport in random layered porous media with a known velocity covariance function. In the hierarchical model, we represent (random) concentration in terms of its cross-sectional average and a variation function. We derive a one-dimensional stochastic advection-dispersion-type equation for the average concentration and a stochastic Poisson equation for the variation function, as well as expressions for the effective velocity and dispersion coefficient. We observe that velocity fluctuations enhance dispersion in a non-monotonic fashion: the dispersion initially increases with correlation length λ, reaches a maximum, and decreases to zero at infinity. Maximum enhancement can be obtained at a correlation length of about 0.25 the size of the porous media perpendicular to flow.
Nonlocal Poisson-Fermi model for ionic solvent.
Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob
2016-07-01
We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution.
The field-space metric in spiral inflation and related models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erlich, Joshua; Olsen, Jackson; Wang, Zhen
2016-09-22
Multi-field inflation models include a variety of scenarios for how inflation proceeds and ends. Models with the same potential but different kinetic terms are common in the literature. We compare spiral inflation and Dante’s inferno-type models, which differ only in their field-space metric. We justify a single-field effective description in these models and relate the single-field description to a mass-matrix formalism. We note the effects of the nontrivial field-space metric on inflationary observables, and consequently on the viability of these models. We also note a duality between spiral inflation and Dante’s inferno models with different potentials.
Hyperaccretion during tidal disruption events: weakly bound debris envelopes and jets
NASA Astrophysics Data System (ADS)
Coughlin, Eric; Begelman, M. C.
2014-01-01
After the destruction of the star during a tidal disruption event (TDE), the cataclysmic encounter between a star and the supermassive black hole (SMBH) of a galaxy, approximately half of the original stellar debris falls back onto the hole at a rate that can initially exceed the Eddington limit by orders of magnitude. We argue that the angular momentum of this matter is too low to allow it to attain a disk-like configuration with accretion proceeding at a mildly super-Eddington rate, the excess energy being carried away by a combination of radiative losses and radially distributed winds. Instead, we propose that the in-falling gas traps accretion energy until it inflates into a weakly-bound, quasi-spherical structure with gas extending nearly to the poles. We study the structure and evolution of such “Zero-Bernoulli accretion” flows (ZEBRAs) as a model for the super- Eddington phase of TDEs. We argue that such flows cannot stop extremely super-Eddington accretion from occurring, and that once the envelope is maximally inflated, any excess accretion energy escapes through the poles in the form of powerful jets. Similar models, including self-gravity, could be applicable to gamma-ray bursts from collapsars and the growth of supermassive black hole seeds inside quasi-stars.
Power spectrum and non-Gaussianities in anisotropic inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dey, Anindya; Kovetz, Ely D.; Paban, Sonia, E-mail: anindya@physics.utexas.edu, E-mail: elykovetz@gmail.com, E-mail: paban@physics.utexas.edu
2014-06-01
We study the planar regime of curvature perturbations for single field inflationary models in an axially symmetric Bianchi I background. In a theory with standard scalar field action, the power spectrum for such modes has a pole as the planarity parameter goes to zero. We show that constraints from back reaction lead to a strong lower bound on the planarity parameter for high-momentum planar modes and use this bound to calculate the signal-to-noise ratio of the anisotropic power spectrum in the CMB, which in turn places an upper bound on the Hubble scale during inflation allowed in our model. We find that non-Gaussianities for these planar modes are enhanced for the flattened triangle and the squeezed triangle configurations, but show that the estimated values of the f_NL parameters remain well below the experimental bounds from the CMB for generic planar modes (other, more promising signatures are also discussed). For a standard action, f_NL from the squeezed configuration turns out to be larger compared to that from the flattened triangle configuration in the planar regime. However, in a theory with higher derivative operators, non-Gaussianities from the flattened triangle can become larger than the squeezed configuration in a certain limit of the planarity parameter.
Hyperaccretion during Tidal Disruption Events: Weakly Bound Debris Envelopes and Jets
NASA Astrophysics Data System (ADS)
Coughlin, Eric R.; Begelman, Mitchell C.
2014-02-01
After the destruction of the star during a tidal disruption event (TDE), the cataclysmic encounter between a star and the supermassive black hole (SMBH) of a galaxy, approximately half of the original stellar debris falls back onto the hole at a rate that can initially exceed the Eddington limit by orders of magnitude. We argue that the angular momentum of this matter is too low to allow it to attain a disk-like configuration with accretion proceeding at a mildly super-Eddington rate, the excess energy being carried away by a combination of radiative losses and radially distributed winds. Instead, we propose that the infalling gas traps accretion energy until it inflates into a weakly bound, quasi-spherical structure with gas extending nearly to the poles. We study the structure and evolution of such "zero-Bernoulli accretion" flows as a model for the super-Eddington phase of TDEs. We argue that such flows cannot stop extremely super-Eddington accretion from occurring, and that once the envelope is maximally inflated, any excess accretion energy escapes through the poles in the form of powerful jets. We compare the predictions of our model to Swift J1644+57, the putative super-Eddington TDE, and show that it can qualitatively reproduce some of its observed features. Similar models, including self-gravity, could be applicable to gamma-ray bursts from collapsars and the growth of SMBH seeds inside quasi-stars.
Fitting cosmic microwave background data with cosmic strings and inflation.
Bevis, Neil; Hindmarsh, Mark; Kunz, Martin; Urrestilla, Jon
2008-01-18
We perform a multiparameter likelihood analysis to compare measurements of the cosmic microwave background (CMB) power spectra with predictions from models involving cosmic strings. Adding strings to the standard case of a primordial spectrum with power-law tilt ns, we find a 2σ detection of strings: f10 = 0.11 ± 0.05, where f10 is the fractional contribution made by strings in the temperature power spectrum (at l=10). CMB data give moderate preference to the model ns=1 with cosmic strings over the standard zero-strings model with variable tilt. When additional non-CMB data are incorporated, the two models become on a par. With variable ns and these extra data, we find that f10 < 0.11, which corresponds to Gμ < 0.7×10^-6 (where μ is the string tension and G is the gravitational constant).
Accidental inflation from Kähler uplifting
NASA Astrophysics Data System (ADS)
Ben-Dayan, Ido; Jing, Shenglin; Westphal, Alexander; Wieck, Clemens
2014-03-01
We analyze the possibility of realizing inflation with a subsequent dS vacuum in the Kähler uplifting scenario. The inclusion of several quantum corrections to the 4d effective action evades previous no-go theorems and allows for the construction of simple and successful models of string inflation. The predictions of several benchmark models are in accord with current observations, i.e., a red spectral index, negligible non-gaussianity, and spectral distortions similar to the simplest models of inflation. A particularly interesting subclass of models are "left-rolling" ones, where the overall volume of the compactified dimensions shrinks during inflation. We call this phenomenon "inflation by deflation" (IBD), where deflation refers to the internal manifold. This subclass has the appealing features of being insensitive to initial conditions, avoiding the overshooting problem, and allowing for an observable running α ~ 0.012 and an enhanced tensor-to-scalar ratio r ~ 10^-5. The latter results differ significantly from many string inflation models.
Inflation at the electroweak scale
NASA Technical Reports Server (NTRS)
Knox, Lloyd; Turner, Michael S.
1993-01-01
We present a model for slow-rollover inflation where the vacuum energy that drives inflation is of the order of G_F^-2; unlike most models, the conversion of vacuum energy to radiation ('reheating') is moderately efficient. The scalar field responsible for inflation is a standard-model singlet, develops a vacuum expectation value of 4 × 10^6 GeV, has a mass of about 1 GeV, and can play a role in electroweak phenomena. We also discuss models where the energy scale of inflation is somewhat larger, but still well below the unification scale.
NASA Astrophysics Data System (ADS)
Cincotti, Silvano; Ponta, Linda; Raberto, Marco; Scalas, Enrico
2005-05-01
In this paper, empirical analyses and computational experiments are presented on high-frequency data for a double-auction (book) market. The main objective of the paper is to generalize the order waiting time process in order to properly model such empirical evidence. The empirical study is performed on the best bid and best ask data of 7 U.S. financial markets, for 30-stock time series. In particular, statistical properties of trading waiting times have been analyzed and the quality of fits is evaluated by suitable statistical tests, i.e., comparing empirical distributions with theoretical models. Starting from the statistical studies on real data, attention has been focused on the reproducibility of such results in an artificial market. The computational experiments have been performed within the Genoa Artificial Stock Market. In the market model, heterogeneous agents trade one risky asset in exchange for cash. Agents have zero intelligence and issue random limit or market orders depending on their budget constraints. The price is cleared by means of a limit order book. The order generation is modelled with a renewal process. Based on empirical trading estimation, the distribution of waiting times between two consecutive orders is modelled by a mixture of exponential processes. Results show that the empirical waiting-time distribution can be considered as a generalization of a Poisson process. Moreover, the renewal process can approximate real data, and implementation in the artificial stock market can reproduce the trading activity in a realistic way.
Supernatural inflation: inflation from supersymmetry with no (very) small parameters
NASA Astrophysics Data System (ADS)
Randall, Lisa; SoljačiĆ, Marin; Guth, Alan H.
1996-02-01
Most models of inflation have small parameters, either to guarantee sufficient inflation or the correct magnitude of the density perturbations. In this paper we show that, in supersymmetric theories with weak-scale supersymmetry breaking, one can construct viable inflationary models in which the requisite parameters appear naturally in the form of the ratio of mass scales that are already present in the theory. Successful inflationary models can be constructed from the flat-direction fields of a renormalizable supersymmetric potential, and such models can be realized even in the context of a simple GUT extension of the MSSM. We evade naive ``naturalness'' arguments by allowing for more than one field to be relevant to inflation, as in ``hybrid inflation'' models, and we argue that this is the most natural possibility if inflaton fields are to be associated with flat-direction fields of a supersymmetric theory. Such models predict a very low Hubble constant during inflation, of order 10^3-10^4 GeV, a scalar density perturbation index n which is very close to or greater than unity, and negligible tensor perturbations. In addition, these models lead to a large spike in the density perturbation spectrum at short wavelengths.
Stationary spiral flow in polytropic stellar models
Pekeris, C. L.
1980-01-01
It is shown that, in addition to the static Emden solution, a self-gravitating polytropic gas has a dynamic option in which there is stationary flow along spiral trajectories wound around the surfaces of concentric tori. The motion is obtained as a solution of a partial differential equation which is satisfied by the meridional stream function, coupled with Poisson's equation and a Bernoulli-type equation for the pressure (density). The pressure is affected by the whole of the Bernoulli term rather than by the centrifugal part only, which acts for a rotating model, and it may be reduced down to zero at the center. The spiral type of flow is illustrated for an incompressible fluid (n = 0), for which an exact solution is obtained. The features of the dynamic constant-density model are discussed as a basis for future comparison with the solution for compressible models. PMID:16592825
2-dimensional models of rapidly rotating stars I. Uniformly rotating zero age main sequence stars
NASA Astrophysics Data System (ADS)
Roxburgh, I. W.
2004-12-01
We present results for 2-dimensional models of rapidly rotating main sequence stars for the case where the angular velocity Ω is constant throughout the star. The algorithm used solves for the structure on equipotential surfaces and iteratively updates the total potential, solving Poisson's equation by Legendre polynomial decomposition; the algorithm can readily be extended to include rotation constant on cylinders. We show that this only requires a small number of Legendre polynomials to accurately represent the solution. We present results for models of homogeneous zero age main sequence stars of mass 1, 2, 5, 10 M⊙ with a range of angular velocities up to break up. The models have a composition X=0.70, Z=0.02 and were computed using the OPAL equation of state and OPAL/Alexander opacities, and a mixing length model of convection modified to include the effect of rotation. The models all show a decrease in luminosity L and polar radius Rp with increasing angular velocity, the magnitude of the decrease varying with mass but of the order of a few percent for rapid rotation, and an increase in equatorial radius Re. Due to the contribution of the gravitational multipole moments, the parameter Ω²Re³/GM can exceed unity in very rapidly rotating stars and Re/Rp can exceed 1.5.
Atmospheric pollutants and hospital admissions due to pneumonia in children
Negrisoli, Juliana; Nascimento, Luiz Fernando C.
2013-01-01
OBJECTIVE: To analyze the relationship between exposure to air pollutants and hospitalizations due to pneumonia in children of Sorocaba, São Paulo, Brazil. METHODS: Time series ecological study, from 2007 to 2008. Daily data were obtained from the State Environmental Agency for Pollution Control for particulate matter, nitric oxide, nitrogen dioxide, and ozone, besides air temperature and relative humidity. The data concerning pneumonia admissions were collected in the public health system of Sorocaba. Correlations between the variables of interest were calculated using Pearson's coefficient. Models with lags from zero to five days after exposure to pollutants were performed to analyze the association between exposure to environmental pollutants and hospital admissions. The analysis used the generalized linear model of Poisson regression, with significance set at p<0.05. RESULTS: There were 1,825 admissions for pneumonia, with a daily mean of 2.5±2.1. There was a strong correlation between pollutants and hospital admissions, except for ozone. Regarding the Poisson regression analysis with the multi-pollutant model, only nitrogen dioxide was statistically significant on the same day (relative risk - RR=1.016), as well as particulate matter with a lag of four days (RR=1.009) after exposure to pollutants. CONCLUSIONS: There was an acute effect of exposure to nitrogen dioxide and a later effect of exposure to particulate matter on children's hospitalizations for pneumonia in Sorocaba. PMID:24473956
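In a log-linear Poisson model like this one, a reported RR is exp(β·Δ) for a Δ-unit rise in the pollutant at the chosen lag. A small sketch (hypothetical helpers, not the authors' code) of building a lagged exposure series and converting a coefficient to an RR:

```python
import math

def lagged(series, lag):
    """Pair day t with exposure at day t - lag; the first `lag` days
    have no exposure history and are marked None."""
    return [None] * lag + series[:len(series) - lag]

def relative_risk(beta, delta=1.0):
    """RR for a delta-unit covariate increase in a log-linear Poisson model."""
    return math.exp(beta * delta)
```

For example, the reported RR = 1.009 for particulate matter at lag four corresponds to a fitted coefficient β = ln(1.009) per unit of pollutant.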
NASA Astrophysics Data System (ADS)
Kadota, Kenji; Kobayashi, Tatsuo; Saga, Ikumi; Sumita, Keigo
2018-04-01
We propose a new model of single-field D-term inflation in supergravity, where the inflation is driven by a single modulus field which transforms non-linearly under the U(1) gauge symmetry. One of the notable features of our modulus D-term inflation scenario is that the global U(1) remains unbroken in the vacuum and hence our model is not plagued by the cosmic string problem which can exclude most of the conventional D-term inflation models proposed so far due to the CMB observations.
Factors associated with primary care prescription of opioids for joint pain.
Green, D J; Bedson, J; Blagojevic-Burwell, M; Jordan, K P; van der Windt, D
2013-02-01
Opioids are commonly prescribed in primary care and can offer pain relief but may also have adverse effects. Little is known about the characteristics of people likely to receive an opioid prescription in primary care. The aim is to identify what factors are associated with primary care prescribing of high-strength analgesics in a community sample of older people with joint pain. A prospective two-stage postal survey completed at baseline and 3-year follow-up in a population aged 50 and over registered with eight general practitioner (GP) practices in North Staffordshire (North Staffordshire Osteoarthritis Project cohorts) linked with data from medical records. Participants were selected who reported joint pain in one or more joints at baseline. Outcome measures were the number of prescriptions for high-strength pain medication (opioids) in the following 3 years. Socio-demographic and health status factors associated with prescription were assessed using a zero-inflated Poisson model. 873 (19%) people were prescribed opioids (out of 4652 providing complete data) ranging from 1 to 76 prescriptions over 3 years. Baseline factors significantly associated with increased rates of prescription were younger age group [65-74 group: incidence rate ratio (IRR) = 1.26 (1.18-1.35)], male gender [IRR = 1.17 (1.12-1.23)], severe joint pain [IRR = 1.19 (1.12-1.26)], poor physical function [IRR = 0.99 (0.99-0.99)] and lower frequency of alcohol consumption [once/twice a year: IRR = 1.13 (1.06-1.21), never: IRR = 1.14 (1.06-1.22)]. Restricting the analysis to those without prior prescriptions for strong opioids showed similar results. Poor physical function and participation restrictions were strongly associated with prescriptions of stronger opioids in addition to several socio-demographic and lifestyle factors. Given the uncertainties over the effectiveness and risks of opioid use, future research could investigate decision making of GPs, exploring reasons for prescribing them.
© 2012 European Federation of International Association for the Study of Pain Chapters.
NASA Astrophysics Data System (ADS)
Jegasothy, Edward; McGuire, Rhydwyn; Nairn, John; Fawcett, Robert; Scalley, Benjamin
2017-08-01
Periods of successive extreme heat and cold temperature have major effects on human health and increase rates of health service utilisation. The severity of these events varies between geographic locations and populations. This study aimed to estimate the effects of heat waves and cold waves on health service utilisation across urban, regional and remote areas in New South Wales (NSW), Australia, during the 10-year study period 2005-2015. We divided the state into three regions and used 24 over-dispersed or zero-inflated Poisson time-series regression models to estimate the effect of heat waves and cold waves, of three levels of severity, on the rates of ambulance call-outs, emergency department (ED) presentations and mortality. We defined heat waves and cold waves using excess heat factor (EHF) and excess cold factor (ECF) metrics, respectively. Heat waves generally resulted in increased rates of ambulance call-outs, ED presentations and mortality across the three regions and the entire state. For all of NSW, very intense heat waves resulted in an increase of 10.8% (95% confidence interval (CI) 4.5, 17.4%) in mortality, 3.4% (95% CI 0.8, 7.8%) in ED presentations and 10.9% (95% CI 7.7, 14.2%) in ambulance call-outs. Cold waves were shown to have significant effects on ED presentations (9.3% increase for intense events, 95% CI 8.0-10.6%) and mortality (8.8% increase for intense events, 95% CI 2.1-15.9%) in outer regional and remote areas. There was little evidence for an effect from cold waves on health service utilisation in major cities and inner regional areas. Heat waves have a large impact on health service utilisation in NSW in both urban and rural settings. Cold waves also have significant effects in outer regional and remote areas. EHF is a good predictor of health service utilisation for heat waves, although service needs may differ between urban and rural areas.
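The EHF metric used here is, in the Nairn-Fawcett formulation, EHF = EHI_sig × max(1, EHI_accl), where EHI_sig compares a three-day mean temperature with the climatological 95th percentile and EHI_accl compares it with the preceding 30-day mean. Window conventions vary between implementations, so the following is a sketch under that assumption, with illustrative names:

```python
def ehf_series(temps, t95):
    """Excess heat factor per day: needs a trailing 3-day window plus the
    30 days before it, so the first 32 entries are None. t95 is the
    climatological 95th percentile of daily mean temperature."""
    out = []
    for t in range(len(temps)):
        if t < 32:
            out.append(None)
            continue
        t3 = sum(temps[t - 2:t + 1]) / 3.0        # recent 3-day mean
        t30 = sum(temps[t - 32:t - 2]) / 30.0     # prior 30-day (acclimatisation) mean
        ehi_sig = t3 - t95                        # significance index
        ehi_accl = t3 - t30                       # acclimatisation index
        out.append(ehi_sig * max(1.0, ehi_accl))  # heat wave when EHF > 0
    return out
```

A symmetric excess cold factor (ECF), as used for the cold-wave analysis, follows by flipping the comparisons toward cold extremes (a low climatological percentile and drops below the prior 30-day mean).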
The role of socioeconomic status in longitudinal trends of cholera in Matlab, Bangladesh, 1993-2007.
Root, Elisabeth Dowling; Rodd, Joshua; Yunus, Mohammad; Emch, Michael
2013-01-01
There has been little evidence of a decline in the global burden of cholera in recent years as the number of cholera cases reported to WHO continues to rise. Cholera remains a global threat to public health and a key indicator of lack of socioeconomic development. Overall socioeconomic development is the ultimate solution for control of cholera as evidenced in developed countries. However, most research has focused on cross-country comparisons so that the role of individual- or small area-level socioeconomic status (SES) in cholera dynamics has not been carefully studied. Reported cases of cholera in Matlab, Bangladesh have fluctuated greatly over time and epidemic outbreaks of cholera continue, most recently with the introduction of a new serotype into the region. The wealth of longitudinal data on the population of Matlab provides a unique opportunity to explore the impact of socioeconomic status and other demographic characteristics on the long-term temporal dynamics of cholera in the region. In this population-based study we examine which factors impact the initial number of cholera cases in a bari at the beginning of the O139 epidemic and the factors impacting the number of cases over time. Cholera data were derived from the ICDDR,B health records and linked to socioeconomic and geographic data collected as part of the Matlab Health and Demographic Surveillance System. Longitudinal zero-inflated Poisson (ZIP) multilevel regression models are used to examine the impact of environmental and socio-demographic factors on cholera counts across baris. Results indicate that baris with a high socioeconomic status had lower initial rates of cholera at the beginning of the O139 epidemic (γ(01) = -0.147, p = 0.041) and a higher probability of reporting no cholera cases (α(01) = 0.156, p = 0.061). Populations in baris characterized by low SES are more likely to experience higher cholera morbidity at the beginning of an epidemic than populations in high SES baris.
Nwaru, Chioma A; Peutere, Laura; Kivimäki, Mika; Pentti, Jaana; Vahtera, Jussi; Virtanen, Pekka J
2017-11-01
Little is known about the work patterns of re-employed people. We investigated the labour market attachment trajectories of re-employed people and assessed the influence of chronic diseases on these trajectories. The study was based on register data on 18 944 people (aged 18-60 years) who participated in a subsidised re-employment programme in Finland. Latent class growth analysis with a zero-inflated Poisson distribution was used to model the labour market attachment trajectories over a 6-year follow-up time. Multinomial logistic regression was used to examine the associations between chronic diseases and labour market attachment trajectories, adjusting for age, gender, educational level, size of town and calendar year of the subsidised re-employment programme. We identified four distinct labour market attachment trajectories, namely: strengthening (a relatively stable attachment throughout the follow-up time; 77%), delayed (initial weak attachment increasing later; 6%), leavers (attachment declined with time; 10%) and none-attached (weak attachment throughout the study period; 7%). We found that severe mental problems strongly increased the likelihood of belonging to the leavers (OR 3.61; 95% CI 2.23 to 5.37) and none-attached (OR 3.41; 95% CI 1.91 to 6.10) trajectories, while chronic hypertension was associated with the none-attached trajectory (OR 1.37; 95% CI 1.06 to 1.77). The associations between other chronic diseases (diabetes, heart disease, asthma and arthritis) and labour market attachment trajectories were less evident. Re-employed people appear to follow distinct labour market attachment trajectories over time. Having chronic diseases, especially mental disorders, appears to increase the risk of relatively poor labour market attachment. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Childhood temperament predictors of adolescent physical activity.
Janssen, James A; Kolacz, Jacek; Shanahan, Lilly; Gangel, Meghan J; Calkins, Susan D; Keane, Susan P; Wideman, Laurie
2017-01-05
Physical inactivity is a leading cause of mortality worldwide. Many patterns of physical activity (PA) involvement are established early in life. To date, the role of easily identifiable early-life individual predictors of PA, such as childhood temperament, remains relatively unexplored. Here, we tested whether childhood temperamental activity level, high intensity pleasure, low intensity pleasure, and surgency predicted engagement in PA patterns 11 years later in adolescence. Data came from a longitudinal community study (N = 206 participants, 53% females, 70% Caucasian). Parents reported their children's temperamental characteristics using the Child Behavior Questionnaire (CBQ) when children were 4 and 5 years old. Approximately 11 years later, adolescents completed self-reports of PA using the Godin Leisure Time Exercise Questionnaire and the Youth Risk Behavior Survey. Ordered logistic regression, ordinary least squares linear regression, and zero-inflated Poisson regression models were used to predict adolescent PA from childhood temperament. Race, socioeconomic status, and adolescent body mass index were used as covariates. Males with greater childhood temperamental activity level engaged in greater adolescent PA volume (B = .42, SE = .13), and a 1 SD difference in childhood temperamental activity level predicted 29.7% more strenuous adolescent PA per week. Males' high intensity pleasure predicted higher adolescent PA volume (B = .28, SE = .12). Males' surgency positively predicted more frequent PA activity (B = .47, SE = .23, OR = 1.61, 95% CI: 1.02, 2.54) and PA volume (B = .31, SE = .12). No associations between females' childhood temperament and later PA engagement were identified. Childhood temperament may influence the formation of later PA habits, particularly in males. Boys with high temperamental activity level, high intensity pleasure, and surgency may directly seek out pastimes that involve PA.
Indirectly, temperament may also influence caregivers' perceptions of optimal activity choices for children. Understanding how temperament influences the development of PA patterns has the potential to inform efforts aimed at promoting long-term PA engagement and physical health.
Neighborhood context and immigrant children's physical activity.
Brewer, Mackenzie; Kimbro, Rachel Tolbert
2014-09-01
Physical activity is an important determinant of obesity and overall health for children, but significant race/ethnic and nativity disparities exist in the amount of physical activity that children receive, with immigrant children particularly at risk for low levels of physical activity. In this paper, we examine and compare patterns in physical activity levels for young children of U.S.-born and immigrant mothers from seven race/ethnic and nativity groups, and test whether physical activity is associated with subjective (parent-reported) and objective (U.S. Census) neighborhood measures. The neighborhood measures include parental-reported perceptions of safety and physical and social disorder and objectively defined neighborhood socioeconomic disadvantage and immigrant concentration. Using restricted, geo-coded Early Childhood Longitudinal Study-Kindergarten (ECLS-K) data (N = 17,510) from 1998 to 1999 linked with U.S. Census 2000 data for the children's neighborhoods, we utilize zero-inflated Poisson (ZIP) models to predict the odds of physical inactivity and expected days of physical activity for kindergarten-aged children. Across both outcomes, foreign-born children have lower levels of physical activity compared to U.S.-born white children. This disparity is not attenuated by a child's socioeconomic, family, or neighborhood characteristics. Physical and social disorder is associated with higher odds of physical inactivity, while perceptions of neighborhood safety are associated with increased expected days of physical activity, but not with inactivity. Immigrant concentration is negatively associated with both physical activity outcomes, but its impact on the probability of physical inactivity differs by the child's race/ethnic and nativity group, such that it is particularly detrimental for U.S.-born white children's physical activity. 
Research interested in improving the physical activity patterns of minority and second-generation immigrant children should consider how neighborhood context differentially impacts the health and physical activity of children from various racial, ethnic and nativity backgrounds. Copyright © 2014 Elsevier Ltd. All rights reserved.
Conditional Poisson models: a flexible alternative to conditional logistic case cross-over analysis.
Armstrong, Ben G; Gasparrini, Antonio; Tobias, Aurelio
2014-11-24
The time stratified case crossover approach is a popular alternative to conventional time series regression for analyzing associations between time series of environmental exposures (air pollution, weather) and counts of health outcomes. These are almost always analyzed using conditional logistic regression on data expanded to case-control (case crossover) format, but this has some limitations. In particular, adjusting for overdispersion and auto-correlation in the counts is not possible. It has been established that a Poisson model for counts with stratum indicators gives identical estimates to those from conditional logistic regression and does not have these limitations, but it is little used, probably because of the overheads in estimating many stratum parameters. The conditional Poisson model avoids estimating stratum parameters by conditioning on the total event count in each stratum, thus simplifying the computing and increasing the number of strata for which fitting is feasible compared with the standard unconditional Poisson model. Unlike the conditional logistic model, the conditional Poisson model does not require expanding the data, and can adjust for overdispersion and auto-correlation. It is available in Stata, R, and other packages. By applying the models to real data and using simulations, we demonstrate that conditional Poisson models are simpler to code and shorter to run than conditional logistic analyses and can be fitted to larger data sets than is possible with standard Poisson models. Allowing for overdispersion or autocorrelation was possible with the conditional Poisson model but, when not required, this model gave identical estimates to those from conditional logistic regression. Conditional Poisson regression models provide an alternative to case crossover analysis of stratified time series data with some advantages.
The conditional Poisson model can also be used in other contexts in which primary control for confounding is by fine stratification.
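The conditioning step described in this abstract can be made concrete with a small sketch (an illustration under standard assumptions, not the authors' code): conditioning stratum Poisson counts on their total yields a multinomial likelihood whose cell probabilities are a softmax of the linear predictors, so the per-stratum intercepts cancel out.

```python
import math

def cond_poisson_loglik(counts, xb):
    """Log-likelihood (up to the multinomial coefficient) of the counts
    in one stratum, conditional on their total, under log E[y_i] = a_s + xb_i.
    The stratum intercept a_s cancels: cell probabilities are softmax(xb)."""
    z = [math.exp(v) for v in xb]
    s = sum(z)
    return sum(y * math.log(zi / s) for y, zi in zip(counts, z))

# Shifting every linear predictor by a constant (the stratum effect)
# leaves the conditional likelihood unchanged.
ll1 = cond_poisson_loglik([3, 1, 0], [0.2, -0.1, 0.4])
ll2 = cond_poisson_loglik([3, 1, 0], [5.2, 4.9, 5.4])
```

Because a constant shift drops out of the softmax, the stratum parameters never need to be estimated, which is exactly why the conditional model scales to many strata.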
Very low scale Coleman-Weinberg inflation with nonminimal coupling
NASA Astrophysics Data System (ADS)
Kaneta, Kunio; Seto, Osamu; Takahashi, Ryo
2018-03-01
We study viable small-field Coleman-Weinberg (CW) inflation models with the help of nonminimal coupling to gravity. The simplest small-field CW inflation model (with a low-scale potential minimum) is incompatible with the cosmological constraint on the scalar spectral index. However, there are possibilities to make the model realistic. First, we revisit the CW inflation model supplemented with a linear potential term. We next consider the CW inflation model with a logarithmic nonminimal coupling and illustrate that the model can open a new viable parameter space that includes the model with a linear potential term. We also show parameter spaces where the Hubble scale during inflation can be as small as 10^{-4} GeV, 1 GeV, 10^{4} GeV, and 10^{8} GeV for a number of e-folds of 40, 45, 50, and 55, respectively, with other cosmological constraints being satisfied.
Deflation of the cosmological constant associated with inflation and dark energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Chao-Qiang; Lee, Chung-Chi, E-mail: geng@phys.nthu.edu.tw, E-mail: chungchi@mx.nthu.edu.tw
2016-06-01
In order to solve the fine-tuning problem of the cosmological constant, we propose a simple model with the vacuum energy non-minimally coupled to the inflaton field. In this model, the vacuum energy decays to the inflaton during pre-inflation and inflation eras, so that the cosmological constant effectively deflates from the Planck mass scale to a much smaller one after inflation and plays the role of dark energy in the late-time of the universe. We show that our deflationary scenario is applicable to arbitrary slow-roll inflation models. We also take two specific inflation potentials to illustrate our results.
Nonlinear Poisson Equation for Heterogeneous Media
Hu, Langhua; Wei, Guo-Wei
2012-01-01
The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration hyperpolarization effects due to intensive charges and possible nonlinear, anisotropic, and heterogeneous media. The variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation to solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and by the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical predictions, experimental measurements, and those obtained from other theoretical methods in the literature. The good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. PMID:22947937
Gravitational waves from warm inflation
NASA Astrophysics Data System (ADS)
Li, Xi-Bin; Wang, He; Zhu, Jian-Yang
2018-03-01
A fundamental prediction of inflation is a nearly scale-invariant spectrum of gravitational waves. The features of such a signal provide extremely important information about the physics of the early universe. In this paper, we focus on several topics in warm inflation. First, we discuss the stability of warm inflation on the basis of nonequilibrium statistical mechanics, which gives a more fundamental physical picture of the thermal properties of such models. We then calculate the power spectrum of gravitational waves generated during warm inflation, to which three components contribute: a thermal term, a quantum term, and a cross term combining the two. We also discuss some interesting properties of these terms and illustrate them in different panels. Because warm inflation differs from cold inflation, it has distinctive observational signatures, so we conclude with a discussion of the observational effects that distinguish it from cold inflation.
Lamm, Steven H; Robbins, Shayhan A; Zhou, Chao; Lu, Jun; Chen, Rusan; Feinleib, Manning
2013-02-01
To examine the analytic role of arsenic exposure on cancer mortality among the low-dose (well water arsenic level <150 μg/L) villages in the Blackfoot-disease (BFD) endemic area of southwest Taiwan and with respect to the southwest regional data. Poisson analyses of the bladder and lung cancer deaths with respect to arsenic exposure (μg/kg/day) for the low-dose (<150 μg/L) villages with exposure defined by the village median, mean, or maximum and with or without regional data. Use of the village median well water arsenic level as the exposure metric introduced misclassification bias by including villages with levels >500 μg/L, but use of the village mean or the maximum did not. Poisson analyses using mean or maximum arsenic levels showed significant negative cancer slope factors for models of bladder cancers and of bladder and lung cancers combined. Inclusion of the southwest Taiwan regional data did not change the findings when the model contained an explanatory variable for non-arsenic differences. A positive slope could only be generated by including the comparison population as a separate data point with the assumption of zero arsenic exposure from drinking water and eliminating the variable for non-arsenic risk factors. The cancer rates are higher among the low-dose (<150 μg/L) villages in the BFD area than in the southwest Taiwan region. However, among the low-dose villages in the BFD area, cancer risks suggest a negative association with well water arsenic levels. Positive differences from regional data seem attributable to non-arsenic ecological factors. Copyright © 2012 Elsevier Inc. All rights reserved.
Accidental inflation from Kähler uplifting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ben-Dayan, Ido; Westphal, Alexander; Wieck, Clemens
2014-03-01
We analyze the possibility of realizing inflation with a subsequent dS vacuum in the Kähler uplifting scenario. The inclusion of several quantum corrections to the 4d effective action evades previous no-go theorems and allows for the construction of simple and successful models of string inflation. The predictions of several benchmark models are in accord with current observations, i.e., a red spectral index, negligible non-gaussianity, and spectral distortions similar to the simplest models of inflation. A particularly interesting subclass of models are ''left-rolling'' ones, where the overall volume of the compactified dimensions shrinks during inflation. We call this phenomenon ''inflation by deflation'' (IBD), where deflation refers to the internal manifold. This subclass has the appealing features of being insensitive to initial conditions, avoiding the overshooting problem, and allowing for observable running α ∼ 0.012 and enhanced tensor-to-scalar ratio r ∼ 10^{-5}. The latter results differ significantly from many string inflation models.
2008-09-26
foreign currency for essential imports, particularly fuel, is in extremely short supply. The IMF suggests that the inflation rate will not reverse without...international assessments of Zimbabwe’s economic prospects remain bleak. Ignoring the advice of the IMF, the government has refused to devalue the official...exchange rate. Instead, in June 2006, Gono devalued the country’s currency, the Zimbabwe dollar, removing three zeros in an effort to mitigate
Jesse D. Young; Nathaniel M. Anderson; Helen T. Naughton; Katrina Mullan
2018-01-01
Abundant stocks of woody biomass that are associated with active forest management can be used as fuel for bioenergy in many applications. Though factors driving large-scale biomass use in industrial settings have been studied extensively, small-scale biomass combustion systems commonly used by institutions for heating have received less attention. A zero inflated...
Tax revenue and inflation rate predictions in Banda Aceh using Vector Error Correction Model (VECM)
NASA Astrophysics Data System (ADS)
Maulia, Eva; Miftahuddin; Sofyan, Hizir
2018-05-01
A country tracks several important parameters in pursuing economic welfare, such as tax revenue and inflation. One of the largest revenue sources in Indonesia's state budget is the tax sector. In addition, the inflation rate can serve as one measure of the economic problems a country is facing. Given the importance of tax revenue and inflation control in achieving economic prosperity, it is necessary to analyze the relationship between, and to forecast, tax revenue and the inflation rate. VECM (Vector Error Correction Model) was chosen as the method used in this research because the data take the form of multivariate time series. This study aims to produce a VECM model with optimal lag and to predict tax revenue and the inflation rate from that model. The results show that the best model for the tax revenue and inflation rate data in Banda Aceh City is a VECM with optimal lag 3, i.e., VECM(3). Of the seven models formed, one is significant, namely the income tax revenue model. The predictions of tax revenue and the inflation rate in Banda Aceh City for the next 6, 12 and 24 periods (months) obtained using VECM(3) are considered valid, since they have the minimum error values compared to the other models.
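For readers unfamiliar with the method, the generic form of a vector error correction model (standard textbook notation, not reproduced from this paper) is:

```latex
\Delta y_t = \Pi y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \, \Delta y_{t-i} + \mu + \varepsilon_t,
\qquad \Pi = \alpha \beta',
```

where the columns of β are the cointegrating vectors, α contains the speeds of adjustment toward long-run equilibrium, and the Γ_i capture short-run dynamics. Lag-order conventions differ across software packages, so "VECM(3)" may refer either to three lagged differences or to a level VAR of order three.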
A new model for CD8+ T cell memory inflation based upon a recombinant adenoviral vector.
Bolinger, Beatrice; Sims, Stuart; O’Hara, Geraldine; de Lara, Catherine; Tchilian, Elma; Firner, Sonja; Engeler, Daniel; Ludewig, Burkhard; Klenerman, Paul
2013-01-01
CD8+ T cell memory inflation, first described in murine cytomegalovirus (MCMV) infection, is characterized by the accumulation of high-frequency, functional antigen-specific CD8+ T cell pools with an effector-memory phenotype and enrichment in peripheral organs. Although persistence of antigen is considered essential, the rules underpinning memory inflation are still unclear. The MCMV model is, however, complicated by the virus’s low-level persistence and stochastic reactivation. We developed a new model of memory inflation based upon a βgal-recombinant adenovirus vector (Ad-LacZ). After i.v. administration in C57BL/6 mice we observe marked memory inflation directed against the βgal96 epitope, while a second epitope, βgal497, undergoes classical memory formation. The inflationary T cell responses show kinetics, distribution, phenotype and functions similar to those seen in MCMV and are reproduced using alternative routes of administration. Memory inflation in this model is dependent on MHC Class II. As in MCMV, only the inflating epitope showed immunoproteasome-independence. These data define a new model for memory inflation, which is fully replication-independent, internally controlled and reproduces the key immunologic features of the CD8+ T cell response. This model provides insight into the mechanisms responsible for memory inflation, and since it is based on a vaccine vector, is also relevant to novel T cell-inducing vaccines in humans. PMID:23509359
Gillis, Jennifer; Bayoumi, Ahmed M; Burchell, Ann N; Cooper, Curtis; Klein, Marina B; Loutfy, Mona; Machouf, Nima; Montaner, Julio Sg; Tsoukas, Chris; Hogg, Robert S; Raboud, Janet
2015-10-26
As the average age of the HIV-positive population increases, there is increasing need to monitor patients for the development of comorbidities as well as for drug toxicities. We examined factors associated with the frequency of measurement of liver enzymes, renal function tests, and lipid levels among participants of the Canadian Observational Cohort (CANOC) collaboration, which follows people who initiated HIV antiretroviral therapy in 2000 or later. We used zero-inflated negative binomial regression models to examine the associations of demographic and clinical characteristics with the rates of measurement during follow-up. Generalized estimating equations with a logit link were used to examine factors associated with gaps of 12 months or more between measurements. Electronic laboratory data were available for 3940 of 7718 CANOC participants. The median duration of electronic follow-up was 3.5 years. The median (interquartile range) rates of tests per year were 2.76 (1.60, 3.73), 2.55 (1.44, 3.38) and 1.42 (0.50, 2.52) for liver, renal and lipid parameters, respectively. In multivariable zero-inflated negative binomial regression models, individuals infected through injection drug use (IDU) were significantly less likely to have any measurements. Among participants with at least one measurement, rates of measurement of liver, renal and lipid tests were significantly lower for younger individuals and Aboriginal Peoples. Hepatitis C co-infected individuals with a history of IDU had lower rates of measurement and were at greater risk of having 12-month gaps between measurements. Hepatitis C co-infected participants infected through IDU were at increased risk of gaps in testing, despite publicly funded health care and increased risk of comorbid conditions. This should be taken into consideration in analyses examining factors associated with outcomes based on laboratory parameters.
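The zero-inflated negative binomial (ZINB) model used in this study combines a structural-zero component with a negative binomial count component. A generic sketch of its probability mass function follows (an illustration under the standard mean/dispersion parameterization; the names are my own, not from the paper):

```python
import math

def nb_pmf(y, mu, alpha):
    """Negative binomial pmf with mean mu and variance mu + alpha*mu^2,
    i.e. size parameter r = 1/alpha."""
    r = 1.0 / alpha
    logp = (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
            + r * math.log(r / (r + mu)) + y * math.log(mu / (r + mu)))
    return math.exp(logp)

def zinb_pmf(y, mu, alpha, pi):
    """Zero-inflated negative binomial: structural zeros with probability pi,
    otherwise an NB(mu, alpha) count."""
    p = (1 - pi) * nb_pmf(y, mu, alpha)
    return p + pi if y == 0 else p
```

Compared with the ZIP model, the extra dispersion parameter `alpha` lets the count component itself be overdispersed, so excess zeros and a heavy right tail can be modeled simultaneously.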
Structural testing and analysis of a braided, inflatable fabric torus structure
NASA Astrophysics Data System (ADS)
Young, Andrew C.; Davids, William G.; Whitney, Daniel J.; Clapp, Joshua D.; Goupee, Andrew J.
2017-10-01
Inflatable structural members have military, disaster relief, aerospace and other important applications as they possess low mass, can be stored in a relatively small volume and have significant load-carrying capacity once pressurized. Of particular interest to the present research is the Hypersonic Inflatable Aerodynamic Decelerator (HIAD) structure under development by NASA. In order to make predictions about the structural response of the HIAD system, it is necessary to understand the response of individual inflatable tori composing the HIAD structure. These inflatable members present unique challenges to structural testing and modeling due to their internal inflation pressure and relative compliance. Structural testing was performed on a braided, inflatable, toroidal structural member with axial reinforcing cords. The internal inflation pressure, magnitude of enforced displacement and loading methodology were varied. In-plane and out-of-plane experimental results were compared to model predictions using a three dimensional, corotational, flexibility-based fiber-beam finite element model including geometric and material nonlinearities, as well as the effects of inflation pressure. It was found that in order to approximate the load-deformation response observed in experimentation it is necessary to carefully control the test and model boundary conditions and loading scheme.
Modeling inflation rates and exchange rates in Ghana: application of multivariate GARCH models.
Nortey, Ezekiel Nn; Ngoh, Delali D; Doku-Amponsah, Kwabena; Ofori-Boateng, Kenneth
2015-01-01
This paper investigated the volatility of, and conditional relationships among, inflation rates, exchange rates and interest rates, and constructed models using multivariate GARCH DCC and BEKK specifications with Ghanaian data from January 1990 to December 2013. The study revealed that the cumulative depreciation of the cedi against the US dollar from 1990 to 2013 was 7,010.2% and the yearly weighted depreciation of the cedi against the US dollar over the period was 20.4%. There was evidence that a stable inflation rate does not imply that exchange rates and interest rates will also be stable. Rather, when the cedi performs well on the forex market, inflation rates and interest rates react positively and become stable in the long run. The BEKK model is robust for modelling and forecasting the volatility of inflation rates, exchange rates and interest rates. The DCC model is robust for modelling the conditional and unconditional correlations among inflation rates, exchange rates and interest rates. The BEKK model, which forecasted high exchange rate volatility for the year 2014, is very robust for modelling exchange rates in Ghana. The mean equation of the DCC model is also robust for forecasting inflation rates in Ghana.
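For reference, the two multivariate GARCH specifications compared in this study have the following standard forms (textbook notation, not reproduced from the paper). The BEKK(1,1) recursion for the conditional covariance matrix is

```latex
H_t = C'C + A' \varepsilon_{t-1} \varepsilon_{t-1}' A + B' H_{t-1} B,
```

which is positive definite by construction, while the DCC model factorizes $H_t = D_t R_t D_t$, with univariate GARCH volatilities on the diagonal of $D_t$ and a time-varying correlation matrix $R_t$. This split mirrors the division of labour reported by the authors: BEKK for volatility forecasts, DCC for the correlation structure.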
Derivation of Poisson and Nernst-Planck equations in a bath and channel from a molecular model.
Schuss, Z; Nadler, B; Eisenberg, R S
2001-09-01
Permeation of ions from one electrolytic solution to another, through a protein channel, is a biological process of considerable importance. Permeation occurs on a time scale of micro- to milliseconds, far longer than the femtosecond time scales of atomic motion. Direct simulations of atomic dynamics are not yet possible for such long-time scales; thus, averaging is unavoidable. The question is what and how to average. In this paper, we average a Langevin model of ionic motion in a bulk solution and protein channel. The main result is a coupled system of averaged Poisson and Nernst-Planck equations (CPNP) involving conditional and unconditional charge densities and conditional potentials. The resulting NP equations contain the averaged force on a single ion, which is the sum of two components. The first component is the gradient of a conditional electric potential that is the solution of Poisson's equation with conditional and permanent charge densities and boundary conditions of the applied voltage. The second component is the self-induced force on an ion due to surface charges induced only by that ion at dielectric interfaces. The ion induces surface polarization charge that exerts a significant force on the ion itself, not present in earlier PNP equations. The proposed CPNP system is not complete, however, because the electric potential satisfies Poisson's equation with conditional charge densities, conditioned on the location of an ion, while the NP equations contain unconditional densities. The conditional densities are closely related to the well-studied pair-correlation functions of equilibrium statistical mechanics. We examine a specific closure relation, which on the one hand replaces the conditional charge densities by the unconditional ones in the Poisson equation, and on the other hand replaces the self-induced force in the NP equation by an effective self-induced force. 
This effective self-induced force is nearly zero in the baths but is approximately equal to the self-induced force in and near the channel. The charge densities in the NP equations are interpreted as time averages over long times of the motion of a quasiparticle that diffuses with the same diffusion coefficient as that of a real ion, but is driven by the averaged force. In this way, continuum equations with averaged charge densities and mean-fields can be used to describe permeation through a protein channel.
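As background for the averaged system described above, the classical mean-field Poisson and Nernst-Planck equations take the following standard steady-state form (the paper's conditional versions replace these densities and potentials with conditional ones):

```latex
\nabla \cdot \big( \varepsilon(\mathbf{x}) \, \nabla \phi \big)
  = -\Big( \rho_{\mathrm{perm}} + \sum_i z_i e \, c_i \Big),
\qquad
\nabla \cdot \Big[ D_i \Big( \nabla c_i + \frac{z_i e}{k_B T} \, c_i \nabla \phi \Big) \Big] = 0,
```

where φ is the electric potential, c_i, z_i and D_i are the concentration, valence and diffusion coefficient of ionic species i, and ρ_perm is the permanent charge density.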
Lord, Dominique; Guikema, Seth D; Geedipally, Srinivas Reddy
2008-05-01
This paper documents the application of the Conway-Maxwell-Poisson (COM-Poisson) generalized linear model (GLM) for modeling motor vehicle crashes. The COM-Poisson distribution, originally developed in 1962, has recently been re-introduced by statisticians for analyzing count data subject to over- and under-dispersion. This innovative distribution is an extension of the Poisson distribution. The objectives of this study were to evaluate the application of the COM-Poisson GLM for analyzing motor vehicle crashes and to compare the results with the traditional negative binomial (NB) model. The comparison analysis was carried out using the most common functional forms employed by transportation safety analysts, which link crashes to the entering flows at intersections or on segments. To accomplish the objectives of the study, several NB and COM-Poisson GLMs were developed and compared using two datasets. The first dataset contained crash data collected at signalized four-legged intersections in Toronto, Ont. The second dataset included data collected for rural four-lane divided and undivided highways in Texas. Several methods were used to assess the statistical fit and predictive performance of the models. The results of this study show that COM-Poisson GLMs perform as well as NB models in terms of GOF statistics and predictive performance. Given that the COM-Poisson distribution can also handle under-dispersed data (while the NB distribution cannot, or has difficulty converging), which have sometimes been observed in crash databases, the COM-Poisson GLM offers a better alternative to the NB model for modeling motor vehicle crashes, especially given the important limitations recently documented in the safety literature about the latter type of model.
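The COM-Poisson distribution discussed above can be sketched directly from its definition. This is a generic illustration (the truncation length for the normalizing constant is my own assumption), not the authors' code:

```python
import math

def com_poisson_pmf(y, lam, nu, terms=100):
    """COM-Poisson pmf: P(Y = y) proportional to lam**y / (y!)**nu.
    nu = 1 recovers the Poisson; nu > 1 gives under-dispersion and
    nu < 1 over-dispersion.  The normalizing constant Z(lam, nu) has
    no closed form and is computed here by a truncated sum in log space."""
    logw = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(terms)]
    m = max(logw)
    logz = m + math.log(sum(math.exp(w - m) for w in logw))
    return math.exp(y * math.log(lam) - nu * math.lgamma(y + 1) - logz)
```

Working in log space avoids overflow in the factorial powers; the single extra parameter ν is what lets the same family cover both the over-dispersed and the under-dispersed crash data described in the abstract.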
Inflation model selection meets dark radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tram, Thomas; Vallance, Robert; Vennin, Vincent, E-mail: thomas.tram@port.ac.uk, E-mail: robert.vallance@student.manchester.ac.uk, E-mail: vincent.vennin@port.ac.uk
2017-01-01
We investigate how inflation model selection is affected by the presence of additional free-streaming relativistic degrees of freedom, i.e. dark radiation. We perform a full Bayesian analysis of both inflation parameters and cosmological parameters, taking reheating into account self-consistently. We compute the Bayesian evidence for a few representative inflation scenarios in both the standard ΛCDM model and an extension including dark radiation parametrised by its effective number of relativistic species N_eff. Using a minimal dataset (Planck low-ℓ polarisation, temperature power spectrum and lensing reconstruction), we find that the observational status of most inflationary models is unchanged. The exceptions are potentials such as power-law inflation that predict large values for the scalar spectral index that can only be realised when N_eff is allowed to vary. Adding baryon acoustic oscillation data and the B-mode data from BICEP2/Keck makes power-law inflation disfavoured, while adding local measurements of the Hubble constant H_0 makes power-law inflation slightly favoured compared to the best single-field plateau potentials. This illustrates how the dark radiation solution to the H_0 tension would have deep consequences for inflation model selection.
Issues on generating primordial anisotropies at the end of inflation
NASA Astrophysics Data System (ADS)
Emami, Razieh; Firouzjahi, Hassan
2012-01-01
We revisit the idea of generating primordial anisotropies at the end of inflation in models of inflation with gauge fields. To be specific, we consider the charged hybrid inflation model where the waterfall field is charged under a U(1) gauge field, so the surface of end of inflation is controlled both by the inflaton and the gauge fields. Using the δN formalism properly, we find that the anisotropies generated at the end of inflation from the gauge field fluctuations are exponentially suppressed on cosmological scales. This is because the gauge field evolves exponentially during inflation, whereas the spectator gauge field would have to remain frozen in order to generate appreciable anisotropies at the end of inflation. We argue that this is a generic feature, that is, one cannot generate observable anisotropies at the end of inflation within an FRW background.
Carrel, Margaret; Voss, Paul; Streatfield, Peter K; Yunus, Mohammad; Emch, Michael
2010-03-22
Alteration of natural or historical aquatic flows can have unintended consequences for regions where waterborne diseases are endemic and where the epidemiologic implications of such change are poorly understood. The implementation of flood protection measures for a portion of an intensely monitored population in Matlab, Bangladesh, allows us to examine whether cholera outcomes respond positively or negatively to measures designed to control river flooding. Using a zero-inflated negative binomial model, we examine how selected covariates can simultaneously distinguish household clusters reporting no cholera from those with positive counts, as well as distinguish residential areas with low cholera counts from areas with high counts. Our goal is to examine how residence within or outside a flood-protected area interacts with the probability of cholera presence and the effect of flood protection on the magnitude of cholera prevalence. In Matlab, living in a household that is protected from annual monsoon flooding appears to have no significant effect on whether the household experiences cholera, net of other covariates. However, counter-intuitively, among households where cholera is reported, living within the flood-protected region significantly increases the number of cholera cases. The construction of dams or other water impoundment strategies for economic or social motives can have profound and unanticipated consequences for waterborne disease. Our results indicate that the construction of a flood control structure in rural Bangladesh is correlated with an increase in cholera cases for residents protected from annual monsoon flooding. Such a finding requires attention from both the health community and from governments and non-governmental organizations involved in ongoing water management schemes.
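The zero-inflated negative binomial model used in this study separates a structural-zero component from an ordinary count component. A minimal sketch of its pmf (the parameterization and names below are ours, not the paper's) makes the two roles explicit:

```python
import math

def nb_pmf(y, r, p):
    """Negative binomial pmf: P(Y = y) = C(y + r - 1, y) * p**r * (1 - p)**y,
    with mean r * (1 - p) / p. Computed via log-gamma for stability."""
    log_coef = math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
    return math.exp(log_coef + r * math.log(p) + y * math.log(1.0 - p))

def zinb_pmf(y, pi_zero, r, p):
    """Zero-inflated NB: with probability pi_zero the observation is a
    structural zero (e.g. a household cluster never exposed); otherwise it
    comes from the NB count process that governs the magnitude."""
    structural = pi_zero if y == 0 else 0.0
    return structural + (1.0 - pi_zero) * nb_pmf(y, r, p)

# The zero mass is inflated above what the NB component alone would give:
p_zero = zinb_pmf(0, 0.4, r=2.0, p=0.5)  # 0.4 + 0.6 * 0.25 = 0.55
```

In a regression setting such as the one in the abstract, pi_zero and the NB mean would each be linked to covariates (typically logit and log links), which is how one set of covariates can act on the zero process and the count process simultaneously.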
Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.
Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah
2012-01-01
Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis.
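As a sketch of the distribution behind these GLMs (not the authors' estimation code), the COM-Poisson pmf can be computed by truncating its normalizing constant; working in log space keeps the factorial term from overflowing:

```python
import math

def com_poisson_pmf(y, lam, nu, max_terms=200):
    """COM-Poisson pmf: P(Y = y) is proportional to lam**y / (y!)**nu.
    nu < 1 gives overdispersion, nu = 1 the ordinary Poisson, and
    nu > 1 underdispersion (which the NB cannot represent)."""
    def log_weight(k):
        return k * math.log(lam) - nu * math.lgamma(k + 1)
    # Truncated normalizing constant Z(lam, nu); 200 terms is ample here.
    log_z = math.log(sum(math.exp(log_weight(k)) for k in range(max_terms)))
    return math.exp(log_weight(y) - log_z)

# nu = 1 recovers the Poisson pmf: P(Y = 2) = exp(-3) * 3**2 / 2!
p2 = com_poisson_pmf(2, lam=3.0, nu=1.0)
```

In a GLM, lam would typically be tied to covariates through a log link while nu is estimated from the data, which is what lets one family cover under-, equi-, and overdispersed datasets.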
Nonlinear Poisson equation for heterogeneous media.
Hu, Langhua; Wei, Guo-Wei
2012-08-22
The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration hyperpolarization effects due to intensive charges and possible nonlinear, anisotropic, and heterogeneous media. The variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation to solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical predictions, experimental measurements, and those obtained from other theoretical methods in the literature. The good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Campbell, Kieran R; Yau, Christopher
2017-03-15
Modeling bifurcations in single-cell transcriptomics data has become an increasingly popular field of research. Several methods have been proposed to infer bifurcation structure from such data, but all rely on heuristic non-probabilistic inference. Here we propose the first generative, fully probabilistic model for such inference, based on a Bayesian hierarchical mixture of factor analyzers. Our model exhibits competitive performance on large datasets despite implementing full Markov chain Monte Carlo sampling, and its unique hierarchical prior structure enables automatic determination of genes driving the bifurcation process. We additionally propose an Empirical-Bayes-like extension that deals with the high levels of zero-inflation in single-cell RNA-seq data, and we quantify when such models are useful. We apply our model to both real and simulated single-cell gene expression data and compare the results to existing pseudotime methods. Finally, we discuss both the merits and weaknesses of such a unified, probabilistic approach in the context of practical bioinformatics analyses.
A Linearized and Incompressible Constitutive Model for Arteries
Liu, Y.; Zhang, W.; Wang, C.; Kassab, G. S.
2011-01-01
In many biomechanical studies, blood vessels can be modeled as pseudoelastic orthotropic materials that are incompressible (volume-preserving) under physiological loading. To use a minimum number of elastic constants to describe the constitutive behavior of arteries, we adopt a generalized Hooke’s law for the co-rotational Cauchy stress and a recently proposed logarithmic-exponential strain. This strain tensor absorbs the material nonlinearity and its trace is zero for volume-preserving deformations. Thus, the relationships between model parameters due to the incompressibility constraint are easy to analyze and interpret. In particular, the number of independent elastic constants reduces from ten to seven in the orthotropic model. As an illustrative study, we fit this model to measured data of porcine coronary arteries in inflation-stretch tests. Four parameters, n (material nonlinearity), Young’s moduli E1 (circumferential), E2 (axial), and E3 (radial) are necessary to fit the data. The advantages and limitations of this model are discussed. PMID:21605567
A generalized right truncated bivariate Poisson regression model with applications to health data
Islam, M. Ataharul; Chowdhury, Rafiqul I.
2017-01-01
A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and over- or underdispersion are illustrated for both untruncated and right-truncated bivariate Poisson regression models using a marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on the number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute, and it is evident from the results that the models fit the data very well. A comparison between the right-truncated and untruncated bivariate Poisson regression models using the test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model. PMID:28586344
Socioeconomic differences in alcohol-related risk-taking behaviours.
Livingston, Michael
2014-11-01
There is substantial research showing that low socioeconomic position is a predictor of negative outcomes from alcohol consumption, while alcohol consumption itself does not exhibit a strong social gradient. This study aims to examine socioeconomic differences in self-reported alcohol-related risk-taking behaviour to explore whether differences in risk-taking while drinking may explain some of the socioeconomic disparities in alcohol-related harm. Cross-sectional data from current drinkers (n = 21 452) in the 2010 wave of the Australian National Drug Strategy Household Survey were used. Ten items on risk-taking behaviour while drinking were combined into two risk scores, and zero-inflated Poisson regression was used to assess the relationship between socioeconomic position and risk-taking while controlling for age, sex and alcohol consumption. Socioeconomically advantaged respondents reported substantially higher rates of alcohol-related hazardous behaviour than socioeconomically disadvantaged respondents. Controlling for age, sex, volume of drinking and frequency of heavy drinking, respondents living in the most advantaged quintile of neighbourhoods reported significantly higher rates of hazardous behaviour than those in the least advantaged quintile. A similar pattern was evident for household income. Socioeconomically advantaged Australians engage in alcohol-related risky behaviour at higher rates than more disadvantaged Australians even with alcohol consumption controlled. The significant socioeconomic disparities in negative consequences linked to alcohol consumption cannot in this instance be explained via differences in behaviour while drinking. Other factors not directly related to alcohol consumption may be responsible for health inequalities in outcomes with significant alcohol involvement. © 2014 Australasian Professional Society on Alcohol and other Drugs.
Association of Maternal Depressive Symptoms and Offspring Physical Health in Low-Income Families.
Thompson, Sarah M; Jiang, Lu; Hammen, Constance; Whaley, Shannon E
2018-06-01
Objectives The present study sought to examine the association between maternal depressive symptoms and characteristics of offspring physical health, including health status, health behaviors, and healthcare utilization, among low-income families. Maternal engagement was explored as a mediator of observed effects. Methods Cross-sectional survey data from a community sample of 4589 low-income women and their preschool-age children participating in the WIC program in Los Angeles County were analyzed using logistic, Poisson, and zero-inflated negative binomial regression. Mediation was tested via conditional process analyses. Results After controlling for the effects of demographic characteristics including maternal health insurance coverage, employment status, education, and preferred language, children of depressed women (N = 1025) were significantly more likely than children of non-depressed women (N = 3564) to receive a "poor" or "fair" maternal rating of general health (OR 2.34), eat fewer vegetables (IRR: 0.94) more sweets (IRR: 1.20) and sugary drinks daily (IRR: 1.32), and consume fast food more often (OR 1.21). These children were also less likely to have health insurance (OR 1.59) and more likely to receive medical care from a public medical clinic or hospital emergency room (OR 1.30). Reduced maternal engagement partially mediated associations between maternal depressive symptoms and several child health outcomes including poor diet, health insurance coverage, and use of public medical services. Conclusions for Practice Maternal depressive symptoms are associated with poor health among preschool-age children in low-income families. Prevention, screening, and treatment efforts aimed at reducing the prevalence of maternal depression may positively affect young children's health.
Mantonanaki, Magdalini; Koletsi-Kounari, Haroula; Mamai-Homata, Eleni; Papaioannou, William
2013-04-01
To assess dental caries experience and the use of dental services in 5-year-old children attending public kindergartens in Attica, Greece, and to examine the influence of certain socioeconomic factors and living conditions as well as dental behaviours and attitudes. In this cross-sectional study, a random and stratified sample of 605 Greek children was examined using decayed, missing, filled tooth surfaces and simplified debris indices. The use of dental services was measured by children's dental visits (any dental visit up to the age of 5 years). The Care Index was also calculated. Risk indicators were assessed by a questionnaire. Zero-inflated Poisson and logistic regression analyses were used to test for statistically significant associations. The prevalence of dental caries was 16.5%. The Care Index was 32% and dental visits were reported for 84% of the children. Medium Socio-Economic Level (SEL) was associated with no detectable caries. High SEL was related to decreased decayed, missing, filled teeth values, while female gender and rented houses had the opposite effect. The age of the mother (35-39 years) and higher SEL were related to higher levels of dental services use. It is suggested that there are differences in the experience of dental caries and use of dental services among preschool children in Attica, which are related to demographic and socioeconomic factors and living conditions. Public dental policies should focus on groups with specific characteristics in order to improve the oral health of disease-susceptible populations. © 2013 FDI World Dental Federation.
Stability issues of nonlocal gravity during primordial inflation
NASA Astrophysics Data System (ADS)
Belgacem, Enis; Cusin, Giulia; Foffa, Stefano; Maggiore, Michele; Mancarella, Michele
2018-01-01
We study the cosmological evolution of some nonlocal gravity models, when the initial conditions are set during a phase of primordial inflation. We examine in particular three models, the so-called RT, RR and Δ4 models, previously introduced by our group. We find that, during inflation, the RT model has a viable background evolution, but at the level of cosmological perturbations develops instabilities that make it nonviable. In contrast, the RR and Δ4 models have a viable evolution even when their initial conditions are set during a phase of primordial inflation.
Nonthermal gravitino production in tribrid inflation
NASA Astrophysics Data System (ADS)
Antusch, Stefan; Dutta, Koushik
2015-10-01
We investigate nonthermal gravitino production after tribrid inflation in supergravity, which is a variant of supersymmetric hybrid inflation where three fields are involved in the inflationary model and where the inflaton field resides in the matter sector of the theory. In contrast to conventional supersymmetric hybrid inflation, where nonthermal gravitino production imposes severe constraints on the inflationary model, we find that the "nonthermal gravitino problem" is generically absent in models of tribrid inflation, mainly due to two effects: (i) With the inflaton in tribrid inflation (after inflation) being lighter than the waterfall field, the latter has a second decay channel with a much larger rate than for the decay into gravitinos. This reduces the branching ratio for the decay of the waterfall field into gravitinos. (ii) The inflaton generically decays later than the waterfall field, and it does not produce gravitinos when it decays. This leads to a dilution of the gravitino population from the decays of the waterfall field. The combination of both effects generically leads to a strongly reduced gravitino production in tribrid inflation.
Fractional Poisson--a simple dose-response model for human norovirus.
Messner, Michael J; Berger, Philip; Nappier, Sharon P
2014-10-01
This study utilizes old and new Norovirus (NoV) human challenge data to model the dose-response relationship for human NoV infection. The combined data set is used to update estimates from a previously published beta-Poisson dose-response model that includes parameters for virus aggregation and for a beta-distribution that describes variable susceptibility among hosts. The quality of the beta-Poisson model is examined and a simpler model is proposed. The new model (fractional Poisson) characterizes hosts as either perfectly susceptible or perfectly immune, requiring a single parameter (the fraction of perfectly susceptible hosts) in place of the two-parameter beta-distribution. A second parameter is included to account for virus aggregation in the same fashion as it is added to the beta-Poisson model. Infection probability is simply the product of the probability of nonzero exposure (at least one virus or aggregate is ingested) and the fraction of susceptible hosts. The model is computationally simple and appears to be well suited to the data from the NoV human challenge studies. The model's deviance is similar to that of the beta-Poisson, but with one parameter, rather than two. As a result, the Akaike information criterion favors the fractional Poisson over the beta-Poisson model. At low, environmentally relevant exposure levels (<100), estimation error is small for the fractional Poisson model; however, caution is advised because no subjects were challenged at such a low dose. New low-dose data would be of great value to further clarify the NoV dose-response relationship and to support improved risk assessment for environmentally relevant exposures. © 2014 Society for Risk Analysis Published 2014. This article is a U.S. Government work and is in the public domain for the U.S.A.
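The fractional Poisson model described above reduces to a one-line curve; the names below (f for the susceptible fraction, mu for the mean aggregate size) are our notation, not the paper's:

```python
import math

def fractional_poisson_response(dose, f, mu=1.0):
    """P(infection) = (fraction of perfectly susceptible hosts) *
    P(at least one virus or aggregate ingested), where the number of
    aggregates ingested is Poisson with mean dose / mu."""
    p_nonzero = 1.0 - math.exp(-dose / mu)  # Poisson P(N >= 1)
    return f * p_nonzero

# At low doses the curve is approximately linear in dose: f * dose / mu.
low = fractional_poisson_response(0.01, f=0.7)
```

Note the high-dose plateau equals f rather than 1, reflecting the model's all-or-nothing view of susceptibility: perfectly immune hosts never become infected no matter the dose.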
Gustafsson, Leif; Sternad, Mikael
2007-10-01
Population models concern collections of discrete entities such as atoms, cells, humans, animals, etc., where the focus is on the number of entities in a population. Because of the complexity of such models, simulation is usually needed to reproduce their complete dynamic and stochastic behaviour. Two main types of simulation models are used for different purposes, namely micro-simulation models, where each individual is described with its particular attributes and behaviour, and macro-simulation models based on stochastic differential equations, where the population is described in aggregated terms by the number of individuals in different states. Consistency between micro- and macro-models is a crucial but often neglected aspect. This paper demonstrates how the Poisson Simulation technique can be used to produce a population macro-model consistent with the corresponding micro-model. This is accomplished by defining Poisson Simulation in strictly mathematical terms as a series of Poisson processes that generate sequences of Poisson distributions with dynamically varying parameters. The method can be applied to any population model. It provides the unique stochastic and dynamic macro-model consistent with a correct micro-model. The paper also presents a general macro form for stochastic and dynamic population models. In an appendix Poisson Simulation is compared with Markov Simulation showing a number of advantages. Especially aggregation into state variables and aggregation of many events per time-step makes Poisson Simulation orders of magnitude faster than Markov Simulation. Furthermore, you can build and execute much larger and more complicated models with Poisson Simulation than is possible with the Markov approach.
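A minimal sketch of the Poisson Simulation idea for a birth-death population (the rates, step size, and Knuth sampler are illustrative choices of ours, not from the paper): each flow over a time step is a Poisson draw whose intensity is the instantaneous rate times the step length.

```python
import math
import random

def poisson_draw(lam, rng):
    """Knuth's multiplication method; adequate for the small per-step
    intensities used here."""
    threshold, product, k = math.exp(-lam), rng.random(), 0
    while product > threshold:
        product *= rng.random()
        k += 1
    return k

def poisson_simulation(x0, birth_rate, death_rate, dt, steps, rng):
    """Macro-level stochastic birth-death model: the state is the aggregate
    population count, and each per-step flow is Poisson(rate * x * dt)."""
    x, trajectory = x0, [x0]
    for _ in range(steps):
        births = poisson_draw(birth_rate * x * dt, rng)
        deaths = poisson_draw(death_rate * x * dt, rng)
        x = max(x + births - deaths, 0)  # crude guard; a careful model caps outflows
        trajectory.append(x)
    return trajectory

trajectory = poisson_simulation(100, birth_rate=0.10, death_rate=0.12,
                                dt=0.1, steps=50, rng=random.Random(1))
```

The state here is an aggregate count rather than a list of individuals, which is exactly the micro-to-macro aggregation the abstract credits for the speed advantage over Markov simulation.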
A Pearson Effective Potential for Monte Carlo Simulation of Quantum Confinement Effects in nMOSFETs
NASA Astrophysics Data System (ADS)
Jaud, Marie-Anne; Barraud, Sylvain; Saint-Martin, Jérôme; Bournel, Arnaud; Dollfus, Philippe; Jaouen, Hervé
2008-12-01
A Pearson Effective Potential model for including quantization effects in the simulation of nanoscale nMOSFETs has been developed. This model, based on a realistic description of the function representing the nonzero size of the electron wave packet, has been used in a Monte Carlo simulator for bulk, single-gate SOI and double-gate SOI devices. In the case of SOI capacitors, the electron density has been computed for a large range of effective fields (between 0.1 MV/cm and 1 MV/cm) and for various silicon film thicknesses (between 5 nm and 20 nm). Good agreement with the Schroedinger-Poisson results is obtained both on the total inversion charge and on the electron density profiles. The ability of an Effective Potential approach to accurately reproduce electrostatic quantum confinement effects is clearly demonstrated.
A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution.
Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep
2017-01-01
The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section.
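For the first class (Poisson marginals with dependence), the classic common-shock construction is easy to sketch; the Knuth sampler below is an illustrative stand-in for a library Poisson generator:

```python
import math
import random

def poisson_draw(lam, rng):
    """Knuth's multiplication method; fine for small rates."""
    threshold, product, k = math.exp(-lam), rng.random(), 0
    while product > threshold:
        product *= rng.random()
        k += 1
    return k

def bivariate_poisson(lam_shared, lam1, lam2, rng):
    """Trivariate reduction: Y1 = X0 + X1 and Y2 = X0 + X2 with independent
    Poisson X's, so each Yi is marginally Poisson(lam_shared + lam_i) and
    Cov(Y1, Y2) = lam_shared."""
    x0 = poisson_draw(lam_shared, rng)
    return x0 + poisson_draw(lam1, rng), x0 + poisson_draw(lam2, rng)

rng = random.Random(3)
pairs = [bivariate_poisson(2.0, 1.0, 3.0, rng) for _ in range(20000)]
mean_y1 = sum(y1 for y1, _ in pairs) / len(pairs)  # close to 2.0 + 1.0 = 3.0
```

The construction only produces nonnegative dependence, which is one reason a review like this also covers mixture and node-conditional constructions that can express richer dependence structures.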
Inflation of Unreefed and Reefed Extraction Parachutes
NASA Technical Reports Server (NTRS)
Ray, Eric S.; Varela, Jose G.
2015-01-01
Data from the Orion and several other test programs have been used to reconstruct inflation parameters for 28 ft D_o extraction parachutes as well as the parent aircraft pitch response during extraction. The inflation force generated by extraction parachutes is recorded directly during tow tests but is usually inferred from the payload accelerometer during Low Velocity Airdrop Delivery (LVAD) flight test extractions. Inflation parameters are dependent on the type of parent aircraft, number of canopies, and standard vs. high altitude extraction conditions. For standard altitudes, single canopy inflations are modeled as infinite mass, but the non-symmetric inflations in a cluster are modeled as finite mass. High altitude extractions have necessitated reefing the extraction parachutes, which are best modeled as infinite mass for those conditions. Distributions of aircraft pitch profiles and inflation parameters have been generated for use in Monte Carlo simulations of payload extractions.
2009-04-01
Fund (IMF) lending has been suspended since 2000 due to nonpayment of arrears, and foreign currency for essential imports, particularly fuel, is in...remain bleak in the near term. Ignoring the advice of the IMF, the government refused to devalue the official exchange rate. Instead, in June 2006...Gono devalued the country’s currency, the Zimbabwe dollar, removing three zeros in an effort to mitigate inflation. Under “Operation Sunrise,” the
No control genes required: Bayesian analysis of qRT-PCR data.
Matz, Mikhail V; Wright, Rachel M; Scott, James G
2013-01-01
Model-based analysis of data from quantitative reverse-transcription PCR (qRT-PCR) is potentially more powerful and versatile than traditional methods. Yet existing model-based approaches cannot properly deal with the higher sampling variances associated with low-abundant targets, nor do they provide a natural way to incorporate assumptions about the stability of control genes directly into the model-fitting process. In our method, raw qPCR data are represented as molecule counts, and described using generalized linear mixed models under Poisson-lognormal error. A Markov Chain Monte Carlo (MCMC) algorithm is used to sample from the joint posterior distribution over all model parameters, thereby estimating the effects of all experimental factors on the expression of every gene. The Poisson-based model allows for the correct specification of the mean-variance relationship of the PCR amplification process, and can also glean information from instances of no amplification (zero counts). Our method is very flexible with respect to control genes: any prior knowledge about the expected degree of their stability can be directly incorporated into the model. Yet the method provides sensible answers without such assumptions, or even in the complete absence of control genes. We also present a natural Bayesian analogue of the "classic" analysis, which uses standard data pre-processing steps (logarithmic transformation and multi-gene normalization) but estimates all gene expression changes jointly within a single model. The new methods are considerably more flexible and powerful than the standard delta-delta Ct analysis based on pairwise t-tests. Our methodology expands the applicability of the relative-quantification analysis protocol all the way to the lowest-abundance targets, and provides a novel opportunity to analyze qRT-PCR data without making any assumptions concerning target stability. These procedures have been implemented as the MCMC.qpcr package in R.
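The Poisson-lognormal error model at the core of the method (the actual MCMC.qpcr package is in R; this Python sketch uses our own names) can be written as a Poisson pmf averaged over a normal perturbation of the log rate:

```python
import math

def poisson_lognormal_pmf(y, mu, sigma, grid=400, width=6.0):
    """P(Y = y) = E_z[Poisson(y; exp(mu + sigma * z))] with z ~ N(0, 1),
    approximated by a midpoint rule over [-width, width]. The lognormal
    mixing supplies extra-Poisson variation; the Poisson part models the
    counting noise that dominates for low-abundance targets (including
    zero counts, which remain informative)."""
    total, step = 0.0, 2.0 * width / grid
    for i in range(grid):
        z = -width + (i + 0.5) * step
        rate = math.exp(mu + sigma * z)
        log_pois = y * math.log(rate) - rate - math.lgamma(y + 1)
        normal_density = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
        total += math.exp(log_pois) * normal_density * step
    return total

# sigma -> 0 collapses to the plain Poisson pmf
p = poisson_lognormal_pmf(2, mu=math.log(3.0), sigma=1e-8)
```

In the full model, mu would itself be a linear predictor with fixed and random effects per gene and sample, which is what lets every experimental factor be estimated jointly.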
Homogeneous cosmological models and new inflation
NASA Technical Reports Server (NTRS)
Turner, Michael S.; Widrow, Lawrence M.
1986-01-01
The promise of the inflationary-universe scenario is to free the present state of the universe from extreme dependence upon initial data. Paradoxically, inflation is usually analyzed in the context of the homogeneous and isotropic Robertson-Walker cosmological models. It is shown that all but a small subset of the homogeneous models undergo inflation. Any initial anisotropy is so strongly damped that if sufficient inflation occurs to solve the flatness and horizon problems, the universe today would still be very isotropic.
Jochem, Warren C; Razzaque, Abdur; Root, Elisabeth Dowling
2016-09-01
Respiratory infections continue to be a public health threat, particularly to young children in developing countries. Understanding the geographic patterns of diseases and the role of potential risk factors can help improve future mitigation efforts. Toward this goal, this paper applies a spatial scan statistic combined with a zero-inflated negative-binomial regression to re-examine the impacts of a community-based treatment program on the geographic patterns of acute lower respiratory infection (ALRI) mortality in an area of rural Bangladesh. Exposure to arsenic-contaminated drinking water is also a serious threat to the health of children in this area, and the variation in exposure to arsenic must be considered when evaluating the health interventions. ALRI mortality data were obtained for children under 2 years old from 1989 to 1996 in the Matlab Health and Demographic Surveillance System. This study period covers the years immediately following the implementation of an ALRI control program. A zero-inflated negative binomial (ZINB) regression model was first used to simultaneously estimate mortality rates and the likelihood of no deaths in groups of related households while controlling for socioeconomic status, potential arsenic exposure, and access to care. Next a spatial scan statistic was used to assess the location and magnitude of clusters of ALRI mortality. The ZINB model was used to adjust the scan statistic for multiple social and environmental risk factors. The results of the ZINB models and spatial scan statistic suggest that the ALRI control program was successful in reducing child mortality in the study area. Exposure to arsenic-contaminated drinking water was not associated with increased mortality. Higher socioeconomic status also significantly reduced mortality rates, even among households who were in the treatment program area. 
Community-based ALRI interventions can be effective at reducing child mortality, though socioeconomic factors may continue to influence mortality patterns. The combination of spatial and non-spatial methods used in this paper has not been applied previously in the literature, and this study demonstrates the importance of such approaches for evaluating and improving public health intervention programs.
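As an illustration of the zero-inflated negative binomial model used in the study above, here is a minimal sketch of the ZINB probability mass function in Python. The parameterization (`pi` for the zero-inflation probability, `mu` for the NB mean, `alpha` for the dispersion) is a common convention, not taken from the paper itself:

```python
from math import exp, lgamma, log

def nb_pmf(k, mu, alpha):
    """Negative binomial pmf parameterized by mean mu and dispersion alpha,
    so that the variance is mu + alpha * mu**2."""
    r = 1.0 / alpha                      # "size" parameter
    p = r / (r + mu)                     # success probability
    return exp(lgamma(k + r) - lgamma(r) - lgamma(k + 1)
               + r * log(p) + k * log(1.0 - p))

def zinb_pmf(k, pi, mu, alpha):
    """Zero-inflated NB: with probability pi the count is a structural zero,
    otherwise it is drawn from the NB component."""
    base = (1.0 - pi) * nb_pmf(k, mu, alpha)
    return pi + base if k == 0 else base
```

Fitting such a model to mortality counts would maximize the sum of `log(zinb_pmf(...))` over households, with `pi` and `mu` linked to covariates; the sketch shows only the likelihood building block.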
Phenomenology of fermion production during axion inflation
NASA Astrophysics Data System (ADS)
Adshead, Peter; Pearce, Lauren; Peloso, Marco; Roberts, Michael A.; Sorbo, Lorenzo
2018-06-01
We study the production of fermions through a derivative coupling with a pseudoscalar inflaton and the effects of the produced fermions on the scalar primordial perturbations. We present analytic results for the modification of the scalar power spectrum due to the produced fermions, and we estimate the amplitude of the non-Gaussianities in the equilateral regime. Remarkably, we find a regime where the effect of the fermions gives the dominant contribution to the scalar spectrum while the amplitude of the bispectrum is small and in agreement with observation. We also note the existence of a regime in which the backreaction of the fermions on the evolution of the zero-mode of the inflaton can lead to inflation even if the potential of the inflaton is steep and does not satisfy the slow-roll conditions.
Kähler-driven tribrid inflation
NASA Astrophysics Data System (ADS)
Antusch, Stefan; Nolde, David
2012-11-01
We discuss a new class of tribrid inflation models in supergravity, where the shape of the inflaton potential is dominated by effects from the Kähler potential. Tribrid inflation is a variant of hybrid inflation which is particularly suited for connecting inflation with particle physics, since the inflaton can be a D-flat combination of charged fields from the matter sector. In models of tribrid inflation studied so far, the inflaton potential was dominated by either loop corrections or by mixing effects with the waterfall field (as in "pseudosmooth" tribrid inflation). Here we investigate the third possibility, namely that tribrid inflation is dominantly driven by effects from higher-dimensional operators of the Kähler potential. We specify for which superpotential parameters the new regime is realized and show how it can be experimentally distinguished from the other two (loop-driven and "pseudosmooth") regimes.
The evolution of rotating very massive stars with LMC composition
NASA Astrophysics Data System (ADS)
Köhler, K.; Langer, N.; de Koter, A.; de Mink, S. E.; Crowther, P. A.; Evans, C. J.; Gräfener, G.; Sana, H.; Sanyal, D.; Schneider, F. R. N.; Vink, J. S.
2015-01-01
Context. With growing evidence for the existence of very massive stars at subsolar metallicity, there is an increased need for corresponding stellar evolution models. Aims: We present a dense model grid with a tailored input chemical composition appropriate for the Large Magellanic Cloud (LMC). Methods: We use a one-dimensional hydrodynamic stellar evolution code, which accounts for rotation, transport of angular momentum by magnetic fields, and stellar wind mass loss to compute our detailed models. We calculate stellar evolution models with initial masses from 70 to 500 M⊙ and with initial surface rotational velocities from 0 to 550 km s-1, covering the core-hydrogen burning phase of evolution. Results: We find our rapid rotators to be strongly influenced by rotationally induced mixing of helium, with quasi-chemically homogeneous evolution occurring for the fastest rotating models. Above 160 M⊙, homogeneous evolution is also established through mass loss, producing pure helium stars at core hydrogen exhaustion independent of the initial rotation rate. Surface nitrogen enrichment is also found for slower rotators, even for stars that lose only a small fraction of their initial mass. For models above ~150 M⊙ at zero age, and for models in the whole considered mass range later on, we find a considerable envelope inflation due to the proximity of these models to their Eddington limit. This leads to a maximum ZAMS surface temperature of ~56 000 K, at ~180 M⊙, and to an evolution of stars in the mass range 50 M⊙...100 M⊙ to the regime of luminous blue variables in the Hertzsprung-Russell diagram with high internal Eddington factors. Inflation also leads to decreasing surface temperatures during the chemically homogeneous evolution of stars above ~180 M⊙. Conclusions: The cool surface temperatures due to the envelope inflation in our models lead to an enhanced mass loss, which prevents stars at LMC metallicity from evolving into pair-instability supernovae. 
The corresponding spin-down will also prevent very massive LMC stars from producing long-duration gamma-ray bursts, which might, however, originate from lower-mass stars. The dataset of the presented stellar evolution models is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/573/A71. Appendices are available in electronic form at http://www.aanda.org
Inflation in the mixed Higgs-R2 model
NASA Astrophysics Data System (ADS)
He, Minxi; Starobinsky, Alexei A.; Yokoyama, Jun'ichi
2018-05-01
We analyze a two-field inflationary model consisting of the Ricci scalar squared (R2) term and the standard Higgs field non-minimally coupled to gravity in addition to the Einstein R term. A detailed analysis of the power spectrum of this model with mass hierarchy is presented, and we find that one can describe this model as an effective single-field model in the slow-roll regime with a modified sound speed. The scalar spectral index predicted by this model coincides with those given by R2 inflation and Higgs inflation, implying that there is a close relation between this model and R2 inflation already in the original (Jordan) frame. For a typical value of the self-coupling of the standard Higgs field at the high energy scale of inflation, the role of the Higgs field in the parameter space involved is to modify the scalaron mass, so that the original mass parameter in R2 inflation can deviate from its standard value when the non-minimal coupling between the Ricci scalar and the Higgs field is large enough.
First-order inflation [in cosmology]
NASA Technical Reports Server (NTRS)
Turner, Michael S.
1992-01-01
I discuss the most recent model of inflation. In first-order inflation the inflationary epoch is associated with a first-order phase transition, with the most likely candidate being GUT symmetry breaking. The transition from the false-vacuum inflationary phase to the true-vacuum radiation-dominated phase proceeds through the nucleation and percolation of true-vacuum bubbles. The first successful and simplest model of first-order inflation, extended inflation, is discussed in some detail: evolution of the cosmic-scale factor, reheating, density perturbations, and the production of gravitational waves both from quantum fluctuations and bubble collisions. Particular attention is paid to the most critical issue in any model of first-order inflation: the requirements on the nucleation rate to ensure a graceful transition from the inflationary phase to the radiation-dominated phase.
Tian, Wei; Yuan, Jiangfan; Yang, Dong; Zhang, Lanjing
2016-01-01
Universal Zero-Markup Drug Policy (UZMDP) mandates no price mark-ups on any drug dispensed by a healthcare institution, and covers the medicines not included in China's National Essential Medicine System. Five tertiary hospitals in Beijing, China implemented UZMDP in 2012. Its impacts on these hospitals are unknown. We described the effects of UZMDP on a participating hospital, Jishuitan Hospital, Beijing, China (JST). This retrospective longitudinal study examined the hospital-level data of JST and city-level data of tertiary hospitals of Beijing, China (BJT) for 2009-2015. Rank-sum tests and join-point regression analyses were used to assess absolute changes and differences in trends, respectively. In absolute terms, after the UZMDP implementation, there were increased annual patient-visits and decreased ratios of medicine-to-healthcare-charges (RMOH) in JST outpatient and inpatient services; however, in the outpatient service, physician work-days decreased while physician workload and inflation-adjusted per-visit healthcare charges increased, whereas inpatient physician work-days increased and the inpatient mortality-rate fell. Interestingly, the decreasing trend in inpatient mortality-rate was neutralized after UZMDP implementation. Compared with BJT and under the influence of UZMDP, JST outpatient and inpatient services both had increasing trends in annual patient-visits (annual percentage changes [APC] = 8.1% and 6.5%, respectively) and decreasing trends in RMOH (APC = -4.3% and -5.4%, respectively), while JST outpatient services had an increasing trend in inflation-adjusted per-visit healthcare charges (APC = 3.4%) and the JST inpatient service had a decreasing trend in inflation-adjusted per-visit medicine-charges (APC = -5.2%). Implementation of UZMDP seems to increase annual patient-visits, reduce RMOH, and have different impacts on outpatient and inpatient services in a Chinese urban tertiary hospital.
Topological inflation with graceful exit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marunović, Anja; Prokopec, Tomislav, E-mail: a.marunovic@uu.nl, E-mail: t.prokopec@uu.nl
We investigate a class of models of topological inflation in which a super-Hubble-sized global monopole seeds inflation. These models are attractive since inflation starts from rather generic initial conditions, but their less attractive feature is that, unless symmetry is again restored, inflation never ends. In this work we show that, in the presence of another nonminimally coupled scalar field that is both quadratically and quartically coupled to the Ricci scalar, inflation naturally ends, representing an elegant solution to the graceful-exit problem of topological inflation. While the monopole core grows during inflation, the growth stops after inflation, such that the monopole eventually enters the Hubble radius and shrinks to its Minkowski-space size, rendering it immaterial for the subsequent dynamics of the Universe. Furthermore, we find that our model can produce cosmological perturbations that source CMB temperature fluctuations and seed large-scale structure, statistically consistent (within one standard deviation) with all available data. In particular, for small and (in our convention) negative nonminimal couplings, the scalar spectral index can be as large as n_s ≃ 0.955, which is about one standard deviation lower than the central value quoted by the most recent Planck Collaboration.
Mutated hilltop inflation revisited
NASA Astrophysics Data System (ADS)
Pal, Barun Kumar
2018-05-01
In this work we re-investigate the pros and cons of mutated hilltop inflation. Applying the Hamilton-Jacobi formalism, we solve the inflationary dynamics and find that inflation proceeds along the W_{-1} branch of the Lambert W function. Depending on the model parameter, the mutated hilltop model yields two types of inflationary solutions: one corresponds to a small inflaton excursion during observable inflation, and the other describes large-field inflation. The inflationary observables from the curvature perturbation are in tune with current data for a wide range of the model parameter. The small-field branch predicts a negligible tensor-to-scalar ratio, r ~ O(10^{-4}), while the large-field sector is capable of generating a high amplitude for tensor perturbations, r ~ O(10^{-1}). Also, the spectral index is almost independent of the model parameter, along with a very small negative scalar running. Finally, we find that mutated hilltop inflation closely resembles the α-attractor class of inflationary models in the limit αφ ≫ 1.
Primordial perturbations in multi-scalar inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abedi, Habib; Abbassi, Amir M., E-mail: h.abedi@ut.ac.ir, E-mail: amabasi@khayam.ut.ac.ir
2017-07-01
Multiple-field models of inflation exhibit new features compared with single-field models. In this work, we study the hierarchy of parameters based on the Hubble expansion rate in curved field space and derive the system of flow equations that describes their evolution. We then focus on obtaining the derivatives of the number of e-folds with respect to the scalar fields during inflation and at the hypersurface of the end of inflation.
Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George
2009-08-01
We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
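The multiscale representation described above rests on a standard Poisson fact: the sum of two independent Poisson counts is Poisson, and conditional on that sum, the left child is binomial with success probability equal to the ratio of the underlying intensities. A hedged sketch of the pairwise-sum pyramid and the empirical rate ratios (function names are illustrative, not from the paper):

```python
def haar_sum_pyramid(counts):
    """Build a multiscale pyramid by pairwise summation; counts length must be
    a power of two. Level 0 is the finest scale, the last level the grand total."""
    levels = [list(counts)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([prev[i] + prev[i + 1] for i in range(0, len(prev), 2)])
    return levels

def rate_ratios(levels):
    """Empirical left-child/parent ratios at each scale; these are the quantities
    modeled as mixtures of conjugate (beta/Dirichlet) densities in such frameworks."""
    out = []
    for fine, coarse in zip(levels[:-1], levels[1:]):
        out.append([fine[2 * i] / coarse[i] if coarse[i] else 0.5
                    for i in range(len(coarse))])
    return out
```

The EM machinery of the paper would then place mixture priors on these ratios; the sketch covers only the decomposition step.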
A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution
Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep
2017-01-01
The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section. PMID:28983398
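The first class surveyed above (multivariate distributions with Poisson marginals) is classically built by trivariate reduction: shared "common shock" Poisson variables induce positive dependence while each margin stays Poisson. A minimal sketch, with illustrative rates and a small Poisson sampler since the standard library has none:

```python
import math, random

def rpois(lam, rng):
    """Knuth's multiplication method for sampling Poisson(lam); fine for small rates."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def bivariate_poisson(lam1, lam2, lam0, n, rng):
    """Common-shock construction: X1 = Y1 + Y0, X2 = Y2 + Y0 with independent
    Poisson Y's, giving Poisson(lam_i + lam0) marginals and Cov(X1, X2) = lam0."""
    pairs = []
    for _ in range(n):
        y0 = rpois(lam0, rng)
        pairs.append((rpois(lam1, rng) + y0, rpois(lam2, rng) + y0))
    return pairs

rng = random.Random(42)
pairs = bivariate_poisson(2.0, 1.5, 1.0, 20000, rng)
m1 = sum(x for x, _ in pairs) / len(pairs)   # ≈ 3.0
m2 = sum(y for _, y in pairs) / len(pairs)   # ≈ 2.5
cov = sum((x - m1) * (y - m2) for x, y in pairs) / len(pairs)
```

A known limitation, noted in reviews of this class, is that the common-shock construction can only produce nonnegative correlation.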
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-23
... inflatable portion of the restraint system will rely on sensors to electronically activate the inflator for... inflatable restraint system relies on sensors to electronically activate the inflator for deployment. These sensors could be susceptible to inadvertent activation, causing deployment in a potentially unsafe manner...
The best-fit universe. [cosmological models
NASA Technical Reports Server (NTRS)
Turner, Michael S.
1991-01-01
Inflation provides very strong motivation for a flat Universe, Harrison-Zel'dovich (constant-curvature) perturbations, and cold dark matter. However, there are a number of cosmological observations that conflict with the predictions of the simplest such model: one with zero cosmological constant. They include the age of the Universe, dynamical determinations of Omega, galaxy-number counts, and the apparent abundance of large-scale structure in the Universe. While the discrepancies are not yet serious enough to rule out the simplest and most well motivated model, the current data point to a best-fit model with the following parameters: Ω_B ≈ 0.03, Ω_CDM ≈ 0.17, Ω_Λ ≈ 0.8, and H_0 ≈ 70 km/(s·Mpc), which significantly improves the concordance with observations. While there is no good reason to expect such a value for the cosmological constant, there is no physical principle that would rule out such a value.
Inflation and the Capital Budgeting Process.
1985-04-01
model. [10:22] Friend, Landskroner and Losq assert that the traditional capital asset pricing model (CAPM) ... value (NPV) capital budgeting model is used extensively in this report, and the Consumer Price Index - Urban (CPI-U) and the Wholesale Price Index (WPI) ... general price level adjustments into the capital budgeting model. The consideration of inflation risk is also warranted. The effects of inflation
Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.
Zhang, Jiachao; Hirakawa, Keigo
2017-04-01
This paper describes a study aimed at comparing the real image sensor noise distribution to the noise models often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising for real image sensor data is indeed improved by this new technique.
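A Poisson mixture has heavier tails than a single Poisson with the same mean, which is the property the model above exploits. The sketch below samples a two-component mixture and checks its overdispersion; the component rates and weights are illustrative, not fitted to any sensor data:

```python
import math, random

def rpois(lam, rng):
    """Knuth's multiplication method for sampling Poisson(lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def poisson_mixture(weights, lams, n, rng):
    """Draw n samples: pick component i with probability weights[i], then Poisson(lams[i])."""
    samples = []
    for _ in range(n):
        u, acc, i = rng.random(), 0.0, 0
        for i, w in enumerate(weights):
            acc += w
            if u < acc:
                break
        samples.append(rpois(lams[i], rng))
    return samples

rng = random.Random(7)
xs = poisson_mixture([0.5, 0.5], [2.0, 20.0], 10000, rng)
mean = sum(xs) / len(xs)                          # ≈ 11
var = sum((x - mean) ** 2 for x in xs) / len(xs)  # ≈ 11 + 0.25 * 18**2 = 92
```

For a single Poisson the variance equals the mean; the between-component spread adds the extra variance that lengthens the tail.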
Alchemical inflation: inflaton turns into Higgs
NASA Astrophysics Data System (ADS)
Nakayama, Kazunori; Takahashi, Fuminobu
2012-11-01
We propose a new inflation model in which a gauge-singlet inflaton turns into the Higgs condensate after inflation. The inflationary path is characterized by a moduli space of supersymmetric vacua spanned by the inflaton and the Higgs field. The inflation energy scale is related to the soft supersymmetry breaking, and the Hubble parameter during inflation is smaller than the gravitino mass. The initial condition for successful inflation is naturally realized by a pre-inflation phase in which the Higgs plays the role of the waterfall field.
Sequestering the standard model vacuum energy.
Kaloper, Nemanja; Padilla, Antonio
2014-03-07
We propose a very simple reformulation of general relativity, which completely sequesters from gravity all of the vacuum energy from a matter sector, including all loop corrections and renders all contributions from phase transitions automatically small. The idea is to make the dimensional parameters in the matter sector functionals of the 4-volume element of the Universe. For them to be nonzero, the Universe should be finite in spacetime. If this matter is the standard model of particle physics, our mechanism prevents any of its vacuum energy, classical or quantum, from sourcing the curvature of the Universe. The mechanism is consistent with the large hierarchy between the Planck scale, electroweak scale, and curvature scale, and early Universe cosmology, including inflation. Consequences of our proposal are that the vacuum curvature of an old and large universe is not zero, but very small, that w(DE) ≃ -1 is a transient, and that the Universe will collapse in the future.
A fractal comparison of real and Austrian business cycle models
NASA Astrophysics Data System (ADS)
Mulligan, Robert F.
2010-06-01
Rescaled range and power spectral density analysis are applied to examine a diverse set of macromonetary data for fractal character and stochastic dependence. Fractal statistics are used to evaluate two competing models of the business cycle, Austrian business cycle theory and real business cycle theory. Strong evidence is found for antipersistent stochastic dependence in transactions money (M1) and components of the monetary aggregates most directly concerned with transactions, which suggests an activist monetary policy. Savings assets exhibit persistent long memory, as do those monetary aggregates which include savings assets, such as savings money (M2), M2 minus small time deposits, and money of zero maturity (MZM). Virtually all measures of economic activity display antipersistence, and this finding is invariant to whether the measures are adjusted for inflation, including real gross domestic product, real consumption expenditures, real fixed private investment, and labor productivity. This strongly disconfirms real business cycle theory.
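Rescaled range (R/S) analysis, the main tool of the study above, estimates the Hurst exponent H from the slope of log(R/S) against log(window size): H ≈ 0.5 indicates no memory, H > 0.5 persistence, H < 0.5 antipersistence. A self-contained sketch (window sizes and series length are illustrative; note that uncorrected R/S is known to bias H upward in small samples):

```python
import math, random

def rescaled_range(x):
    """R/S statistic: range of the mean-adjusted cumulative sum over the std dev."""
    n = len(x)
    m = sum(x) / n
    z, acc = [], 0.0
    for xi in x:
        acc += xi - m
        z.append(acc)
    r = max(z) - min(z)
    s = math.sqrt(sum((xi - m) ** 2 for xi in x) / n)
    return r / s if s > 0 else 0.0

def hurst_rs(x, window_sizes):
    """Regress log(mean R/S) on log(window size); the slope estimates H."""
    pts = []
    for w in window_sizes:
        rs = [rescaled_range(x[i:i + w]) for i in range(0, len(x) - w + 1, w)]
        pts.append((math.log(w), math.log(sum(rs) / len(rs))))
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    num = sum((px - mx) * (py - my) for px, py in pts)
    den = sum((px - mx) ** 2 for px, _ in pts)
    return num / den

rng = random.Random(3)
noise = [rng.gauss(0.0, 1.0) for _ in range(4096)]
h = hurst_rs(noise, [16, 32, 64, 128, 256, 512])  # near 0.5 for white noise
```

Applied to differenced macromonetary series, slopes below 0.5 would correspond to the antipersistence the study reports for transactions money and activity measures.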
Modeling laser velocimeter signals as triply stochastic Poisson processes
NASA Technical Reports Server (NTRS)
Mayo, W. T., Jr.
1976-01-01
Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.
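A nonhomogeneous Poisson process with a burst-shaped intensity, of the kind underlying such LDV signal models, can be simulated by Lewis-Shedler thinning. The sketch below uses an illustrative Doppler-burst intensity (Gaussian envelope modulated by a fringe pattern); all parameter values are assumptions, not taken from the paper:

```python
import math, random

def thinning(rate, lam_max, T, rng):
    """Lewis-Shedler thinning: simulate a nonhomogeneous Poisson process with
    intensity rate(t) <= lam_max on [0, T]; returns sorted event times."""
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)   # candidate from a homogeneous process
        if t > T:
            return events
        if rng.random() < rate(t) / lam_max:
            events.append(t)            # accept with probability rate(t)/lam_max

# Illustrative Doppler burst: Gaussian pedestal modulated by fringes.
A, t0, sigma, f, visibility = 200.0, 0.5, 0.1, 40.0, 0.8

def burst_rate(t):
    envelope = A * math.exp(-0.5 * ((t - t0) / sigma) ** 2)
    return envelope * (1.0 + visibility * math.cos(2.0 * math.pi * f * t))

rng = random.Random(11)
arrivals = thinning(burst_rate, A * (1.0 + visibility), 1.0, rng)
```

The expected number of events equals the integral of the intensity (here roughly A·σ·√(2π) ≈ 50), so low photon rates correspond to small A, the regime the paper is concerned with.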
Inflation in the standard cosmological model
NASA Astrophysics Data System (ADS)
Uzan, Jean-Philippe
2015-12-01
The inflationary paradigm is now part of the standard cosmological model as a description of its primordial phase. While its original motivation was to solve the standard problems of the hot big bang model, it was soon understood that it offers a natural theory for the origin of the large-scale structure of the universe. Most models rely on a slow-rolling scalar field and enjoy very generic predictions. Besides, all the matter of the universe is produced by the decay of the inflaton field at the end of inflation during a phase of reheating. These predictions can be (and are) tested from their imprint on the large-scale structure and in particular the cosmic microwave background. Inflation stands as a window in physics where both general relativity and quantum field theory are at work and which can be observationally studied. It connects cosmology with high-energy physics. Today most models are constructed within extensions of the standard model, such as supersymmetry or string theory. Inflation also disrupts our vision of the universe, in particular with the ideas of chaotic inflation and eternal inflation that tend to promote the image of a very inhomogeneous universe with fractal structure on a large scale. This idea is also at the heart of further speculations, such as the multiverse. This introduction summarizes the connections between inflation and the hot big bang model and details the basics of its dynamics and predictions.
Parameter-expanded data augmentation for Bayesian analysis of capture-recapture models
Royle, J. Andrew; Dorazio, Robert M.
2012-01-01
Data augmentation (DA) is a flexible tool for analyzing closed and open population models of capture-recapture data, especially models which include sources of heterogeneity among individuals. The essential concept underlying DA, as we use the term, is based on adding "observations" to create a dataset composed of a known number of individuals. This new (augmented) dataset, which includes the unknown number of individuals N in the population, is then analyzed using a new model that includes a reformulation of the parameter N in the conventional model of the observed (unaugmented) data. In the context of capture-recapture models, we add a set of "all zero" encounter histories which are not, in practice, observable. The model of the augmented dataset is a zero-inflated version of either a binomial or a multinomial base model. Thus, our use of DA provides a general approach for analyzing both closed and open population models of all types. In doing so, this approach provides a unified framework for the analysis of a huge range of models that are treated as unrelated "black boxes" and named procedures in the classical literature. As a practical matter, analysis of the augmented dataset by MCMC is greatly simplified compared to other methods that require specialized algorithms. For example, complex capture-recapture models of an augmented dataset can be fitted with popular MCMC software packages (WinBUGS or JAGS) by providing a concise statement of the model's assumptions that usually involves only a few lines of pseudocode. In this paper, we review the basic technical concepts of data augmentation, and we provide examples of analyses of closed-population models (M_0, M_h, distance sampling, and spatial capture-recapture models) and open-population models (Jolly-Seber) with individual effects.
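The "all-zero histories" idea above can be made concrete for the simplest closed-population model, M0: augment the observed detection counts with zero rows up to a fixed size M, then fit a zero-inflated binomial in which psi is the probability a row is a real individual (so N = psi·M). The sketch below uses simulated data and maximum likelihood rather than MCMC; all parameter values are illustrative:

```python
import math, random

def zib_loglik(psi, p, counts, K):
    """Zero-inflated binomial log-likelihood for model M0 on an augmented dataset.
    counts[i] = number of occasions (out of K) row i was detected; psi is the
    inclusion probability, p the per-occasion detection probability."""
    ll = 0.0
    for y in counts:
        binom = math.comb(K, y) * p ** y * (1.0 - p) ** (K - y)
        if y == 0:
            ll += math.log((1.0 - psi) + psi * binom)  # real-but-missed or pseudo-row
        else:
            ll += math.log(psi * binom)
    return ll

# Simulate N = 50 real individuals over K = 5 occasions with p = 0.3,
# then augment with all-zero rows up to M = 200.
rng = random.Random(5)
K, N, M, p_true = 5, 50, 200, 0.3
detected = [sum(rng.random() < p_true for _ in range(K)) for _ in range(N)]
augmented = detected + [0] * (M - N)
```

Maximizing `zib_loglik` over (psi, p) recovers N ≈ psi·M, mirroring what the MCMC analysis of the augmented dataset estimates.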
Searle, K R; Blackwell, A; Falconer, D; Sullivan, M; Butler, A; Purse, B V
2013-04-01
Interpreting spatial patterns in the abundance of species over time is a fundamental cornerstone of ecological research. For many species, this type of analysis is hampered by datasets that contain a large proportion of zeros, and data that are overdispersed and spatially autocorrelated. This is particularly true for insects, for which abundance data can fluctuate from zero to many thousands in the space of weeks. Increasingly, an understanding of the ways in which environmental variation drives spatial and temporal patterns in the distribution, abundance and phenology of insects is required for management of pests and vector-borne diseases. In this study, we combine the use of smoothing techniques and generalised linear mixed models to relate environmental drivers to key phenological patterns of two species of biting midges, Culicoides pulicaris and C. impunctatus, of which C. pulicaris has been implicated in transmission of bluetongue in Europe. In so doing, we demonstrate analytical tools for linking the phenology of species with key environmental drivers, despite using a relatively small dataset containing overdispersed and zero-inflated data. We demonstrate the importance of landcover and climatic variables in determining the seasonal abundance of these two vector species, and highlight the need for more empirical data on the effects of temperature and precipitation on the life history traits of palearctic Culicoides spp. in Europe.
Background stratified Poisson regression analysis of cohort data.
Richardson, David B; Langholz, Bryan
2012-03-01
Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
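The key computational idea above, treating the stratum intercepts as nuisance parameters, can be illustrated by profiling them out of the Poisson log-likelihood in closed form, leaving a function of the log rate ratio beta alone. This sketch uses simulated strata with a binary exposure; it mirrors the spirit of the conditional approach but is not the authors' software, and all simulation parameters are assumptions:

```python
import math, random

def rpois(lam, rng):
    """Knuth's multiplication method for sampling Poisson(lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def profile_loglik(beta, strata):
    """Poisson log-likelihood with each stratum's intercept replaced by its
    closed-form maximizer, log(sum y / sum exp(beta*x)); constants dropped."""
    ll = 0.0
    for obs in strata:                 # obs: list of (y, x) pairs
        ysum = sum(y for y, _ in obs)
        if ysum == 0:
            continue                   # stratum carries no information on beta
        denom = sum(math.exp(beta * x) for _, x in obs)
        a = math.log(ysum / denom)     # profiled stratum intercept
        for y, x in obs:
            mu = math.exp(a + beta * x)
            ll += y * math.log(mu) - mu
    return ll

# Simulate 20 strata with distinct baselines and a shared log rate ratio of 0.5.
rng = random.Random(1)
beta_true, strata = 0.5, []
for _ in range(20):
    alpha = rng.uniform(-1.0, 1.0)     # stratum-specific background rate
    obs = [(rpois(math.exp(alpha + beta_true * (x := rng.random() < 0.5)), rng),
            float(x)) for _ in range(50)]
    strata.append(obs)

beta_hat = max((b / 100 for b in range(-100, 201)),
               key=lambda b: profile_loglik(b, strata))
```

Maximizing this profile likelihood gives the same point estimate as unconditional Poisson regression with an indicator term per stratum, which is the equivalence the paper highlights.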
A design pathfinder with material correlation points for inflatable systems
NASA Astrophysics Data System (ADS)
Fulcher, Jared Terrell
The incorporation of inflatable structures into aerospace systems can produce significant advantages in stowed volume to mechanical effectiveness and overall weight. Many applications of these ultra-lightweight systems are designed to precisely control internal or external surfaces, or both, to achieve desired performance. The modeling of these structures becomes complex due to the material nonlinearities inherent to the majority of construction materials used in inflatable structures. Furthermore, accurately modeling the response and behavior of the interfacing boundaries that are common to many inflatable systems will lead to better understanding of the entire class of structures. The research presented involved using nonlinear finite element simulations correlated with photogrammetry testing to develop a procedure for defining material properties for commercially available polyurethane-coated woven nylon fabric, which is representative of coated materials that have been proven materials for use in many inflatable systems. Further, the new material model was used to design and develop an inflatable pathfinder system which employs only internal pressure to control an assembly of internal membranes. This canonical inflatable system will be used for exploration and development of general understanding of efficient design methodology and analysis of future systems. Canonical structures are incorporated into the design of the phased pathfinder system to allow for more universal insight. Nonlinear finite element simulations were performed to evaluate the effect of various boundary conditions, loading configurations, and material orientations on the geometric precision of geometries representing typical internal/external surfaces commonly incorporated into inflatable pathfinder system. The response of the inflatable system to possible damage was also studied using nonlinear finite element simulations. 
Development of a correlated material model for analysis of the inflatable pathfinder system has improved the efficiency of design and analysis techniques of future inflatable structures. KEYWORDS: Nonlinear Finite Element, Inflatable Structures, Gossamer Space Systems, Photogrammetry Measurements, Coated Woven Fabric.
Inflationary tensor fossils in large-scale structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimastrogiovanni, Emanuela; Fasiello, Matteo; Jeong, Donghui
Inflation models make specific predictions for a tensor-scalar-scalar three-point correlation, or bispectrum, between one gravitational-wave (tensor) mode and two density-perturbation (scalar) modes. This tensor-scalar-scalar correlation leads to a local power quadrupole, an apparent departure from statistical isotropy in our Universe, as well as characteristic four-point correlations in the current mass distribution in the Universe. So far, the predictions for these observables have been worked out only for single-clock models in which certain consistency conditions between the tensor-scalar-scalar correlation and tensor and scalar power spectra are satisfied. Here we review the requirements on inflation models for these consistency conditions to be satisfied. We then consider several examples of inflation models, such as non-attractor and solid-inflation models, in which these conditions are put to the test. In solid inflation the simplest consistency conditions are already violated, whilst in the non-attractor model we find that, contrary to the standard scenario, the tensor-scalar-scalar correlator probes directly relevant model-dependent information. We work out the predictions for observables in these models. For non-attractor inflation we find an apparent local quadrupolar departure from statistical isotropy in large-scale structure but that this power quadrupole decreases very rapidly at smaller scales. The consistency of the CMB quadrupole with statistical isotropy then constrains the distance scale that corresponds to the transition from the non-attractor to attractor phase of inflation to be larger than the currently observable horizon. Solid inflation predicts clustering fossil signatures in the current galaxy distribution that may be large enough to be detectable with forthcoming, and possibly even current, galaxy surveys.
On the Inverse Mapping of the Formal Symplectic Groupoid of a Deformation Quantization
NASA Astrophysics Data System (ADS)
Karabegov, Alexander V.
2004-10-01
To each natural star product on a Poisson manifold $M$ we associate an antisymplectic involutive automorphism of the formal neighborhood of the zero section of the cotangent bundle of $M$. If $M$ is symplectic, this mapping is shown to be the inverse mapping of the formal symplectic groupoid of the star product. The construction of the inverse mapping involves modular automorphisms of the star product.
Fujikawa, Hiroshi
2017-01-01
Microbial concentration in samples of a food product lot has been generally assumed to follow the log-normal distribution in food sampling, but this distribution cannot accommodate the concentration of zero. In the present study, first, a probabilistic study with the most probable number (MPN) technique was done for a target microbe present at a low (or zero) concentration in food products. Namely, based on the number of target pathogen-positive samples in the total samples of a product found by a qualitative, microbiological examination, the concentration of the pathogen in the product was estimated by means of the MPN technique. The effects of the sample size and the total sample number of a product were then examined. Second, operating characteristic (OC) curves for the concentration of a target microbe in a product lot were generated on the assumption that the concentration of a target microbe could be expressed with the Poisson distribution. OC curves for Salmonella and Cronobacter sakazakii in powdered formulae for infants and young children were successfully generated. The present study suggested that the MPN technique and the Poisson distribution would be useful for qualitative microbiological test data analysis for a target microbe whose concentration in a lot is expected to be low.
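The two calculations described above can be sketched numerically: an MPN estimate from presence/absence results at a single dilution level, and one point on an operating characteristic (OC) curve under the Poisson assumption. A minimal illustration (function names and the single-dilution simplification are assumptions, not the paper's procedure):

```python
import math
from math import comb

def mpn_single_dilution(n_total, n_positive, sample_g):
    """Most probable number of organisms per gram from qualitative tests
    at one dilution level: a sample of `sample_g` grams is negative with
    probability exp(-c * sample_g) if counts are Poisson with concentration c."""
    if n_positive >= n_total:
        raise ValueError("all samples positive: MPN estimate is unbounded")
    frac_negative = (n_total - n_positive) / n_total
    return -math.log(frac_negative) / sample_g

def oc_curve_point(concentration, n_samples, sample_g, max_positives=0):
    """Probability of accepting the lot (observing <= max_positives positive
    samples) when the true concentration is `concentration` organisms/gram:
    one point on the operating characteristic curve."""
    p_pos = 1.0 - math.exp(-concentration * sample_g)
    return sum(comb(n_samples, k) * p_pos**k * (1.0 - p_pos) ** (n_samples - k)
               for k in range(max_positives + 1))
```

Sweeping `concentration` traces out the OC curve, showing how sample size and number trade off against the detectable contamination level.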
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guendelman, E. I.; Kaganovich, A. B.
2007-04-15
The dilaton-gravity sector of the two-measures field theory (TMT) is explored in detail in the context of spatially flat Friedmann-Robertson-Walker (FRW) cosmology. The model possesses scale invariance which is spontaneously broken due to the intrinsic features of the TMT dynamics. The dilaton φ dependence of the effective Lagrangian appears only as a result of the spontaneous breakdown of the scale invariance. If no fine-tuning is made, the effective φ-Lagrangian p(φ,X) depends quadratically upon the kinetic term X. Hence TMT represents an explicit example of the effective k-essence resulting from first principles without any exotic term in the underlying action intended for obtaining this result. Depending on the choice of regions in the parameter space (but without fine-tuning), TMT exhibits different possible outputs for cosmological dynamics: (a) Absence of an initial singularity of the curvature while its time derivative is singular, a sort of sudden singularity studied by Barrow on purely kinematic grounds. (b) Power-law inflation in the subsequent stage of evolution. Depending on the region in the parameter space, the inflation ends with a graceful exit either into the state with zero cosmological constant (CC) or into the state driven by both a small CC and the field φ with a quintessence-like potential. (c) Possibility of resolution of the old CC problem. From the point of view of TMT, it becomes clear why the old CC problem cannot be solved (without fine-tuning) in conventional field theories. (d) TMT enables two ways of achieving a small CC without fine-tuning of dimensionful parameters: either by a seesaw-type mechanism or due to a correspondence principle between TMT and conventional field theories (i.e. theories with only the measure of integration √(-g) in the action). (e) There is a wide range of the parameters such that in the late-time universe the equation of state w = p/ρ < -1; w asymptotically (as t → ∞) approaches -1 from below; and ρ approaches a constant whose smallness does not require fine-tuning of dimensionful parameters.
Synchrotron x-ray imaging of pulmonary alveoli in respiration in live intact mice
NASA Astrophysics Data System (ADS)
Chang, Soeun; Kwon, Namseop; Kim, Jinkyung; Kohmura, Yoshiki; Ishikawa, Tetsuya; Rhee, Chin Kook; Je, Jung Ho; Tsuda, Akira
2015-03-01
Despite nearly a half century of studies, it has not been fully understood how pulmonary alveoli, the elementary gas exchange units in mammalian lungs, inflate and deflate during respiration. Understanding alveolar dynamics is crucial for treating patients with pulmonary diseases. In-vivo, real-time visualization of the alveoli during respiration has been hampered by active lung movement. Previous studies have been therefore limited to alveoli at lung apices or subpleural alveoli under open thorax conditions. Here we report direct and real-time visualization of alveoli of live intact mice during respiration using tracking X-ray microscopy. Our studies, for the first time, determine the alveolar size of normal mice in respiration without positive end expiratory pressure as 58 +/- 14 (mean +/- s.d.) μm on average, accurately measured in the lung bases as well as the apices. Individual alveoli of normal lungs clearly show heterogeneous inflation from zero to ~25% (6.7 +/- 4.7% (mean +/- s.d.)) in size. The degree of inflation is higher in the lung bases (8.7 +/- 4.3% (mean +/- s.d.)) than in the apices (5.7 +/- 3.2% (mean +/- s.d.)). The fraction of the total tidal volume allocated for alveolar inflation is 34 +/- 3.8% (mean +/- s.e.m). This study contributes to the better understanding of alveolar dynamics and helps to develop potential treatment options for pulmonary diseases.
Micromechanics and poroelasticity of hydrated cellulose networks.
Lopez-Sanchez, P; Rincon, Mauricio; Wang, D; Brulhart, S; Stokes, J R; Gidley, M J
2014-06-09
The micromechanics of cellulose hydrogels have been investigated using a new rheological experimental approach, combined with simulation using a poroelastic constitutive model. A series of mechanical compression steps at different strain rates were performed as a function of cellulose hydrogel thickness, combined with small amplitude oscillatory shear after each step to monitor the viscoelasticity of the sample. During compression, bacterial cellulose hydrogels behaved as anisotropic materials with near zero Poisson's ratio. The micromechanics of the hydrogels altered with each compression as water was squeezed out of the structure, and microstructural changes were strain rate-dependent, with increased densification of the cellulose network and increased cellulose fiber aggregation observed for slower compressive strain rates. A transversely isotropic poroelastic model was used to explain the observed micromechanical behavior, showing that the mechanical properties of cellulose networks in aqueous environments are mainly controlled by the rate of water movement within the structure.
The observational constraint on constant-roll inflation
NASA Astrophysics Data System (ADS)
Gao, Qing
2018-07-01
We discuss constant-roll inflation with constant ε₂ and with constant η̄. Using the method of Bessel-function approximation, the analytical expressions for the scalar and tensor power spectra, the scalar and tensor spectral tilts, and the tensor-to-scalar ratio are derived up to first order in ε₁. The model with constant ε₂ is ruled out by the observations at the 3σ confidence level, and the model with constant η̄ is consistent with the observations at the 1σ confidence level. The potential for the model with constant η̄ is also obtained from the Hamilton-Jacobi equation. Although the observations constrain the constant-roll inflation to be slow-roll inflation, the n_s-r results from constant-roll inflation are not the same as those from slow-roll inflation even when η̄ ~ 0.01.
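For context, the standard first-order slow-roll relations that the constant-roll expressions above generalize are the textbook results (not the paper's Bessel-function formulas), in the horizon-flow parameterization used here:

```latex
% Standard first-order slow-roll results, for comparison:
n_s - 1 = -2\epsilon_1 - \epsilon_2 , \qquad r = 16\epsilon_1 ,
\qquad \text{with} \quad
\epsilon_1 \equiv -\frac{\dot H}{H^2} , \qquad
\epsilon_2 \equiv \frac{\dot\epsilon_1}{H\,\epsilon_1} .
```

Constant-roll inflation instead holds ε₂ (or η̄) fixed rather than small, which is why its n_s-r predictions deviate from these relations.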
Bayesian analysis of volcanic eruptions
NASA Astrophysics Data System (ADS)
Ho, Chih-Hsiang
1990-10-01
The simple Poisson model generally gives a good fit to many volcanoes for volcanic eruption forecasting. Nonetheless, empirical evidence suggests that volcanic activity in successive equal time-periods tends to be more variable than a simple Poisson with constant eruptive rate. An alternative model is therefore examined in which the eruptive rate (λ) for a given volcano or cluster(s) of volcanoes is described by a gamma distribution (prior) rather than treated as a constant value as in the assumptions of a simple Poisson model. Bayesian analysis is performed to link the two distributions together to give the aggregate behavior of the volcanic activity. When the Poisson process is expanded to accommodate a gamma mixing distribution on λ, a consequence of this mixed (or compound) Poisson model is that the frequency distribution of eruptions in any given time-period of equal length follows the negative binomial distribution (NBD). Applications of the proposed model and comparisons between the generalized model and the simple Poisson model are discussed based on the historical eruptive count data of volcanoes Mauna Loa (Hawaii) and Etna (Italy). Several relevant facts lead to the conclusion that the generalized model is preferable for practical use both in space and time.
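The Poisson-gamma mixture described above can be checked numerically: mixing a Poisson rate over a Gamma(α, rate=β) prior yields a negative binomial count distribution. A minimal sketch (parameter and function names are generic, not from the paper):

```python
import math
import random

def nb_pmf(k, alpha, beta):
    """Negative binomial pmf that results from integrating a Poisson(lambda)
    likelihood against a Gamma(alpha, rate=beta) prior on lambda:
    P(N=k) = C(k+alpha-1, k) * (beta/(beta+1))**alpha * (1/(beta+1))**k."""
    p = beta / (beta + 1.0)
    log_coef = math.lgamma(k + alpha) - math.lgamma(alpha) - math.lgamma(k + 1)
    return math.exp(log_coef) * p**alpha * (1.0 - p)**k

def simulate_mixed_poisson(alpha, beta, n_draws, seed=0):
    """Monte Carlo draws from the mixed (compound) Poisson model:
    draw lambda ~ Gamma, then a count ~ Poisson(lambda)."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_draws):
        lam = rng.gammavariate(alpha, 1.0 / beta)  # gammavariate takes scale = 1/rate
        # inverse-transform sampling of a Poisson(lam) variate
        k, p = 0, math.exp(-lam)
        cdf, u = p, rng.random()
        while u > cdf and p > 0.0:
            k += 1
            p *= lam / k
            cdf += p
        counts.append(k)
    return counts
```

The empirical mean of the simulated counts approaches α/β, and the extra-Poisson variance (overdispersion) is exactly what the NBD captures.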
Symmetry breaking patterns for inflation
NASA Astrophysics Data System (ADS)
Klein, Remko; Roest, Diederik; Stefanyszyn, David
2018-06-01
We study inflationary models where the kinetic sector of the theory has a non-linearly realised symmetry which is broken by the inflationary potential. We distinguish between kinetic symmetries which non-linearly realise an internal or space-time group, and which yield a flat or curved scalar manifold. This classification leads to well-known inflationary models such as monomial inflation and α-attractors, as well as a new model based on fixed couplings between a dilaton and many axions which non-linearly realises higher-dimensional conformal symmetries. In this model, inflation can be realised along the dilatonic direction, leading to a tensor-to-scalar ratio r ~ 0.01 and a spectral index n_s ~ 0.975. We refer to the new model as ambient inflation since inflation proceeds along an isometry of an anti-de Sitter ambient space-time, which fully determines the kinetic sector.
Replenishment policy for an inventory model under inflation
NASA Astrophysics Data System (ADS)
Singh, Vikramjeet; Saxena, Seema; Singh, Pushpinder; Mishra, Nitin Kumar
2017-07-01
The purpose of replenishment is to keep inventory flowing through the system, and determining an optimal replenishment policy is a central challenge in developing an inventory model. Inflation is the rate at which the prices of goods and services rise over a period of time; the cost parameters are affected by it, and a high rate of inflation strains an organization's finances. Against this backdrop, the present paper proposes a retailer's replenishment policy for deteriorating items with different cycle lengths under inflation. Shortages are partially backlogged. Finally, numerical examples validate the results.
NASA Astrophysics Data System (ADS)
Kendrick, Jackie Evan; Smith, Rosanna; Sammonds, Peter; Meredith, Philip G.; Dainty, Matthew; Pallister, John S.
2013-07-01
Stratovolcanoes and lava domes are particularly susceptible to sector collapse resulting from wholesale rock failure as a consequence of decreasing rock strength. Here, we provide insights into the influence of thermal and cyclic stressing on the strength and mechanical properties of volcanic rocks. Specifically, this laboratory study examines the properties of samples from Mount St. Helens; chosen because its strength and stability have played a key role in its history, influencing the character of the infamous 1980 eruption. We find that thermal stressing exerts different effects on the strengths of different volcanic units; increasing the heterogeneity of rocks in situ. Increasing the uniaxial compressive stress generates cracking, the timing and magnitude of which was monitored via acoustic emission (AE) output during our experiments. AEs accelerated in the approach to failure, sometimes following the pattern predicted by the failure forecast method (Kilburn 2003). Crack damage during the experiments was tracked using the evolving static Young's modulus and Poisson's ratio, which represent the quasi-static deformation in volcanic edifices more accurately than dynamic elastic moduli which are usually implemented in volcanic models. Cyclic loading of these rocks resulted in a lower failure strength, confirming that volcanic rocks may be weakened by repeated inflation and deflation of the volcanic edifice. Additionally, volcanic rocks in this study undergo significant elastic hysteresis; in some instances, a material may fail at a stress lower than the peak stress which has previously been endured. Thus, a volcanic dome repeatedly inflated and deflated may progressively weaken, possibly inducing failure without necessarily exceeding earlier conditions.
Dynamic Characterization of an Inflatable Concentrator for Solar Thermal Propulsion
NASA Technical Reports Server (NTRS)
Leigh, Larry; Hamidzadeh, Hamid; Tinker, Michael L.; Rodriguez, Pedro I. (Technical Monitor)
2001-01-01
An inflatable structural system that is a technology demonstrator for solar thermal propulsion and other applications is characterized for structural dynamic behavior both experimentally and computationally. The inflatable structure is a pressurized assembly developed for use in orbit to support a Fresnel lens or inflatable lenticular element for focusing sunlight into a solar thermal rocket engine. When the engine temperature reaches a pre-set level, the propellant is injected into the engine, absorbs heat from an exchanger, and is expanded through the nozzle to produce thrust. The inflatable structure is a passively adaptive system in that a regulator and relief valve are utilized to maintain pressure within design limits during the full range of orbital conditions. Modeling and test activities are complicated by the fact that the polyimide film material used for construction of the inflatable is nonlinear, with modulus varying as a function of frequency, temperature, and level of excitation. Modal vibration testing and finite element modeling are described in detail in this paper. The test database is used for validation and modification of the model. This work is highly significant because of the current interest in inflatable structures for space application, and because of the difficulty in accurately modeling such systems.
Applying the compound Poisson process model to the reporting of injury-related mortality rates.
Kegler, Scott R
2007-02-16
Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
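The variance adjustment described above can be sketched directly: under a compound Poisson model the variance of the total case count is driven by the sum of squared cluster sizes, so multi-fatality incidents widen the interval. A minimal illustration (a hypothetical helper, not the paper's exact estimator):

```python
def compound_poisson_rate_ci(cases_per_incident, population, z=1.96):
    """Approximate CI for a population-based rate when incidents may
    involve multiple cases. With all cluster sizes equal to 1 this
    reduces to the familiar normal-approximation Poisson interval."""
    total_cases = sum(cases_per_incident)
    rate = total_cases / population
    # compound Poisson: Var(total) is estimated by the sum of squared
    # cluster sizes rather than by the raw case count
    var_total = sum(c * c for c in cases_per_incident)
    se = var_total ** 0.5 / population
    return rate - z * se, rate + z * se
```

Ignoring the clustering (treating every case as independent) understates `var_total` and yields intervals that are too narrow.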
Park, H M; Lee, J S; Kim, T W
2007-11-15
In the analysis of electroosmotic flows, the internal electric potential is usually modeled by the Poisson-Boltzmann equation. The Poisson-Boltzmann equation is derived from the assumption of thermodynamic equilibrium where the ionic distributions are not affected by fluid flows. Although this is a reasonable assumption for steady electroosmotic flows through straight microchannels, there are some important cases where convective transport of ions has nontrivial effects. In these cases, it is necessary to adopt the Nernst-Planck equation instead of the Poisson-Boltzmann equation to model the internal electric field. In the present work, the predictions of the Nernst-Planck equation are compared with those of the Poisson-Boltzmann equation for electroosmotic flows in various microchannels where the convective transport of ions is not negligible.
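The equilibrium (Poisson-Boltzmann) picture being compared here has a simple closed form in the linearized Debye-Hückel limit for a slit channel; the Nernst-Planck treatment relaxes exactly this equilibrium assumption. A minimal sketch (symmetric slit, low zeta potential; parameter names are illustrative):

```python
import math

def debye_huckel_slit(y, zeta, kappa, half_width):
    """Linearized Poisson-Boltzmann (Debye-Hueckel) potential across a
    slit microchannel of half-width h, with wall potential `zeta` and
    inverse Debye length `kappa`:
        phi(y) = zeta * cosh(kappa*y) / cosh(kappa*h)
    Valid only at thermodynamic equilibrium and small zeta, which is why
    convective ion transport requires the Nernst-Planck description."""
    return zeta * math.cosh(kappa * y) / math.cosh(kappa * half_width)
```

For thin double layers (large κh) the potential is flat and near zero across the channel core and rises to ζ only at the walls.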
Starobinsky-like inflation and neutrino masses in a no-scale SO(10) model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellis, John; Theoretical Physics Department, CERN,CH-1211 Geneva 23; Garcia, Marcos A.G.
2016-11-08
Using a no-scale supergravity framework, we construct an SO(10) model that makes predictions for cosmic microwave background observables similar to those of the Starobinsky model of inflation, and incorporates a double-seesaw model for neutrino masses consistent with oscillation experiments and late-time cosmology. We pay particular attention to the behaviour of the scalar fields during inflation and the subsequent reheating.
Higgs Inflation in f(Φ, R) Theory
NASA Astrophysics Data System (ADS)
Chakravarty, Girish Kumar; Mohanty, Subhendra; Singh, Naveen K.
2014-02-01
We generalize the scalar-curvature coupling model ξΦ²R of Higgs inflation to ξΦ^a R^b to study inflation. We compute the amplitude and spectral index of curvature perturbations generated during inflation and fix the parameters of the model by comparing these with the Planck + WP data. We find that if the scalar self-coupling λ is in the range 10⁻⁵-0.1, the parameter a is in the range 2.3-3.6, and b is in the range 0.77-0.22 at the Planck scale, one can have a viable inflation model even for ξ ≃ 1. The tensor-to-scalar ratio r in this model is small, and our model with scalar-curvature couplings is not ruled out by observational limits on r, unlike the pure λΦ⁴/4 theory. By requiring the curvature coupling parameter to be of order unity, we evade the problem of unitarity violation in scalar-graviton scatterings which plagues the ξΦ²R Higgs inflation models. We conclude that the Higgs field may still be a good candidate for being the inflaton in the early universe if one considers higher-dimensional curvature couplings.
Bursts of Self-Conscious Emotions in the Daily Lives of Emerging Adults.
Conroy, David E; Ram, Nilam; Pincus, Aaron L; Rebar, Amanda L
Self-conscious emotions play a role in regulating daily achievement strivings, social behavior, and health, but little is known about the processes underlying their daily manifestation. Emerging adults (n = 182) completed daily diaries for eight days and multilevel models were estimated to evaluate whether, how much, and why their emotions varied from day-to-day. Within-person variation in authentic pride was normally-distributed across people and days whereas the other emotions were burst-like and characterized by zero-inflated, negative binomial distributions. Perceiving social interactions as generally communal increased the odds of hubristic pride activation and reduced the odds of guilt activation; daily communal behavior reduced guilt intensity. Results illuminated processes through which meaning about the self-in-relation-to-others is constructed during a critical period of development.
How thermal inflation can save minimal hybrid inflation in supergravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimopoulos, Konstantinos; Owen, Charlotte
2016-10-12
Minimal hybrid inflation in supergravity has been ruled out by the 2015 Planck observations because the spectral index of the produced curvature perturbation falls outside observational bounds. To resurrect the model, a number of modifications have been put forward, but many of them spoil the accidental cancellation that resolves the η-problem and require complicated Kähler constructions to counterbalance the lost cancellation. In contrast, in this paper the model is rendered viable by supplementing the scenario with a brief period of thermal inflation, which follows the reheating of primordial inflation. The scalar field responsible for thermal inflation requires a large non-zero vacuum expectation value (VEV) and a flat potential. We investigate the VEV of such a flaton field and its subsequent effect on the inflationary observables. We find that, for large VEV, minimal hybrid inflation in supergravity produces a spectral index within the 1-σ Planck bound and a tensor-to-scalar ratio which may be observable in the near future. The mechanism is applicable to other inflationary models.
A simple, approximate model of parachute inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macha, J.M.
1992-11-01
A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing line tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.
Domain wall and isocurvature perturbation problems in axion models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawasaki, Masahiro; Yoshino, Kazuyoshi; Yanagida, Tsutomu T., E-mail: kawasaki@icrr.u-tokyo.ac.jp, E-mail: tsutomu.tyanagida@ipmu.jp, E-mail: yoshino@icrr.u-tokyo.ac.jp
2013-11-01
Axion models have two serious cosmological problems: the domain wall and isocurvature perturbation problems. In order to solve these problems we investigate Linde's model, in which the field value of the Peccei-Quinn (PQ) scalar is large during inflation. In this model the fluctuations of the PQ field grow after inflation through parametric resonance and stable axionic strings may be produced, which results in the domain wall problem. We study the formation of axionic strings using lattice simulations. It is found that in chaotic inflation the axion model is free from both the domain wall and the isocurvature perturbation problems if the initial misalignment angle θ_a is smaller than O(10⁻²). Furthermore, axions can also account for the dark matter for the breaking scale v ≅ 10^(12-16) GeV and the Hubble parameter during inflation H_inf ≲ 10^(11-12) GeV in general inflation models.
Intermittency and Alignment in Strong RMHD Turbulence
NASA Astrophysics Data System (ADS)
Chandran, B. D. G.; Schekochihin, A. A.; Mallet, A.
2015-12-01
Intermittency is one of the critical unsolved problems in solar-wind turbulence. Intermittency is important not just because it affects the observable properties of turbulence in the inertial range, but also because it modifies the nature of turbulent dissipation at small scales. In this talk, I will present recent work by colleagues A. Schekochihin, A. Mallet, and myself that focuses on the development of intermittency within the inertial range of solar-wind turbulence. We restrict our analysis to the transverse, non-compressive component of the turbulence. Previous work has shown that this component of the turbulence is anisotropic, varying most rapidly in directions perpendicular to the magnetic field. We argue that, deep within the inertial range, this component of the turbulence is well modeled by the equations of reduced magnetohydrodynamics (RMHD). We then develop an analytic model of intermittent, three-dimensional, strong, reduced magnetohydrodynamic turbulence with zero cross helicity. We take the fluctuation amplitudes to have a log-Poisson distribution and incorporate into the model a new phenomenology of scale-dependent dynamic alignment. The log-Poisson distribution in our model is characterized by two parameters. To calculate these parameters, we make use of two assumptions: that the energy cascade rate is independent of scale within the inertial range and that the most intense coherent structures at scale lambda are sheet-like with a volume filling factor proportional to lambda. We then compute the scalings of the power spectrum, the kurtosis, higher-order structure functions, and three different average alignment angles. We also carry out a direct numerical simulation of RMHD turbulence. The scalings in our model are similar to the scalings in this simulation as well as the structure-function scalings observed in the slow solar wind.
Modelling infant mortality rate in Central Java, Indonesia use generalized poisson regression method
NASA Astrophysics Data System (ADS)
Prahutama, Alan; Sudarno
2018-05-01
The infant mortality rate is the number of deaths under one year of age occurring among the live births in a given geographical area during a given year, per 1,000 live births occurring among the population of that area during the same year. This problem needs to be addressed because it is an important element of a country's economic development: a high infant mortality rate disrupts the stability of a country, as it relates to the sustainability of its population. One regression model that can be used to analyze the relationship between a discrete dependent variable Y and independent variables X is the Poisson regression model. Regression models recently used for discrete dependent variables include, among others, Poisson regression, negative binomial regression, and generalized Poisson regression. In this research, generalized Poisson regression modelling gives a better AIC value than Poisson regression. The most significant variable is the number of health facilities (X1), while the variable with the greatest influence on the infant mortality rate is average breastfeeding (X9).
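The generalized Poisson distribution underlying this model can be written down directly. A minimal sketch in Consul's parameterization (illustrative, not the paper's implementation; λ = 0 recovers the ordinary Poisson, 0 < λ < 1 models the overdispersion that motivates the method):

```python
import math

def gen_poisson_pmf(k, theta, lam):
    """Generalized Poisson pmf (Consul):
    P(Y=k) = theta * (theta + k*lam)**(k-1) * exp(-theta - k*lam) / k!
    Mean is theta/(1-lam); variance theta/(1-lam)**3, so lam > 0
    inflates the variance relative to a Poisson with the same mean."""
    return (theta * (theta + k * lam) ** (k - 1)
            * math.exp(-theta - k * lam) / math.factorial(k))
```

Model comparison then proceeds as in the abstract: fit both pmfs by maximum likelihood and compare AIC values.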
Modified Regression Correlation Coefficient for Poisson Regression Model
NASA Astrophysics Data System (ADS)
Kaengthong, Nattacha; Domthong, Uthumporn
2017-09-01
This study considers measures of the predictive power of the Generalized Linear Model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model, where the dependent variable is Poisson-distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables and of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient outperforms the traditional one in terms of bias and root mean square error (RMSE).
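The traditional regression correlation coefficient described above, corr(Y, E(Y|X)), can be computed from any fitted Poisson regression. A minimal self-contained sketch (a two-parameter IRLS fitter for illustration only; the paper's modified coefficient is not reproduced here):

```python
import math

def poisson_irls(x, y, iters=25):
    """Fit the log-linear Poisson model E[Y] = exp(b0 + b1*x) by
    iteratively reweighted least squares (Newton's method). A minimal
    single-covariate sketch, not a production fitter."""
    b0, b1 = math.log(sum(y) / len(y)), 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # working response for the log link, weights w_i = mu_i
        z = [(b0 + b1 * xi) + (yi - mi) / mi for xi, yi, mi in zip(x, y, mu)]
        sw = sum(mu)
        swx = sum(m * xi for m, xi in zip(mu, x))
        swxx = sum(m * xi * xi for m, xi in zip(mu, x))
        swz = sum(m * zi for m, zi in zip(mu, z))
        swxz = sum(m * xi * zi for m, xi, zi in zip(mu, x, z))
        det = sw * swxx - swx * swx
        b0 = (swz * swxx - swx * swxz) / det
        b1 = (sw * swxz - swx * swz) / det
    return b0, b1

def regression_correlation(y, mu):
    """Pearson correlation between the observed Y and the fitted E(Y|X):
    the 'regression correlation coefficient' used as a predictive-power
    measure for GLMs."""
    n = len(y)
    my, mm = sum(y) / n, sum(mu) / n
    cov = sum((a - my) * (b - mm) for a, b in zip(y, mu))
    vy = sum((a - my) ** 2 for a in y)
    vm = sum((b - mm) ** 2 for b in mu)
    return cov / math.sqrt(vy * vm)
```

On data the model fits exactly, the coefficient equals 1; multicollinearity among covariates degrades it, which is the setting the proposed modification targets.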
Site occupancy models with heterogeneous detection probabilities
Royle, J. Andrew
2006-01-01
Models for estimating the probability of occurrence of a species in the presence of imperfect detection are important in many ecological disciplines. In these 'site occupancy' models, the possibility of heterogeneity in detection probabilities among sites must be considered because variation in abundance (and other factors) among sampled sites induces variation in detection probability (p). In this article, I develop occurrence probability models that allow for heterogeneous detection probabilities by considering several common classes of mixture distributions for p. For any mixing distribution, the likelihood has the general form of a zero-inflated binomial mixture for which inference based upon integrated likelihood is straightforward. A recent paper by Link (2003, Biometrics 59, 1123-1130) demonstrates that in closed population models used for estimating population size, different classes of mixture distributions are indistinguishable from data, yet can produce very different inferences about population size. I demonstrate that this problem can also arise in models for estimating site occupancy in the presence of heterogeneous detection probabilities. The implications of this are discussed in the context of an application to avian survey data and the development of animal monitoring programs.
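The zero-inflated binomial mixture at the heart of these occupancy models has a simple likelihood: a site is occupied with probability ψ, and an occupied site yields detections that are Binomial(J, p) over J visits, so unoccupied sites contribute extra zeros. A minimal sketch of that likelihood (constant p across sites, i.e. without the heterogeneity the paper studies; function name is illustrative):

```python
import math

def zib_loglik(psi, p, detections, n_visits):
    """Log-likelihood of the basic site-occupancy model:
    P(y) = psi * Binom(y; n_visits, p) + (1 - psi) * 1[y == 0].
    `detections` lists the number of detections at each surveyed site."""
    ll = 0.0
    for y in detections:
        binom = math.comb(n_visits, y) * p**y * (1.0 - p) ** (n_visits - y)
        lik = psi * binom + (1.0 - psi) * (1.0 if y == 0 else 0.0)
        ll += math.log(lik)
    return ll
```

Heterogeneous-p versions replace the binomial term with its expectation over a mixing distribution for p, which is exactly where Link's identifiability problem enters.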
Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P
2014-06-26
To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study, a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination was low. The robust Poisson models are more robust (or less sensitive) to outliers than the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of these limitations when choosing appropriate models to estimate relative risks or risk ratios.
NASA Technical Reports Server (NTRS)
Berkin, Andrew L.; Maeda, Kei-Ichi; Yokoyama, Junichi
1990-01-01
The cosmology resulting from two coupled scalar fields was studied, one which is either a new inflation or chaotic type inflation, and the other which has an exponentially decaying potential. Such a potential may appear in the conformally transformed frame of generalized Einstein theories like the Jordan-Brans-Dicke theory. The constraints necessary for successful inflation are examined. Conventional GUT models such as SU(5) were found to be compatible with new inflation, while restrictions on the self-coupling constant are significantly loosened for chaotic inflation.
NASA Technical Reports Server (NTRS)
Berkin, Andrew L.; Maeda, Kei-Ichi; Yokoyama, Jun'ichi
1990-01-01
The cosmology resulting from two coupled scalar fields was studied, one which is either a new inflation or chaotic type inflation, and the other which has an exponentially decaying potential. Such a potential may appear in the conformally transformed frame of generalized Einstein theories like the Jordan-Brans-Dicke theory. The constraints necessary for successful inflation are examined. Conventional GUT models such as SU(5) were found to be compatible with new inflation, while restrictions on the self-coupling constant are significantly loosened for chaotic inflation.
A viable logarithmic f(R) model for inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amin, M.; Khalil, S.; Salah, M.
2016-08-18
Inflation in the framework of f(R) modified gravity is revisited. We study the conditions that f(R) should satisfy in order to lead to a viable inflationary model, both in the original form and in the Einstein frame. Based on these criteria we propose a new logarithmic model as a potential candidate for f(R) theories aiming to describe inflation consistent with observations from the Planck satellite (2015). The model predicts a scalar spectral index of 0.9615
Penno, Katie; Jandarov, Roman A; Sopirala, Madhuri M
2017-11-01
We studied the effectiveness of an ultraviolet C (UV-C) emitter in clinical settings and compared it with observed terminal disinfection. We cultured 22 hospital discharge rooms at a tertiary care academic medical center. Phase 1 (unobserved terminal disinfection) included cultures of 11 high-touch environmental surfaces (HTSs) after terminal room disinfection (AD) and after the use of a UV-C-emitting device (AUV). Phase 2 (observed terminal disinfection) included cultures before terminal room disinfection (BD), AD, and AUV. Zero-inflated Poisson regression compared mean colony forming units (CFU) between the groups. Two-sample proportion tests identified significance of the observed differences in proportions of thoroughly cleaned HTSs (CFU < 5). Significance was determined using the Bonferroni corrected threshold of α = .05/12 = .004. We obtained 594 samples. The risk of overall contamination in the AUV group was 0.48 times that in the AD group (P < .001), a 1.04 log10 reduction. During phase 1, the overall proportion of HTSs with <5 CFU increased in AUV versus AD by 0.12 (P = .001). During phase 2, it increased in AD versus BD by 0.45 (P < .001), with no significant difference between AD and AUV (P = .02). Use of UV-C with standard cleaning significantly reduced microbial burden and improved the thoroughness of terminal disinfection. We found no further benefit to UV-C use if standard terminal disinfection was observed. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
Israël, Natascha M.D.; VanLandeghem, Matthew M.; Denny, Shawn; Ingle, John; Patino, Reynaldo
2014-01-01
Prymnesium parvum (golden alga, GA) is a toxigenic harmful alga native to marine ecosystems that has also affected brackish inland waters. The first toxic bloom of GA in the western hemisphere occurred in the Pecos River, one of the saltiest rivers in North America. Environmental factors (water quality) associated with GA occurrence in this basin, however, have not been examined. Water quality and GA presence and abundance were determined at eight sites in the Pecos River basin with or without prior history of toxic blooms. Sampling was conducted monthly from January 2012 to July 2013. Specific conductance (salinity) varied spatiotemporally between 4408 and 73,786 µS/cm. Results of graphical, principal component (PCA), and zero-inflated Poisson (ZIP) regression analyses indicated that the incidence and abundance of GA are reduced as salinity increases spatiotemporally. LOWESS regression and correlation analyses of archived data for specific conductance and GA abundance at one of the study sites retrospectively confirmed the negative association between these variables. Results of PCA also suggested that at <15,000 µS/cm, GA was present at a relatively wide range of nutrient (nitrogen and phosphorus) concentrations whereas at higher salinity, GA was observed only at mid-to-high nutrient levels. Generally consistent with earlier studies, results of ZIP regression indicated that GA presence is positively associated with organic phosphorus and in samples where GA is present, GA abundance is positively associated with organic nitrogen and negatively associated with inorganic nitrogen. This is the first report of an inverse relation between salinity and GA presence and abundance in riverine waters and of interaction effects of salinity and nutrients in the field. These observations contribute to a more complete understanding of environmental conditions that influence GA distribution in inland waters.
Asay, Garrett R Beeler; Roy, Kakoli; Lang, Jason E; Payne, Rebecca L; Howard, David H
2016-10-06
Employers may incur costs related to absenteeism among employees who have chronic diseases or unhealthy behaviors. We examined the association between employee absenteeism and 5 conditions: 3 risk factors (smoking, physical inactivity, and obesity) and 2 chronic diseases (hypertension and diabetes). We identified 5 chronic diseases or risk factors from 2 data sources: MarketScan Health Risk Assessment and the Medical Expenditure Panel Survey (MEPS). Absenteeism was measured as the number of workdays missed because of sickness or injury. We used zero-inflated Poisson regression to estimate excess absenteeism as the difference in the number of days missed from work by those who reported having a risk factor or chronic disease and those who did not. Covariates included demographics (eg, age, education, sex) and employment variables (eg, industry, union membership). We quantified absenteeism costs in 2011 and adjusted them to reflect growth in employment costs to 2015 dollars. Finally, we estimated absenteeism costs for a hypothetical small employer (100 employees) and a hypothetical large employer (1,000 employees). Absenteeism estimates ranged from 1 to 2 days per individual per year depending on the risk factor or chronic disease. Except for the physical inactivity and obesity estimates, disease- and risk-factor-specific estimates were similar in MEPS and MarketScan. Absenteeism increased with the number of risk factors or diseases reported. Nationally, each risk factor or disease was associated with annual absenteeism costs greater than $2 billion. Absenteeism costs ranged from $16 to $81 (small employer) and $17 to $286 (large employer) per employee per year. Absenteeism costs associated with chronic diseases and health risk factors can be substantial. Employers may incur these costs through lower productivity, and employees could incur costs through lower wages.
McLaren, Lindsay; McNeil, Deborah A; Potestio, Melissa; Patterson, Steve; Thawer, Salima; Faris, Peter; Shi, Congshi; Shwart, Luke
2016-02-11
One of the main arguments made in favor of community water fluoridation is that it is equitable in its impact on dental caries (i.e., helps to offset inequities in dental caries). Although an equitable effect of fluoridation has been demonstrated in cross-sectional studies, it has not been studied in the context of cessation of community water fluoridation (CWF). The objective of this study was to compare the socio-economic patterns of children's dental caries (tooth decay) in Calgary, Canada, in 2009/10 when CWF was in place, and in 2013/14, after it had been discontinued. We analyzed data from population-based samples of schoolchildren (grade 2) in 2009/10 and 2013/14. Data on dental caries (decayed, missing, and filled primary and permanent teeth) were gathered via open mouth exams conducted in schools by registered dental hygienists. We examined the association between dental caries and 1) presence/absence of dental insurance and 2) small area index of material deprivation, using Poisson (zero-inflated) and logistic regression, for both time points separately. For small-area material deprivation at each time point, we also computed the concentration index of inequality for each outcome variable. Statistically significant inequities by dental insurance status and by small area material deprivation were more apparent in 2013/14 than in 2009/10. Results are consistent with increasing inequities in dental caries following cessation of CWF. However, further research is needed to 1) confirm the effects in a study that includes a comparison community, and 2) explore possible alternative reasons for the findings, including changes in treatment and preventive programming.
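The concentration index mentioned above is commonly computed as twice the covariance between the health outcome and the fractional socio-economic rank, divided by the mean outcome; negative values indicate the outcome is concentrated among the more deprived. A small sketch with made-up caries counts:

```python
def concentration_index(health, deprivation):
    """2*cov(h, fractional rank)/mean(h), ranking from most deprived
    (lowest score) to least deprived (highest score)."""
    n = len(health)
    order = sorted(range(n), key=lambda i: deprivation[i])
    frac_rank = [0.0] * n
    for pos, i in enumerate(order):
        frac_rank[i] = (pos + 0.5) / n          # midpoint fractional rank
    mu = sum(health) / n
    mr = sum(frac_rank) / n
    cov = sum((health[i] - mu) * (frac_rank[i] - mr) for i in range(n)) / n
    return 2 * cov / mu

# hypothetical caries counts; 1 = most materially deprived
caries = [6, 5, 4, 3, 2, 1]
deprivation = [1, 2, 3, 4, 5, 6]
ci = concentration_index(caries, deprivation)
print(round(ci, 3))                             # ≈ -0.278: caries concentrated among the deprived
```

A widening (more negative) index between two time points is the kind of signal the study reads as increasing inequity.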
Sulz, Michael C; Siebert, Uwe; Arvandi, Marjan; Gothe, Raffaella M; Wurm, Johannes; von Känel, Roland; Vavricka, Stephan R; Meyenberger, Christa; Sagmeister, Markus
2013-07-01
Patients with inflammatory bowel disease (IBD) have a high resource consumption, with considerable costs for the healthcare system. In a system with sparse resources, treatment is influenced not only by clinical judgement but also by resource consumption. We aimed to determine the resource consumption of IBD patients and to identify its significant predictors. Data from the prospective Swiss Inflammatory Bowel Disease Cohort Study were analysed for the resource consumption endpoints hospitalization and outpatient consultations at enrolment [1187 patients; 41.1% ulcerative colitis (UC), 58.9% Crohn's disease (CD)] and at 1-year follow-up (794 patients). Predictors of interest were chosen through an expert panel and a review of the relevant literature. Logistic regressions were used for binary endpoints, and negative binomial regressions and zero-inflated Poisson regressions were used for count data. For CD, fistula, use of biologics and disease activity were significant predictors for hospitalization days (all P-values <0.001); age, sex, steroid therapy and biologics were significant predictors for the number of outpatient visits (P=0.0368, 0.023, 0.0002, 0.0003, respectively). For UC, biologics, C-reactive protein, smoke quitters, age and sex were significantly predictive for hospitalization days (P=0.0167, 0.0003, 0.0003, 0.0076 and 0.0175 respectively); disease activity and immunosuppressive therapy predicted the number of outpatient visits (P=0.0009 and 0.0017, respectively). The results of multivariate regressions are shown in detail. Several highly significant clinical predictors for resource consumption in IBD were identified that might be considered in medical decision-making. In terms of resource consumption and its predictors, CD and UC show a different behaviour.
Siebert, Uwe; Wurm, Johannes; Gothe, Raffaella Matteucci; Arvandi, Marjan; Vavricka, Stephan R; von Känel, Roland; Begré, Stefan; Sulz, Michael C; Meyenberger, Christa; Sagmeister, Markus
2013-01-01
Inflammatory bowel disease can decrease the quality of life and induce work disability. We sought to (1) identify and quantify the predictors of disease-specific work disability in patients with inflammatory bowel disease and (2) assess the suitability of using cross-sectional data to predict future outcomes, using the Swiss Inflammatory Bowel Disease Cohort Study data. A total of 1187 patients were enrolled and followed up for an average of 13 months. Predictors included patient and disease characteristics and drug utilization. Potential predictors were identified through an expert panel and published literature. We estimated adjusted effect estimates with 95% confidence intervals using logistic and zero-inflated Poisson regression. Overall, 699 (58.9%) experienced Crohn's disease and 488 (41.1%) had ulcerative colitis. Most important predictors for temporary work disability in patients with Crohn's disease included gender, disease duration, disease activity, C-reactive protein level, smoking, depressive symptoms, fistulas, extraintestinal manifestations, and the use of immunosuppressants/steroids. Temporary work disability in patients with ulcerative colitis was associated with age, disease duration, disease activity, and the use of steroids/antibiotics. In all patients, disease activity emerged as the only predictor of permanent work disability. Comparing data at enrollment versus follow-up yielded substantial differences regarding disability and predictors, with follow-up data showing greater predictor effects. We identified predictors of work disability in patients with Crohn's disease and ulcerative colitis. Our findings can help in forecasting these disease courses and guide the choice of appropriate measures to prevent adverse outcomes. Comparing cross-sectional and longitudinal data showed that the conduct of cohort studies is indispensable for the examination of disability.
Ariansen, Anja M S
2014-01-01
Objective Western women increasingly delay having children to advance their career, and pregnancy is considered to be riskier among older women. In Norway, this development surprisingly coincides with increased sickness absence among young pregnant women, rather than their older counterparts. This paper tests the hypothesis that young pregnant women have a higher number of sick days because this age group includes a higher proportion of working class women, who are more prone to sickness absence. Design A zero-inflated Poisson regression was conducted on the Norwegian population registry. Participants All pregnant employees giving birth in 2004–2008 were included in the study. A total number of 216 541 pregnancies were observed among 180 483 women. Outcome measure Number of sick days. Results Although the association between age and number of sick days was U-shaped, pregnant women in their early 20s had a higher number of sick days than those in their mid-40s. This was particularly the case for pregnant women with previous births. In this group, 20-year-olds had 12.6 more sick days than 45-year-olds; this age difference was reduced to 6.3 after control for class. Among women undergoing their first pregnancy, 20-year-olds initially had 1.2 more sick days than 45-year-olds, but control for class reversed this age difference. After control for class, 45-year-old first-time pregnant women had 2.9 more sick days than 20-year-olds with corresponding characteristics. Conclusions The negative association between age and sickness absence was partly due to younger age groups including more working class women, who were more prone to sickness absence. Young pregnant women's needs for job adjustments should not be underestimated. PMID:24793246
Roy, Kakoli; Lang, Jason E.; Payne, Rebecca L.; Howard, David H.
2016-01-01
Introduction Employers may incur costs related to absenteeism among employees who have chronic diseases or unhealthy behaviors. We examined the association between employee absenteeism and 5 conditions: 3 risk factors (smoking, physical inactivity, and obesity) and 2 chronic diseases (hypertension and diabetes). Methods We identified 5 chronic diseases or risk factors from 2 data sources: MarketScan Health Risk Assessment and the Medical Expenditure Panel Survey (MEPS). Absenteeism was measured as the number of workdays missed because of sickness or injury. We used zero-inflated Poisson regression to estimate excess absenteeism as the difference in the number of days missed from work by those who reported having a risk factor or chronic disease and those who did not. Covariates included demographics (eg, age, education, sex) and employment variables (eg, industry, union membership). We quantified absenteeism costs in 2011 and adjusted them to reflect growth in employment costs to 2015 dollars. Finally, we estimated absenteeism costs for a hypothetical small employer (100 employees) and a hypothetical large employer (1,000 employees). Results Absenteeism estimates ranged from 1 to 2 days per individual per year depending on the risk factor or chronic disease. Except for the physical inactivity and obesity estimates, disease- and risk-factor–specific estimates were similar in MEPS and MarketScan. Absenteeism increased with the number of risk factors or diseases reported. Nationally, each risk factor or disease was associated with annual absenteeism costs greater than $2 billion. Absenteeism costs ranged from $16 to $81 (small employer) and $17 to $286 (large employer) per employee per year. Conclusion Absenteeism costs associated with chronic diseases and health risk factors can be substantial. Employers may incur these costs through lower productivity, and employees could incur costs through lower wages. PMID:27710764
Miller-Archie, Sara A; Jordan, Hannah T; Alper, Howard; Wisnivesky, Juan P; Cone, James E; Friedman, Stephen M; Brackbill, Robert M
2018-04-01
We described the patterns of asthma hospitalization among persons exposed to the 2001 World Trade Center (WTC) attacks, and assessed whether 9/11-related exposures or comorbidities, including posttraumatic stress disorder (PTSD) and gastroesophageal reflux symptoms (GERS), were associated with an increased rate of hospitalization. Data for adult enrollees in the WTC Health Registry, a prospective cohort study, with self-reported physician-diagnosed asthma who resided in New York State on 9/11 were linked to administrative hospitalization data to identify asthma hospitalizations during September 11, 2001-December 31, 2010. Multivariable zero-inflated Poisson regression was used to examine associations among 9/11 exposures, comorbid conditions, and asthma hospitalizations. Of 11,471 enrollees with asthma, 406 (3.5%) had ≥1 asthma hospitalization during the study period (721 total hospitalizations). Among enrollees diagnosed before 9/11 (n = 6319), those with PTSD or GERS had over twice the rate of hospitalization (adjusted rate ratio (ARR) = 2.5, 95% CI = 1.4-4.1; ARR = 2.1, 95% CI = 1.3-3.2, respectively) compared to those without. This association was not statistically significant in enrollees diagnosed after 9/11. Compared to higher educational attainment, completing less than college was associated with an increased hospitalization rate among participants with both pre-9/11- and post-9/11-onset asthma (ARR = 1.9, 95% CI = 1.2-2.9; ARR = 2.6, 95% CI = 1.6-4.1, respectively). Sinus symptoms, exposure to the dust cloud, and having been a WTC responder were not associated with asthma hospitalization. Among enrollees with pre-9/11 asthma, comorbid PTSD and GERS were associated with an increase in asthma hospitalizations. Management of these comorbidities may be an important factor in preventing hospitalization.
Fibre inflation and α-attractors
NASA Astrophysics Data System (ADS)
Kallosh, Renata; Linde, Andrei; Roest, Diederik; Westphal, Alexander; Yamada, Yusuke
2018-02-01
Fibre inflation is a specific string theory construction based on the Large Volume Scenario that produces an inflationary plateau. We outline its relation to α-attractor models for inflation, with the cosmological sector originating from certain string theory corrections leading to α = 2 and α = 1/2. Above a certain field range, the steepening effect of higher-order corrections leads first to the breakdown of single-field slow-roll and after that to the onset of 2-field dynamics: the overall volume of the extra dimensions starts to participate in the effective dynamics. Finally, we propose effective supergravity models of fibre inflation based on an \\overline{D3} uplift term with a nilpotent superfield. Specific moduli dependent \\overline{D3} induced geometries lead to cosmological fibre models but have in addition a de Sitter minimum exit. These supergravity models motivated by fibre inflation are relatively simple, stabilize the axions and disentangle the Hubble parameter from supersymmetry breaking.
Fast and Accurate Poisson Denoising With Trainable Nonlinear Diffusion.
Feng, Wensen; Qiao, Peng; Chen, Yunjin
2018-06-01
The degradation of the acquired signal by Poisson noise is a common problem for various imaging applications, such as medical imaging, night vision, and microscopy. Up to now, many state-of-the-art Poisson denoising techniques have mainly concentrated on achieving utmost performance, with little consideration for computational efficiency. Therefore, in this paper we aim to propose an efficient Poisson denoising model with both high computational efficiency and recovery quality. To this end, we exploit the newly developed trainable nonlinear reaction diffusion (TNRD) model, which has proven to be an extremely fast image restoration approach with performance surpassing recent state-of-the-art methods. However, the straightforward direct gradient descent employed in the original TNRD-based denoising task is not applicable in this paper. To solve this problem, we resort to the proximal gradient descent method. We retrain the model parameters, including the linear filters and influence functions, by taking into account the Poisson noise statistics, and end up with a well-trained nonlinear diffusion model specialized for Poisson denoising. The trained model provides strongly competitive results against state-of-the-art approaches, while retaining a simple structure and high efficiency. Furthermore, our proposed model comes with an additional advantage: the diffusion process is well suited for parallel computation on graphics processing units (GPUs). For images of size , our GPU implementation takes less than 0.1 s to produce state-of-the-art Poisson denoising performance.
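The trained diffusion model above is too involved for a short sketch, but the Poisson-noise setting it addresses can be illustrated with the classical Anscombe variance-stabilizing transform, a standard baseline against which Poisson denoisers are compared (all settings below are illustrative, not from the paper):

```python
import math
import random

def anscombe(x):
    """Anscombe transform: Poisson(λ) data get approximately unit variance for λ ≳ 4,
    so a Gaussian denoiser can be applied in the transformed domain."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def poisson_sample(lam, rng):
    # Knuth's algorithm; adequate for moderate lambda
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(0)
lam = 20.0
z = [anscombe(poisson_sample(lam, rng)) for _ in range(20000)]
mz = sum(z) / len(z)
var = sum((v - mz) ** 2 for v in z) / (len(z) - 1)
print(round(var, 2))   # close to 1: the noise is variance-stabilized
```

Transform-domain approaches like this break down at very low photon counts, which is part of the motivation for denoisers trained directly on Poisson statistics.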
NASA Astrophysics Data System (ADS)
Jurčo, Branislav; Schupp, Peter; Vysoký, Jan
2014-06-01
We generalize noncommutative gauge theory using Nambu-Poisson structures to obtain a new type of gauge theory with higher brackets and gauge fields. The approach is based on covariant coordinates and higher versions of the Seiberg-Witten map. We construct a covariant Nambu-Poisson gauge theory action, give its first order expansion in the Nambu-Poisson tensor and relate it to a Nambu-Poisson matrix model.
Intertime jump statistics of state-dependent Poisson processes.
Daly, Edoardo; Porporato, Amilcare
2007-01-01
A method to obtain the probability distribution of the interarrival times of jump occurrences in systems driven by state-dependent Poisson noise is proposed. Such a method uses the survivor function obtained by a modified version of the master equation associated with the stochastic process under analysis. A model for the timing of human activities shows the capability of state-dependent Poisson noise to generate power-law distributions. The application of the method to a model for neuron dynamics and to a hydrological model accounting for land-atmosphere interaction elucidates the origin of characteristic recurrence intervals and possible persistence in state-dependent Poisson models.
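A state-dependent Poisson process of the kind analyzed above can be simulated by Ogata/Lewis thinning whenever the state-dependent intensity is bounded; interarrival times then come directly from the simulated jump times. The intensity form, decay rate, and jump sizes below are arbitrary illustrative choices, not the paper's models:

```python
import math
import random

def simulate_state_dependent_poisson(T, lam_max, rng):
    """Thinning: propose candidates at the dominating rate lam_max, accept each
    with probability lam(x)/lam_max. Illustrative dynamics: lam(x) = lam_max/(1+x),
    the state x decays exponentially between evaluations and jumps up at events."""
    t, t_prev, x = 0.0, 0.0, 0.0
    times = []
    while True:
        t += rng.expovariate(lam_max)            # candidate from the dominating rate
        if t > T:
            return times
        x *= math.exp(-0.1 * (t - t_prev))       # state decays between evaluations
        t_prev = t
        lam = lam_max / (1.0 + x)                # state-dependent intensity (<= lam_max)
        if rng.random() < lam / lam_max:         # thinning acceptance step
            x += rng.uniform(0.5, 1.5)           # accepted jump increments the state
            times.append(t)

rng = random.Random(7)
jumps = simulate_state_dependent_poisson(T=50.0, lam_max=2.0, rng=rng)
waits = [b - a for a, b in zip([0.0] + jumps, jumps)]   # interarrival times
print(len(jumps), round(max(waits), 2))
```

Because each jump lowers the future intensity here, long waits cluster after bursts, the kind of state feedback that produces non-exponential interarrival distributions.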
Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward
2013-09-01
Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose-response model parameters are estimated using limited epidemiological data is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
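The gap between the conventional approximation and actual beta-Poisson probabilities discussed above can be checked directly: the single-hit infection probability 1 − exp(−r·dose) is averaged over r ~ Beta(α, β), which a Monte Carlo estimate approximates without special functions. The parameter values below are illustrative, chosen so the approximation's validity conditions (β ≫ 1, β ≫ α) hold:

```python
import math
import random

def beta_poisson_approx(dose, alpha, beta):
    """Conventional approximation, valid roughly when beta >> 1 and beta >> alpha."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

def beta_poisson_mc(dose, alpha, beta, n, rng):
    """Beta-Poisson probability by Monte Carlo: average the single-hit
    probability 1 - exp(-r*dose) over r ~ Beta(alpha, beta)."""
    acc = 0.0
    for _ in range(n):
        r = rng.betavariate(alpha, beta)
        acc += 1.0 - math.exp(-r * dose)
    return acc / n

rng = random.Random(1)
alpha, beta, dose = 0.25, 1000.0, 100.0
p_apx = beta_poisson_approx(dose, alpha, beta)
p_mc = beta_poisson_mc(dose, alpha, beta, 200000, rng)
print(round(p_apx, 4), round(p_mc, 4))
```

When β is small or comparable to α, the two values diverge, which is exactly the regime the paper flags as invalidating the conventional approximation.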
QCAD simulation and optimization of semiconductor double quantum dots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Erik; Gao, Xujiao; Kalashnikova, Irina
2013-12-01
We present the Quantum Computer Aided Design (QCAD) simulator that targets modeling quantum devices, particularly silicon double quantum dots (DQDs) developed for quantum qubits. The simulator has three differentiating features: (i) its core contains nonlinear Poisson, effective mass Schrodinger, and Configuration Interaction solvers that have massively parallel capability for high simulation throughput, and can be run individually or combined self-consistently for 1D/2D/3D quantum devices; (ii) the core solvers show superior convergence even at near-zero-Kelvin temperatures, which is critical for modeling quantum computing devices; (iii) it couples with an optimization engine Dakota that enables optimization of gate voltages in DQDs for multiple desired targets. The Poisson solver includes Maxwell-Boltzmann and Fermi-Dirac statistics, supports Dirichlet, Neumann, interface charge, and Robin boundary conditions, and includes the effect of dopant incomplete ionization. The solver has shown robust nonlinear convergence even in the milli-Kelvin temperature range, and has been extensively used to quickly obtain the semiclassical electrostatic potential in DQD devices. The self-consistent Schrodinger-Poisson solver has achieved robust and monotonic convergence behavior for 1D/2D/3D quantum devices at very low temperatures by using a predictor-corrector iteration scheme. The QCAD simulator enables the calculation of dot-to-gate capacitances, and comparison with experiment and between solvers. It is observed that computed capacitances are in the right ballpark when compared to experiment, and quantum confinement increases capacitance when the number of electrons is fixed in a quantum dot. In addition, the coupling of QCAD with Dakota makes it possible to rapidly identify which device layouts are more likely to lead to few-electron quantum dots.
Very efficient QCAD simulations on a large number of fabricated and proposed Si DQDs have made it possible to provide fast feedback for design comparison and optimization.
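QCAD's nonlinear, self-consistent solvers are far beyond a short sketch, but their core building block, a 1D Poisson solve with Dirichlet boundary conditions on a uniform grid, reduces to a tridiagonal (Thomas) solve. This simplified linear version omits the Boltzmann/Fermi-Dirac charge terms that make the real problem nonlinear:

```python
def solve_poisson_1d(rho, dx, phi0, phiN):
    """Solve -phi'' = rho on a uniform 1D grid with Dirichlet BCs phi0, phiN,
    using the Thomas algorithm for the resulting tridiagonal system.
    rho: source values (charge/eps) at the n interior nodes."""
    n = len(rho)
    a = [-1.0] * n                  # sub-diagonal of the discrete -d2/dx2 (times dx^2)
    b = [2.0] * n                   # diagonal
    c = [-1.0] * n                  # super-diagonal
    d = [r * dx * dx for r in rho]  # right-hand side
    d[0] += phi0                    # fold boundary values into the RHS
    d[-1] += phiN
    for i in range(1, n):           # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    phi = [0.0] * n                 # back substitution
    phi[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        phi[i] = (d[i] - c[i] * phi[i + 1]) / b[i]
    return phi

# zero charge, phi(0)=0, phi(1)=1  ->  the solution is linear in x
n = 49
phi = solve_poisson_1d([0.0] * n, dx=1.0 / (n + 1), phi0=0.0, phiN=1.0)
print(round(phi[24], 4))            # midpoint value, ≈ 0.5
```

In a Schrodinger-Poisson loop, this linear solve sits inside a Newton (or predictor-corrector) iteration in which rho itself depends on phi through the carrier statistics.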
The Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.
ERIC Educational Resources Information Center
Everett, James E.
1993-01-01
Addresses objections to the validity of assuming a Poisson loglinear model as the generating process for citations from one journal into another. Fluctuations in citation rate, serial dependence on citations, impossibility of distinguishing between rate changes and serial dependence, evidence for changes in Poisson rate, and transitivity…
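A first screen for the validity of a Poisson assumption of the kind questioned above is the dispersion index (variance-to-mean ratio), which is near 1 for Poisson counts and exceeds 1 when the underlying rate varies, e.g. across journals or over time. A sketch on simulated counts (the rates and mixture are arbitrary illustrations):

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's algorithm; adequate for moderate lambda
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def dispersion_index(counts):
    """Variance-to-mean ratio: ≈ 1 under a Poisson model, > 1 under overdispersion."""
    n = len(counts)
    m = sum(counts) / n
    var = sum((c - m) ** 2 for c in counts) / (n - 1)
    return var / m

rng = random.Random(3)
poisson_counts = [poisson_sample(4.0, rng) for _ in range(5000)]
# a simple rate mixture: counts whose Poisson rate itself varies
mixed_counts = [poisson_sample(rng.choice([1.0, 7.0]), rng) for _ in range(5000)]

di_poisson = dispersion_index(poisson_counts)
di_mixed = dispersion_index(mixed_counts)
print(round(di_poisson, 2), round(di_mixed, 2))
```

The caveat in the abstract still applies: a dispersion index above 1 cannot by itself distinguish rate changes from serial dependence in the counts.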
Beyond Inflation: A Cyclic Universe Scenario
NASA Astrophysics Data System (ADS)
Turok, Neil; Steinhardt, Paul J.
2005-08-01
Inflation has been the leading early universe scenario for two decades, and has become an accepted element of the successful 'cosmic concordance' model. However, there are many puzzling features of the resulting theory. It requires both high energy and low energy inflation, with energy densities differing by a hundred orders of magnitude. The questions of why the universe started out undergoing high energy inflation, and why it will end up in low energy inflation, are unanswered. Rather than resort to anthropic arguments, we have developed an alternative cosmology, the cyclic universe [1], in which the universe exists in a very long-lived attractor state determined by the laws of physics. The model shares inflation's phenomenological successes without requiring an epoch of high energy inflation. Instead, the universe is made homogeneous and flat, and scale-invariant adiabatic perturbations are generated during an epoch of low energy acceleration like that seen today, but preceding the last big bang. Unlike inflation, the model requires low energy acceleration in order for a periodic attractor state to exist. The key challenge facing the scenario is that of passing through the cosmic singularity at t = 0. Substantial progress has been made at the level of linearised gravity, which is reviewed here. The challenge of extending this to nonlinear gravity and string theory remains.
Beyond Inflation: A Cyclic Universe Scenario
NASA Astrophysics Data System (ADS)
Turok, Neil; Steinhardt, Paul J.
2005-01-01
Inflation has been the leading early universe scenario for two decades, and has become an accepted element of the successful 'cosmic concordance' model. However, there are many puzzling features of the resulting theory. It requires both high energy and low energy inflation, with energy densities differing by a hundred orders of magnitude. The questions of why the universe started out undergoing high energy inflation, and why it will end up in low energy inflation, are unanswered. Rather than resort to anthropic arguments, we have developed an alternative cosmology, the cyclic universe, in which the universe exists in a very long-lived attractor state determined by the laws of physics. The model shares inflation's phenomenological successes without requiring an epoch of high energy inflation. Instead, the universe is made homogeneous and flat, and scale-invariant adiabatic perturbations are generated during an epoch of low energy acceleration like that seen today, but preceding the last big bang. Unlike inflation, the model requires low energy acceleration in order for a periodic attractor state to exist. The key challenge facing the scenario is that of passing through the cosmic singularity at t = 0. Substantial progress has been made at the level of linearised gravity, which is reviewed here. The challenge of extending this to nonlinear gravity and string theory remains.
Code of Federal Regulations, 2010 CFR
2010-10-01
... of the inflation mechanism approved for use on the PFD. (2) [Reserved] (e) Inflation mechanisms. Each manual, automatic, or manual-auto inflation mechanism must be permanently marked with its unique model...
Kecojevic, Aleksandar; Silva, Karol; Sell, Randall; Lankenau, Stephen E.
2014-01-01
This study examined the relationship between prescription drug misuse and sexual risk behaviors (i.e. unprotected sex, increased number of sex partners) in a sample of young men who have sex with men (YMSM) in Philadelphia. Data come from a cross-sectional study of 18-29 year old YMSM (N=191) who misused prescription drugs in the past 6 months. Associations were investigated in two regression models: logistic models for unprotected anal intercourse (UAI) and zero-truncated Poisson regression model for number of sex partners. Of 177 participants engaging in anal intercourse in the past 6 months, 57.6% engaged in UAI. After adjusting for socio-demographic variables and illicit drug use, misuse of prescription pain pills and muscle relaxants remained significantly associated with engaging in receptive UAI. No prescription drug class was associated with a high number of sex partners. This study provides additional evidence that some prescription drugs are associated with sexual risk behaviors among YMSM. PMID:25240627
Christensen, A L; Lundbye-Christensen, S; Dethlefsen, C
2011-12-01
Several statistical methods of assessing seasonal variation are available. Brookhart and Rothman [3] proposed a second-order moment-based estimator based on the geometrical model derived by Edwards [1], and reported that this estimator is superior in estimating the peak-to-trough ratio of seasonal variation compared with Edwards' estimator with respect to bias and mean squared error. Alternatively, seasonal variation may be modelled using a Poisson regression model, which provides flexibility in modelling the pattern of seasonal variation and adjustments for covariates. Based on a Monte Carlo simulation study, three estimators, one based on the geometrical model and two based on log-linear Poisson regression models, were evaluated with regard to bias and standard deviation (SD). We evaluated the estimators on data simulated according to schemes varying in seasonal variation and presence of a secular trend. All methods and analyses in this paper are available in the R package Peak2Trough [13]. Applying a Poisson regression model resulted in lower absolute bias and SD for data simulated according to the corresponding model assumptions. For data simulated to deviate from the corresponding model assumptions, the Poisson regression models also had lower bias and SD than the geometrical model. This simulation study encourages the use of Poisson regression models in estimating the peak-to-trough ratio of seasonal variation as opposed to the geometrical model. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
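The log-linear Poisson regression approach to the peak-to-trough ratio described above can be sketched in a few lines of numpy. This is a minimal illustration on simulated monthly counts, not the R package's implementation: one annual harmonic is fitted by Fisher scoring, and the hypothetical true peak-to-trough ratio exp(2 × 0.3) ≈ 1.82 is an arbitrary choice for the simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(240.0)  # 20 years of simulated monthly counts

# design matrix: intercept plus one annual harmonic
X = np.column_stack([np.ones_like(months),
                     np.cos(2 * np.pi * months / 12),
                     np.sin(2 * np.pi * months / 12)])
beta_true = np.array([3.0, 0.3, 0.0])
y = rng.poisson(np.exp(X @ beta_true))

# Fisher scoring for the log-linear Poisson regression model
beta = np.array([np.log(y.mean()), 0.0, 0.0])
for _ in range(25):
    mu = np.exp(X @ beta)
    XtW = X.T * mu                                   # X^T diag(mu)
    beta = beta + np.linalg.solve(XtW @ X, X.T @ (y - mu))

# peak-to-trough ratio of the fitted seasonal curve:
# exp(2 * amplitude of the harmonic term)
amplitude = np.hypot(beta[1], beta[2])
ptr = np.exp(2 * amplitude)
```

With 240 observations the estimator recovers the simulated ratio closely; covariates or a secular trend would simply be extra columns of X.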
Cappell, M S; Spray, D C; Bennett, M V
1988-06-28
Protractor muscles in the gastropod mollusc Navanax inermis exhibit typical spontaneous miniature end plate potentials with mean amplitude 1.71 +/- 1.19 (standard deviation) mV. The evoked end plate potential is quantized, with a quantum equal to the miniature end plate potential amplitude. When their rate is stationary, occurrence of miniature end plate potentials is a random, Poisson process. When non-stationary, spontaneous miniature end plate potential occurrence is a non-stationary Poisson process, a Poisson process with the mean frequency changing with time. This extends the random Poisson model for miniature end plate potentials to the frequently observed non-stationary occurrence. Reported deviations from a Poisson process can sometimes be accounted for by the non-stationary Poisson process and more complex models, such as clustered release, are not always needed.
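A non-stationary Poisson process of the kind invoked above is straightforward to simulate by thinning (the Lewis-Shedler method). The sketch below is illustrative only; the linearly ramping rate standing in for a changing miniature end plate potential frequency is a hypothetical choice, not data from the study.

```python
import numpy as np

def sample_nonstationary_poisson(rate, rate_max, t_end, rng):
    """Lewis-Shedler thinning: events on [0, t_end) from a Poisson
    process whose rate(t) varies in time, with rate(t) <= rate_max."""
    times, t = [], 0.0
    while True:
        # candidate event from a stationary process at the ceiling rate
        t += rng.exponential(1.0 / rate_max)
        if t >= t_end:
            return np.array(times)
        # accept the candidate with probability rate(t)/rate_max
        if rng.random() < rate(t) / rate_max:
            times.append(t)

rng = np.random.default_rng(5)
# hypothetical MEPP frequency ramping linearly from 5/s to 15/s over 100 s
rate = lambda t: 5.0 + 0.1 * t
events = sample_nonstationary_poisson(rate, 15.0, 100.0, rng)
```

The expected event count is the integral of rate(t), here 1000, and later intervals contain more events than early ones, exactly the non-stationary behavior the abstract describes.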
Khazraee, S Hadi; Johnson, Valen; Lord, Dominique
2018-08-01
The Poisson-gamma (PG) and Poisson-lognormal (PLN) regression models are among the most popular means for motor vehicle crash data analysis. Both models belong to the Poisson-hierarchical family of models. While numerous studies have compared the overall performance of alternative Bayesian Poisson-hierarchical models, little research has addressed the impact of model choice on the expected crash frequency prediction at individual sites. This paper sought to examine whether there are any trends among candidate models' predictions, e.g., whether an alternative model's prediction for sites with certain conditions tends to be higher (or lower) than that from another model. In addition to the PG and PLN models, this research formulated a new member of the Poisson-hierarchical family of models: the Poisson-inverse gamma (PIGam). Three field datasets (from Texas, Michigan and Indiana) covering a wide range of over-dispersion characteristics were selected for analysis. This study demonstrated that the model choice can be critical when the calibrated models are used for prediction at new sites, especially when the data are highly over-dispersed. For all three datasets, the PIGam model would predict higher expected crash frequencies than would the PLN and PG models, in order, indicating a clear link between the models' predictions and the shape of their mixing distributions (i.e., inverse gamma, lognormal, and gamma, respectively). The thicker tail of the PIGam and PLN models (in order) may provide an advantage when the data are highly over-dispersed. The analysis results also illustrated a major deficiency of the Deviance Information Criterion (DIC) in comparing the goodness-of-fit of hierarchical models; models with drastically different sets of coefficients (and thus predictions for new sites) may yield similar DIC values, because the DIC only accounts for the parameters in the lowest (observation) level of the hierarchy and ignores the higher levels (regression coefficients).
Copyright © 2018. Published by Elsevier Ltd.
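The mixing construction behind these Poisson-hierarchical models is easy to demonstrate by simulation. The sketch below (with arbitrary mean and dispersion values, not the paper's data) draws site-level rates from a gamma and from a mean-and-variance-matched lognormal, then mixes each into a Poisson: both inflate the variance from mu to mu + mu²/phi, while the lognormal mixing puts more mass in the far right tail.

```python
import numpy as np

rng = np.random.default_rng(3)
n, mu, phi = 200_000, 4.0, 1.5   # phi: gamma shape (inverse dispersion)

# Poisson-gamma: gamma-distributed site rates mixed into a Poisson
lam_pg = rng.gamma(shape=phi, scale=mu / phi, size=n)
y_pg = rng.poisson(lam_pg)

# Poisson-lognormal with the rate distribution matched to the same
# mean mu and variance mu^2/phi as the gamma above
sigma2 = np.log1p(1.0 / phi)
lam_pln = rng.lognormal(np.log(mu) - sigma2 / 2, np.sqrt(sigma2), size=n)
y_pln = rng.poisson(lam_pln)

# both marginals have mean mu and variance ~ mu + mu^2/phi;
# the heavier lognormal tail drives the higher extreme-site predictions
```

Comparing upper quantiles of `y_pg` and `y_pln` makes the tail-thickness point of the abstract concrete.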
Primordial anisotropies in gauged hybrid inflation
NASA Astrophysics Data System (ADS)
Akbar Abolhasani, Ali; Emami, Razieh; Firouzjahi, Hassan
2014-05-01
We study primordial anisotropies generated in the model of gauged hybrid inflation in which the complex waterfall field is charged under a U(1) gauge field. Primordial anisotropies are generated either actively during inflation or from inhomogeneities modulating the surface of end of inflation during the waterfall transition. We present a consistent δN mechanism to calculate the anisotropic power spectrum and bispectrum. We show that the primordial anisotropies generated at the surface of end of inflation do not depend on the number of e-folds and therefore do not produce dangerously large anisotropies associated with the IR modes. Furthermore, one can find the parameter space in which the anisotropies generated from the surface of end of inflation cancel the anisotropies generated during inflation, therefore relaxing the constraints on model parameters imposed from IR anisotropies. We also show that the gauge field fluctuations induce a red-tilted power spectrum, so the averaged power spectrum from the gauge field can change the total power spectrum from blue to red. Therefore, hybrid inflation, once gauged under a U(1) field, can be consistent with the cosmological observations.
Evading the Lyth bound in hybrid natural inflation
NASA Astrophysics Data System (ADS)
Hebecker, A.; Kraus, S. C.; Westphal, A.
2013-12-01
Generically, the gravitational-wave or tensor-mode contribution to the primordial curvature spectrum of inflation is tiny if the field range of the inflaton is much smaller than the Planck scale. We show that this pessimistic conclusion is naturally avoided in a rather broad class of small-field models. More specifically, we consider models where an axionlike shift symmetry keeps the inflaton potential flat (up to nonperturbative cosine-shaped modulations), but inflation nevertheless ends in a waterfall regime, as is typical for hybrid inflation. In such hybrid natural inflation scenarios (examples are provided by Wilson line inflation and fluxbrane inflation), the slow-roll parameter ɛ can be sizable during an early period (relevant for the cosmic microwave background spectrum). Subsequently, ɛ quickly becomes very small before the tachyonic instability eventually terminates the slow-roll regime. In this scenario, one naturally generates a considerable tensor-mode contribution in the curvature spectrum, collecting nevertheless the required amount of e-foldings during the final period of inflation. While nonobservation of tensors by Planck is certainly not a problem, a discovery in the medium- to long-term future is realistic.
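For context, the Lyth bound that these hybrid natural inflation models evade follows in one line from the standard single-field slow-roll relations (dφ/dN)² = 2ε M_P² and r = 16ε, with M_P the reduced Planck mass:

```latex
% inflaton excursion in terms of the tensor-to-scalar ratio r(N)
\frac{\Delta\phi}{M_P}
  \;=\; \int_0^{N_e} \sqrt{2\,\epsilon(N)}\;dN
  \;=\; \int_0^{N_e} \sqrt{\frac{r(N)}{8}}\;dN .
```

If ε (and hence r) stayed roughly constant over the observable e-folds, a detectable r would force Δφ ≳ M_P; the scenario above evades this because ε is sizable only during a short early period, so the integral remains small while the later small-ε phase supplies the remaining e-folds.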
Modeling Psychological Contract Violation using Dual Regime Models: An Event-based Approach.
Hofmans, Joeri
2017-01-01
A good understanding of the dynamics of psychological contract violation requires theories, research methods and statistical models that explicitly recognize that violation feelings follow from an event that violates one's acceptance limits, after which interpretative processes are set into motion, determining the intensity of these violation feelings. Whereas theories (in the form of the dynamic model of the psychological contract) and research methods (in the form of daily diary research and experience sampling research) are available by now, the statistical tools to model such a two-stage process are still lacking. The aim of the present paper is to fill this gap in the literature by introducing two statistical models (the Zero-Inflated model and the Hurdle model) that closely mimic the theoretical process underlying the elicitation of violation feelings via two model components: a binary distribution that models whether violation has occurred or not, and a count distribution that models how severe the negative impact is. Moreover, covariates can be included for both model components separately, which yields insight into their unique and shared antecedents. By doing this, the present paper offers a methodological-substantive synergy, showing how sophisticated methodology can be used to examine an important substantive issue.
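The two-component structure described above can be made concrete with a minimal intercept-only zero-inflated Poisson fit; this is a generic sketch on simulated data (arbitrary true values π = 0.3, λ = 2.0), not the paper's covariate models. The binary component contributes the extra mass at zero, π + (1 − π)e^(−λ), while the count component governs the positive intensities.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(4)
pi_true, lam_true, n = 0.3, 2.0, 5000
structural_zero = rng.random(n) < pi_true           # binary component
y = np.where(structural_zero, 0, rng.poisson(lam_true, size=n))

def zip_nll(params, y):
    """Negative log-likelihood of the zero-inflated Poisson."""
    pi, lam = params
    p_zero = pi + (1 - pi) * np.exp(-lam)           # mixture mass at zero
    ll = np.where(y == 0, np.log(p_zero),
                  np.log(1 - pi) + poisson.logpmf(y, lam))
    return -ll.sum()

res = minimize(zip_nll, x0=[0.5, 1.0], args=(y,),
               bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
pi_hat, lam_hat = res.x
```

A hurdle variant would instead model P(y = 0) directly and use a zero-truncated Poisson for the positive counts; covariates enter by replacing the two constants with linear predictors.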
Curvature perturbation and waterfall dynamics in hybrid inflation
NASA Astrophysics Data System (ADS)
Akbar Abolhasani, Ali; Firouzjahi, Hassan; Sasaki, Misao
2011-10-01
We investigate the parameter space of the hybrid inflation model, with special attention paid to the dynamics of the waterfall field and the curvature perturbations induced from its quantum fluctuations. Depending on the inflaton field value at the time of the phase transition and the sharpness of the phase transition, inflation can have multiple extended stages. We find that for models with a mild phase transition the induced curvature perturbation from the waterfall field is too large to satisfy the COBE normalization. We investigate the model parameter space where the curvature perturbations from the waterfall quantum fluctuations vary between the results of standard hybrid inflation and the results obtained here.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yi; Xue, Wei, E-mail: yw366@cam.ac.uk, E-mail: wei.xue@sissa.it
We study the tilt of the primordial gravitational waves spectrum. A hint of blue tilt is shown from analyzing the BICEP2 and POLARBEAR data. Motivated by this, we explore the possibilities of blue tensor spectra from very early universe cosmology models, including null energy condition violating inflation, inflation with general initial conditions, and string gas cosmology, etc. For the simplest G-inflation, a blue tensor spectrum also implies a blue scalar spectrum. In general, the inflation models with blue tensor spectra indicate large non-Gaussianities. On the other hand, string gas cosmology predicts a blue tensor spectrum with highly Gaussian fluctuations. If further experiments do confirm the blue tensor spectrum, non-Gaussianity becomes a distinguishing test between inflation and alternatives.
Modeling of Aerobrake Ballute Stagnation Point Temperature and Heat Transfer to Inflation Gas
NASA Technical Reports Server (NTRS)
Bahrami, Parviz A.
2012-01-01
A trailing Ballute drag device concept for spacecraft aerocapture is considered. A thermal model for calculation of the Ballute membrane temperature and the inflation gas temperature is developed. An algorithm capturing the most salient features of the concept is implemented. In conjunction with the thermal model, trajectory calculations for two candidate missions, the Titan Explorer and Neptune Orbiter missions, are used to estimate the stagnation point temperature and the inflation gas temperature. Radiation from both sides of the membrane at the stagnation point and conduction to the inflating gas are included. The results showed that radiation from the membrane and, to a much lesser extent, conduction to the inflating gas are likely to be the controlling heat transfer mechanisms, and that the increase in gas temperature due to aerodynamic heating is of secondary importance.
Self-unitarization of New Higgs Inflation and compatibility with Planck and BICEP2 data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Germani, Cristiano; Wintergerst, Nico; Watanabe, Yuki, E-mail: cristiano.germani@lmu.de, E-mail: watanabe@resceu.s.u-tokyo.ac.jp, E-mail: nico.wintergerst@physik.lmu.de
2014-12-01
In this paper we show that the Germani-Kehagias model of Higgs inflation (or New Higgs Inflation), where the Higgs boson is kinetically non-minimally coupled to the Einstein tensor, is in perfect compatibility with the latest Planck and BICEP2 data. Moreover, we show that the tension between the Planck and BICEP2 data can be relieved within the New Higgs inflation scenario by a negative running of the spectral index. Regarding the unitarity of the model, we argue that it is unitary throughout the evolution of the Universe. Weak couplings in the Higgs-Higgs and Higgs-graviton sectors are provided by a large background-dependent cut-off scale during inflation. In the same regime, the W and Z gauge bosons acquire a very large mass and thus decouple. On the other hand, if they are also non-minimally coupled to the Higgs boson, their effective masses can be enormously reduced. In this case, the W and Z bosons are no longer decoupled. After inflation, the New Higgs model is well approximated by a quartic Galileon with a renormalizable potential. We argue that this can unitarily create the right conditions for inflation to eventually start.
Structural Modeling of a Five-Meter Thin Film Inflatable Antenna/Concentrator
NASA Technical Reports Server (NTRS)
Smalley, Kurt B.; Tinker, Michael L.; Taylor, W. Scott; Brunty, Joseph A. (Technical Monitor)
2002-01-01
Inflatable structures have been the subject of renewed interest in recent years for space applications such as communications antennas, solar thermal propulsion, and space solar power. A major advantage of using inflatable structures in space is their extremely light weight. An obvious second advantage is on-orbit deployability and related space savings in the launch configuration. A recent technology demonstrator flight for inflatable structures was the Inflatable Antenna Experiment (IAE) that was deployed on orbit from the Shuttle Orbiter. Although difficulty was encountered in the inflation/deployment phase, the flight was successful overall and provided valuable experience in the use of such structures. Several papers on static structural analysis of inflated cylinders have been written, describing different techniques such as linear shell theory, and nonlinear and variational methods, but very little work had been done in dynamics of inflatable structures until recent years. In 1988 Leonard indicated that elastic beam bending modes could be utilized in approximating lower-order frequencies of inflatable beams. Main, et al. wrote a very significant 1995 paper describing results of modal tests of inflated cantilever beams and the determination of effective material properties. Changes in material properties for different pressures were also discussed, and the beam model was used in a more complex structure. The paper demonstrated that conventional finite element analysis packages could be very useful in the analysis of complex inflatable structures. The purposes of this paper are to discuss the methodology for dynamically characterizing a large 5-meter thin film inflatable reflector, and to discuss the test arrangement and results. Nonlinear finite element modal results are compared to modal test data. 
The work is significant and of considerable interest to researchers because of 1) the large size of the structure, making it useful for scaling studies, and 2) application of commercially available finite element software for modeling pressurized thin-film structures.
Topological defects in extended inflation
NASA Technical Reports Server (NTRS)
Copeland, Edmund J.; Kolb, Edward W.; Liddle, Andrew R.
1990-01-01
The production of topological defects, especially cosmic strings, in extended inflation models was considered. In extended inflation, the Universe passes through a first-order phase transition via bubble percolation, which naturally allows defects to form at the end of inflation. The correlation length, which determines the number density of the defects, is related to the mean size of bubbles when they collide. This mechanism allows a natural combination of inflation and large scale structure via cosmic strings.
Auxetics in smart systems and structures 2013
NASA Astrophysics Data System (ADS)
Scarpa, Fabrizio; Ruzzene, Massimo; Alderson, Andrew; Wojciechowski, Krzysztof W.
2013-08-01
Auxetics comes from the Greek (auxetikos), meaning 'that which tends to expand'. The term indicates specifically materials and structures with negative Poisson's ratio (NPR). Although the Poisson's ratio is a mechanical property, auxetic solids have shown evidence of multifunctional characteristics, ranging from increased stiffness and indentation resistance, to energy absorption under static and dynamic loading, soundproofing qualities and dielectric tangent loss. NPR solids and structures have also been used in the past as material platforms to build smart structural systems. Auxetics in general can be considered also a part of the 'negative materials' field, which includes solids and structures exhibiting negative thermal expansion, negative stiffness and compressibility. All these unusual deformation characteristics have the potential to provide a significant contribution to the area of smart materials systems and structures. In this focus issue, we are pleased to present some examples of novel multifunctional behaviors provided by auxetic, negative stiffness and negative compressibility in smart systems and structures. Particular emphasis has been placed upon the multidisciplinary and systems approach provided by auxetics and negative materials, also with examples applied to energy absorption, vibration damping, structural health monitoring and active deployment aspects. Three papers in this focus issue provide significant new clarifications on the role of auxeticity in the mechanical behavior of shear deformation in plates (Lim), stress wave characteristics (Lim again), and thermoelastic damping (Maruszewski et al.). Kochmann and Venturini describe the performance of auxetic composites in finite strain elasticity. New types of microstructures for auxetic systems are depicted for the first time in three works by Ge et al., Zhang et al., and Kim and co-workers.
Tubular auxetic structures and their mechanical performance are also analyzed by Karnessis and Burriesci. Foams with negative Poisson's ratio constitute one of the main examples of auxetic materials available. The focus issue presents two papers on this topic, one on a novel microstructure numerical modeling technique (Pozniak et al.), the other on experimental and model identification results of linear and nonlinear vibration behavior (Bianchi and Scarpa). Nonlinearity (now in wave propagation for SHM applications) is also investigated by Klepka and co-workers, this time in auxetic chiral sandwich structures. Vibration damping and nonlinear behavior is also a key feature of the auxetic structural damper with metal rubber particles proposed by Ma et al. Papers on negative material properties are introduced by the negative stiffness and high-frequency damper concept proposed by Kalathur and Lakes. A cellular structure exhibiting a zero Poisson's ratio, together with zero and negative stiffness, is presented in the work of Virk and co-workers. Negative compressibility is examined by Grima et al. in truss-type structures with constrained angle stretching. Finally, Grima and co-workers propose a concept of tunable auxetic metamaterial with magnetic inclusions for multifunctional applications. Acknowledgments We would like to thank all the authors for their high quality contributions. Special thanks go also to the Smart Materials and Structures Editorial Board and the IOP Publishing team, with particular mention to Natasha Leeper and Bethan Davies for their continued support in arranging this focus issue in Smart Materials and Structures.
Hu, Wenbiao; Tong, Shilu; Mengersen, Kerrie; Connell, Des
2007-09-01
Few studies have examined the relationship between weather variables and cryptosporidiosis in Australia. This paper examines the potential impact of weather variability on the transmission of cryptosporidiosis and explores the possibility of developing an empirical forecast system. Data on weather variables, notified cryptosporidiosis cases, and population size in Brisbane were supplied by the Australian Bureau of Meteorology, Queensland Department of Health, and Australian Bureau of Statistics, respectively, for the period January 1, 1996-December 31, 2004. Time series Poisson regression and seasonal auto-regressive integrated moving average (SARIMA) models were performed to examine the potential impact of weather variability on the transmission of cryptosporidiosis. Both the time series Poisson regression and SARIMA models show that seasonal and monthly maximum temperature at a prior moving average of 1 and 3 months were significantly associated with cryptosporidiosis disease. The results suggest that there may be 50 more cases a year for an average increase of 1°C in maximum temperature in Brisbane. Model assessments indicated that the SARIMA model had better predictive ability than the Poisson regression model (SARIMA: root mean square error (RMSE): 0.40, Akaike information criterion (AIC): -12.53; Poisson regression: RMSE: 0.54, AIC: -2.84). Furthermore, the analysis of residuals shows that the time series Poisson regression appeared to violate a modeling assumption, in that residual autocorrelation persisted. The results of this study suggest that weather variability (particularly maximum temperature) may have played a significant role in the transmission of cryptosporidiosis. A SARIMA model may be a better predictive model than a Poisson regression model in the assessment of the relationship between weather variability and the incidence of cryptosporidiosis.
Lincoln, Don
2018-01-16
In 1964, scientists discovered a faint radio hiss coming from the heavens and realized that the hiss wasn't just noise. It was a message from eons ago; specifically, the remnants of the primordial fireball, cooled to about 3 degrees above absolute zero. Subsequent research revealed that the radio hiss was the same in every direction. The temperature of the early universe was uniform to better than a part in a hundred thousand. And this was weird. According to the prevailing theory, the two sides of the universe have never been in contact. So how could two places that had never been in contact be so similar? One possible explanation was proposed in 1979. Called inflation, the theory required that early in the history of the universe, the universe expanded faster than the speed of light. Confused? Watch this video as Fermilab's Dr. Don Lincoln makes sense of this mind-bending idea.
Cao, Qingqing; Wu, Zhenqiang; Sun, Ying; Wang, Tiezhu; Han, Tengwei; Gu, Chaomei; Sun, Yehuan
2011-11-01
To explore the application of negative binomial regression and modified Poisson regression analysis in analyzing the influential factors for injury frequency and the risk factors leading to an increase in injury frequency. 2917 primary and secondary school students were selected from Hefei by a cluster random sampling method and surveyed by questionnaire. The data on count event-based injuries were used to fit modified Poisson regression and negative binomial regression models. The risk factors leading to an increase in unintentional injury frequency among juvenile students were explored, so as to probe the efficiency of these two models in studying the influential factors for injury frequency. The Poisson model exhibited over-dispersion (P < 0.0001) according to the Lagrange multiplier test. Therefore, the over-dispersed data were better fitted by the modified Poisson regression and negative binomial regression models. Both showed that male gender, younger age, father working outside of the hometown, guardian education level above junior high school, and smoking might be associated with higher injury frequencies. For clustered injury frequency data, both modified Poisson regression analysis and negative binomial regression analysis can be used. However, based on our data, the modified Poisson regression fitted better and this model could give a more accurate interpretation of relevant factors affecting the frequency of injury.
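A score (Lagrange multiplier-type) test for over-dispersion of the kind cited above can be sketched directly; this is one common form of the statistic (testing Var(y) = mu + alpha·mu² against the Poisson null, for an intercept-only fit on simulated over-dispersed counts), not the exact test used in the study.

```python
import numpy as np

rng = np.random.default_rng(6)
# over-dispersed injury-like counts: Poisson rates mixed over a gamma
y = rng.poisson(rng.gamma(shape=2.0, scale=1.5, size=2000))

# intercept-only Poisson fit: fitted mean mu_i is the sample mean
mu = np.full(y.shape, y.mean())

# score-type statistic for H1: Var(y) = mu + alpha*mu^2, alpha > 0;
# approximately standard normal under the Poisson null
T = ((y - mu) ** 2 - y).sum() / np.sqrt(2.0 * (mu ** 2).sum())
overdispersed = T > 1.645   # one-sided 5% critical value
```

With regression covariates, mu would be replaced by the fitted means of the Poisson model before computing T.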
NASA Astrophysics Data System (ADS)
Maleknejad, A.; Sheikh-Jabbari, M. M.; Soda, J.
2013-07-01
The isotropy and homogeneity of the cosmic microwave background (CMB) favors "scalar driven" early Universe inflationary models. However, gauge fields and other non-scalar fields are far more common at all energy scales, in particular at high energies seemingly relevant to inflation models. Hence, in this review we consider the role and consequences, theoretical and observational, that gauge fields can have during the inflationary era. Gauge fields may be turned on in the background during inflation, or may become relevant at the level of cosmic perturbations. There have been two main classes of models with gauge fields in the background: models which show violation of the cosmic no-hair theorem and those which lead to isotropic FLRW cosmology, respecting the cosmic no-hair theorem. Models in which gauge fields are only turned on at the cosmic perturbation level may source primordial magnetic fields. We also review specific observational features of these models on the CMB and/or the primordial cosmic magnetic fields. Our discussions will be mainly focused on the inflation period, with only a brief discussion on the post-inflationary (p)reheating era. Large field models: The initial value of the inflaton field is large, generically super-Planckian, and it rolls slowly down toward the potential minimum at smaller φ values. For instance, chaotic inflation is one of the representative models of this class. The typical potential of large-field models has a monomial form, V(φ) = V0 φ^n. A simple analysis using the dynamical equations reveals that for a number of e-folds Ne larger than 60, we require super-Planckian initial field values, φ0 > 3M. For these models typically ɛ ∼ η ∼ 1/Ne. Small field models: The inflaton field is initially small and slowly evolves toward the potential minimum at larger φ values.
The small field models are characterized by a potential of the form V(φ) = V0(1 − (φ/μ)^p), which corresponds to a Taylor expansion about the origin, but more realistic small field models also have a potential minimum at φ ≠ 0 which the system falls into at the end of inflation. A typical property of small field models is that a sufficient number of e-folds requires a sub-Planckian inflaton initial value. For this reason they are called small field models. Natural inflation is an example of this type [12]. Hybrid inflation models: These models involve more than one scalar field while inflation is mainly driven by a single inflaton field ϕ. The inflaton starts from a large value and rolls down until it reaches a bifurcation point, ϕ = ϕe, after which the field becomes unstable and undergoes a waterfall transition toward its global minimum. Its prime example is Linde's hybrid inflation model with the potential [13] V(ϕ,χ) = (λ/4)(χ² − M²/λ)² + (1/2)g²ϕ²χ² + (1/2)m²ϕ². During the initial inflationary phase the potential of hybrid inflation is effectively described by the single field ϕ, while inflation ends by a phase transition triggered by the presence of the second scalar field, the waterfall field χ. In other words, when the effective mass squared of the waterfall field becomes negative, the tachyonic instability makes the waterfall field roll down toward the true vacuum state and inflation suddenly ends. The number of e-folds Ne is given as Ne ≃ (M⁴/4λm²) ln(ϕ0/ϕe), where ϕe = M/g is the critical value of the inflaton below which, due to the tachyonic instability, χ = 0 becomes unstable and m_χ² turns negative. K-inflation: This is the prime example of models with a non-canonical kinetic term. They are described by the action [14] S = ∫d⁴x √(−g) (R/2 + P(φ,X)), where φ is a scalar field and X ≔ −(1/2)∂_μφ ∂^μφ. Here P plays the role of the effective pressure, while (denoting P_X ≡ ∂P/∂X) the energy density is given by ρ = 2X P_X − P. Thus, the slow-roll parameter is given as ɛ = 3X P_X/(2X P_X − P).
The characteristic feature of these models is that in general they have a non-trivial sound speed cs² for the propagation of perturbations (cf. our discussion in Section 2.2), cs² ≡ P_X/(P_X + 2X P_XX). Finding K-inflation actions P(φ,X) which are well motivated and consistently embedded in high-energy theories is the main challenge of this class of models [9]. Nonetheless, DBI inflation is a special kind of K-inflation which is well motivated from string theory, with the action [15] S = ∫d⁴x √(−g) [R/2 − (1/f(φ))(√D − 1) − V(φ)], where D = 1 − 2f(φ)X. In the presence of another natural cutoff Λ in the model, smallness or largeness of the inflaton field should be compared to Λ; Λ could be sub-Planckian and in general Λ ≲ M. For a discussion on this see [10,11].
An Analysis of the Number of Medical Malpractice Claims and Their Amounts
Bonetti, Marco; Cirillo, Pasquale; Musile Tanzi, Paola; Trinchero, Elisabetta
2016-01-01
Starting from an extensive database, pooling 9 years of data from the top three insurance brokers in Italy and containing 38,125 reported claims due to alleged cases of medical malpractice, we use an inhomogeneous Poisson process to model the number of medical malpractice claims in Italy. The intensity of the process is allowed to vary over time, and it depends on a set of covariates, such as the size of the hospital, the medical department, and the complexity of the medical operations performed. We choose the combination of medical department and hospital as the unit of analysis. Together with the number of claims, we also model the associated amounts paid by insurance companies, using a two-stage regression model. In particular, we use logistic regression for the probability that a claim is closed with a zero payment, whereas, conditionally on the amount being strictly positive, we use lognormal regression to model it as a function of several covariates. The model produces estimates and forecasts that are relevant to both insurance companies and hospitals, for quality assurance, service improvement and cost reduction. PMID:27077661
Natural inflation and quantum gravity.
de la Fuente, Anton; Saraswat, Prashant; Sundrum, Raman
2015-04-17
Cosmic inflation provides an attractive framework for understanding the early Universe and the cosmic microwave background. It can readily involve energies close to the scale at which quantum gravity effects become important. General considerations of black hole quantum mechanics suggest nontrivial constraints on any effective field theory model of inflation that emerges as a low-energy limit of quantum gravity, in particular, the constraint of the weak gravity conjecture. We show that higher-dimensional gauge and gravitational dynamics can elegantly satisfy these constraints and lead to a viable, theoretically controlled and predictive class of natural inflation models.
Accidental Kähler moduli inflation
NASA Astrophysics Data System (ADS)
Maharana, Anshuman; Rummel, Markus; Sumitomo, Yoske
2015-09-01
We study a model of accidental inflation in type IIB string theory where inflation occurs near the inflection point of a small Kähler modulus. A racetrack structure helps to alleviate the known concern that string-loop corrections may spoil Kähler Moduli Inflation unless there is significant suppression via the string coupling or a special brane setup. Also, the hierarchy of gauge group ranks required for the separation between moduli stabilization and inflationary dynamics is relaxed. The relaxation becomes more significant when we use the recently proposed D-term generated racetrack model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furuuchi, Kazuyuki; Sperling, Marcus, E-mail: kazuyuki.furuuchi@manipal.edu, E-mail: marcus.sperling@univie.ac.at
2017-05-01
We study quantum tunnelling in Dante's Inferno model of large field inflation. Such a tunnelling process, which will terminate inflation, becomes problematic if the tunnelling rate is rapid compared to the Hubble time scale at the time of inflation. Consequently, we constrain the parameter space of Dante's Inferno model by demanding a suppressed tunnelling rate during inflation. The constraints are derived and explicit numerical bounds are provided for representative examples. Our considerations are at the level of an effective field theory; hence, the presented constraints have to hold regardless of any UV completion.
Reheating-volume measure for random-walk inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winitzki, Sergei; Yukawa Institute of Theoretical Physics, Kyoto University, Kyoto
2008-09-15
The recently proposed 'reheating-volume' (RV) measure promises to solve the long-standing problem of extracting probabilistic predictions from cosmological multiverse scenarios involving eternal inflation. I give a detailed description of the new measure and its applications to generic models of eternal inflation of random-walk type. For those models I derive a general formula for RV-regulated probability distributions that is suitable for numerical computations. I show that the results of the RV cutoff in random-walk type models are always gauge invariant and independent of the initial conditions at the beginning of inflation. In a toy model where equal-time cutoffs lead to the 'youngness paradox', the RV cutoff yields unbiased results that are distinct from previously proposed measures.
Extended inflation from higher dimensional theories
NASA Technical Reports Server (NTRS)
Holman, Richard; Kolb, Edward W.; Vadas, Sharon L.; Wang, Yun
1990-01-01
The possibility is considered that higher dimensional theories may, upon reduction to four dimensions, allow extended inflation to occur. Two separate models are analyzed. One is a very simple toy model consisting of higher dimensional gravity coupled to a scalar field whose potential allows for a first-order phase transition. The other is a more sophisticated model incorporating the effects of non-trivial field configurations (monopole, Casimir, and fermion bilinear condensate effects) that yield a non-trivial potential for the radius of the internal space. It was found that extended inflation does not occur in these models. It was also found that the bubble nucleation rate in these theories is time dependent, unlike the case in the original version of extended inflation.
A preliminary structural analysis of space-based inflatable tubular frame structures
NASA Technical Reports Server (NTRS)
Main, John A.; Peterson, Steven W.; Strauss, Alvin M.
1992-01-01
The use of inflatable structures has often been proposed for aerospace and planetary applications. The advantages of such structures include low launch weight and easy assembly. The use of inflatables is proposed here for applications requiring very large frame structures intended for aerospace use. In order to consider using an inflated truss, the structural behavior of the inflated frame must be examined. The statics of inflated tubes as beams has been discussed in the literature, but the dynamics of these elements has not received much attention. In an effort to evaluate the vibration characteristics of the inflated beam, a series of free-vibration tests of inflated fabric cantilevers was performed. Results of the tests are presented and models for the system behavior are posed.
Synchrotron X-ray imaging of pulmonary alveoli in respiration in live intact mice.
Chang, Soeun; Kwon, Namseop; Kim, Jinkyung; Kohmura, Yoshiki; Ishikawa, Tetsuya; Rhee, Chin Kook; Je, Jung Ho; Tsuda, Akira
2015-03-04
Despite nearly a half century of studies, it has not been fully understood how pulmonary alveoli, the elementary gas exchange units in mammalian lungs, inflate and deflate during respiration. Understanding alveolar dynamics is crucial for treating patients with pulmonary diseases. In-vivo, real-time visualization of the alveoli during respiration has been hampered by active lung movement. Previous studies have been therefore limited to alveoli at lung apices or subpleural alveoli under open thorax conditions. Here we report direct and real-time visualization of alveoli of live intact mice during respiration using tracking X-ray microscopy. Our studies, for the first time, determine the alveolar size of normal mice in respiration without positive end expiratory pressure as 58 ± 14 (mean ± s.d.) μm on average, accurately measured in the lung bases as well as the apices. Individual alveoli of normal lungs clearly show heterogeneous inflation from zero to ~25% (6.7 ± 4.7% (mean ± s.d.)) in size. The degree of inflation is higher in the lung bases (8.7 ± 4.3% (mean ± s.d.)) than in the apices (5.7 ± 3.2% (mean ± s.d.)). The fraction of the total tidal volume allocated for alveolar inflation is 34 ± 3.8% (mean ± s.e.m). This study contributes to the better understanding of alveolar dynamics and helps to develop potential treatment options for pulmonary diseases.
Modeling Psychological Contract Violation using Dual Regime Models: An Event-based Approach
Hofmans, Joeri
2017-01-01
A good understanding of the dynamics of psychological contract violation requires theories, research methods and statistical models that explicitly recognize that violation feelings follow from an event that violates one's acceptance limits, after which interpretative processes are set into motion, determining the intensity of these violation feelings. Whereas theories—in the form of the dynamic model of the psychological contract—and research methods—in the form of daily diary research and experience sampling research—are available by now, the statistical tools to model such a two-stage process are still lacking. The aim of the present paper is to fill this gap in the literature by introducing two statistical models—the Zero-Inflated model and the Hurdle model—that closely mimic the theoretical process underlying the elicitation of violation feelings via two model components: a binary distribution that models whether violation has occurred or not, and a count distribution that models how severe the negative impact is. Moreover, covariates can be included for both model components separately, which yields insight into their unique and shared antecedents. By doing this, the present paper offers a methodological-substantive synergy, showing how sophisticated methodology can be used to examine an important substantive issue. PMID:29163316
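The two-component structure described here (a binary "no violation" state plus a count of violation intensity) is exactly what a zero-inflated count model fits. A minimal sketch (Python with statsmodels, on simulated data; the covariate is hypothetical), not the paper's actual analysis:

```python
import numpy as np
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(1)
n = 1500

# Hypothetical covariate, e.g. severity of the triggering event
x = rng.normal(size=n)
exog = np.column_stack([np.ones(n), x])

# Simulate the two components: a structural-zero ("no violation") state
# plus a Poisson count of violation intensity when one occurs
p_no_violation = 1 / (1 + np.exp(-(-1.0 + 0.5 * x)))
mu = np.exp(0.3 + 0.6 * x)
structural_zero = rng.random(n) < p_no_violation
y = np.where(structural_zero, 0, rng.poisson(mu))

# exog models the count part, exog_infl the zero-inflation part,
# so each component gets its own covariate effects
model = ZeroInflatedPoisson(y, exog, exog_infl=exog, inflation='logit')
res = model.fit(disp=0, maxiter=500)
print(res.params)  # inflation coefficients first, then count coefficients
```

A Hurdle model would differ only in its second component: all zeros come from the binary part, and the count part is a zero-truncated distribution over strictly positive counts.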