NASA Astrophysics Data System (ADS)
Amaliana, Luthfatul; Sa'adah, Umu; Wayan Surya Wardhani, Ni
2017-12-01
Tetanus Neonatorum is an infectious disease that can be prevented by immunization. Through 2015, the number of Tetanus Neonatorum cases in East Java Province was the highest in Indonesia. The data exhibit overdispersion and a large proportion of zero counts. Negative Binomial (NB) regression is an alternative method when overdispersion occurs in Poisson regression; however, data containing both overdispersion and zero inflation are more appropriately analyzed using Zero-Inflated Negative Binomial (ZINB) regression. The purposes of this study are: (1) to model Tetanus Neonatorum cases in East Java Province, where 71.05 percent of observations are zeros, using NB and ZINB regression, and (2) to obtain the best model. The results indicate that ZINB regression outperforms NB regression, yielding a smaller AIC.
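As a rough illustration of the AIC comparison described above, the following Python sketch fits NB and ZINB distributions (without covariates) to simulated zero-inflated counts by maximum likelihood. All parameters and data here are invented for illustration, not taken from the study:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Simulate zero-inflated NB counts: with probability pi emit a structural
# zero, otherwise draw from NB(r, p).  (Illustrative parameters only.)
pi_true, r_true, p_true = 0.6, 2.0, 0.4
is_zero = rng.random(500) < pi_true
y = np.where(is_zero, 0, rng.negative_binomial(r_true, p_true, 500))

def nb_nll(theta):
    r, p = theta
    return -stats.nbinom.logpmf(y, r, p).sum()

def zinb_nll(theta):
    pi, r, p = theta
    pmf = (1 - pi) * stats.nbinom.pmf(y, r, p)
    pmf[y == 0] += pi                      # structural zeros get extra mass
    return -np.log(np.maximum(pmf, 1e-300)).sum()

nb = optimize.minimize(nb_nll, [1.0, 0.5],
                       bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
zinb = optimize.minimize(zinb_nll, [0.3, 1.0, 0.5],
                         bounds=[(1e-6, 1 - 1e-6), (1e-6, None), (1e-6, 1 - 1e-6)])

aic_nb = 2 * 2 + 2 * nb.fun        # AIC = 2k + 2 * negative log-likelihood
aic_zinb = 2 * 3 + 2 * zinb.fun
print(f"AIC  NB: {aic_nb:.1f}   ZINB: {aic_zinb:.1f}")
```

With heavy zero inflation as simulated here, the ZINB AIC comes out smaller despite its extra parameter, mirroring the study's conclusion.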
Simulation on Poisson and negative binomial models of count road accident modeling
NASA Astrophysics Data System (ADS)
Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.
2016-11-01
Accident count data have often been shown to exhibit overdispersion; in addition, the data may contain excess zero counts. A simulation study was conducted to create scenarios in which accidents occur at a T-junction, under the assumption that the generated dependent variable follows either a Poisson or a negative binomial distribution, with sample sizes ranging from n = 30 to n = 500. The study objective was accomplished by fitting Poisson regression, negative binomial regression and hurdle negative binomial models to the simulated data. Model fits were compared; the results show that, for each sample size, not every model fits the data well even when the data were generated from that model's own distribution, especially at larger sample sizes. Furthermore, larger sample sizes produced more zero accident counts in the dataset.
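The overdispersion that motivates this comparison can be checked directly from the variance-to-mean ratio of simulated counts. A minimal Python sketch with invented parameters (matched means, different variances):

```python
import numpy as np

rng = np.random.default_rng(42)
lam = 3.0   # common mean for both generating distributions (illustrative)

for n in (30, 500):                # small vs large sample, as in the study
    pois = rng.poisson(lam, n)
    # NB with r = 2 and p = r / (r + lam) has the same mean but extra variance
    negb = rng.negative_binomial(2, 2 / (2 + lam), n)
    for name, y in (("Poisson", pois), ("NegBin", negb)):
        # Dispersion index: ~1 for Poisson, > 1 signals overdispersion
        d = y.var(ddof=1) / y.mean()
        print(f"n={n:4d} {name:8s} mean={y.mean():.2f} var/mean={d:.2f}")
```

The NB samples show a dispersion index well above 1 (about 2.5 in expectation for these parameters), while the Poisson samples stay near 1.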
Zero-truncated negative binomial - Erlang distribution
NASA Astrophysics Data System (ADS)
Bodhisuwan, Winai; Pudprommarat, Chookait; Bodhisuwan, Rujira; Saothayanun, Luckhana
2017-11-01
The zero-truncated negative binomial-Erlang distribution is introduced. It is developed from the negative binomial-Erlang distribution. In this work, the probability mass function is derived and some properties are presented. The parameters of the zero-truncated negative binomial-Erlang distribution are estimated by maximum likelihood estimation. Finally, the proposed distribution is applied to real data on methamphetamine in Bangkok, Thailand. The results show that the zero-truncated negative binomial-Erlang distribution provides a better fit than the zero-truncated Poisson, zero-truncated negative binomial, zero-truncated generalized negative binomial and zero-truncated Poisson-Lindley distributions for these data.
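Zero truncation itself is simple to express: the pmf is renormalised over the positive integers by dividing out 1 − P(X = 0). A short Python sketch for a plain zero-truncated NB, with illustrative (not fitted) parameters:

```python
import numpy as np
from scipy import stats

r, p = 2.0, 0.3          # illustrative NB parameters, not the paper's estimates
k = np.arange(1, 50)

# Zero-truncated NB: renormalise the NB pmf over the positive support
pmf0 = stats.nbinom.pmf(0, r, p)
zt_pmf = stats.nbinom.pmf(k, r, p) / (1.0 - pmf0)

print(f"mass removed at zero: {pmf0:.3f}; truncated pmf sums to "
      f"{zt_pmf.sum():.4f} over k = 1..49")
```

The same renormalisation applies to the NB-Erlang case, with the NB pmf replaced by the mixed pmf.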
Tran, Phoebe; Waller, Lance
2015-01-01
Lyme disease has been the subject of many studies due to increasing incidence rates year after year and the severe complications that can arise in later stages of the disease. Negative binomial models have been used to model Lyme disease in the past with some success. However, there has been little focus on the reliability and consistency of these models when they are used to study Lyme disease at multiple spatial scales. This study seeks to explore how sensitive/consistent negative binomial models are when they are used to study Lyme disease at different spatial scales (at the regional and sub-regional levels). The study area includes the thirteen states in the Northeastern United States with the highest Lyme disease incidence during the 2002-2006 period. Lyme disease incidence at county level for the period of 2002-2006 was linked with several previously identified key landscape and climatic variables in a negative binomial regression model for the Northeastern region and two smaller sub-regions (the New England sub-region and the Mid-Atlantic sub-region). This study found that negative binomial models, indeed, were sensitive/inconsistent when used at different spatial scales. We discuss various plausible explanations for such behavior of negative binomial models. Further investigation of the inconsistency and sensitivity of negative binomial models when used at different spatial scales is important for not only future Lyme disease studies and Lyme disease risk assessment/management but any study that requires use of this model type in a spatial context.
Statistical inference involving binomial and negative binomial parameters.
García-Pérez, Miguel A; Núñez-Antón, Vicente
2009-05-01
Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
Distinguishing between Binomial, Hypergeometric and Negative Binomial Distributions
ERIC Educational Resources Information Center
Wroughton, Jacqueline; Cole, Tarah
2013-01-01
Recognizing the differences between three discrete distributions (Binomial, Hypergeometric and Negative Binomial) can be challenging for students. We present an activity designed to help students differentiate among these distributions. In addition, we present assessment results in the form of pre- and post-tests that were designed to assess the…
Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)
NASA Astrophysics Data System (ADS)
Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi
2017-06-01
Hurdle negative binomial regression is a method for discrete dependent variables with excess zeros and under- or overdispersion. It uses a two-part approach: the first part, the zero hurdle model, models whether the dependent variable is zero; the second part, a truncated negative binomial model, models the nonzero (positive integer) counts. The dependent variable in such cases may be censored for some values; the type of censoring studied in this research is right censoring. This study aims to obtain the parameter estimators of hurdle negative binomial regression for a right-censored dependent variable, using maximum likelihood estimation (MLE), and to derive the corresponding test statistics for the censored hurdle negative binomial model. The model is applied to the number of neonatorum tetanus cases in Indonesia, count data that contain zeros for some observations and varying positive values for others. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of baby health care coverage and neonatal visits.
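The two-part structure described above can be sketched as a single pmf: a point mass at zero (the hurdle) plus a zero-truncated NB spread over the positives. Illustrative Python with made-up parameters, not the paper's estimates:

```python
import numpy as np
from scipy import stats

# Hurdle NB pmf: Bernoulli "hurdle" decides zero vs positive; positives
# follow a zero-truncated NB.  Parameters are illustrative only.
pi0, r, p = 0.7, 1.5, 0.4        # pi0 = P(Y = 0)

def hurdle_nb_pmf(k):
    k = np.asarray(k)
    # zero-truncated NB mass, scaled by the probability of clearing the hurdle
    positive = (1 - pi0) * stats.nbinom.pmf(k, r, p) / (1 - stats.nbinom.pmf(0, r, p))
    return np.where(k == 0, pi0, positive)

ks = np.arange(0, 60)
print("P(Y=0) =", float(hurdle_nb_pmf(0)),
      " total mass over 0..59 ~", float(hurdle_nb_pmf(ks).sum()))
```

Unlike a zero-inflated model, the hurdle model puts all zeros in one component, so P(Y = 0) is exactly the hurdle parameter.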
Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming
2014-01-01
The Intraclass Correlation Coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM) based ICC can be estimated, a common transformation being the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has been estimated for Mycobacterium avium subsp. paratuberculosis using natural-logarithm-transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports on the negative binomial ICC estimate, which includes fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r, p) showed better performance of the new negative binomial ICC compared to the LMM-based ICC, even when the negative binomial data were logarithm- or square-root-transformed. A second comparison targeting a wider range of ICC values showed that the mean of the estimated ICC closely approximated the true ICC.
Marginalized zero-inflated negative binomial regression with application to dental caries
Preisser, John S.; Das, Kalyan; Long, D. Leann; Divaris, Kimon
2015-01-01
The zero-inflated negative binomial regression model (ZINB) is often employed in diverse fields such as dentistry, health care utilization, highway safety, and medicine to examine relationships between exposures of interest and overdispersed count outcomes exhibiting many zeros. The regression coefficients of ZINB have latent class interpretations for a susceptible subpopulation at risk for the disease/condition under study with counts generated from a negative binomial distribution and for a non-susceptible subpopulation that provides only zero counts. The ZINB parameters, however, are not well-suited for estimating overall exposure effects, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. In this paper, a marginalized zero-inflated negative binomial regression (MZINB) model for independent responses is proposed to model the population marginal mean count directly, providing straightforward inference for overall exposure effects based on maximum likelihood estimation. Through simulation studies, the finite sample performance of MZINB is compared to marginalized zero-inflated Poisson, Poisson, and negative binomial regression. The MZINB model is applied in the evaluation of a school-based fluoride mouthrinse program on dental caries in 677 children. PMID:26568034
Negative Binomial Process Count and Mixture Modeling.
Zhou, Mingyuan; Carin, Lawrence
2015-02-01
The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
The Difference Calculus and the Negative Binomial Distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Kimiko o; Shenton, LR
2007-01-01
In a previous paper we stated the dominant term in the third central moment of the maximum likelihood estimator k̂ of the parameter k in the negative binomial probability function whose probability generating function is (p + 1 − pt)^(−k). A partial sum of the series Σ 1/(k + x)³ is involved, where x is a negative binomial random variate; in expectation this sum can only be found numerically by computer. Here we give a simple definite integral on (0, 1) for the generalized case. This means that we now have valid expressions for √β₁₁(k) and √β₁₁(p). In addition we use the finite difference operator Δ, with E = 1 + Δ, to set up formulas for low-order moments. Other examples of the operators are quoted relating to the orthogonal set of polynomials associated with the negative binomial probability function used as a weight function.
Justin S. Crotteau; Martin W. Ritchie; J. Morgan Varner
2014-01-01
Many western USA fire regimes are typified by mixed-severity fire, which compounds the variability inherent to natural regeneration densities in associated forests. Tree regeneration data are often discrete and nonnegative; accordingly, we fit a series of Poisson and negative binomial variation models to conifer seedling counts across four distinct burn severities and...
Bianca N.I. Eskelson; Hailemariam Temesgen; Tara M. Barrett
2009-01-01
Cavity tree and snag abundance data are highly variable and contain many zero observations. We predict cavity tree and snag abundance from variables that are readily available from forest cover maps or remotely sensed data using negative binomial (NB), zero-inflated NB, and zero-altered NB (ZANB) regression models as well as nearest neighbor (NN) imputation methods....
Analysis of generalized negative binomial distributions attached to hyperbolic Landau levels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chhaiba, Hassan, E-mail: chhaiba.hassan@gmail.com; Demni, Nizar, E-mail: nizar.demni@univ-rennes1.fr; Mouayn, Zouhair, E-mail: mouayn@fstbm.ac.ma
2016-07-15
To each hyperbolic Landau level of the Poincaré disc is attached a generalized negative binomial distribution. In this paper, we compute the moment generating function of this distribution and supply its atomic decomposition as a perturbation of the negative binomial distribution by a finitely supported measure. Using the Mandel parameter, we also discuss the nonclassical nature of the associated coherent states. Next, we derive a Lévy-Khintchine-type representation of its characteristic function when the latter does not vanish and deduce that it is quasi-infinitely divisible except for the lowest hyperbolic Landau level, which corresponds to the negative binomial distribution. By considering the total variation of the obtained quasi-Lévy measure, we introduce a new infinitely divisible distribution for which we derive the characteristic function.
Use of the negative binomial-truncated Poisson distribution in thunderstorm prediction
NASA Technical Reports Server (NTRS)
Cohen, A. C.
1971-01-01
A probability model is presented for the distribution of thunderstorms over a small area given that thunderstorm events (1 or more thunderstorms) are occurring over a larger area. The model incorporates the negative binomial and truncated Poisson distributions. Probability tables for Cape Kennedy for spring, summer, and fall months and seasons are presented. The computer program used to compute these probabilities is appended.
Irwin, Brian J.; Wagner, Tyler; Bence, James R.; Kepler, Megan V.; Liu, Weihai; Hayes, Daniel B.
2013-01-01
Partitioning total variability into its component temporal and spatial sources is a powerful way to better understand time series and elucidate trends. The data available for such analyses of fish and other populations are usually nonnegative integer counts of the number of organisms, often dominated by many low values with few observations of relatively high abundance. These characteristics are not well approximated by the Gaussian distribution. We present a detailed description of a negative binomial mixed-model framework that can be used to model count data and quantify temporal and spatial variability. We applied these models to data from four fishery-independent surveys of Walleyes Sander vitreus across the Great Lakes basin. Specifically, we fitted models to gill-net catches from Wisconsin waters of Lake Superior; Oneida Lake, New York; Saginaw Bay in Lake Huron, Michigan; and Ohio waters of Lake Erie. These long-term monitoring surveys varied in overall sampling intensity, the total catch of Walleyes, and the proportion of zero catches. Parameter estimation included the negative binomial scaling parameter, and we quantified the random effects as the variations among gill-net sampling sites, the variations among sampled years, and site × year interactions. This framework (i.e., the application of a mixed model appropriate for count data in a variance-partitioning context) represents a flexible approach that has implications for monitoring programs (e.g., trend detection) and for examining the potential of individual variance components to serve as response metrics to large-scale anthropogenic perturbations or ecological changes.
Forecasting asthma-related hospital admissions in London using negative binomial models.
Soyiri, Ireneous N; Reidpath, Daniel D; Sarran, Christophe
2013-05-01
Health forecasting can improve health service provision and individual patient outcomes. Environmental factors are known to impact chronic respiratory conditions such as asthma, but little is known about the extent to which these factors can be used for forecasting. Using weather, air quality and hospital asthma admissions in London (2005-2006), two related negative binomial models were developed and compared with a naive seasonal model. In the first approach, predictive forecasting models were fitted with 7-day averages of each potential predictor, and a subsequent multivariable model was constructed. In the second strategy, an exhaustive search was conducted for the best-fitting models among possible combinations of lags (0-14 days) of all the environmental effects on asthma admissions. Three models were considered: a base model (seasonal effects), contrasted with a 7-day average model and a selected-lags model (weather and air quality effects). Season is the best predictor of asthma admissions. The 7-day average and seasonal models were trivial to implement. The selected-lags model was computationally intensive, but of no real value over much more easily implemented models. Seasonal factors can predict daily hospital asthma admissions in London, and there is little evidence that additional weather and air quality information would add to forecast accuracy.
Cao, Qingqing; Wu, Zhenqiang; Sun, Ying; Wang, Tiezhu; Han, Tengwei; Gu, Chaomei; Sun, Yehuan
2011-11-01
To explore the application of negative binomial regression and modified Poisson regression in analyzing factors that influence injury frequency and risk factors leading to increased injury frequency, 2917 primary and secondary school students were selected from Hefei by a cluster random sampling method and surveyed by questionnaire. The count data on injury events were used to fit modified Poisson regression and negative binomial regression models. Risk factors for increased unintentional injury frequency among juvenile students were explored, so as to probe the efficiency of these two models in studying influential factors for injury frequency. The Poisson model exhibited overdispersion (P < 0.0001) according to a Lagrange multiplier test; the overdispersed data were therefore better fitted by the modified Poisson regression and negative binomial regression models. Both showed that male gender, younger age, a father working outside the hometown, a guardian with education above junior high school, and smoking were associated with higher injury frequencies. For clustered injury frequency data, both modified Poisson regression analysis and negative binomial regression analysis can be used. However, based on our data, the modified Poisson regression fitted better, and this model could give a more accurate interpretation of the relevant factors affecting injury frequency.
Shirazi, Mohammadali; Dhavala, Soma Sekhar; Lord, Dominique; Geedipally, Srinivas Reddy
2017-10-01
Safety analysts usually use post-modeling methods, such as Goodness-of-Fit statistics or the Likelihood Ratio Test, to decide between two or more competing distributions or models. Such metrics require all competing distributions to be fitted to the data before any comparisons can be made. Given the continuous growth in newly introduced statistical distributions, choosing the best one using such post-modeling methods is not a trivial task, in addition to all the theoretical or numerical issues the analyst may face during the analysis. Furthermore, and most importantly, these measures or tests do not provide any intuition about why a specific distribution (or model) is preferred over another (Goodness-of-Logic). This paper addresses these issues by proposing a methodology for designing heuristics for model selection based on the characteristics of the data, in terms of descriptive summary statistics, before fitting the models. The proposed methodology employs two analytic tools: (1) Monte Carlo simulations and (2) machine learning classifiers, to design simple heuristics to predict the label of the 'most-likely-true' distribution for analyzing data. The proposed methodology was applied to investigate when the recently introduced Negative Binomial Lindley (NB-L) distribution is preferred over the Negative Binomial (NB) distribution. Heuristics were designed to select the 'most-likely-true' distribution between these two distributions, given a set of prescribed summary statistics of the data. The proposed heuristics were successfully compared against classical tests for several real or observed datasets. Not only are they easy to use and free of any post-modeling inputs, but with these heuristics the analyst can also attain useful information about why the NB-L is preferred over the NB, or vice versa, when modeling data.
Poisson and negative binomial item count techniques for surveys with sensitive question.
Tian, Guo-Liang; Tang, Man-Lai; Wu, Qin; Liu, Yin
2017-04-01
Although the item count technique is useful in surveys with sensitive questions, the privacy of respondents who possess the sensitive characteristic of interest may not be well protected due to a defect in its original design. In this article, we propose two new survey designs (namely the Poisson item count technique and the negative binomial item count technique) which replace the several independent Bernoulli random variables required by the original item count technique with a single Poisson or negative binomial random variable, respectively. The proposed models not only provide a closed-form variance estimate and a confidence interval within [0, 1] for the sensitive proportion, but also simplify the survey design of the original item count technique. Most importantly, the new designs do not leak respondents' privacy. Empirical results show that the proposed techniques perform satisfactorily in the sense that they yield accurate parameter estimates and confidence intervals.
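The general idea of a count-based item count design can be illustrated with a toy simulation: the control group reports a pure Poisson count, the treatment group adds 1 when the respondent holds the sensitive trait, and a difference in means recovers the sensitive proportion. This is a sketch of the design's logic under invented parameters, not the authors' exact estimator:

```python
import numpy as np

rng = np.random.default_rng(7)
n, lam, pi_true = 2000, 3.0, 0.15   # pi_true: sensitive proportion (assumed)

# Control respondents report a Poisson count of innocuous items; treatment
# respondents add 1 if they carry the sensitive trait, masking who does.
control = rng.poisson(lam, n)
sensitive = rng.random(n) < pi_true
treatment = rng.poisson(lam, n) + sensitive

pi_hat = treatment.mean() - control.mean()   # difference-in-means estimator
print(f"estimated sensitive proportion: {pi_hat:.3f}")
```

Because every respondent reports only a single aggregate count, no individual answer reveals the trait directly.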
NASA Astrophysics Data System (ADS)
Arneodo, M.; Arvidson, A.; Aubert, J. J.; Badełek, B.; Beaufays, J.; Bee, C. P.; Benchouk, C.; Berghoff, G.; Bird, I.; Blum, D.; Böhm, E.; de Bouard, X.; Brasse, F. W.; Braun, H.; Broll, C.; Brown, S.; Brück, H.; Calen, H.; Chima, J. S.; Ciborowski, J.; Clifft, R.; Coignet, G.; Combley, F.; Coughlan, J.; D'Agostini, G.; Dahlgren, S.; Dengler, F.; Derado, I.; Dreyer, T.; Drees, J.; Düren, M.; Eckardt, V.; Edwards, A.; Edwards, M.; Ernst, T.; Eszes, G.; Favier, J.; Ferrero, M. I.; Figiel, J.; Flauger, W.; Foster, J.; Ftáčnik, J.; Gabathuler, E.; Gajewski, J.; Gamet, R.; Gayler, J.; Geddes, N.; Grafström, P.; Grard, F.; Haas, J.; Hagberg, E.; Hasert, F. J.; Hayman, P.; Heusse, P.; Jaffré, M.; Jachołkowska, A.; Janata, F.; Jancsó, G.; Johnson, A. S.; Kabuss, E. M.; Kellner, G.; Korbel, V.; Krüger, J.; Kullander, S.; Landgraf, U.; Lanske, D.; Loken, J.; Long, K.; Maire, M.; Malecki, P.; Manz, A.; Maselli, S.; Mohr, W.; Montanet, F.; Montgomery, H. E.; Nagy, E.; Nassalski, J.; Norton, P. R.; Oakham, F. G.; Osborne, A. M.; Pascaud, C.; Pawlik, B.; Payre, P.; Peroni, C.; Peschel, H.; Pessard, H.; Pettinghale, J.; Pietrzyk, B.; Pietrzyk, U.; Pönsgen, B.; Pötsch, M.; Renton, P.; Ribarics, P.; Rith, K.; Rondio, E.; Sandacz, A.; Scheer, M.; Schlagböhmer, A.; Schiemann, H.; Schmitz, N.; Schneegans, M.; Schneider, A.; Scholz, M.; Schröder, T.; Schultze, K.; Sloan, T.; Stier, H. E.; Studt, M.; Taylor, G. N.; Thénard, J. M.; Thompson, J. C.; de La Torre, A.; Toth, J.; Urban, L.; Urban, L.; Wallucks, W.; Whalley, M.; Wheeler, S.; Williams, W. S. C.; Wimpenny, S. J.; Windmolders, R.; Wolf, G.
1987-09-01
The multiplicity distributions of charged hadrons produced in deep inelastic muon-proton scattering at 280 GeV are analysed in various rapidity intervals, as a function of the total hadronic centre-of-mass energy W ranging from 4 to 20 GeV. Multiplicity distributions for the backward and forward hemispheres are also analysed separately. The data can be well parameterized by binomial distributions, extending their range of applicability to the case of lepton-proton scattering. The energy and rapidity dependence of the parameters is presented, and a smooth transition from the negative binomial distribution via Poissonian to the ordinary binomial is observed.
Harrison, Xavier A
2015-01-01
Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modeling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low (<5 levels), I investigate the performance of both model types under a range of random effect sample sizes when overdispersion is present. Simulation results revealed that the efficacy of OLRE depends on the process that generated the overdispersion; OLRE failed to cope with overdispersion generated from a Beta-Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non-Beta-Binomial data.
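The Beta-Binomial overdispersion at issue here is visible in closed form: mixing the success probability over a Beta distribution inflates the variance relative to a plain Binomial with the same mean. A small Python check with illustrative shape parameters (any positive a, b would do):

```python
from scipy import stats

# Beta-Binomial vs Binomial with the same mean success probability:
# the Beta mixing adds between-observation variation in p.
n_trials, a, b = 20, 2.0, 3.0            # illustrative shape parameters
mean_p = a / (a + b)                      # = 0.4

var_binom = n_trials * mean_p * (1 - mean_p)       # plain Binomial variance
var_bb = stats.betabinom.var(n_trials, a, b)       # Beta-Binomial variance
print(f"Binomial var: {var_binom:.2f}  Beta-Binomial var: {var_bb:.2f}")
```

For these values the Beta-Binomial variance (20.0) is more than four times the plain Binomial variance (4.8), which is exactly the kind of mixture-generated overdispersion the simulations show an OLRE cannot absorb.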
Zheng, Han; Kimber, Alan; Goodwin, Victoria A; Pickering, Ruth M
2018-01-01
A common design for a falls prevention trial is to assess falling at baseline, randomize participants into an intervention or control group, and ask them to record the number of falls they experience during a follow-up period of time. This paper addresses how best to include the baseline count in the analysis of the follow-up count of falls in negative binomial (NB) regression. We examine the performance of various approaches in simulated datasets where both counts are generated from a mixed Poisson distribution with shared random subject effect. Including the baseline count after log-transformation as a regressor in NB regression (NB-logged) or as an offset (NB-offset) resulted in greater power than including the untransformed baseline count (NB-unlogged). Cook and Wei's conditional negative binomial (CNB) model replicates the underlying process generating the data. In our motivating dataset, a statistically significant intervention effect resulted from the NB-logged, NB-offset, and CNB models, but not from NB-unlogged, and large, outlying baseline counts were overly influential in NB-unlogged but not in NB-logged. We conclude that there is little to lose by including the log-transformed baseline count in standard NB regression compared to CNB for moderate to larger sized datasets.
Goodness-of-fit tests and model diagnostics for negative binomial regression of RNA sequencing data.
Mi, Gu; Di, Yanming; Schafer, Daniel W
2015-01-01
This work is about assessing model adequacy for negative binomial (NB) regression, particularly (1) assessing the adequacy of the NB assumption, and (2) assessing the appropriateness of models for NB dispersion parameters. Tools for the first are appropriate for NB regression generally; those for the second are primarily intended for RNA sequencing (RNA-Seq) data analysis. The typically small number of biological samples and large number of genes in RNA-Seq analysis motivate us to address the trade-offs between robustness and statistical power using NB regression models. One widely-used power-saving strategy, for example, is to assume some commonalities of NB dispersion parameters across genes via simple models relating them to mean expression rates, and many such models have been proposed. As RNA-Seq analysis is becoming ever more popular, it is appropriate to make more thorough investigations into power and robustness of the resulting methods, and into practical tools for model assessment. In this article, we propose simulation-based statistical tests and diagnostic graphics to address model adequacy. We provide simulated and real data examples to illustrate that our proposed methods are effective for detecting the misspecification of the NB mean-variance relationship as well as judging the adequacy of fit of several NB dispersion models.
Liu, Lian; Zhang, Shao-Wu; Huang, Yufei; Meng, Jia
2017-08-31
As a newly emerged research area, RNA epigenetics has drawn increasing attention recently owing to the participation of RNA methylation and other modifications in a number of crucial biological processes. Thanks to high-throughput sequencing techniques such as MeRIP-Seq, transcriptome-wide RNA methylation profiles are now available in the form of count-based data, with which it is often of interest to study the dynamics at the epitranscriptomic layer. However, the sample size of an RNA methylation experiment is usually very small due to its cost; additionally, there usually exist a large number of genes whose methylation level cannot be accurately estimated due to their low expression level, making differential RNA methylation analysis a difficult task. We present QNB, a statistical approach for differential RNA methylation analysis with count-based small-sample sequencing data. Compared with previous approaches such as the DRME model, which is based on a statistical test covering the IP samples only with two negative binomial distributions, QNB is based on four independent negative binomial distributions with their variances and means linked by local regressions; in this way, the input control samples are also properly taken care of. In addition, unlike the DRME approach, which relies on the input control samples alone for estimating the background, QNB uses a more robust estimator of gene expression that combines information from both input and IP samples, which can largely improve testing performance for very lowly expressed genes. QNB showed improved performance on both simulated and real MeRIP-Seq datasets when compared with competing algorithms. The QNB model is also applicable to other datasets related to RNA modifications, including but not limited to RNA bisulfite sequencing, m1A-Seq, Par-CLIP, RIP-Seq, etc.
Lee, JuHee; Park, Chang Gi; Choi, Moonki
2016-05-01
This study was conducted to identify risk factors that influence regular exercise among patients with Parkinson's disease in Korea. Parkinson's disease is prevalent in the elderly, and may lead to a sedentary lifestyle. Exercise can enhance physical and psychological health. However, patients with Parkinson's disease are less likely to exercise than are other populations due to physical disability. A secondary data analysis and cross-sectional descriptive study were conducted. A convenience sample of 106 patients with Parkinson's disease was recruited at an outpatient neurology clinic of a tertiary hospital in Korea. Demographic characteristics, disease-related characteristics (including disease duration and motor symptoms), self-efficacy for exercise, balance, and exercise level were investigated. Negative binomial regression and zero-inflated negative binomial regression for exercise count data were utilized to determine factors involved in exercise. The mean age of participants was 65.85 ± 8.77 years, and the mean duration of Parkinson's disease was 7.23 ± 6.02 years. Most participants indicated that they engaged in regular exercise (80.19%). Approximately half of participants exercised at least 5 days per week for 30 min, as recommended (51.9%). Motor symptoms were a significant predictor of exercise in the count model, and self-efficacy for exercise was a significant predictor of exercise in the zero model. Severity of motor symptoms was related to frequency of exercise. Self-efficacy contributed to the probability of exercise. Symptom management and improvement of self-efficacy for exercise are important to encourage regular exercise in patients with Parkinson's disease. Copyright © 2015 Elsevier Inc. All rights reserved.
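The zero-inflated negative binomial distribution used in the zero model mixes a point mass at zero with an NB count component; a minimal SciPy sketch of its probability mass function (the mixing probability pi, dispersion k, and mean mu are illustrative, not fitted to the study's data):

```python
from scipy.stats import nbinom

def zinb_pmf(y, pi, k, mu):
    """P(Y = y) under a zero-inflated NB: with probability pi the
    outcome is a structural zero, otherwise Y ~ NB with mean mu."""
    p = k / (k + mu)                      # SciPy's (n, p) parameterisation
    base = nbinom.pmf(y, k, p)
    return pi * (y == 0) + (1 - pi) * base

# Structural zeros inflate P(Y = 0) above the NB baseline.
p0 = zinb_pmf(0, pi=0.3, k=1.5, mu=4.0)
```

In the study's framing, the zero component models never-exercisers while the count component models exercise frequency among exercisers.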
Modeling avian abundance from replicated counts using binomial mixture models
Kery, Marc; Royle, J. Andrew; Schmid, Hans
2005-01-01
Abundance estimation in ecology is usually accomplished by capture–recapture, removal, or distance sampling methods. These may be hard to implement at large spatial scales. In contrast, binomial mixture models enable abundance estimation without individual identification, based simply on temporally and spatially replicated counts. Here, we evaluate mixture models using data from the national breeding bird monitoring program in Switzerland, where some 250 1-km2 quadrats are surveyed using the territory mapping method three times during each breeding season. We chose eight species with contrasting distribution (wide–narrow), abundance (high–low), and detectability (easy–difficult). Abundance was modeled as a random effect with a Poisson or negative binomial distribution, with mean affected by forest cover, elevation, and route length. Detectability was a logit-linear function of survey date, survey date-by-elevation, and sampling effort (time per transect unit). Resulting covariate effects and parameter estimates were consistent with expectations. Detectability per territory (for three surveys) ranged from 0.66 to 0.94 (mean 0.84) for easy species, and from 0.16 to 0.83 (mean 0.53) for difficult species; it depended on survey effort for two easy and all four difficult species, and changed seasonally for three easy and three difficult species. Abundance was positively related to route length in three high-abundance and one low-abundance (one easy and three difficult) species; it increased with forest cover in five forest species, decreased for two nonforest species, and was unaffected for a generalist species. Abundance estimates under the most parsimonious mixture models were between 1.1 and 8.9 (median 1.8) times greater than estimates based on territory mapping; hence, three surveys were insufficient to detect all territories for each species. We conclude that binomial mixture models are an important new approach for estimating abundance corrected for imperfect detection.
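For a single site, the binomial mixture likelihood marginalizes the latent abundance N out of a Poisson-abundance and binomial-detection product; a minimal pure-Python sketch (the counts, lambda, and detection probability are illustrative):

```python
import math

def nmix_likelihood(counts, lam, p, n_max=100):
    """Likelihood of replicated counts at one site under a binomial
    mixture model: N ~ Poisson(lam), each count ~ Binomial(N, p)."""
    total = 0.0
    for n in range(max(counts), n_max + 1):
        prior = math.exp(-lam) * lam**n / math.factorial(n)
        detect = 1.0
        for y in counts:
            detect *= math.comb(n, y) * p**y * (1 - p) ** (n - y)
        total += prior * detect
    return total

# Three surveys of the same quadrat, as in the Swiss monitoring scheme.
counts = [3, 2, 4]
```

In practice the full model maximizes the product of such site likelihoods with covariates entering lam and p through log and logit links.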
Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen
2017-01-01
Assessing equivalence or similarity has drawn much attention recently, as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is an urgent need for practitioners to have a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous; they may be discrete. In this paper, the authors derive the power function and discuss the sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that the required sample size depends heavily on the dispersion parameter. Therefore, misusing a Poisson model for negative binomial data can easily reduce power by up to 20%, depending on the value of the dispersion parameter.
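The sensitivity of sample size to the dispersion parameter can be illustrated with a simplified two-arm normal-approximation calculation; this is a generic sketch, not the authors' three-arm derivation (all parameter values illustrative):

```python
import math
from statistics import NormalDist

def n_per_arm(mu1, mu2, k, alpha=0.05, power=0.8):
    """Approximate per-arm sample size for comparing two NB rates,
    using Var(Y) = mu + mu^2 / k and a two-sided Wald-type test."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    var1 = mu1 + mu1**2 / k
    var2 = mu2 + mu2**2 / k
    return math.ceil(z**2 * (var1 + var2) / (mu1 - mu2) ** 2)

# Smaller k (more overdispersion) demands markedly more subjects.
n_poisson_like = n_per_arm(2.0, 1.5, k=1e6)   # essentially Poisson
n_overdisp = n_per_arm(2.0, 1.5, k=0.5)
```

Even this crude approximation reproduces the paper's qualitative message: assuming a Poisson variance when the data are NB understates the required sample size.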
Use of negative binomial distribution to describe the presence of Anisakis in Thyrsites atun.
Peña-Rehbein, Patricio; De los Ríos-Escalante, Patricio
2012-01-01
Nematodes of the genus Anisakis have marine fishes as intermediate hosts. One of these hosts is Thyrsites atun, an important fishery resource in Chile between 38 and 41° S. This paper describes the frequency and number of Anisakis nematodes in the internal organs of Thyrsites atun. An analysis based on spatial distribution models showed that the parasites tend to be clustered. The variation in the number of parasites per host could be described by the negative binomial distribution. The maximum observed number of parasites was nine parasites per host. The environmental and zoonotic aspects of the study are also discussed.
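A clustered (negative binomial) description of parasite counts is often checked with a method-of-moments fit, where a small k indicates strong aggregation; a minimal sketch with made-up counts, not the paper's data:

```python
from statistics import mean, variance

def nb_moments(counts):
    """Method-of-moments NB fit: mu_hat = m, k_hat = m^2 / (s^2 - m).
    Requires overdispersion (s^2 > m); small k => strong clustering."""
    m = mean(counts)
    s2 = variance(counts)       # sample variance (n - 1 denominator)
    if s2 <= m:
        raise ValueError("no overdispersion: NB not appropriate")
    return m, m**2 / (s2 - m)

# Hypothetical parasites-per-host counts: many zeros, a few heavily
# infected hosts, the classic aggregated pattern.
counts = [0, 0, 0, 1, 0, 2, 0, 0, 3, 9, 1, 0, 4, 0, 0]
mu_hat, k_hat = nb_moments(counts)
```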
[Evaluation of estimation of prevalence ratio using Bayesian log-binomial regression model].
Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L
2017-03-10
To evaluate the estimation of the prevalence ratio (PR) using a Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence with respect to caregivers' recognition of risk signs of diarrhea in their infants by fitting a Bayesian log-binomial regression model in the OpenBUGS software. The results showed that caregivers' recognition of an infant's risk signs of diarrhea was associated with a significant 13% increase in medical care-seeking. Meanwhile, we compared the point and interval estimates of the PR, and the convergence of three models (model 1: not adjusting for covariates; model 2: adjusting for duration of caregivers' education; model 3: adjusting for distance between village and township and child age in months, based on model 2), between the Bayesian log-binomial regression model and the conventional log-binomial regression model. All three Bayesian log-binomial regression models converged, and the estimated PRs were 1.130 (95%CI: 1.005-1.265), 1.128 (95%CI: 1.001-1.264) and 1.132 (95%CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged, with PRs of 1.130 (95%CI: 1.055-1.206) and 1.126 (95%CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate its PR, which was 1.125 (95%CI: 1.051-1.200). In addition, the point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed only slightly from those of the conventional log-binomial regression models, showing good consistency in estimating the PR. Therefore, the Bayesian log-binomial regression model can effectively estimate the PR with fewer convergence problems, and has advantages in application over the conventional log-binomial regression model.
Bayesian inference for disease prevalence using negative binomial group testing
Pritchard, Nicholas A.; Tebbs, Joshua M.
2011-01-01
Group testing, also known as pooled testing, and inverse sampling are both widely used methods of data collection when the goal is to estimate a small proportion. Taking a Bayesian approach, we consider the new problem of estimating disease prevalence from group testing when inverse (negative binomial) sampling is used. Using different distributions to incorporate prior knowledge of disease incidence and different loss functions, we derive closed form expressions for posterior distributions and resulting point and credible interval estimators. We then evaluate our new estimators, on Bayesian and classical grounds, and apply our methods to a West Nile Virus data set. PMID:21259308
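The posterior for prevalence under negative binomial (inverse) group testing can be approximated on a grid: a pool of size s tests positive with probability 1 - (1 - p)^s, and sampling continues until r positive pools are observed. A minimal sketch with a Beta prior (all numbers illustrative; the paper derives closed forms rather than grid approximations):

```python
import math

def grid_posterior(r, t, s, a=1.0, b=1.0, grid=2000):
    """Posterior over prevalence p given r positive pools observed
    after t negative pools of size s, with a Beta(a, b) prior."""
    ps = [(i + 0.5) / grid for i in range(grid)]
    post = []
    for p in ps:
        theta = 1 - (1 - p) ** s              # P(pool tests positive)
        like = math.comb(t + r - 1, t) * theta**r * (1 - theta) ** t
        prior = p ** (a - 1) * (1 - p) ** (b - 1)
        post.append(like * prior)
    z = sum(post)
    post = [w / z for w in post]
    return ps, post

# 3 positive pools after 40 negative pools of 10 specimens each.
ps, post = grid_posterior(r=3, t=40, s=10)
post_mean = sum(p * w for p, w in zip(ps, post))
```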
Robust inference in the negative binomial regression model with an application to falls data.
Aeberhard, William H; Cantoni, Eva; Heritier, Stephane
2014-12-01
A popular way to model overdispersed count data, such as the number of falls reported during intervention studies, is by means of the negative binomial (NB) distribution. Classical estimating methods are well-known to be sensitive to model misspecifications, taking the form of patients falling much more than expected in such intervention studies where the NB regression model is used. We extend in this article two approaches for building robust M-estimators of the regression parameters in the class of generalized linear models to the NB distribution. The first approach achieves robustness in the response by applying a bounded function on the Pearson residuals arising in the maximum likelihood estimating equations, while the second approach achieves robustness by bounding the unscaled deviance components. For both approaches, we explore different choices for the bounding functions. Through a unified notation, we show how close these approaches may actually be as long as the bounding functions are chosen and tuned appropriately, and provide the asymptotic distributions of the resulting estimators. Moreover, we introduce a robust weighted maximum likelihood estimator for the overdispersion parameter, specific to the NB distribution. Simulations under various settings show that redescending bounding functions yield estimates with smaller biases under contamination while keeping high efficiency at the assumed model, and this for both approaches. We present an application to a recent randomized controlled trial measuring the effectiveness of an exercise program at reducing the number of falls among people suffering from Parkinson's disease to illustrate the diagnostic use of such robust procedures and their need for reliable inference. © 2014, The International Biometric Society.
Speech-discrimination scores modeled as a binomial variable.
Thornton, A R; Raffin, M J
1978-09-01
Many studies have reported variability data for tests of speech discrimination, and the disparate results of these studies have not been given a simple explanation. Arguments over the relative merits of 25- vs 50-word tests have ignored the basic mathematical properties inherent in the use of percentage scores. The present study models performance on clinical tests of speech discrimination as a binomial variable. A binomial model was developed, and some of its characteristics were tested against data from 4120 scores obtained on the CID Auditory Test W-22. A table for determining significant deviations between scores was generated and compared to observed differences in half-list scores for the W-22 tests. Good agreement was found between predicted and observed values. Implications of the binomial characteristics of speech-discrimination scores are discussed.
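The binomial model implies that the standard error of a percentage score depends on both the true score and the number of test words, which is what drives the 25- vs 50-word comparison; a minimal sketch (the true score of 0.80 is illustrative):

```python
import math

def score_se(p, n_items):
    """Standard error of a percentage score on an n_items-word list
    when each word is an independent Bernoulli(p) trial."""
    return math.sqrt(p * (1 - p) / n_items)

# Doubling the list length shrinks the SE by a factor of sqrt(2),
# so half-list (25-word) scores are inherently noisier.
se_25 = score_se(0.80, 25)
se_50 = score_se(0.80, 50)
```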
Bilgic, Abdulbaki; Florkowski, Wojciech J
2007-06-01
This paper identifies factors that influence the demand for a bass fishing trip taken in the southeastern United States using a hurdle negative binomial count data model. The probability of fishing for a bass is estimated in the first stage and the fishing trip frequency is estimated in the second stage for individuals reporting bass fishing trips in the Southeast. The applied approach allows the decomposition of the effects of factors responsible for the decision to take a trip and the trip number. Calculated partial and total elasticities indicate a highly inelastic demand for the number of fishing trips as trip costs increase. However, the demand can be expected to increase if anglers experience a success measured by the number of caught fish or their size. Benefit estimates based on alternative estimation methods differ substantially, suggesting the need for testing each modeling approach applied in empirical studies.
On the p, q-binomial distribution and the Ising model
NASA Astrophysics Data System (ADS)
Lundow, P. H.; Rosengren, A.
2010-08-01
We employ p, q-binomial coefficients, a generalisation of the binomial coefficients, to describe the magnetisation distributions of the Ising model. For the complete graph this distribution corresponds exactly to the limit case p = q. We apply our investigation to the simple d-dimensional lattices for d = 1, 2, 3, 4, 5 and fit p, q-binomial distributions to our data, some of which are exact but most are sampled. For d = 1 and d = 5, the magnetisation distributions are remarkably well-fitted by p, q-binomial distributions. For d = 4 we are only slightly less successful, while for d = 2, 3 we see some deviations (with exceptions!) between the p, q-binomial and the Ising distribution. However, at certain temperatures near Tc the statistical moments of the fitted distribution agree with the moments of the sampled data within the precision of sampling. We begin the paper by giving results of the behaviour of the p, q-distribution and its moment growth exponents given a certain parameterisation of p, q. Since the moment exponents are known for the Ising model (or at least approximately for d = 3) we can predict how p, q should behave and compare this to our measured p, q. The results speak in favour of the p, q-binomial distribution's correctness regarding its general behaviour in comparison to the Ising model. The full extent to which they correctly model the Ising distribution, however, is not settled.
Martina, R; Kay, R; van Maanen, R; Ridder, A
2015-01-01
Clinical studies in overactive bladder have traditionally used analysis of covariance or nonparametric methods to analyse the number of incontinence episodes and other count data. It is known that if the underlying distributional assumptions of a particular parametric method do not hold, an alternative parametric method may be more efficient than a nonparametric one, which makes no assumptions regarding the underlying distribution of the data. Therefore, there are advantages in using methods based on the Poisson distribution or extensions of that method, which incorporate specific features that provide a modelling framework for count data. One challenge with count data is overdispersion, but methods are available that can account for this through the introduction of random effect terms in the modelling, and it is this modelling framework that leads to the negative binomial distribution. These models can also provide clinicians with a clearer and more appropriate interpretation of treatment effects in terms of rate ratios. In this paper, the previously used parametric and non-parametric approaches are contrasted with those based on Poisson regression and various extensions in trials evaluating solifenacin and mirabegron in patients with overactive bladder. In these applications, negative binomial models are seen to fit the data well. Copyright © 2014 John Wiley & Sons, Ltd.
Distribution-free Inference of Zero-inflated Binomial Data for Longitudinal Studies.
He, H; Wang, W J; Hu, J; Gallop, R; Crits-Christoph, P; Xia, Y L
2015-10-01
Count responses with structural zeros are very common in medical and psychosocial research, especially in alcohol and HIV research, and the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models are widely used for modeling such outcomes. However, as alcohol drinking outcomes such as days of drinking are counts within a given period, their distributions are bounded above by an upper limit (the total days in the period) and thus inherently follow a binomial or, in the presence of structural zeros, a zero-inflated binomial (ZIB) distribution, rather than a Poisson or ZIP distribution. In this paper, we develop a new semiparametric approach for modeling ZIB-like count responses for cross-sectional as well as longitudinal data. We illustrate this approach with both simulated and real study data.
Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach
Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao
2018-01-01
When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach, and has several attractive features compared to the existing models such as bivariate generalized linear mixed model (Chu and Cole, 2006) and Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having closed-form expression of likelihood function, and no constraints on the correlation parameter. More importantly, since the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. PMID:26303591
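The extra between-study variability that the beta-binomial marginal captures can be seen by comparing its variance with a plain binomial at the same mean; a minimal SciPy sketch (n, a, b are illustrative):

```python
from scipy.stats import betabinom, binom

n, a, b = 20, 2.0, 3.0
p = a / (a + b)                    # matched mean success probability

# Beta-binomial: study-specific probabilities drawn from Beta(a, b),
# so its variance exceeds the pure within-study binomial variance.
bb_var = betabinom.var(n, a, b)
bin_var = binom.var(n, p)
```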
Statistical methods for the beta-binomial model in teratology.
Yamamoto, E; Yanagimoto, T
1994-01-01
The beta-binomial model is widely used for analyzing teratological data involving littermates. Recent developments in statistical analyses of teratological data are briefly reviewed with emphasis on the model. For statistical inference of the parameters in the beta-binomial distribution, separation of the likelihood introduces a likelihood-based inference. This leads to reduced biases of estimators and also to improved accuracy of empirical significance levels of tests. Separate inference of the parameters can be conducted in a unified way. PMID:8187716
Library Book Circulation and the Beta-Binomial Distribution.
ERIC Educational Resources Information Center
Gelman, E.; Sichel, H. S.
1987-01-01
Argues that library book circulation is a binomial rather than a Poisson process, and that individual book popularities are continuous beta distributions. Three examples demonstrate the superiority of beta over negative binomial distribution, and it is suggested that a bivariate-binomial process would be helpful in predicting future book…
Rusli, Rusdi; Haque, Md Mazharul; King, Mark; Voon, Wong Shaw
2017-05-01
Mountainous highways are generally associated with a complex driving environment because of constrained road geometries, limited cross-section elements, inappropriate roadside features, and adverse weather conditions. As a result, single-vehicle (SV) crashes are overrepresented along mountainous roads, particularly in developing countries, but little is known about the roadway geometric, traffic and weather factors contributing to these SV crashes. As such, the main objective of the present study is to investigate SV crashes using detailed data obtained from a rigorous site survey and existing databases. The final dataset included a total of 56 variables representing road geometries including horizontal and vertical alignment, traffic characteristics, real-time weather conditions, cross-sectional elements, roadside features, and spatial characteristics. To account for structured heterogeneities resulting from multiple observations within a site and other unobserved heterogeneities, the study applied a random parameters negative binomial model. Results suggest that rainfall at the time of the crash is positively associated with SV crashes, but real-time visibility is negatively associated. The presence of a road shoulder, particularly a bitumen shoulder or wider shoulders, along mountainous highways is associated with fewer SV crashes. While speeding along downgrade slopes increases the likelihood of SV crashes, proper delineation decreases the likelihood. Findings of this study have significant implications for designing safer highways in mountainous areas, particularly in the context of a developing country. Copyright © 2017 Elsevier Ltd. All rights reserved.
On Models for Binomial Data with Random Numbers of Trials
Comulada, W. Scott; Weiss, Robert E.
2010-01-01
Summary A binomial outcome is a count s of the number of successes out of the total number of independent trials n = s + f, where f is a count of the failures. The n are random variables not fixed by design in many studies. Joint modeling of (s, f) can provide additional insight into the science and into the probability π of success that cannot be directly incorporated by the logistic regression model. Observations where n = 0 are excluded from the binomial analysis yet may be important to understanding how π is influenced by covariates. Correlation between s and f may exist and be of direct interest. We propose Bayesian multivariate Poisson models for the bivariate response (s, f), correlated through random effects. We extend our models to the analysis of longitudinal and multivariate longitudinal binomial outcomes. Our methodology was motivated by two disparate examples, one from teratology and one from an HIV tertiary intervention study. PMID:17688514
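A shared random effect is one way such correlation between s and f can arise; a minimal NumPy simulation showing positive correlation induced by a common log-normal frailty (a generic sketch, not the authors' exact model; all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
m = 100_000
# A shared log-normal random effect scales both rates, so subjects
# with large u tend to have both more successes and more failures.
u = rng.lognormal(mean=0.0, sigma=0.5, size=m)
s = rng.poisson(3.0 * u)          # successes
f = rng.poisson(2.0 * u)          # failures
n = s + f                         # total trials, random by construction

corr = np.corrcoef(s, f)[0, 1]    # positive, unlike independent Poissons
```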
Dong, Chunjiao; Clarke, David B; Yan, Xuedong; Khattak, Asad; Huang, Baoshan
2014-09-01
Crash data are collected through police reports and integrated with road inventory data for further analysis. Integrated police reports and inventory data yield correlated multivariate data for roadway entities (e.g., segments or intersections). Analysis of such data reveals important relationships that can help focus on high-risk situations and develop safety countermeasures. To understand relationships between crash frequencies and associated variables, while taking full advantage of the available data, multivariate random-parameters models are appropriate since they can simultaneously consider the correlation among the specific crash types and account for unobserved heterogeneity. However, a key issue that arises with correlated multivariate data is that the number of crash-free samples increases as crash counts are divided into many categories. In this paper, we describe a multivariate random-parameters zero-inflated negative binomial (MRZINB) regression model for jointly modeling crash counts. The full Bayesian method is employed to estimate the model parameters. Crash frequencies at urban signalized intersections in Tennessee are analyzed. The paper investigates the performance of MZINB and MRZINB regression models in establishing the relationship between crash frequencies, pavement conditions, traffic factors, and geometric design features of roadway intersections. Compared to the MZINB model, the MRZINB model identifies additional statistically significant factors and provides better goodness of fit in developing the relationships. The empirical results show that the MRZINB model possesses most of the desirable statistical properties in terms of its ability to accommodate unobserved heterogeneity and excess zero counts in correlated data. Notably, in the random-parameters MZINB model, the estimated parameters vary significantly across intersections for different crash types. Copyright © 2014 Elsevier Ltd. All rights reserved.
Sheu, Mei-Ling; Hu, Teh-Wei; Keeler, Theodore E; Ong, Michael; Sung, Hai-Yen
2004-08-01
The objective of this paper is to determine the price sensitivity of smokers in their consumption of cigarettes, using evidence from a major increase in California cigarette prices due to Proposition 10 and the Tobacco Settlement. The study sample consists of individual survey data from Behavioral Risk Factor Survey (BRFS) and price data from the Bureau of Labor Statistics between 1996 and 1999. A zero-inflated negative binomial (ZINB) regression model was applied for the statistical analysis. The statistical model showed that price did not have an effect on reducing the estimated prevalence of smoking. However, it indicated that among smokers the price elasticity was at the level of -0.46 and statistically significant. Since smoking prevalence is significantly lower than it was a decade ago, price increases are becoming less effective as an inducement for hard-core smokers to quit, although they may respond by decreasing consumption. For those who only smoke occasionally (many of them being young adults) price increases alone may not be an effective inducement to quit smoking. Additional underlying behavioral factors need to be identified so that more effective anti-smoking strategies can be developed.
Binomial test statistics using Psi functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Kimiko o
2007-01-01
For the negative binomial model (probability generating function (p + 1 - pt)^(-k)), a logarithmic derivative is the Psi function difference ψ(k + x) - ψ(k); this and its derivatives lead to a test statistic to decide on the validity of a specified model. The test statistic uses a data base, so a comparison is available between theory and application. Note that the test function is not dominated by outliers. Applications to (i) Fisher's tick data, (ii) accidents data, (iii) Weldon's dice data are included.
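For integer x, the Psi function difference telescopes into a finite harmonic-type sum, which gives a quick sanity check on the statistic's building block; a minimal SciPy sketch:

```python
from scipy.special import digamma

def psi_diff(k, x):
    """psi(k + x) - psi(k); for integer x this equals
    sum over i of 1 / (k + i) for i = 0, ..., x - 1."""
    return digamma(k + x) - digamma(k)

# Verify the telescoping identity psi(z + 1) = psi(z) + 1/z.
k, x = 2.5, 7
telescoped = sum(1.0 / (k + i) for i in range(x))
```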
On extinction time of a generalized endemic chain-binomial model.
Aydogmus, Ozgur
2016-09-01
We considered a chain-binomial epidemic model not conferring immunity after infection. Mean field dynamics of the model are analyzed and conditions for the existence of a stable endemic equilibrium are determined. The behavior of the chain-binomial process is probabilistically linked to the mean field equation. As a result of this link, we were able to show that the mean extinction time of the epidemic increases at least exponentially as the population size grows. We also present simulation results for the process to validate our analytical findings. Copyright © 2016 Elsevier Inc. All rights reserved.
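A chain-binomial process without immunity (an SIS-type chain) is straightforward to simulate, which is how extinction times can be compared against analytical predictions; a minimal sketch (population size, transmission and recovery probabilities are illustrative):

```python
import random

def sis_chain_binomial(n_pop, i0, beta, gamma, t_max, rng):
    """One run of an SIS chain-binomial: each susceptible is infected
    with prob 1 - (1 - beta)^I_t; each infected recovers (back to
    susceptible, no immunity) with prob gamma. Returns the extinction
    time, or t_max if the epidemic is still alive at t_max."""
    infected = i0
    for t in range(1, t_max + 1):
        p_inf = 1 - (1 - beta) ** infected
        new_inf = sum(rng.random() < p_inf for _ in range(n_pop - infected))
        recoveries = sum(rng.random() < gamma for _ in range(infected))
        infected = infected - recoveries + new_inf
        if infected == 0:
            return t
    return t_max

rng = random.Random(42)
times = [sis_chain_binomial(30, 3, beta=0.01, gamma=0.5, t_max=500, rng=rng)
         for _ in range(200)]
```

Repeating such runs across increasing population sizes is the natural way to check the exponential growth of the mean extinction time numerically.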
Discrimination of numerical proportions: A comparison of binomial and Gaussian models.
Raidvee, Aire; Lember, Jüri; Allik, Jüri
2017-01-01
Observers discriminated the numerical proportion of two sets of elements (N = 9, 13, 33, and 65) that differed either by color or orientation. According to the standard Thurstonian approach, the accuracy of proportion discrimination is determined by irreducible noise in the nervous system that stochastically transforms the number of presented visual elements onto a continuum of psychological states representing numerosity. As an alternative to this customary approach, we propose a Thurstonian-binomial model, which assumes discrete perceptual states, each of which is associated with a certain visual element. It is shown that the probability β with which each visual element can be noticed and registered by the perceptual system can explain data of numerical proportion discrimination at least as well as the continuous Thurstonian-Gaussian model, and better, if the greater parsimony of the Thurstonian-binomial model is taken into account using AIC model selection. We conclude that Gaussian and binomial models represent two different fundamental principles (internal noise vs. using only a fraction of the available information), both of which are plausible descriptions of visual perception.
Statistical inference for time course RNA-Seq data using a negative binomial mixed-effect model.
Sun, Xiaoxiao; Dalpiaz, David; Wu, Di; S Liu, Jun; Zhong, Wenxuan; Ma, Ping
2016-08-26
Accurate identification of differentially expressed (DE) genes in time course RNA-Seq data is crucial for understanding the dynamics of the transcriptional regulatory network. However, most of the available methods treat gene expressions at different time points as replicates and test the significance of the mean expression difference between treatments or conditions irrespective of time. They thus fail to identify many DE genes with different profiles across time. In this article, we propose a negative binomial mixed-effect model (NBMM) to identify DE genes in time course RNA-Seq data. In the NBMM, mean gene expression is characterized by a fixed effect, and time dependency is described by random effects. The NBMM is very flexible and can be fitted to both unreplicated and replicated time course RNA-Seq data via a penalized likelihood method. By comparing gene expression profiles over time, we further classify the DE genes into two subtypes to enhance the understanding of expression dynamics. A significance test for detecting DE genes is derived using a Kullback-Leibler distance ratio. Additionally, a significance test for gene sets is developed using a gene set score. Simulation analysis shows that the NBMM outperforms currently available methods for detecting DE genes and gene sets. Moreover, our real data analysis of fruit fly developmental time course RNA-Seq data demonstrates that the NBMM identifies biologically relevant genes which are well justified by gene ontology analysis. The proposed method is powerful and efficient at detecting biologically relevant DE genes and gene sets in time course RNA-Seq data.
Four Bootstrap Confidence Intervals for the Binomial-Error Model.
ERIC Educational Resources Information Center
Lin, Miao-Hsiang; Hsiung, Chao A.
1992-01-01
Four bootstrap methods are identified for constructing confidence intervals for the binomial-error model. The extent to which the methods yield similar results is examined, and the theoretical foundation of each method, its relevance, and its range of application in modeling true-score uncertainty are discussed. (SLD)
Selecting Tools to Model Integer and Binomial Multiplication
ERIC Educational Resources Information Center
Pratt, Sarah Smitherman; Eddy, Colleen M.
2017-01-01
Mathematics teachers frequently provide concrete manipulatives to students during instruction; however, the rationale for using certain manipulatives in conjunction with concepts may not be explored. This article focuses on area models that are currently used in classrooms to provide concrete examples of integer and binomial multiplication. The…
Using beta binomials to estimate classification uncertainty for ensemble models.
Clark, Robert D; Liang, Wenkel; Lee, Adam C; Lawless, Michael S; Fraczkiewicz, Robert; Waldman, Marvin
2014-01-01
Quantitative structure-activity relationship (QSAR) models have enormous potential for reducing drug discovery and development costs as well as the need for animal testing. Great strides have been made in estimating their overall reliability, but to fully realize that potential, researchers and regulators need to know how confident they can be in individual predictions. Submodels in an ensemble model that have been trained on different subsets of a shared training pool represent multiple samples of the model space, and the degree of agreement among them contains information on the reliability of ensemble predictions. For artificial neural network ensembles (ANNEs) using two different methods for determining ensemble classification (one using vote tallies, the other averaging individual network outputs), we have found that the distribution of predictions across positive vote tallies can be reasonably well-modeled as a beta binomial distribution, as can the distribution of errors. Together, these two distributions can be used to estimate the probability that a given predictive classification will be in error. Large data sets comprising logP, Ames mutagenicity, and CYP2D6 inhibition data are used to illustrate and validate the method. The distributions of predictions and errors for the training pool accurately predicted the distribution of predictions and errors for large external validation sets, even when the numbers of positive and negative examples in the training pool were not balanced. Moreover, the likelihood of a given compound being prospectively misclassified as a function of the degree of consensus between networks in the ensemble could in most cases be estimated accurately from the fitted beta binomial distributions for the training pool. Confidence in an individual predictive classification by an ensemble model can be accurately assessed by examining the distributions of predictions and errors as a function of the degree of agreement among the constituent
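The vote-tally idea above lends itself to a compact sketch: model the distribution of positive votes across an n-network ensemble as a beta-binomial and fit its two shape parameters by maximum likelihood. The pmf below is the standard beta-binomial; the tally data, grid ranges, and ensemble size n = 10 are hypothetical stand-ins, and a coarse grid search replaces the proper optimizer a real fit would use.

```python
import math

def betabinom_pmf(k, n, a, b):
    """Beta-binomial pmf, computed via log-gamma for numerical stability."""
    lbeta = lambda x, y: math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    return math.exp(math.log(math.comb(n, k)) + lbeta(k + a, n - k + b) - lbeta(a, b))

def fit_betabinom(tallies, n):
    """Coarse grid-search MLE for (a, b); a real fit would use an optimizer."""
    grid = [x / 2 for x in range(1, 21)]  # candidate shapes 0.5, 1.0, ..., 10.0
    return max(((a, b) for a in grid for b in grid),
               key=lambda ab: sum(math.log(betabinom_pmf(k, n, *ab)) for k in tallies))

tallies = [0, 1, 1, 2, 5, 8, 9, 10, 10, 10]   # hypothetical positive-vote tallies
a, b = fit_betabinom(tallies, n=10)
print(round(sum(betabinom_pmf(k, 10, a, b) for k in range(11)), 6))  # -> 1.0
```

The U-shaped tally distribution typical of confident ensembles corresponds to a, b < 1; overdispersion relative to a plain binomial is what the two extra shape parameters capture.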
Analysis of railroad tank car releases using a generalized binomial model.
Liu, Xiang; Hong, Yili
2015-11-01
The United States is experiencing an unprecedented boom in shale oil production, leading to a dramatic growth in petroleum crude oil traffic by rail. In 2014, U.S. railroads carried over 500,000 tank carloads of petroleum crude oil, up from 9500 in 2008 (a 5300% increase). In light of continual growth in crude oil by rail, there is an urgent national need to manage this emerging risk. This need has been underscored in the wake of several recent crude oil release incidents. In contrast to highway transport, which usually involves a tank trailer, a crude oil train can carry a large number of tank cars, having the potential for a large, multiple-tank-car release incident. Previous studies exclusively assumed that railroad tank car releases in the same train accident are mutually independent, thereby estimating the number of tank cars releasing given the total number of tank cars derailed based on a binomial model. This paper specifically accounts for dependent tank car releases within a train accident. We estimate the number of tank cars releasing given the number of tank cars derailed based on a generalized binomial model. The generalized binomial model provides a significantly better description for the empirical tank car accident data through our numerical case study. This research aims to provide a new methodology and new insights regarding the further development of risk management strategies for improving railroad crude oil transportation safety. Copyright © 2015 Elsevier Ltd. All rights reserved.
Chan, Ta-Chien; Teng, Yung-Chu; Hwang, Jing-Shiang
2015-02-21
Emerging novel influenza outbreaks have increasingly been a threat to the public and a major concern of public health departments. Real-time data in seamless surveillance systems, such as health insurance claims data for influenza-like illness (ILI), are ready for analysis, making it highly desirable to develop practical techniques to analyze such ready-made data for outbreak detection so that the public can receive timely influenza epidemic warnings. This study proposes a simple and effective approach to analyze area-based health insurance claims data, including outpatient and emergency department (ED) visits, for early detection of any aberrations of ILI. The health insurance claims data during 2004-2009 from a national health insurance research database were used for developing early detection methods. The proposed approach fitted the daily new ILI visits and monitored the Pearson residuals directly for aberration detection. First, negative binomial regression was used for both outpatient and ED visits to adjust for potentially influential factors such as holidays, weekends, seasons, temporal dependence, and temperature. Second, if a Pearson residual exceeded 1.96, an aberration signal was issued. The empirical validation of the model was done with 2008 and 2009 data. In addition, we designed a simulation study to compare the time to outbreak detection, non-detection probability, and false alarm rate between the proposed method and modified CUSUM. The model successfully detected the aberrations of the 2009 pandemic (H1N1) influenza virus in northern, central, and southern Taiwan. The proposed approach was more sensitive in identifying aberrations in ED visits than in outpatient visits. Simulation studies demonstrated that the proposed approach detected aberrations earlier, with a lower non-detection probability and a lower mean false alarm rate, than modified CUSUM methods. The proposed simple approach was able to filter out temporal
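The two-step procedure described above (fit a negative binomial mean, then signal whenever a Pearson residual exceeds 1.96) can be sketched directly. The counts, fitted means, and dispersion k below are toy values, not the study's data, and the NB2 variance form mu + mu^2/k is an assumed parameterization.

```python
import math

def nb_pearson_residual(y, mu, k):
    """Pearson residual under the NB2 variance function mu + mu^2/k."""
    return (y - mu) / math.sqrt(mu + mu * mu / k)

def flag_aberrations(counts, fitted_means, k, threshold=1.96):
    """Return the indices (days) whose residual exceeds the threshold."""
    return [t for t, (y, mu) in enumerate(zip(counts, fitted_means))
            if nb_pearson_residual(y, mu, k) > threshold]

daily_ili = [52, 48, 55, 120, 60]   # observed daily ILI visits (toy data)
expected  = [50, 50, 50, 50, 50]    # means from a fitted NB regression (toy)
print(flag_aberrations(daily_ili, expected, k=10))  # -> [3]
```

Day 3 is flagged because its residual, 70 / sqrt(300) ≈ 4.04, clears the one-sided 1.96 threshold; the other days stay below it.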
Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P
2014-06-26
To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study, a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination was low. The robust Poisson models are more robust (or less sensitive) to outliers than the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
Kim, Dae-Hwan; Ramjan, Lucie M; Mak, Kwok-Kei
2016-01-01
Traffic safety is a significant public health challenge, and vehicle crashes account for the majority of injuries. This study aims to identify whether drivers' characteristics and past traffic violations may predict vehicle crashes in Korea. A total of 500,000 drivers were randomly selected from the 11.6 million driver records of the Ministry of Land, Transport and Maritime Affairs in Korea. Records of traffic crashes were obtained from the archives of the Korea Insurance Development Institute. After matching the past violation history for the period 2004-2005 with the number of crashes in year 2006, a total of 488,139 observations were used for the analysis. A zero-inflated negative binomial model was used to determine the incident risk ratio (IRR) of vehicle crashes by past violations of individual drivers. The included covariates were driver's age, gender, district of residence, vehicle choice, and driving experience. Drivers violating (1) a hit-and-run or drunk driving regulation at least once and (2) a signal, central line, or speed regulation more than once had a higher risk of a vehicle crash, with respective IRRs of 1.06 and 1.15. Furthermore, female gender, a younger age, fewer years of driving experience, and middle-sized vehicles were all significantly associated with a higher likelihood of vehicle crashes. Drivers' demographic characteristics and past traffic violations could predict vehicle crashes in Korea. Greater resources should be assigned to the provision of traffic safety education programs for the high-risk driver groups.
Binomial leap methods for simulating stochastic chemical kinetics.
Tian, Tianhai; Burrage, Kevin
2004-12-01
This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables, whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when a larger stepsize is used. In this approach a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the numbers of molecules that undergo this reaction channel. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels. Samples for the total reaction number are not greater than the molecular number of this species. In addition, probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and a significant improvement in efficiency over existing approaches. (c) 2004 American Institute of Physics.
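A minimal sketch of the key property claimed above, that a binomial leap can never fire more reactions than there are molecules, for a single first-order decay channel A -> 0. The rate constant, stepsize, and stdlib Bernoulli-sum sampler are illustrative choices, not the paper's implementation.

```python
import random

def binom_draw(n, p):
    """Binomial sample as a sum of Bernoulli trials (stdlib-only sketch)."""
    return sum(random.random() < p for _ in range(n))

def binomial_leap_decay(x0, c, tau, t_end, seed=1):
    """Binomial tau-leap for the decay channel A -> 0 with rate constant c.
    Each leap fires k ~ Binomial(x, min(1, c*tau)) reactions, so k <= x and
    the molecule count can never go negative (unlike a Poisson leap)."""
    random.seed(seed)
    x, t = x0, 0.0
    while t < t_end and x > 0:
        x -= binom_draw(x, min(1.0, c * tau))
        t += tau
    return x

x_final = binomial_leap_decay(1000, c=0.1, tau=0.1, t_end=10.0)
print(0 <= x_final <= 1000)  # -> True: counts stay physical by construction
```

A Poisson leap would instead draw k ~ Poisson(c*tau*x), which has unbounded support and can overshoot x at large stepsizes; the binomial bound is what permits the larger tau.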
CUMBIN - CUMULATIVE BINOMIAL PROGRAMS
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative binomial program, CUMBIN, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), can be used independently of one another. CUMBIN can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CUMBIN calculates the probability that a system of n components has at least k operating if the probability that any one is operating is p and the components are independent. Equivalently, this is the reliability of a k-out-of-n system having independent components with common reliability p. CUMBIN can evaluate the incomplete beta distribution for two positive integer arguments. CUMBIN can also evaluate the cumulative F distribution and the negative binomial distribution, and can determine the sample size in a test design. CUMBIN is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. The CUMBIN program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMBIN was developed in 1988.
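The core quantity CUMBIN computes, the reliability of a k-out-of-n system of independent components with common reliability p, is a short sum. A stdlib sketch (not the original C program):

```python
from math import comb

def k_out_of_n_reliability(n, k, p):
    """P(at least k of n independent components work), each working w.p. p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A 2-out-of-3 system with component reliability 0.9:
print(round(k_out_of_n_reliability(3, 2, 0.9), 6))  # -> 0.972
```

The 2-out-of-3 value is 3(0.9)²(0.1) + (0.9)³ = 0.243 + 0.729 = 0.972, i.e. the redundant system is less reliable than a single better component would need to be but far better than any single 0.9 component.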
Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model
ERIC Educational Resources Information Center
Kim, Kyung Yong; Lee, Won-Chan
2018-01-01
Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…
Wei, Feng; Lovegrove, Gordon
2013-12-01
Today, North American governments are more willing to consider compact neighborhoods with increased use of sustainable transportation modes. Bicycling, one of the most effective modes for short trips with distances less than 5 km, is being encouraged. However, as vulnerable road users (VRUs), cyclists are more likely to be injured when involved in collisions. In order to create a safe road environment for them, evaluating cyclists' road safety at a macro level in a proactive way is necessary. In this paper, different generalized linear regression methods for collision prediction model (CPM) development are reviewed and previous studies on micro-level and macro-level bicycle-related CPMs are summarized. On the basis of insights gained in the exploration stage, this paper also reports on efforts to develop negative binomial models for bicycle-auto collisions at a community-based, macro level. Data came from the Central Okanagan Regional District (CORD), of British Columbia, Canada. The model results revealed two types of statistical associations between collisions and each explanatory variable: (1) An increase in bicycle-auto collisions is associated with an increase in total lane kilometers (TLKM), bicycle lane kilometers (BLKM), bus stops (BS), traffic signals (SIG), intersection density (INTD), and arterial-local intersection percentage (IALP). (2) A decrease in bicycle collisions is associated with an increase in the number of drive commuters (DRIVE) and in the percentage of drive commuters (DRP). These results support our hypothesis that in North America, with its current low levels of bicycle use (<4%), we can initially expect to see an increase in bicycle collisions as cycle mode share increases. However, as bicycle mode share increases beyond some unknown 'critical' level, our hypothesis also predicts a net safety improvement. To test this hypothesis and to further explore the statistical relationships between bicycle mode split and overall road
A Three-Parameter Generalisation of the Beta-Binomial Distribution with Applications
1987-07-01
Rust, R.T. and Klompmaker, J.E. (1981). Improving the estimation procedure for the beta binomial t.v. exposure model. Journal of Marketing Research, 18, 442-448. Sabavala, D.J. and Morrison, D.G. (1977). Television show loyalty: a beta-binomial model using recall data. Journal of Advertising
Design and analysis of three-arm trials with negative binomially distributed endpoints.
Mütze, Tobias; Munk, Axel; Friede, Tim
2016-02-20
A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations will be derived. The performance of the proposed test will be assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We found that the proposed Wald-type test with a restricted variance estimator performed well across the considered scenarios and is therefore recommended for application in clinical trials. The methods proposed are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN. Copyright © 2015 John Wiley & Sons, Ltd.
A Negative Binomial Regression Model for Accuracy Tests
ERIC Educational Resources Information Center
Hung, Lai-Fa
2012-01-01
Rasch used a Poisson model to analyze errors and speed in reading tests. An important property of the Poisson distribution is that the mean and variance are equal. However, in social science research, it is very common for the variance to be greater than the mean (i.e., the data are overdispersed). This study embeds the Rasch model within an…
ERIC Educational Resources Information Center
Levin, Eugene M.
1981-01-01
Student access to programmable calculators and computer terminals, coupled with a familiarity with baseball, provides opportunities to enhance their understanding of the binomial distribution and other aspects of analysis. (MP)
O’Donnell, Katherine M.; Thompson, Frank R.; Semlitsch, Raymond D.
2015-01-01
Detectability of individual animals is highly variable and nearly always < 1; imperfect detection must be accounted for to reliably estimate population sizes and trends. Hierarchical models can simultaneously estimate abundance and effective detection probability, but there are several different mechanisms that cause variation in detectability. Neglecting temporary emigration can lead to biased population estimates because availability and conditional detection probability are confounded. In this study, we extend previous hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model’s potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3–5 surveys each spring and fall 2010–2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and protocols that
The Binomial Model in Fluctuation Analysis of Quantal Neurotransmitter Release
Quastel, D. M. J.
1997-01-01
The mathematics of the binomial model for quantal neurotransmitter release is considered in general terms, to explore what information might be extractable from statistical aspects of data. For an array of N statistically independent release sites, each with a release probability p, the compound binomial always pertains, with m̄ = n′p′, p′ ≡ 1 − var(m)/m̄ = p̄(1 + cv_p²), and n′ ≡ m̄/p′ = m̄²/[m̄ − var(m)]. Unless n′ is invariant with ambient conditions or stimulation paradigms, the simple binomial (cv_p = 0) is untenable and n′ is neither N nor the number of “active” sites or sites with a quantum available. At each site p = p_o·p_A, where p_o is the output probability if a site is “eligible” or “filled” despite previous quantal discharge, and p_A (eligibility probability) depends at least on the replenishment rate, p_o, and interstimulus time. Assuming stochastic replenishment, a simple algorithm allows calculation of the full statistical composition of outputs for any hypothetical combinations of p_o's and refill rates, for any stimulation paradigm and spontaneous release. A rise in n′ (reduced cv_p) tends to occur whenever p_o varies widely between sites, with a raised stimulation frequency or factors tending to increase p_o's. Unlike 
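The apparent binomial parameters p′ and n′ above follow from the analytic mean and variance of a compound binomial. A sketch with hypothetical site probabilities illustrates why heterogeneity between sites (cv_p > 0) drives n′ below the true N:

```python
def apparent_binomial(site_probs):
    """Apparent (simple-binomial) parameters recovered from the analytic
    mean and variance of a compound binomial over heterogeneous sites."""
    m_bar = sum(site_probs)                       # mean quantal content
    var_m = sum(p * (1 - p) for p in site_probs)  # compound-binomial variance
    p_prime = 1 - var_m / m_bar                   # equals p_bar * (1 + cv_p^2)
    n_prime = m_bar / p_prime                     # equals N / (1 + cv_p^2) <= N
    return p_prime, n_prime

# Hypothetical release probabilities at N = 4 sites:
p_prime, n_prime = apparent_binomial([0.9, 0.5, 0.2, 0.1])
print(n_prime < 4)  # -> True: heterogeneity makes n' underestimate the true N
```

When all sites share one probability (cv_p = 0) the recovered n′ equals N exactly; with the heterogeneous values above, n′ ≈ 2.6, well short of the four actual sites.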
I Remember You: Independence and the Binomial Model
ERIC Educational Resources Information Center
Levine, Douglas W.; Rockhill, Beverly
2006-01-01
We focus on the problem of ignoring statistical independence. A binomial experiment is used to determine whether judges could match, based on looks alone, dogs to their owners. The experimental design introduces dependencies such that the probability of a given judge correctly matching a dog and an owner changes from trial to trial. We show how…
Poulin, Robert; Lagrue, Clément
2017-01-01
The spatial distribution of individuals of any species is a basic concern of ecology. The spatial distribution of parasites matters to control and conservation of parasites that affect human and nonhuman populations. This paper develops a quantitative theory to predict the spatial distribution of parasites based on the distribution of parasites in hosts and the spatial distribution of hosts. Four models are tested against observations of metazoan hosts and their parasites in littoral zones of four lakes in Otago, New Zealand. These models differ in two dichotomous assumptions, constituting a 2 × 2 theoretical design. One assumption specifies whether the variance function of the number of parasites per host individual is described by Taylor's law (TL) or the negative binomial distribution (NBD). The other assumption specifies whether the numbers of parasite individuals within each host in a square meter of habitat are independent or perfectly correlated among host individuals. We find empirically that the variance–mean relationship of the numbers of parasites per square meter is very well described by TL but is not well described by NBD. Two models that posit perfect correlation of the parasite loads of hosts in a square meter of habitat approximate observations much better than two models that posit independence of parasite loads of hosts in a square meter, regardless of whether the variance–mean relationship of parasites per host individual obeys TL or NBD. We infer that high local interhost correlations in parasite load strongly influence the spatial distribution of parasites. Local hotspots could influence control and conservation of parasites. PMID:27994156
Problems on Divisibility of Binomial Coefficients
ERIC Educational Resources Information Center
Osler, Thomas J.; Smoak, James
2004-01-01
Twelve unusual problems involving divisibility of the binomial coefficients are represented in this article. The problems are listed in "The Problems" section. All twelve problems have short solutions which are listed in "The Solutions" section. These problems could be assigned to students in any course in which the binomial theorem and Pascal's…
Alekseeva, N P; Alekseev, A O; Vakhtin, Iu B; Kravtsov, V Iu; Kuzovatov, S N; Skorikova, T I
2008-01-01
Distributions of nuclear morphology anomalies in transplantable rhabdomyosarcoma RA-23 cell populations were investigated under the effect of ionizing radiation from 0 to 45 Gy. Internuclear bridges, nuclear protrusions, and dumbbell-shaped nuclei were counted as morphological anomalies. Empirical distributions of the number of anomalies per 100 nuclei were used. A reentrant binomial distribution was found to be an adequate model: a sum of binomial random variables in which the number of summands is itself binomially distributed has such a distribution. The means of these random variables were named, respectively, the internal and external average reentrant components, and their maximum likelihood estimates were obtained. The statistical properties of these estimates were investigated by means of statistical modeling. Although the radiation dose correlated equally significantly with the average number of nuclear anomalies in both settings, in cell populations examined two to three cell cycles after irradiation in vivo the dose correlated significantly with the internal average reentrant component, whereas in remote descendants of cell transplants irradiated in vitro it correlated with the external one.
Application of binomial-edited CPMG to shale characterization
Washburn, Kathryn E.; Birdwell, Justin E.
2014-01-01
Unconventional shale resources may contain a significant amount of hydrogen in organic solids such as kerogen, but it is not possible to directly detect these solids with many NMR systems. Binomial-edited pulse sequences capitalize on magnetization transfer between solids, semi-solids, and liquids to provide an indirect method of detecting solid organic materials in shales. When the organic solids can be directly measured, binomial-editing helps distinguish between different phases. We applied a binomial-edited CPMG pulse sequence to a range of natural and experimentally-altered shale samples. The most substantial signal loss is seen in shales rich in organic solids while fluids associated with inorganic pores seem essentially unaffected. This suggests that binomial-editing is a potential method for determining fluid locations, solid organic content, and kerogen–bitumen discrimination.
Abstract knowledge versus direct experience in processing of binomial expressions
Morgan, Emily; Levy, Roger
2016-01-01
We ask whether word order preferences for binomial expressions of the form A and B (e.g. bread and butter) are driven by abstract linguistic knowledge of ordering constraints referencing the semantic, phonological, and lexical properties of the constituent words, or by prior direct experience with the specific items in question. Using forced-choice and self-paced reading tasks, we demonstrate that online processing of never-before-seen binomials is influenced by abstract knowledge of ordering constraints, which we estimate with a probabilistic model. In contrast, online processing of highly frequent binomials is primarily driven by direct experience, which we estimate from corpus frequency counts. We propose a trade-off wherein processing of novel expressions relies upon abstract knowledge, while reliance upon direct experience increases with increased exposure to an expression. Our findings support theories of language processing in which both compositional generation and direct, holistic reuse of multi-word expressions play crucial roles. PMID:27776281
Martínez-Ferrer, María Teresa; Ripollés, José Luís; Garcia-Marí, Ferran
2006-06-01
The spatial distribution of the citrus mealybug, Planococcus citri (Risso) (Homoptera: Pseudococcidae), was studied in citrus groves in northeastern Spain. Constant precision sampling plans were designed for all developmental stages of citrus mealybug under the fruit calyx, for late stages on fruit, and for females on trunks and main branches; more than 66, 286, and 101 data sets, respectively, were collected from nine commercial fields during 1992-1998. Dispersion parameters were determined using Taylor's power law, giving aggregated spatial patterns for citrus mealybug populations in three locations of the tree sampled. A significant relationship between the number of insects per organ and the percentage of occupied organs was established using either Wilson and Room's binomial model or Kono and Sugino's empirical formula. Constant precision (E = 0.25) sampling plans (i.e., enumerative plans) for estimating mean densities were developed using Green's equation and the two binomial models. For making management decisions, enumerative counts may be less labor-intensive than binomial sampling. Therefore, we recommend enumerative sampling plans for the use in an integrated pest management program in citrus. Required sample sizes for the range of population densities near current management thresholds, in the three plant locations calyx, fruit, and trunk were 50, 110-330, and 30, respectively. Binomial sampling, especially the empirical model, required a higher sample size to achieve equivalent levels of precision.
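Green-style constant-precision (enumerative) plans of the kind described above fix the relative precision E and derive the required sample size from Taylor's power law s² = a·m^b, giving n = a·m^(b−2)/E². The a and b below are illustrative values, not the paper's fitted coefficients:

```python
import math

def required_sample_size(mean_density, a, b, precision=0.25):
    """n = a * m^(b-2) / E^2: sample size for fixed relative precision E
    when the variance follows Taylor's power law s^2 = a * m^b."""
    return math.ceil(a * mean_density ** (b - 2) / precision ** 2)

# Illustrative (not fitted) Taylor coefficients a = 2.0, b = 1.4:
print(required_sample_size(0.5, a=2.0, b=1.4))  # -> 49
print(required_sample_size(2.0, a=2.0, b=1.4))  # -> 22
```

Because b < 2 here, the exponent b − 2 is negative and denser populations need fewer samples, which is why the paper's required sample sizes are quoted for densities near the management thresholds, where populations are sparse and sampling effort peaks.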
Zero adjusted models with applications to analysing helminths count data.
Chipeta, Michael G; Ngwira, Bagrey M; Simoonga, Christopher; Kazembe, Lawrence N
2014-11-27
It is common in public health and epidemiology that the outcome of interest is a count of event occurrences. Analysing such data using classical linear models is mostly inappropriate, even after transformation of the outcome variables, due to overdispersion. Zero-adjusted mixture count models such as zero-inflated and hurdle count models are applied to count data when overdispersion and excess zeros exist. The main objective of the current paper is to apply such models to analyse risk factors associated with human helminths (S. haematobium), particularly in a case where there is a high proportion of zero counts. The data were collected during a community-based randomised control trial assessing the impact of mass drug administration (MDA) with praziquantel in Malawi, and a school-based cross-sectional epidemiology survey in Zambia. Count data models including traditional (Poisson and negative binomial) models, zero-modified models (zero-inflated Poisson and zero-inflated negative binomial) and hurdle models (Poisson logit hurdle and negative binomial logit hurdle) were fitted and compared. Using the Akaike information criterion (AIC), the negative binomial logit hurdle (NBLH) and zero-inflated negative binomial (ZINB) models showed the best performance in both datasets. With regard to capturing zero counts, these models performed better than the others. This paper showed that the zero-modified NBLH and ZINB models are more appropriate methods for the analysis of data with excess zeros. The choice between hurdle and zero-inflated models should be based on the aim and endpoints of the study.
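The competing count models compared above differ only in how they treat zeros, and their pmfs are easy to state side by side. A sketch for integer dispersion r (the pi, r, p values are arbitrary toy choices):

```python
from math import comb

def nb_pmf(k, r, p):
    """Negative binomial pmf for integer r: C(k+r-1, k) * p^r * (1-p)^k."""
    return comb(k + r - 1, k) * p ** r * (1 - p) ** k

def zinb_pmf(k, pi, r, p):
    """Zero-inflated NB: extra zeros with probability pi, else an NB draw
    (so structural and sampling zeros are mixed at k = 0)."""
    return (pi if k == 0 else 0.0) + (1 - pi) * nb_pmf(k, r, p)

def hurdle_nb_pmf(k, pi0, r, p):
    """Hurdle NB: zeros with probability pi0; positives come from a
    zero-truncated NB, so all zeros are generated by one process."""
    return pi0 if k == 0 else (1 - pi0) * nb_pmf(k, r, p) / (1 - nb_pmf(0, r, p))

total = sum(zinb_pmf(k, pi=0.3, r=2, p=0.5) for k in range(200))
print(round(total, 6))  # -> 1.0: the mixture is a proper distribution
```

The structural difference visible in the code mirrors the paper's closing advice: the hurdle model treats all zeros as one process crossing a threshold, while the zero-inflated model distinguishes structural from sampling zeros, so the choice should follow the study's endpoints.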
Lytras, Theodore; Georgakopoulou, Theano; Tsiodras, Sotirios
2018-04-01
Greece is currently experiencing a large measles outbreak, in the context of multiple similar outbreaks across Europe. We devised and applied a modified chain-binomial epidemic model, requiring very simple data, to estimate the transmission parameters of this outbreak. Model results indicate sustained measles transmission among the Greek Roma population, necessitating a targeted mass vaccination campaign to halt further spread of the epidemic. Our model may be useful for other countries facing similar measles outbreaks.
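The classic member of the chain-binomial family referenced above is the Reed-Frost model, sketched here with hypothetical parameters; the authors' own modification is not reproduced.

```python
import random

def reed_frost(s0, i0, q, max_gen=100, seed=3):
    """Reed-Frost chain binomial: each susceptible independently escapes each
    of the I_t infectives with probability q, so I_{t+1} ~ Binomial(S_t, 1 - q^I_t)."""
    random.seed(seed)
    s, i, incidence = s0, i0, [i0]
    while i > 0 and len(incidence) < max_gen:
        p_inf = 1 - q ** i
        new_i = sum(random.random() < p_inf for _ in range(s))  # binomial draw
        s, i = s - new_i, new_i
        incidence.append(new_i)
    return incidence

chain = reed_frost(s0=50, i0=1, q=0.96)
print(sum(chain) <= 51)  # -> True: total cases bounded by the population
```

Only generation-by-generation case counts are needed to fit such a model, which is the "very simple data" property the abstract highlights; the escape probability q (equivalently the per-contact transmission probability 1 − q) is the parameter estimated from the observed chain.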
Using the β-binomial distribution to characterize forest health
S.J. Zarnoch; R.L. Anderson; R.M. Sheffield
1995-01-01
The β-binomial distribution is suggested as a model for describing and analyzing the dichotomous data obtained from programs monitoring the health of forests in the United States. Maximum likelihood estimation of the parameters is given as well as asymptotic likelihood ratio tests. The procedure is illustrated with data on dogwood anthracnose infection (caused...
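The β-binomial distribution arises by letting the binomial success probability itself be Beta(a, b) distributed, which gives the extra-binomial variation useful for forest-health monitoring data. A minimal stdlib sketch of its pmf:

```python
import math

def log_beta(a, b):
    # log of the Beta function via log-gamma
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binomial_pmf(k, n, a, b):
    # P(K = k) when p ~ Beta(a, b) and K | p ~ Binomial(n, p);
    # marginalizing over p yields the beta-binomial
    log_choose = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    return math.exp(log_choose + log_beta(k + a, n - k + b) - log_beta(a, b))
```

With a = b = 1 every count from 0 to n is equally likely, and for a fixed mean the variance always exceeds the plain binomial's — the overdispersion this distribution is suggested to capture.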
Cai, Qing; Lee, Jaeyoung; Eluru, Naveen; Abdel-Aty, Mohamed
2016-08-01
This study attempts to explore the viability of dual-state models (i.e., zero-inflated and hurdle models) for traffic analysis zone (TAZ)-based pedestrian and bicycle crash frequency analysis. Additionally, spatial spillover effects are explored in the models by employing exogenous variables from neighboring zones. The dual-state models, namely the zero-inflated negative binomial and hurdle negative binomial models (with and without spatial effects), are compared with the conventional single-state model (i.e., negative binomial). The model comparison for pedestrian and bicycle crashes revealed that the models that considered observed spatial effects perform better than those that did not. Across the models with spatial spillover effects, the dual-state models, especially the zero-inflated negative binomial model, offered better performance than the single-state models. Moreover, the model results clearly highlighted the importance of various traffic, roadway, and sociodemographic characteristics of the TAZ as well as neighboring TAZs on pedestrian and bicycle crash frequency. Copyright © 2016 Elsevier Ltd. All rights reserved.
The Binomial Distribution in Shooting
ERIC Educational Resources Information Center
Chalikias, Miltiadis S.
2009-01-01
The binomial distribution is used to predict the winner of the 49th International Shooting Sport Federation World Championship in double trap shooting held in 2006 in Zagreb, Croatia. The outcome of the competition was definitely unexpected.
Lara, Jesus R; Hoddle, Mark S
2015-08-01
Oligonychus perseae Tuttle, Baker, & Abatiello is a foliar pest of 'Hass' avocados [Persea americana Miller (Lauraceae)]. The recommended action threshold is 50-100 motile mites per leaf, but this count range and other ecological factors associated with O. perseae infestations limit the application of enumerative sampling plans in the field. Consequently, a comprehensive modeling approach was implemented to compare the practical application of various binomial sampling models for decision-making of O. perseae in California. An initial set of sequential binomial sampling models were developed using three mean-proportion modeling techniques (i.e., Taylor's power law, maximum likelihood, and an empirical model) in combination with two-leaf infestation tally thresholds of either one or two mites. Model performance was evaluated using a robust mite count database consisting of >20,000 Hass avocado leaves infested with varying densities of O. perseae and collected from multiple locations. Operating characteristic and average sample number results for sequential binomial models were used as the basis to develop and validate a standardized fixed-size binomial sampling model with guidelines on sample tree and leaf selection within blocks of avocado trees. This final validated model requires a leaf sampling cost of 30 leaves and takes into account the spatial dynamics of O. perseae to make reliable mite density classifications for a 50-mite action threshold. Recommendations for implementing this fixed-size binomial sampling plan to assess densities of O. perseae in commercial California avocado orchards are discussed. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Use of the binomial distribution to predict impairment: application in a nonclinical sample.
Axelrod, Bradley N; Wall, Jacqueline R; Estes, Bradley W
2008-01-01
A mathematical model based on binomial theory was developed to illustrate when abnormal score variations occur by chance in a multitest battery (Ingraham & Aiken, 1996). It has been successfully used as a comparison for obtained test scores in clinical samples, but not in nonclinical samples. In the current study, this model was applied to demographically corrected scores on the Halstead-Reitan Neuropsychological Test Battery, obtained from a sample of 94 nonclinical college students. Results showed that 15% of the sample had impairments suggested by the Halstead Impairment Index, using criteria established by Reitan and Wolfson (1993). In addition, one-half of the sample obtained impaired scores on one or two tests. These results were compared to those predicted by the binomial model and found to be consistent. The model therefore serves as a useful resource for clinicians considering the probability of impaired test performance.
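The binomial logic behind this kind of model is compact: if each of n independent test scores falls in the impaired range with probability p by chance alone, the chance of m or more "impaired" scores is a binomial tail. A sketch (the cutoff p and battery size n below are illustrative, not the study's):

```python
from math import comb

def prob_at_least(m, n, p):
    # P(at least m of n independent scores are "impaired") when each
    # score is impaired with probability p by chance alone
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(m, n + 1))
```

For example, with 10 tests and a 5% per-test impairment cutoff, at least one impaired score occurs by chance roughly 40% of the time, which is why clinicians need a baseline like this before interpreting scattered low scores.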
Turi, Christina E; Murch, Susan J
2013-07-09
Ethnobotanical research and the study of plants used for rituals, ceremonies and to connect with the spirit world have led to the discovery of many novel psychoactive compounds such as nicotine, caffeine, and cocaine. In North America, spiritual and ceremonial uses of plants are well documented and can be accessed online via the University of Michigan's Native American Ethnobotany Database. The objective of the study was to compare Residual, Bayesian, Binomial and Imprecise Dirichlet Model (IDM) analyses of ritual, ceremonial and spiritual plants in Moerman's ethnobotanical database and to identify genera that may be good candidates for the discovery of novel psychoactive compounds. The database was queried with the following format "Family Name AND Ceremonial OR Spiritual" for 263 North American botanical families. Spiritual and ceremonial flora consisted of 86 families with 517 species belonging to 292 genera. Spiritual taxa were then grouped further into ceremonial medicines and items categories. Residual, Bayesian, Binomial and IDM analyses were performed to identify over- and under-utilized families. The 4 statistical approaches were in good agreement when identifying under-utilized families, but large families (>393 species) were underemphasized by the Binomial, Bayesian and IDM approaches for over-utilization. Residual, Binomial, and IDM analyses identified similar families as over-utilized in the medium (92-392 species) and small (<92 species) classes. The families Apiaceae, Asteraceae, Ericaceae, Pinaceae and Salicaceae were identified as significantly over-utilized as ceremonial medicines among medium and large sized families. Analysis of genera within the Apiaceae and Asteraceae suggests that the genera Ligusticum and Artemisia are good candidates for facilitating the discovery of novel psychoactive compounds. The 4 statistical approaches were not consistent in the selection of over-utilized flora. Residual analysis revealed overall trends that were supported
Modeling Polio Data Using the First Order Non-Negative Integer-Valued Autoregressive, INAR(1), Model
NASA Astrophysics Data System (ADS)
Vazifedan, Turaj; Shitan, Mahendran
Time series data may consist of counts, such as the number of road accidents, the number of patients in a certain hospital, or the number of customers waiting for service at a certain time. When the observed values are large, it is usual to use a Gaussian Autoregressive Moving Average (ARMA) process to model the time series. However, if the observed counts are small, it is not appropriate to use an ARMA process to model the observed phenomenon. In such cases we need to model the time series data using a Non-Negative Integer-valued Autoregressive (INAR) process. The modelling of count data is based on the binomial thinning operator. In this paper we illustrate the modelling of count data using the monthly number of Poliomyelitis cases in the United States between January 1970 and December 1983. We applied the AR(1) model, the Poisson regression model and the INAR(1) model, and the suitability of these models was assessed using the Index of Agreement (I.A.). We found that the INAR(1) model is more appropriate, in the sense that it had a better I.A., and it is natural since the data are counts.
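The binomial thinning operator mentioned above replaces the scalar multiplication of an ordinary AR(1): α∘X keeps each of the X current counts independently with probability α, so the process stays integer-valued and non-negative. A hedged simulation sketch of an INAR(1) process with Poisson innovations (parameter values illustrative):

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's multiplication method; adequate for small lam
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def simulate_inar1(n, alpha, lam, rng=None):
    # INAR(1): X_t = alpha o X_{t-1} + e_t, where "o" is binomial
    # thinning and e_t ~ Poisson(lam); stationary mean is lam / (1 - alpha)
    rng = rng or random.Random(0)
    x, series = sample_poisson(lam, rng), []
    for _ in range(n):
        survivors = sum(1 for _ in range(x) if rng.random() < alpha)
        x = survivors + sample_poisson(lam, rng)
        series.append(x)
    return series
```

Unlike a Gaussian AR(1), every simulated value is a non-negative integer by construction, which is exactly why INAR models suit small counts like monthly polio cases.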
Integer Solutions of Binomial Coefficients
ERIC Educational Resources Information Center
Gilbertson, Nicholas J.
2016-01-01
A good formula is like a good story, rich in description, powerful in communication, and eye-opening to readers. The formula presented in this article for determining the coefficients of the binomial expansion of (x + y)n is one such "good read." The beauty of this formula is in its simplicity--both describing a quantitative situation…
Data mining of tree-based models to analyze freeway accident frequency.
Chang, Li-Yen; Chen, Wen-Chieh
2005-01-01
Statistical models, such as Poisson or negative binomial regression models, have been employed to analyze vehicle accident frequency for many years. However, these models have their own model assumptions and pre-defined underlying relationships between dependent and independent variables. If these assumptions are violated, the model could lead to erroneous estimation of accident likelihood. Classification and Regression Tree (CART), one of the most widely applied data mining techniques, has been commonly employed in business administration, industry, and engineering. CART does not require any pre-defined underlying relationship between the target (dependent) variable and predictors (independent variables) and has been shown to be a powerful tool, particularly for dealing with prediction and classification problems. This study collected the 2001-2002 accident data of National Freeway 1 in Taiwan. A CART model and a negative binomial regression model were developed to establish the empirical relationship between traffic accidents and highway geometric variables, traffic characteristics, and environmental factors. The CART findings indicated that the average daily traffic volume and precipitation variables were the key determinants of freeway accident frequencies. By comparing the prediction performance of the CART and negative binomial regression models, this study demonstrates that CART is a good alternative method for analyzing freeway accident frequencies.
A comparison of methods for the analysis of binomial clustered outcomes in behavioral research.
Ferrari, Alberto; Comelli, Mario
2016-12-01
In behavioral research, data consisting of a per-subject proportion of "successes" and "failures" over a finite number of trials often arise. Such clustered binary data are usually non-normally distributed, which can distort inference if the usual general linear model is applied and the sample size is small. A number of more advanced methods are available, but they are often technically challenging, and a comparative assessment of their performance in behavioral setups has not been performed. We studied the performance of some methods applicable to the analysis of proportions, namely linear regression, Poisson regression, beta-binomial regression and Generalized Linear Mixed Models (GLMMs). We report on a simulation study evaluating power and Type I error rate of these models in hypothetical scenarios met by behavioral researchers, and we describe results from the application of these methods to data from real experiments. Our results show that, while GLMMs are powerful instruments for the analysis of clustered binary outcomes, beta-binomial regression can outperform them in a range of scenarios. Linear regression gave results consistent with the nominal level of significance, but was overall less powerful. Poisson regression, instead, mostly led to anticonservative inference. GLMMs and beta-binomial regression are generally more powerful than linear regression; yet linear regression is robust to model misspecification in some conditions, whereas Poisson regression suffers heavily from violations of the assumptions when used to model proportion data. We conclude by providing directions to behavioral scientists dealing with clustered binary data and small sample sizes. Copyright © 2016 Elsevier B.V. All rights reserved.
Kadam, Shantanu; Vanka, Kumar
2013-02-15
Methods based on the stochastic formulation of chemical kinetics have the potential to accurately reproduce the dynamical behavior of various biochemical systems of interest. However, the computational expense makes them impractical for the study of real systems. Attempts to render these methods practical have led to the development of accelerated methods, where the reaction numbers are modeled by Poisson random numbers. However, for certain systems, such methods give rise to physically unrealistic negative numbers for species populations. The methods which make use of binomial variables, in place of Poisson random numbers, have since become popular, and have been partially successful in addressing this problem. In this manuscript, the development of two new computational methods, based on the representative reaction approach (RRA), has been discussed. The new methods endeavor to solve the problem of negative numbers, by making use of tools like the stochastic simulation algorithm and the binomial method, in conjunction with the RRA. It is found that these newly developed methods perform better than other binomial methods used for stochastic simulations, in resolving the problem of negative populations. Copyright © 2012 Wiley Periodicals, Inc.
Modeling number of claims and prediction of total claim amount
NASA Astrophysics Data System (ADS)
Acar, Aslıhan Şentürk; Karabey, Uǧur
2017-07-01
In this study we focus on annual number of claims of a private health insurance data set which belongs to a local insurance company in Turkey. In addition to Poisson model and negative binomial model, zero-inflated Poisson model and zero-inflated negative binomial model are used to model the number of claims in order to take into account excess zeros. To investigate the impact of different distributional assumptions for the number of claims on the prediction of total claim amount, predictive performances of candidate models are compared by using root mean square error (RMSE) and mean absolute error (MAE) criteria.
Bennett, Bradley C; Husby, Chad E
2008-03-28
Botanical pharmacopoeias are non-random subsets of floras, with some taxonomic groups over- or under-represented. Moerman [Moerman, D.E., 1979. Symbols and selectivity: a statistical analysis of Native American medical ethnobotany, Journal of Ethnopharmacology 1, 111-119] introduced linear regression/residual analysis to examine these patterns. However, regression, the commonly-employed analysis, suffers from several statistical flaws. We use contingency table and binomial analyses to examine patterns of Shuar medicinal plant use (from Amazonian Ecuador). We first analyzed the Shuar data using Moerman's approach, modified to better meet requirements of linear regression analysis. Second, we assessed the exact randomization contingency table test for goodness of fit. Third, we developed a binomial model to test for non-random selection of plants in individual families. Modified regression models (which accommodated assumptions of linear regression) reduced R(2) from 0.59 to 0.38, but did not eliminate all problems associated with regression analyses. Contingency table analyses revealed that the entire flora departs from the null model of equal proportions of medicinal plants in all families. In the binomial analysis, only 10 angiosperm families (of 115) differed significantly from the null model. These 10 families are largely responsible for patterns seen at higher taxonomic levels. Contingency table and binomial analyses offer an easy and statistically valid alternative to the regression approach.
CROSSER - CUMULATIVE BINOMIAL PROGRAMS
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative binomial program, CROSSER, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CROSSER, CUMBIN (NPO-17555), and NEWTONP (NPO-17556), can be used independently of one another. CROSSER can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CROSSER calculates the point at which the reliability of a k-out-of-n system equals the common reliability of the n components. It is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. It also lists the number of iterations of Newton's method required to calculate the answer within the given error. The CROSSER program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CROSSER was developed in 1988.
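What CROSSER computes can be sketched in a few lines: the reliability of a k-out-of-n system with common component reliability p is the binomial tail P(X >= k), and the crossing point is the p at which that tail equals p itself. The sketch below uses bisection where CROSSER uses Newton's method:

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    # P(X >= k) for X ~ Binomial(n, p): at least k of n components work
    return sum(comb(n, y) * p ** y * (1 - p) ** (n - y) for y in range(k, n + 1))

def crossing_point(k, n, tol=1e-12):
    # Solve k_out_of_n_reliability(k, n, p) = p for 1 < k < n; the
    # reliability curve crosses the diagonal once in (0, 1)
    lo, hi = 1e-9, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if k_out_of_n_reliability(k, n, mid) > mid:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For a 2-out-of-3 system the crossing point is exactly 0.5: below it the redundant system is less reliable than a single component, above it more reliable.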
Modelling parasite aggregation: disentangling statistical and ecological approaches.
Yakob, Laith; Soares Magalhães, Ricardo J; Gray, Darren J; Milinovich, Gabriel; Wardrop, Nicola; Dunning, Rebecca; Barendregt, Jan; Bieri, Franziska; Williams, Gail M; Clements, Archie C A
2014-05-01
The overdispersion in macroparasite infection intensity among host populations is commonly simulated using a constant negative binomial aggregation parameter. We describe an alternative to utilising the negative binomial approach and demonstrate important disparities in intervention efficacy projections that can come about from opting for pattern-fitting models that are not process-explicit. We present model output in the context of the epidemiology and control of soil-transmitted helminths due to the significant public health burden imposed by these parasites, but our methods are applicable to other infections with demonstrable aggregation in parasite numbers among hosts. Copyright © 2014. Published by Elsevier Ltd.
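The constant aggregation parameter referred to above is the k of the ecological negative binomial parametrization (mean m, variance m + m²/k; small k means strong parasite aggregation among hosts). A minimal sketch of that pmf, for illustration only:

```python
import math

def nb_pmf_ecological(y, m, k):
    # Negative binomial pmf with mean m and aggregation parameter k;
    # variance is m + m**2 / k, so k -> infinity recovers the Poisson
    return math.exp(math.lgamma(y + k) - math.lgamma(k) - math.lgamma(y + 1)
                    + k * math.log(k / (k + m)) + y * math.log(m / (k + m)))
```

The paper's point is that fitting such a pattern with a fixed k, rather than modelling the process that generates aggregation, can bias intervention-efficacy projections.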
Metaprop: a Stata command to perform meta-analysis of binomial data.
Nyaga, Victoria N; Arbyn, Marc; Aerts, Marc
2014-01-01
Meta-analyses have become an essential tool in synthesizing evidence on clinical and epidemiological questions derived from a multitude of similar studies assessing the particular issue. Appropriate and accessible statistical software is needed to produce the summary statistic of interest. Metaprop is a statistical program implemented to perform meta-analyses of proportions in Stata. It builds further on the existing Stata procedure metan which is typically used to pool effects (risk ratios, odds ratios, differences of risks or means) but which is also used to pool proportions. Metaprop implements procedures which are specific to binomial data and allows computation of exact binomial and score test-based confidence intervals. It provides appropriate methods for dealing with proportions close to or at the margins where the normal approximation procedures often break down, by use of the binomial distribution to model the within-study variability or by allowing Freeman-Tukey double arcsine transformation to stabilize the variances. Metaprop was applied on two published meta-analyses: 1) prevalence of HPV-infection in women with a Pap smear showing ASC-US; 2) cure rate after treatment for cervical precancer using cold coagulation. The first meta-analysis showed a pooled HPV-prevalence of 43% (95% CI: 38%-48%). In the second meta-analysis, the pooled percentage of cured women was 94% (95% CI: 86%-97%). By using metaprop, no studies with 0% or 100% proportions were excluded from the meta-analysis. Furthermore, study specific and pooled confidence intervals always were within admissible values, contrary to the original publication, where metan was used.
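The Freeman-Tukey double arcsine transformation mentioned above is defined even at 0% and 100%, which is why metaprop need not exclude such studies. Its common definition and approximate variance can be sketched as follows:

```python
import math

def ft_double_arcsine(k, n):
    # Freeman-Tukey double arcsine transform of the proportion k/n;
    # approximately variance-stabilizing for binomial data
    return (math.asin(math.sqrt(k / (n + 1.0)))
            + math.asin(math.sqrt((k + 1.0) / (n + 1.0))))

def ft_variance(n):
    # Common asymptotic variance approximation, 1 / (n + 0.5)
    return 1.0 / (n + 0.5)
```

Because the variance depends only on n, studies can be pooled with simple inverse-variance weights on the transformed scale and back-transformed afterwards.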
Sileshi, G
2006-10-01
Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, the negative binomial distribution and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial distribution model provided better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
Identifiability in N-mixture models: a large-scale screening test with bird data.
Kéry, Marc
2018-02-01
Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
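The binomial N-mixture likelihood underlying these models marginalizes the latent site abundance N out of repeated counts. A minimal single-site sketch with a Poisson mixture (the truncation bound n_max is an implementation convenience, not part of the model):

```python
from math import comb, exp, factorial

def nmixture_site_likelihood(counts, lam, p, n_max=80):
    # Repeated counts y_1..y_J at one site: N ~ Poisson(lam) and each
    # y_j | N ~ Binomial(N, p); sum the latent N out up to n_max
    total = 0.0
    for n in range(max(counts), n_max + 1):
        pois = exp(-lam) * lam ** n / factorial(n)
        cond = 1.0
        for y in counts:
            cond *= comb(n, y) * p ** y * (1 - p) ** (n - y)
        total += pois * cond
    return total
```

The identifiability issue the abstract raises is visible here: lam and p enter the likelihood only jointly through the counts, and with a negative-binomial mixture in place of the Poisson the data can fail to pin them down separately.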
Binomial tree method for pricing a regime-switching volatility stock loans
NASA Astrophysics Data System (ADS)
Putri, Endah R. M.; Zamani, Muhammad S.; Utomo, Daryono B.
2018-03-01
A binomial model with regime switching may represent the price of a stock loan, which follows a stochastic process. A stock loan is an alternative that appeals to investors seeking liquidity without selling their stock. The stock loan mechanism resembles that of an American call option, in which the holder can exercise at any time during the contract period. Given this resemblance, the price of a stock loan can be derived from the model of an American call option. The simulation results show the behavior of the stock loan price under regime switching with respect to various interest rates and maturities.
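The pricing mechanism referred to — backward induction on a binomial tree with an early-exercise check at each node — can be sketched for a plain American call under constant, single-regime parameters; a regime-switching version would, roughly speaking, run one lattice per regime and couple them through regime transition probabilities. All parameter values below are illustrative:

```python
import math

def american_call_crr(s0, strike, r, sigma, t, steps):
    # Cox-Ross-Rubinstein binomial tree with an early-exercise check
    dt = t / steps
    up = math.exp(sigma * math.sqrt(dt))
    down = 1.0 / up
    disc = math.exp(-r * dt)
    q = (math.exp(r * dt) - down) / (up - down)  # risk-neutral up probability
    # option values at maturity
    values = [max(s0 * up ** j * down ** (steps - j) - strike, 0.0)
              for j in range(steps + 1)]
    # roll back, taking the max of continuation and immediate exercise
    for i in range(steps - 1, -1, -1):
        values = [max(disc * (q * values[j + 1] + (1 - q) * values[j]),
                      s0 * up ** j * down ** (i - j) - strike)
                  for j in range(i + 1)]
    return values[0]
```

The early-exercise max at each node is exactly the feature that lets the same lattice value a stock loan, whose redemption right can likewise be exercised at any time during the contract.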
A big data approach to the development of mixed-effects models for seizure count data.
Tharayil, Joseph J; Chiang, Sharon; Moss, Robert; Stern, John M; Theodore, William H; Goldenholz, Daniel M
2017-05-01
Our objective was to develop a generalized linear mixed model for predicting seizure count that is useful in the design and analysis of clinical trials. This model also may benefit the design and interpretation of seizure-recording paradigms. Most existing seizure count models do not include children, and there is currently no consensus regarding the most suitable model that can be applied to children and adults. Therefore, an additional objective was to develop a model that accounts for both adult and pediatric epilepsy. Using data from SeizureTracker.com, a patient-reported seizure diary tool with >1.2 million recorded seizures across 8 years, we evaluated the appropriateness of Poisson, negative binomial, zero-inflated negative binomial, and modified negative binomial models for seizure count data based on minimization of the Bayesian information criterion. Generalized linear mixed-effects models were used to account for demographic and etiologic covariates and for autocorrelation structure. Holdout cross-validation was used to evaluate predictive accuracy in simulating seizure frequencies. For both adults and children, we found that a negative binomial model with autocorrelation over 1 day was optimal. Using holdout cross-validation, the proposed model was found to provide accurate simulation of seizure counts for patients with up to four seizures per day. The optimal model can be used to generate more realistic simulated patient data with very few input parameters. The availability of a parsimonious, realistic virtual patient model can be of great utility in simulations of phase II/III clinical trials, epilepsy monitoring units, outpatient biosensors, and mobile Health (mHealth) applications. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
Selecting a distributional assumption for modelling relative densities of benthic macroinvertebrates
Gray, B.R.
2005-01-01
The selection of a distributional assumption suitable for modelling macroinvertebrate density data is typically challenging. Macroinvertebrate data often exhibit substantially larger variances than expected under a standard count assumption, that of the Poisson distribution. Such overdispersion may derive from multiple sources, including heterogeneity of habitat (historically and spatially), differing life histories for organisms collected within a single collection in space and time, and autocorrelation. Taken to extreme, heterogeneity of habitat may be argued to explain the frequent large proportions of zero observations in macroinvertebrate data. Sampling locations may consist of habitats defined qualitatively as either suitable or unsuitable. The former category may yield random or stochastic zeroes and the latter structural zeroes. Heterogeneity among counts may be accommodated by treating the count mean itself as a random variable, while extra zeroes may be accommodated using zero-modified count assumptions, including zero-inflated and two-stage (or hurdle) approaches. These and linear assumptions (following log- and square root-transformations) were evaluated using 9 years of mayfly density data from a 52 km, ninth-order reach of the Upper Mississippi River (n = 959). The data exhibited substantial overdispersion relative to that expected under a Poisson assumption (i.e. variance:mean ratio = 23 ≫ 1), and 43% of the sampling locations yielded zero mayflies. Based on the Akaike Information Criterion (AIC), count models were improved most by treating the count mean as a random variable (via a Poisson-gamma distributional assumption) and secondarily by zero modification (i.e. improvements in AIC values = 9184 units and 47-48 units, respectively). Zeroes were underestimated by the Poisson, log-transform and square root-transform models, slightly by the standard negative binomial model, but not by the zero-modified models (61%, 24%, 32%, 7%, and 0%, respectively).
Negative Urgency, Distress Tolerance, and Substance Abuse Among College Students
Kaiser, Alison J.; Milich, Richard; Lynam, Donald R.; Charnigo, Richard J.
2012-01-01
Objective Negative affect has been consistently linked with substance use/problems in prior research. The present study sought to build upon these findings by exploring how an individual’s characteristic responding to negative affect impacts substance abuse risk. Trait negative affect was examined in relation to substance abuse outcomes along with two variables tapping into response to negative affect: Distress Tolerance, an individual’s perceived ability to tolerate negative affect, and Negative Urgency, the tendency to act rashly while experiencing distress. Method Participants were 525 first-year college students (48.1% male, 81.1% Caucasian), who completed self-report measures assessing personality traits and alcohol-related problems, and a structured interview assessing past and current substance use. Relations were tested using Zero-Inflated Negative Binomial regression models, and each of the personality variables was tested in a model on its own, and in a model where all three traits were accounted for. Results Negative Urgency emerged as the best predictor, relating to every one of the substance use outcome variables even when trait negative affect and Distress Tolerance were accounted for. Conclusions These findings suggest that Negative Urgency is an important factor to consider in developing prevention and intervention efforts aimed at reducing substance use and problems. PMID:22698894
Revealing Word Order: Using Serial Position in Binomials to Predict Properties of the Speaker
ERIC Educational Resources Information Center
Iliev, Rumen; Smirnova, Anastasia
2016-01-01
Three studies test the link between word order in binomials and psychological and demographic characteristics of a speaker. While linguists have already suggested that psychological, cultural and societal factors are important in choosing word order in binomials, the vast majority of relevant research was focused on general factors and on broadly…
NEWTONP - CUMULATIVE BINOMIAL PROGRAMS
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative binomial program, NEWTONP, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, NEWTONP, CUMBIN (NPO-17555), and CROSSER (NPO-17557), can be used independently of one another. NEWTONP can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. NEWTONP calculates the probability p required to yield a given system reliability V for a k-out-of-n system. It can also be used to determine the Clopper-Pearson confidence limits (either one-sided or two-sided) for the parameter p of a Bernoulli distribution. NEWTONP can determine Bayesian probability limits for a proportion (if the beta prior has positive integer parameters). It can determine the percentiles of incomplete beta distributions with positive integer parameters. It can also determine the percentiles of F distributions and the median plotting positions in probability plotting. NEWTONP is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. NEWTONP is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. It also lists the number of iterations of Newton's method required to calculate the answer within the given error. The NEWTONP program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. NEWTONP was developed in 1988.
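The Clopper-Pearson limits NEWTONP computes are roots of binomial tail equations; the sketch below solves the same equations by bisection instead of Newton's method, purely for brevity:

```python
from math import comb

def binom_cdf(k, n, p):
    # P(X <= k) for X ~ Binomial(n, p)
    return sum(comb(n, y) * p ** y * (1 - p) ** (n - y) for y in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    # Exact two-sided (1 - alpha) interval for a binomial proportion
    def solve(f, lo, hi):
        # bisection for a decreasing f with f(lo) > 0 > f(hi)
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    lower = 0.0 if k == 0 else solve(
        lambda p: binom_cdf(k - 1, n, p) - (1 - alpha / 2), 0.0, 1.0)
    upper = 1.0 if k == n else solve(
        lambda p: binom_cdf(k, n, p) - alpha / 2, 0.0, 1.0)
    return lower, upper
```

With k = 0 the lower limit is exactly 0 and the upper limit solves (1 - p)^n = alpha/2, the familiar "rule of three" situation for small alpha.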
Estimating the Parameters of the Beta-Binomial Distribution.
ERIC Educational Resources Information Center
Wilcox, Rand R.
1979-01-01
For some situations the beta-binomial distribution might be used to describe the marginal distribution of test scores for a particular population of examinees. Several different methods of approximating the maximum likelihood estimate were investigated, and it was found that the Newton-Raphson method should be used when it yields admissible…
Possibility and Challenges of Conversion of Current Virus Species Names to Linnaean Binomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Postler, Thomas S.; Clawson, Anna N.; Amarasinghe, Gaya K.
Botanical, mycological, zoological, and prokaryotic species names follow the Linnaean format, consisting of an italicized Latinized binomen with a capitalized genus name and a lower case species epithet (e.g., Homo sapiens). Virus species names, however, do not follow a uniform format, and, even when binomial, are not Linnaean in style. In this thought exercise, we attempted to convert all currently official names of species included in the virus family Arenaviridae and the virus order Mononegavirales to Linnaean binomials, and to identify and address associated challenges and concerns. Surprisingly, this endeavor was not as complicated or time-consuming as even the authors of this article expected when conceiving the experiment. [Arenaviridae; binomials; ICTV; International Committee on Taxonomy of Viruses; Mononegavirales; virus nomenclature; virus taxonomy.]
Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data
ERIC Educational Resources Information Center
Bonett, Douglas G.; Price, Robert M.
2012-01-01
Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
NASA Astrophysics Data System (ADS)
Brenner, Tom; Chen, Johnny; Stait-Gardner, Tim; Zheng, Gang; Matsukawa, Shingo; Price, William S.
2018-03-01
A new family of binomial-like inversion sequences, named jump-and-return sandwiches (JRS), has been developed by inserting a binomial-like sequence into a standard jump-and-return sequence, discovered through use of a stochastic Genetic Algorithm optimisation. Compared to currently used binomial-like inversion sequences (e.g., 3-9-19 and W5), the new sequences afford wider inversion bands and narrower non-inversion bands with an equal number of pulses. As an example, two jump-and-return sandwich 10-pulse sequences achieved 95% inversion at offsets corresponding to 9.4% and 10.3% of the non-inversion band spacing, compared to 14.7% for the binomial-like W5 inversion sequence, i.e., they afforded non-inversion bands about two thirds the width of the W5 non-inversion band.
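The mechanism behind binomial-like sequences can be illustrated with a hard-pulse Bloch simulation. The sketch below is a toy under stated assumptions (a generic 1, -3, 3, -1 binomial pattern with a total on-resonance flip of 180 degrees, not the JRS sequences of this paper); it reproduces the characteristic non-inversion null at zero offset and full inversion at an offset of 1/(2τ):

```python
import math

def rot_x(theta):
    # Rotation of the magnetization vector about the x axis
    c, s = math.cos(theta), math.sin(theta)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(theta):
    # Free precession: rotation about the z axis
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def mz_after_1331(offset_hz, tau):
    """Longitudinal magnetization after a 1, -3, 3, -1 binomial sequence
    (total on-resonance flip 180 deg split 1:3:3:1, alternating phase)."""
    unit = math.pi / 8
    v = [0.0, 0.0, 1.0]                      # start at equilibrium, +z
    flips = [(1, +1), (3, -1), (3, +1), (1, -1)]
    for i, (weight, sign) in enumerate(flips):
        v = apply(rot_x(sign * weight * unit), v)
        if i < len(flips) - 1:               # precession between pulses
            v = apply(rot_z(2 * math.pi * offset_hz * tau), v)
    return v[2]
```

At zero offset the alternating flips cancel and Mz stays at +1 (the non-inversion band); at an offset of 1/(2τ) the π precession between pulses makes the flips add to a net 180 degrees, giving Mz = -1.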
Choosing a Transformation in Analyses of Insect Counts from Contagious Distributions with Low Means
W.D. Pepper; S.J. Zarnoch; G.L. DeBarr; P. de Groot; C.D. Tangren
1997-01-01
Guidelines based on computer simulation are suggested for choosing a transformation of insect counts from negative binomial distributions with low mean counts and high levels of contagion. Typical values and ranges of negative binomial model parameters were determined by fitting the model to data from 19 entomological field studies. Random sampling of negative binomial...
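Overdispersion relative to the Poisson is built into the negative binomial; its mean-variance relation Var = μ + μ²/k can be checked deterministically from the pmf. A quick sketch (NB2 parameterization, with the dispersion k held to a positive integer so that math.comb applies):

```python
from math import comb

def nb_pmf(y, mu, k):
    """Negative binomial pmf with mean mu and dispersion k (NB2 form);
    k is restricted to a positive integer so that math.comb applies."""
    p = k / (k + mu)
    return comb(y + k - 1, y) * p**k * (1 - p)**y

def nb_moments(mu, k, ymax=500):
    # Mean and variance by direct summation of the pmf
    m1 = sum(y * nb_pmf(y, mu, k) for y in range(ymax))
    m2 = sum(y * y * nb_pmf(y, mu, k) for y in range(ymax))
    return m1, m2 - m1 * m1
```

For mu = 3 and k = 2 the variance comes out near 3 + 9/2 = 7.5, well above the Poisson's 3: the overdispersion and contagion that motivate the transformation guidelines.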
Spatiotemporal and random parameter panel data models of traffic crash fatalities in Vietnam.
Truong, Long T; Kieu, Le-Minh; Vu, Tuan A
2016-09-01
This paper investigates factors associated with traffic crash fatalities in 63 provinces of Vietnam during the period from 2012 to 2014. Random effect negative binomial (RENB) and random parameter negative binomial (RPNB) panel data models are adopted to consider spatial heterogeneity across provinces. In addition, a spatiotemporal model with conditional autoregressive priors (ST-CAR) is utilised to account for spatiotemporal autocorrelation in the data. The statistical comparison indicates the ST-CAR model outperforms the RENB and RPNB models. Estimation results provide several significant findings. For example, traffic crash fatalities tend to be higher in provinces with greater numbers of level crossings. Passenger distance travelled and road lengths are also positively associated with fatalities. However, hospital densities are negatively associated with fatalities. The safety impact of the national highway 1A, the main transport corridor of the country, is also highlighted. Copyright © 2016 Elsevier Ltd. All rights reserved.
Solar San Diego: The Impact of Binomial Rate Structures on Real PV Systems; Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
VanGeet, O.; Brown, E.; Blair, T.
2008-05-01
There is confusion in the marketplace regarding the impact of solar photovoltaics (PV) on the user's actual electricity bill under California Net Energy Metering, particularly with binomial tariffs (those that include both demand and energy charges) and time-of-use (TOU) rate structures. The City of San Diego has extensive real-time electrical metering on most of its buildings and PV systems, with interval data for overall consumption and PV electrical production available for multiple years. This paper uses 2007 PV-system data from two city facilities to illustrate the impacts of binomial rate designs. The analysis will determine the energy and demand savings that the PV systems are achieving relative to the absence of systems. A financial analysis of PV-system performance under various rate structures is presented. The data revealed that actual demand and energy use benefits of binomial tariffs increase in summer months, when solar resources allow for maximized electricity production. In a binomial tariff system, varying on- and semi-peak times can result in approximately $1,100 change in demand charges per month over not having a PV system in place, an approximate 30% cost savings. The PV systems are also shown to have a 30%-50% reduction in facility energy charges in 2007.
An examination of sources of sensitivity of consumer surplus estimates in travel cost models.
Blaine, Thomas W; Lichtkoppler, Frank R; Bader, Timothy J; Hartman, Travis J; Lucente, Joseph E
2015-03-15
We examine sensitivity of estimates of recreation demand using the Travel Cost Method (TCM) to four factors. Three of the four have been routinely and widely discussed in the TCM literature: a) Poisson versus negative binomial regression; b) application of the Englin correction to account for endogenous stratification; c) truncation of the data set to eliminate outliers. A fourth issue we address has not been widely modeled: the potential effect on recreation demand of the interaction between income and travel cost. We provide a straightforward comparison of all four factors, analyzing the impact of each on regression parameters and consumer surplus estimates. Truncation has a modest effect on estimates obtained from the Poisson models but a radical effect on the estimates obtained by way of the negative binomial. Inclusion of an income-travel cost interaction term generally produces a more conservative but not a statistically significantly different estimate of consumer surplus in both Poisson and negative binomial models. It also generates broader confidence intervals. Application of truncation, the Englin correction and the income-travel cost interaction produced the most conservative estimates of consumer surplus and eliminated the statistical difference between the Poisson and the negative binomial. Use of the income-travel cost interaction term reveals that for visitors who face relatively low travel costs, the relationship between income and travel demand is negative, while it is positive for those who face high travel costs. This provides an explanation of the ambiguities in the findings regarding the role of income widely observed in the TCM literature. Our results suggest that policies that reduce access to publicly owned resources inordinately impact local low-income recreationists and are contrary to environmental justice. Copyright © 2014 Elsevier Ltd. All rights reserved.
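A standard result in count-data travel cost models is that consumer surplus per trip equals -1/β, where β is the fitted travel-cost coefficient; with an income-travel cost interaction, the effective coefficient, and hence the surplus, varies with income. A sketch with hypothetical coefficients (illustrative values only, not estimates from this study):

```python
# Hypothetical semi-log count-data demand model (illustrative numbers only):
#   ln E[trips] = b0 + b_tc * travel_cost + b_int * (travel_cost * income)
b_tc, b_int = -0.05, 0.0004

def cs_per_trip(income):
    """Per-trip consumer surplus = -1 / (effective travel-cost coefficient).
    The interaction term makes the effective coefficient income-dependent."""
    beta_eff = b_tc + b_int * income
    return -1.0 / beta_eff
```

With b_int = 0 the familiar CS = -1/b_tc obtains; with b_int > 0, higher incomes flatten the demand response to cost and raise the estimated per-trip surplus.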
Zero-state Markov switching count-data models: an empirical assessment.
Malyshkina, Nataliya V; Mannering, Fred L
2010-01-01
In this study, a two-state Markov switching count-data model is proposed as an alternative to zero-inflated models to account for the preponderance of zeros sometimes observed in transportation count data, such as the number of accidents occurring on a roadway segment over some period of time. For this accident-frequency case, zero-inflated models assume the existence of two states: one of the states is a zero-accident count state, which has accident probabilities that are so low that they cannot be statistically distinguished from zero, and the other state is a normal-count state, in which counts can be non-negative integers that are generated by some counting process, for example, a Poisson or negative binomial. While zero-inflated models have come under some criticism with regard to accident-frequency applications - one fact is undeniable - in many applications they provide a statistically superior fit to the data. The Markov switching approach we propose seeks to overcome some of the criticism associated with the zero-accident state of the zero-inflated model by allowing individual roadway segments to switch between zero and normal-count states over time. An important advantage of this Markov switching approach is that it allows for the direct statistical estimation of the specific roadway-segment state (i.e., zero-accident or normal-count state) whereas traditional zero-inflated models do not. To demonstrate the applicability of this approach, a two-state Markov switching negative binomial model (estimated with Bayesian inference) and standard zero-inflated negative binomial models are estimated using five-year accident frequencies on Indiana interstate highway segments. It is shown that the Markov switching model is a viable alternative and results in a superior statistical fit relative to the zero-inflated models.
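For reference, the zero-inflated negative binomial that the Markov switching model is compared against has a simple two-part pmf: a point mass of zeros mixed with an NB count state. A minimal sketch (integer dispersion k so that math.comb applies):

```python
from math import comb

def nb_pmf(y, mu, k):
    # NB2 pmf: mean mu, dispersion k (positive integer so math.comb applies)
    p = k / (k + mu)
    return comb(y + k - 1, y) * p**k * (1 - p)**y

def zinb_pmf(y, pi0, mu, k):
    """ZINB: with probability pi0 the segment is in the zero state (count 0
    with certainty); otherwise counts follow NB(mu, k)."""
    base = (1.0 - pi0) * nb_pmf(y, mu, k)
    return pi0 + base if y == 0 else base
```

The zero-inflation simply adds mass at y = 0; unlike the Markov switching approach described above, pi0 is a static mixture weight, so a segment's state is never directly estimated.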
Preisser, John S; Long, D Leann; Stamm, John W
2017-01-01
Marginalized zero-inflated count regression models have recently been introduced for the statistical analysis of dental caries indices and other zero-inflated count data as alternatives to traditional zero-inflated and hurdle models. Unlike the standard approaches, the marginalized models directly estimate overall exposure or treatment effects by relating covariates to the marginal mean count. This article discusses model interpretation and model class choice according to the research question being addressed in caries research. Two data sets, one consisting of fictional dmft counts in 2 groups and the other on DMFS among schoolchildren from a randomized clinical trial comparing 3 toothpaste formulations to prevent incident dental caries, are analyzed with negative binomial hurdle, zero-inflated negative binomial, and marginalized zero-inflated negative binomial models. In the first example, estimates of treatment effects vary according to the type of incidence rate ratio (IRR) estimated by the model. Estimates of IRRs in the analysis of the randomized clinical trial were similar despite their distinctive interpretations. The choice of statistical model class should match the study's purpose, while accounting for the broad decline in children's caries experience, such that dmft and DMFS indices more frequently generate zero counts. Marginalized (marginal mean) models for zero-inflated count data should be considered for direct assessment of exposure effects on the marginal mean dental caries count in the presence of high frequencies of zero counts. © 2017 S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Leier, André; Marquez-Lago, Tatiana T.; Burrage, Kevin
2008-05-01
The delay stochastic simulation algorithm (DSSA) by Barrio et al. [Plos Comput. Biol. 2, 117(E) (2006)] was developed to simulate delayed processes in cell biology in the presence of intrinsic noise, that is, when there are small-to-moderate numbers of certain key molecules present in a chemical reaction system. These delayed processes can faithfully represent complex interactions and mechanisms that imply a number of spatiotemporal processes often not explicitly modeled, such as transcription and translation, which are basic in the modeling of cell signaling pathways. However, for systems with widely varying reaction rate constants or large numbers of molecules, the simulation time steps of both the stochastic simulation algorithm (SSA) and the DSSA can become very small, causing considerable computational overheads. In order to overcome the limit of small step sizes, various τ-leap strategies have been suggested for improving computational performance of the SSA. In this paper, we present a binomial τ-DSSA method that extends the τ-leap idea to the delay setting and avoids drawing insufficient numbers of reactions, a common shortcoming of existing binomial τ-leap methods that becomes evident when dealing with complex chemical interactions. The resulting inaccuracies are most evident in the delayed case, even when considering reaction products as potential reactants within the same time step in which they are produced. Moreover, we extend the framework to account for multicellular systems with different degrees of intercellular communication. We apply these ideas to two important genetic regulatory models, namely, the hes1 gene, implicated as a molecular clock, and a Her1/Her7 model for coupled oscillating cells.
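The core idea shared by binomial τ-leap variants (this is a generic sketch, not the authors' τ-DSSA) is to bound each reaction's firing count by the molecules available, so populations cannot go negative. A toy example for a first-order degradation reaction, where the per-molecule firing probability over a leap of length τ is exactly 1 - exp(-cτ):

```python
import math
import random

def binomial_draw(n, q, rng):
    # K ~ Binomial(n, q) via n Bernoulli trials (adequate for modest n)
    return sum(rng.random() < q for _ in range(n))

def binomial_tau_leap_step(x, c, tau, rng):
    """One leap of the degradation reaction S -> 0 with per-molecule rate c.
    Drawing K ~ Binomial(x, 1 - exp(-c*tau)) caps the number of firings at
    the x molecules present, unlike a Poisson leap, which can overshoot."""
    q = 1.0 - math.exp(-c * tau)
    return x - binomial_draw(x, q, rng)

# Ten leaps of exponential decay from 1000 molecules:
rng = random.Random(0)
x = 1000
for _ in range(10):
    x = binomial_tau_leap_step(x, c=0.1, tau=0.5, rng=rng)
```

The delay setting adds a queue of scheduled delayed updates on top of this draw; that machinery is omitted here.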
Hosseinpour, Mehdi; Pour, Mehdi Hossein; Prasetijo, Joewono; Yahaya, Ahmad Shukri; Ghadiri, Seyed Mohammad Reza
2013-01-01
The objective of this study was to examine the effects of various roadway characteristics on the incidence of pedestrian-vehicle crashes by developing a set of crash prediction models on 543 km of Malaysia federal roads over a 4-year time span between 2007 and 2010. Four count models including the Poisson, negative binomial (NB), hurdle Poisson (HP), and hurdle negative binomial (HNB) models were developed and compared to model the number of pedestrian crashes. The results indicated the presence of overdispersion in the pedestrian crashes (PCs) and showed that it is due to excess zero rather than variability in the crash data. To handle the issue, the hurdle Poisson model was found to be the best model among the considered models in terms of comparative measures. Moreover, the variables average daily traffic, heavy vehicle traffic, speed limit, land use, and area type were significantly associated with PCs.
Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu
2013-01-04
Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have been already proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak-matches between experimental and theoretical spectra, but not peak intensity information. Moreover, different algorithms gave different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and, thus, enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ .
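The abstract does not give ProVerB's exact scoring function beyond naming a binomial probability model; the generic binomial peak-count score common in spectrum matching (how unlikely it is to match at least m of n theoretical fragment peaks by chance, given a per-peak match probability q) can be sketched as:

```python
from math import comb, log10

def binomial_match_score(n_peaks, n_matched, q):
    """-log10 P(X >= n_matched) for X ~ Binomial(n_peaks, q): the chance of
    matching at least that many theoretical peaks at random. Larger scores
    mean the identification is less likely to be a chance match."""
    tail = sum(comb(n_peaks, i) * q**i * (1 - q)**(n_peaks - i)
               for i in range(n_matched, n_peaks + 1))
    return -log10(tail)
```

ProVerB additionally folds peak intensity into its score; the names and the intensity weighting here are assumptions, not the published algorithm.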
NASA Astrophysics Data System (ADS)
Hilpert, Markus; Rasmuson, Anna; Johnson, William P.
2017-07-01
Colloid transport in saturated porous media is significantly influenced by colloidal interactions with grain surfaces. Near-surface fluid domain colloids experience relatively low fluid drag and relatively strong colloidal forces that slow their downgradient translation relative to colloids in bulk fluid. Near-surface fluid domain colloids may reenter into the bulk fluid via diffusion (nanoparticles) or expulsion at rear flow stagnation zones, they may immobilize (attach) via primary minimum interactions, or they may move along a grain-to-grain contact to the near-surface fluid domain of an adjacent grain. We introduce a simple model that accounts for all possible permutations of mass transfer within a dual pore and grain network. The primary phenomena thereby represented in the model are mass transfer of colloids between the bulk and near-surface fluid domains and immobilization. Colloid movement is described by a Markov chain, i.e., a sequence of trials in a 1-D network of unit cells, which contain a pore and a grain. Using combinatorial analysis, which utilizes the binomial coefficient, we derive the residence time distribution, i.e., an inventory of the discrete colloid travel times through the network and of their probabilities to occur. To parameterize the network model, we performed mechanistic pore-scale simulations in a single unit cell that determined the likelihoods and timescales associated with the above colloid mass transfer processes. We found that intergrain transport of colloids in the near-surface fluid domain can cause extended tailing, which has traditionally been attributed to hydrodynamic dispersion emanating from flow tortuosity of solute trajectories.
Binomial Coefficients Modulo a Prime--A Visualization Approach to Undergraduate Research
ERIC Educational Resources Information Center
Bardzell, Michael; Poimenidou, Eirini
2011-01-01
In this article we present, as a case study, results of undergraduate research involving binomial coefficients modulo a prime "p." We will discuss how undergraduates were involved in the project, even with a minimal mathematical background beforehand. There are two main avenues of exploration described to discover these binomial…
Using the Binomial Series to Prove the Arithmetic Mean-Geometric Mean Inequality
ERIC Educational Resources Information Center
Persky, Ronald L.
2003-01-01
In 1968, Leon Gerber compared (1 + x)[superscript a] to its kth partial sum as a binomial series. His result is stated and, as an application of this result, a proof of the arithmetic mean-geometric mean inequality is presented.
Jiang, Honghua; Ni, Xiao; Huster, William; Heilmann, Cory
2015-01-01
Hypoglycemia has long been recognized as a major barrier to achieving normoglycemia with intensive diabetic therapies. It is a common safety concern for diabetes patients. Therefore, it is important to apply appropriate statistical methods when analyzing hypoglycemia data. Here, we carried out bootstrap simulations to investigate the performance of the four commonly used statistical models (Poisson, negative binomial, analysis of covariance [ANCOVA], and rank ANCOVA) based on the data from a diabetes clinical trial. Zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models were also evaluated. Simulation results showed that the Poisson model inflated type I error, while the negative binomial model was overly conservative. However, after adjusting for dispersion, both Poisson and negative binomial models yielded slightly inflated type I errors that were close to the nominal level, with reasonable power. Reasonable control of type I error was associated with the ANCOVA model. The rank ANCOVA model was associated with the greatest power and with reasonable control of type I error. Inflated type I error was observed with the ZIP and ZINB models.
Modeling Zero-Inflated and Overdispersed Count Data: An Empirical Study of School Suspensions
ERIC Educational Resources Information Center
Desjardins, Christopher David
2016-01-01
The purpose of this article is to develop a statistical model that best explains variability in the number of school days suspended. Number of school days suspended is a count variable that may be zero-inflated and overdispersed relative to a Poisson model. Four models were examined: Poisson, negative binomial, Poisson hurdle, and negative…
Raw and Central Moments of Binomial Random Variables via Stirling Numbers
ERIC Educational Resources Information Center
Griffiths, Martin
2013-01-01
We consider here the problem of calculating the moments of binomial random variables. It is shown how formulae for both the raw and the central moments of such random variables may be obtained in a recursive manner utilizing Stirling numbers of the first kind. Suggestions are also provided as to how students might be encouraged to explore this…
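The article's recursion is not reproduced in the abstract, but a closely related classical route goes through factorial moments and Stirling numbers of the second kind: for X ~ Binomial(n, p), E[(X)_j] = (n)_j p^j and x^m = Σ_j S(m, j) (x)_j, so the raw moments follow directly. A sketch:

```python
def stirling2(m, j):
    # Stirling numbers of the second kind, standard recurrence
    if m == j:
        return 1
    if j == 0 or j > m:
        return 0
    return j * stirling2(m - 1, j) + stirling2(m - 1, j - 1)

def falling(n, j):
    # Falling factorial n * (n - 1) * ... * (n - j + 1)
    out = 1
    for i in range(j):
        out *= n - i
    return out

def binom_raw_moment(m, n, p):
    """E[X**m] for X ~ Binomial(n, p), via
    E[X**m] = sum_j S(m, j) * (n)_j * p**j."""
    return sum(stirling2(m, j) * falling(n, j) * p**j for j in range(m + 1))
```

For m = 2 this recovers the familiar E[X²] = np + n(n-1)p², i.e. np(1-p) + (np)².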
NASA Astrophysics Data System (ADS)
Skel'chik, V. S.; Ryabov, V. M.
1996-11-01
On the basis of the classical theory of thin anisotropic laminated plates, the article analyzes the free vibrations of rectangular cantilever plates made of fibrous composites. Applying Kantorovich's method to a binomial (two-term) representation of the shape of the elastic surface of the plate yielded a system of two coupled differential equations for two unknown functions, together with the corresponding boundary conditions at the place of constraint and at the free edge. The exact solution for the frequencies and forms of the free vibrations was found with the use of the Laplace transformation with respect to the space variable. The magnitudes of the first several dimensionless frequencies of the bending and torsional vibrations of the plate were calculated for a wide range of two dimensionless complexes, taking into account the dimensions of the plate and the anisotropy of the elastic properties of the material. The article shows that, with torsional vibrations, the warping constraint at the fixed end explains the apparent dependence of the shear modulus of the composite on the length of the specimen that had been discovered earlier in experiments with a torsional pendulum. It examines the interaction and transformation of the second bending mode and the first torsional mode of the vibrations. It analyzes the asymptotics of the dimensionless frequencies as the length of the plate is increased, and it shows that taking into account the bending-torsion interaction in strongly anisotropic materials such as unidirectional carbon-reinforced plastic can substantially reduce the frequencies of the bending vibrations but has no effect (within the framework of the binomial model) on the frequencies of the torsional vibrations.
Solar San Diego: The Impact of Binomial Rate Structures on Real PV-Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Geet, O.; Brown, E.; Blair, T.
2008-01-01
There is confusion in the marketplace regarding the impact of solar photovoltaics (PV) on the user's actual electricity bill under California Net Energy Metering, particularly with binomial tariffs (those that include both demand and energy charges) and time-of-use (TOU) rate structures. The City of San Diego has extensive real-time electrical metering on most of its buildings and PV systems, with interval data for overall consumption and PV electrical production available for multiple years. This paper uses 2007 PV-system data from two city facilities to illustrate the impacts of binomial rate designs. The analysis determines the energy and demand savings that the PV systems are achieving relative to having no PV system. A financial analysis of PV-system performance under various rate structures is presented. The data revealed that the actual demand and energy benefits of binomial tariffs increase in summer months, when solar resources allow for maximized electricity production. In a binomial tariff system, varying on- and semi-peak times can result in approximately $1,100 change in demand charges per month over not having a PV system in place, an approximate 30% cost savings. The PV systems are also shown to have reduced facility energy charges by 30%-50% in 2007. Future work will include combining demand and electricity charges and increasing the breadth of rate structures tested, including the impacts of non-coincident demand charges.
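As a rough sketch of what a binomial (demand plus energy) tariff calculation looks like, the following computes a monthly bill from interval load and PV data. The rates, interval structure, and net-metering treatment are illustrative assumptions, not an actual utility schedule:

```python
def monthly_bill(load_kw, pv_kw, demand_rate, energy_rate, interval_h=0.25):
    """Bill under a hypothetical binomial (demand + energy) tariff.

    load_kw, pv_kw: per-interval average power series (kW).
    demand_rate ($/kW) applies to the peak net demand; energy_rate ($/kWh)
    applies to total net energy. Export credits are ignored for simplicity.
    """
    net = [max(l - p, 0.0) for l, p in zip(load_kw, pv_kw)]
    demand_charge = demand_rate * max(net)                          # $/kW on peak net kW
    energy_charge = energy_rate * sum(x * interval_h for x in net)  # $/kWh on net kWh
    return demand_charge + energy_charge
```

Because the demand charge is driven by the single peak interval, PV output that coincides with the building's peak reduces the demand component as well as the energy component, which is the effect the paper quantifies.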
Varona, Luis; Sorensen, Daniel
2014-01-01
This work presents a model for the joint analysis of a binomial and a Gaussian trait using a recursive parametrization that leads to a computationally efficient implementation. The model is illustrated in an analysis of mortality and litter size in two breeds of Danish pigs, Landrace and Yorkshire. Available evidence suggests that mortality of piglets increased partly as a result of successful selection for total number of piglets born. In recent years there has been a need to decrease the incidence of mortality in pig-breeding programs. We report estimates of genetic variation at the level of the logit of the probability of mortality and quantify how it is affected by the size of the litter. Several models for mortality are considered and the best fits are obtained by postulating linear and cubic relationships between the logit of the probability of mortality and litter size, for Landrace and Yorkshire, respectively. An interpretation of how the presence of genetic variation affects the probability of mortality in the population is provided and we discuss and quantify the prospects of selecting for reduced mortality, without affecting litter size. PMID:24414548
Kabaluk, J Todd; Binns, Michael R; Vernon, Robert S
2006-06-01
Counts of green peach aphid, Myzus persicae (Sulzer) (Hemiptera: Aphididae), in potato, Solanum tuberosum L., fields were used to evaluate the performance of the sampling plan from a pest management company. The counts were further used to develop a binomial sampling method, and both full count and binomial plans were evaluated using operating characteristic curves. Taylor's power law provided a good fit of the data (r2 = 0.95), with the relationship between the variance (s2) and mean (m) as ln(s2) = 1.81(+/- 0.02) + 1.55(+/- 0.01) ln(m). A binomial sampling method was developed using the empirical model ln(m) = c + dln(-ln(1 - P(T))), to which the data fit well for tally numbers (T) of 0, 1, 3, 5, 7, and 10. Although T = 3 was considered the most reasonable given its operating characteristics and presumed ease of classification above or below critical densities (i.e., action thresholds) of one and 10 M. persicae per leaf, the full count method is shown to be superior. The mean number of sample sites per field visit by the pest management company was 42 +/- 19, with more than one-half (54%) of the field visits involving sampling 31-50 sample sites, which was acceptable in the context of operating characteristic curves for a critical density of 10 M. persicae per leaf. Based on operating characteristics, actual sample sizes used by the pest management company can be reduced by at least 50%, on average, for a critical density of 10 M. persicae per leaf. For a critical density of one M. persicae per leaf used to avert the spread of potato leaf roll virus, sample sizes from 50 to 100 were considered more suitable.
NASA Astrophysics Data System (ADS)
Hilpert, Markus; Johnson, William P.
2018-01-01
We used a recently developed simple mathematical network model to upscale pore-scale colloid transport information determined under unfavorable attachment conditions. Classical log-linear and nonmonotonic retention profiles, well reported under favorable and unfavorable attachment conditions, respectively, emerged from our upscaling. The primary attribute of the network is colloid transfer between bulk pore fluid, the near-surface fluid domain (NSFD), and attachment (treated as irreversible). The network model accounts for colloid transfer to the NSFD of downgradient grains and for reentrainment to bulk pore fluid via diffusion or via expulsion at rear flow stagnation zones (RFSZs). The model describes colloid transport by a sequence of random trials in a one-dimensional (1-D) network of Happel cells, each containing a grain and a pore. Using combinatorial analysis that capitalizes on the binomial coefficient, we derived from the pore-scale information the theoretical residence time distribution of colloids in the network. The transition from log-linear to nonmonotonic retention profiles occurs when the conditions underlying classical filtration theory are not fulfilled, i.e., when an NSFD colloid population is maintained. Then, nonmonotonic retention profiles potentially result for both attached and NSFD colloids. The concentration maxima shift downgradient depending on specific parameter choice. The concentration maxima were also shown to shift downgradient temporally (with continued elution) under conditions where attachment is negligible, explaining experimentally observed downgradient transport of retained concentration maxima of adhesion-deficient bacteria. For the case of zero reentrainment, we develop closed-form analytical expressions for the shape and maximum of the colloid retention profile.
Neelon, Brian; Chang, Howard H; Ling, Qiang; Hastings, Nicole S
2016-12-01
Motivated by a study exploring spatiotemporal trends in emergency department use, we develop a class of two-part hurdle models for the analysis of zero-inflated areal count data. The models consist of two components: one for the probability of any emergency department use and one for the number of emergency department visits given use. Through a hierarchical structure, the models incorporate both patient- and region-level predictors, as well as spatially and temporally correlated random effects for each model component. The random effects are assigned multivariate conditionally autoregressive priors, which induce dependence between the components and provide spatial and temporal smoothing across adjacent spatial units and time periods, resulting in improved inferences. To accommodate potential overdispersion, we consider a range of parametric specifications for the positive counts, including truncated negative binomial and generalized Poisson distributions. We adopt a Bayesian inferential approach, and posterior computation is handled conveniently within standard Bayesian software. Our results indicate that the negative binomial and generalized Poisson hurdle models vastly outperform the Poisson hurdle model, demonstrating that overdispersed hurdle models provide a useful approach to analyzing zero-inflated spatiotemporal data. © The Author(s) 2014.
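A minimal sketch of the two-part hurdle likelihood described above, assuming the common mean/dispersion negative binomial parameterization. The paper's hierarchical spatial priors and covariate structure are omitted; `pi` is the probability of any use and positive counts follow a zero-truncated NB:

```python
from math import lgamma, log, exp

def nb_logpmf(y, mu, k):
    # Negative binomial with mean mu and dispersion k: Var(Y) = mu + mu^2 / k
    return (lgamma(y + k) - lgamma(k) - lgamma(y + 1)
            + k * log(k / (k + mu)) + y * log(mu / (k + mu)))

def hurdle_nb_loglik(y, pi, mu, k):
    """Two-part hurdle log-likelihood for a single count y.
    P(Y=0) = 1 - pi; positive counts follow a zero-truncated NB(mu, k)."""
    if y == 0:
        return log(1.0 - pi)
    log_trunc = log(1.0 - exp(nb_logpmf(0, mu, k)))  # renormalize over y >= 1
    return log(pi) + nb_logpmf(y, mu, k) - log_trunc
```

Because the zero and positive parts are separate, excess zeros cannot distort the count component, which is the structural advantage of hurdle models for zero-inflated data.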
ERIC Educational Resources Information Center
Liou, Pey-Yan
2009-01-01
The current study examines three regression models: OLS (ordinary least square) linear regression, Poisson regression, and negative binomial regression for analyzing count data. Simulation results show that the OLS regression model performed better than the others, since it did not produce more false statistically significant relationships than…
Adams, Rachel Sayko; Larson, Mary Jo; Corrigan, John D.; Ritter, Grant A.; Williams, Thomas V.
2013-01-01
This study used the 2008 Department of Defense Survey of Health Related Behaviors among Active Duty Military Personnel to determine whether traumatic brain injury (TBI) is associated with past year drinking-related consequences. The study sample included currently-drinking personnel who had a combat deployment in the past year and were home for ≥6 months (N = 3,350). Negative binomial regression models were used to assess the incidence rate ratios of consequences, by TBI-level. Experiencing a TBI with a loss of consciousness >20 minutes was significantly associated with consequences independent of demographics, combat exposure, posttraumatic stress disorder, and binge drinking. The study’s limitations are noted. PMID:23869456
Estimating safety effects of pavement management factors utilizing Bayesian random effect models.
Jiang, Ximiao; Huang, Baoshan; Zaretzki, Russell L; Richards, Stephen; Yan, Xuedong
2013-01-01
Previous studies of pavement management factors that relate to the occurrence of traffic-related crashes are rare. Traditional research has mostly employed summary statistics of bidirectional pavement quality measurements in extended longitudinal road segments over a long time period, which may cause a loss of important information and result in biased parameter estimates. The research presented in this article focuses on crash risk of roadways with overall fair to good pavement quality. Real-time and location-specific data were employed to estimate the effects of pavement management factors on the occurrence of crashes. This research is based on the crash data and corresponding pavement quality data for the Tennessee state route highways from 2004 to 2009. The potential temporal and spatial correlations among observations caused by unobserved factors were considered. Overall 6 models were built accounting for no correlation, temporal correlation only, and both the temporal and spatial correlations. These models included Poisson, negative binomial (NB), one random effect Poisson and negative binomial (OREP, ORENB), and two random effect Poisson and negative binomial (TREP, TRENB) models. The Bayesian method was employed to construct these models. The inference is based on the posterior distribution from the Markov chain Monte Carlo (MCMC) simulation. These models were compared using the deviance information criterion. Analysis of the posterior distribution of parameter coefficients indicates that the pavement management factors indexed by Present Serviceability Index (PSI) and Pavement Distress Index (PDI) had significant impacts on the occurrence of crashes, whereas the variable rutting depth was not significant. Among other factors, lane width, median width, type of terrain, and posted speed limit were significant in affecting crash frequency. The findings of this study indicate that a reduction in pavement roughness would reduce the likelihood of traffic
Statistical models for RNA-seq data derived from a two-condition 48-replicate experiment.
Gierliński, Marek; Cole, Christian; Schofield, Pietà; Schurch, Nicholas J; Sherstnev, Alexander; Singh, Vijender; Wrobel, Nicola; Gharbi, Karim; Simpson, Gordon; Owen-Hughes, Tom; Blaxter, Mark; Barton, Geoffrey J
2015-11-15
High-throughput RNA sequencing (RNA-seq) is now the standard method to determine differential gene expression. Identifying differentially expressed genes crucially depends on estimates of read-count variability. These estimates are typically based on statistical models such as the negative binomial distribution, which is employed by the tools edgeR, DESeq and cuffdiff. Until now, the validity of these models has usually been tested on either low-replicate RNA-seq data or simulations. A 48-replicate RNA-seq experiment in yeast was performed and data tested against theoretical models. The observed gene read counts were consistent with both log-normal and negative binomial distributions, while the mean-variance relation followed the line of constant dispersion parameter of ∼0.01. The high-replicate data also allowed for strict quality control and screening of 'bad' replicates, which can drastically affect the gene read-count distribution. RNA-seq data have been submitted to ENA archive with project ID PRJEB5348. g.j.barton@dundee.ac.uk. © The Author 2015. Published by Oxford University Press.
Statistical models for RNA-seq data derived from a two-condition 48-replicate experiment
Cole, Christian; Schofield, Pietà; Schurch, Nicholas J.; Sherstnev, Alexander; Singh, Vijender; Wrobel, Nicola; Gharbi, Karim; Simpson, Gordon; Owen-Hughes, Tom; Blaxter, Mark; Barton, Geoffrey J.
2015-01-01
Motivation: High-throughput RNA sequencing (RNA-seq) is now the standard method to determine differential gene expression. Identifying differentially expressed genes crucially depends on estimates of read-count variability. These estimates are typically based on statistical models such as the negative binomial distribution, which is employed by the tools edgeR, DESeq and cuffdiff. Until now, the validity of these models has usually been tested on either low-replicate RNA-seq data or simulations. Results: A 48-replicate RNA-seq experiment in yeast was performed and data tested against theoretical models. The observed gene read counts were consistent with both log-normal and negative binomial distributions, while the mean-variance relation followed the line of constant dispersion parameter of ∼0.01. The high-replicate data also allowed for strict quality control and screening of ‘bad’ replicates, which can drastically affect the gene read-count distribution. Availability and implementation: RNA-seq data have been submitted to ENA archive with project ID PRJEB5348. Contact: g.j.barton@dundee.ac.uk PMID:26206307
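The reported constant dispersion of ∼0.01 corresponds, in the usual edgeR/DESeq parameterization (an assumption here, since the abstract does not spell it out), to a quadratic mean-variance relation:

```python
def nb_variance(mu, phi=0.01):
    # NB mean-variance relation as parameterized in edgeR/DESeq:
    # Var(Y) = mu + phi * mu^2, with dispersion phi ~ 0.01
    return mu + phi * mu * mu

# At low counts the variance is near-Poisson (Var ~ mu); at high counts
# the phi*mu^2 term dominates, giving the observed overdispersion.
for mu in (10, 1_000, 100_000):
    print(mu, nb_variance(mu))
```

This is why a constant-dispersion line fits the mean-variance cloud: the biological (squared-CV) component of variability stays roughly fixed across expression levels while the Poisson sampling component shrinks in relative terms.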
A binomial stochastic kinetic approach to the Michaelis-Menten mechanism
NASA Astrophysics Data System (ADS)
Lente, Gábor
2013-05-01
This Letter presents a new method that gives an analytical approximation of the exact solution of the stochastic Michaelis-Menten mechanism without computationally demanding matrix operations. The method is based on solving the deterministic rate equations and then using the results as guiding variables of calculating probability values using binomial distributions. This principle can be generalized to a number of different kinetic schemes and is expected to be very useful in the evaluation of measurements focusing on the catalytic activity of one or a few individual enzyme molecules.
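The stated principle (solve the deterministic rate equations, then use the result as the guiding variable of a binomial probability) can be sketched generically. This is not Lente's exact derivation; the Euler integration step and all parameter names are illustrative:

```python
from math import comb

def mm_fraction_converted(t, s0, e0, kcat, km, dt=1e-3):
    # Euler-integrate the deterministic Michaelis-Menten rate equation
    # dS/dt = -kcat * E0 * S / (Km + S), starting from S(0) = s0
    s = float(s0)
    for _ in range(int(t / dt)):
        s -= dt * kcat * e0 * s / (km + s)
    return (s0 - s) / s0  # fraction of substrate converted by time t

def product_count_pmf(k, t, s0, e0, kcat, km):
    """Approximate P(k product molecules at time t): a binomial over the s0
    substrate molecules, with the deterministic conversion fraction as the
    per-molecule success probability."""
    p = mm_fraction_converted(t, s0, e0, kcat, km)
    return comb(s0, k) * p**k * (1 - p) ** (s0 - k)
```

The attraction of this construction is exactly what the abstract claims: no master-equation matrix exponentials are needed, only an ODE solve followed by cheap binomial evaluations.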
Galvan, T L; Burkness, E C; Hutchison, W D
2007-06-01
To develop a practical integrated pest management (IPM) system for the multicolored Asian lady beetle, Harmonia axyridis (Pallas) (Coleoptera: Coccinellidae), in wine grapes, we assessed the spatial distribution of H. axyridis and developed eight sampling plans to estimate adult density or infestation level in grape clusters. We used 49 data sets collected from commercial vineyards in 2004 and 2005, in Minnesota and Wisconsin. Enumerative plans were developed using two precision levels (0.10 and 0.25); the six binomial plans reflected six unique action thresholds (3, 7, 12, 18, 22, and 31% of cluster samples infested with at least one H. axyridis). The spatial distribution of H. axyridis in wine grapes was aggregated, independent of cultivar and year, but it was more randomly distributed as mean density declined. The average sample number (ASN) for each sampling plan was determined using resampling software. For research purposes, an enumerative plan with a precision level of 0.10 (SE/X) resulted in a mean ASN of 546 clusters. For IPM applications, the enumerative plan with a precision level of 0.25 resulted in a mean ASN of 180 clusters. In contrast, the binomial plans resulted in much lower ASNs and provided high probabilities of arriving at correct "treat or no-treat" decisions, making these plans more efficient for IPM applications. For a tally threshold of one adult per cluster, the operating characteristic curves for the six action thresholds provided binomial sequential sampling plans with mean ASNs of only 19-26 clusters, and probabilities of making correct decisions between 83 and 96%. The benefits of the binomial sampling plans are discussed within the context of improving IPM programs for wine grapes.
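The operating characteristic idea can be illustrated with a fixed-sample-size binomial plan. The paper's plans are sequential, so this is only a simplified sketch, and the conversion of the action threshold to an integer critical count is an assumption:

```python
from math import comb

def oc_curve_point(p, n, action_threshold):
    """P(no-treat decision | true infestation proportion p) for a fixed-n
    binomial plan: decide 'no treat' when the number of infested clusters
    among n sampled is at most floor(action_threshold * n)."""
    c = int(action_threshold * n)
    return sum(comb(n, x) * p**x * (1 - p) ** (n - x) for x in range(c + 1))
```

Plotting `oc_curve_point` against p shows the trade-off the abstract describes: a plan with a small ASN is acceptable only if the curve drops steeply near the action threshold, so that both treat and no-treat errors stay rare.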
Modeling left-turn crash occurrence at signalized intersections by conflicting patterns.
Wang, Xuesong; Abdel-Aty, Mohamed
2008-01-01
In order to better understand the underlying crash mechanisms, left-turn crashes occurring at 197 four-legged signalized intersections over 6 years were classified into nine patterns based on vehicle maneuvers and then were assigned to intersection approaches. Crash frequency of each pattern was modeled at the approach level by mainly using Generalized Estimating Equations (GEE) with the Negative Binomial as the link function to account for the correlation among the crash data. GEE with a binomial logit link function was also applied for patterns with fewer crashes. The Cumulative Residuals test shows that, for correlated left-turn crashes, GEE models usually outperformed basic Negative Binomial models. The estimation results show that there are obvious differences in the factors that cause the occurrence of different left-turn collision patterns. For example, for each pattern, the traffic flows to which the colliding vehicles belong are identified to be significant. The width of the crossing distance (represented by the number of through lanes on the opposing approach of the left-turning traffic) is associated with more left-turn traffic colliding with opposing through traffic (Pattern 5), but with less left-turning traffic colliding with near-side crossing through traffic (Pattern 8). The safety effectiveness of the left-turning signal is not consistent for different crash patterns; "protected" phasing is correlated with fewer Pattern 5 crashes, but with more Pattern 8 crashes. The study indicates that in order to develop efficient countermeasures for left-turn crashes and improve safety at signalized intersections, left-turn crashes should be considered in different patterns.
Extending the Binomial Checkpointing Technique for Resilience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walther, Andrea; Narayanan, Sri Hari Krishna
In terms of computing time, adjoint methods offer a very attractive alternative to compute gradient information, required, e.g., for optimization purposes. However, together with this very favorable temporal complexity result comes a memory requirement that is in essence proportional to the operation count of the underlying function, e.g., if algorithmic differentiation is used to provide the adjoints. For this reason, checkpointing approaches in many variants have become popular. This paper analyzes an extension of the so-called binomial approach to cover also possible failures of the computing systems. Such a measure of precaution is of special interest for massive parallel simulations and adjoint calculations where the mean time between failures of the large-scale computing system is smaller than the time needed to complete the calculation of the adjoint information. We describe the extensions of standard checkpointing approaches required for such resilience, provide a corresponding implementation and discuss numerical results.
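The capacity of classical binomial checkpointing, on which the resilience extension builds, follows the well-known Griewank-Walther formula:

```python
from math import comb

def max_steps(snaps, reps):
    # Binomial checkpointing (revolve): with `snaps` checkpoints and at most
    # `reps` forward recomputations of any step, the number of time steps that
    # can be adjoined with minimal recomputation is beta(s, r) = C(s + r, s).
    return comb(snaps + reps, snaps)
```

For example, 3 checkpoints with at most 3 recomputations per step already cover 20 time steps; the count grows binomially, which is why the memory requirement becomes only logarithmic in the number of steps.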
Park, Byung-Jung; Lord, Dominique; Wu, Lingtao
2016-10-28
This study aimed to investigate the relative performance of two models, the negative binomial (NB) model and the two-component finite mixture of negative binomial models (FMNB-2), in terms of developing crash modification factors (CMFs). Crash data on rural multilane divided highways in California and Texas were modeled with the two models, and crash modification functions (CMFunctions) were derived. The resultant CMFunction estimated from the FMNB-2 model showed several good properties over that from the NB model. First, the safety effect of a covariate was better reflected by the CMFunction developed using the FMNB-2 model, since the model takes into account the differential responsiveness of crash frequency to the covariate. Second, the CMFunction derived from the FMNB-2 model is able to capture nonlinear relationships between a covariate and safety. Finally, following the same concept as for NB models, the combined CMFs of multiple treatments were estimated using the FMNB-2 model. The results indicated that they are not simply the product of the individual CMFs (i.e., their safety effects are not independent under FMNB-2 models). Adjustment Factors (AFs) were then developed. It is revealed that the current Highway Safety Manual method could over- or underestimate the combined CMFs under particular combinations of covariates. Safety analysts are encouraged to consider using the FMNB-2 models for developing CMFs and AFs. Copyright © 2016 Elsevier Ltd. All rights reserved.
Predictive accuracy of particle filtering in dynamic models supporting outbreak projections.
Safarishahrbijari, Anahita; Teyhouee, Aydin; Waldner, Cheryl; Liu, Juxin; Osgood, Nathaniel D
2017-09-26
While a new generation of computational statistics algorithms and availability of data streams raises the potential for recurrently regrounding dynamic models with incoming observations, the effectiveness of such arrangements can be highly subject to specifics of the configuration (e.g., frequency of sampling and representation of behaviour change), and there has been little attempt to identify effective configurations. Combining dynamic models with particle filtering, we explored a solution focusing on creating quickly formulated models regrounded automatically and recurrently as new data becomes available. Given a latent underlying case count, we assumed that observed incident case counts followed a negative binomial distribution. In accordance with the condensation algorithm, each such observation led to updating of particle weights. We evaluated the effectiveness of various particle filtering configurations against each other and against an approach without particle filtering according to the accuracy of the model in predicting future prevalence, given data to a certain point and a norm-based discrepancy metric. We examined the effectiveness of particle filtering under varying times between observations, negative binomial dispersion parameters, and rates with which the contact rate could evolve. We observed that more frequent observations of empirical data yielded super-linearly improved accuracy in model predictions. We further found that for the data studied here, the most favourable assumptions to make regarding the parameters associated with the negative binomial distribution and changes in contact rate were robust across observation frequency and the observation point in the outbreak. Combining dynamic models with particle filtering can perform well in projecting future evolution of an outbreak. Most importantly, the remarkable improvements in predictive accuracy resulting from more frequent sampling suggest that investments to achieve efficient reporting
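The condensation-style reweighting with a negative binomial observation model, as described above, can be sketched as follows. Parameter names and the dispersion value are illustrative:

```python
from math import lgamma, log, exp

def nb_logpmf(y, mu, r):
    # Negative binomial observation model: mean mu, dispersion r (Var = mu + mu^2/r)
    return (lgamma(y + r) - lgamma(r) - lgamma(y + 1)
            + r * log(r / (r + mu)) + y * log(mu / (r + mu)))

def update_weights(weights, latent_counts, observed, r=10.0):
    """Condensation step: each particle's weight is multiplied by the NB
    likelihood of the observed incident count given the particle's latent
    case count, then the weights are renormalized."""
    new = [w * exp(nb_logpmf(observed, max(mu, 1e-9), r))
           for w, mu in zip(weights, latent_counts)]
    total = sum(new)
    return [w / total for w in new]
```

Particles whose latent case counts are close to the incoming observation gain weight, which is how the model is recurrently regrounded each time new surveillance data arrive.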
Accident prediction model for public highway-rail grade crossings.
Lu, Pan; Tolliver, Denver
2016-05-01
Considerable research has focused on roadway accident frequency analysis, but relatively little research has examined safety evaluation at highway-rail grade crossings. Highway-rail grade crossings are critical spatial locations of utmost importance for transportation safety because traffic crashes at highway-rail grade crossings are often catastrophic with serious consequences. The Poisson regression model has been employed to analyze vehicle accident frequency as a good starting point for many years. The most commonly applied variations of Poisson include the negative binomial and zero-inflated Poisson models. These models are used to deal with common crash data issues such as over-dispersion (sample variance larger than the sample mean) and a preponderance of zeros (low sample mean and small sample size). On rare occasions traffic crash data have been shown to be under-dispersed (sample variance smaller than the sample mean), and traditional distributions such as the Poisson or negative binomial cannot handle under-dispersion well. The objective of this study is to investigate and compare various alternate highway-rail grade crossing accident frequency models that can handle the under-dispersion issue. The contributions of the paper are two-fold: (1) the application of probability models to deal with under-dispersion issues and (2) insights regarding vehicle crashes at public highway-rail grade crossings. Copyright © 2016 Elsevier Ltd. All rights reserved.
Nakagawa, Shinichi; Johnson, Paul C D; Schielzeth, Holger
2017-09-01
The coefficient of determination R2 quantifies the proportion of variance explained by a statistical model and is an important summary statistic of biological interest. However, estimating R2 for generalized linear mixed models (GLMMs) remains challenging. We have previously introduced a version of R2 that we called [Formula: see text] for Poisson and binomial GLMMs, but not for other distributional families. Similarly, we earlier discussed how to estimate intra-class correlation coefficients (ICCs) using Poisson and binomial GLMMs. In this paper, we generalize our methods to all other non-Gaussian distributions, in particular to negative binomial and gamma distributions that are commonly used for modelling biological data. While expanding our approach, we highlight two useful concepts for biologists, Jensen's inequality and the delta method, both of which help us in understanding the properties of GLMMs. Jensen's inequality has important implications for biologically meaningful interpretation of GLMMs, whereas the delta method allows a general derivation of variance associated with non-Gaussian distributions. We also discuss some special considerations for binomial GLMMs with binary or proportion data. We illustrate the implementation of our extension by worked examples from the field of ecology and evolution in the R environment. However, our method can be used across disciplines and regardless of statistical environments. © 2017 The Author(s).
Binomial tau-leap spatial stochastic simulation algorithm for applications in chemical kinetics.
Marquez-Lago, Tatiana T; Burrage, Kevin
2007-09-14
In cell biology, cell signaling pathway problems are often tackled with deterministic temporal models, well-mixed stochastic simulators, and/or hybrid methods. But, in fact, three-dimensional stochastic spatial modeling of reactions happening inside the cell is needed in order to fully understand these cell signaling pathways. This is because noise effects, low molecular concentrations, and spatial heterogeneity can all affect the cellular dynamics. However, there are ways in which important effects can be accounted for without going to the extent of using highly resolved spatial simulators (such as single-particle software), hence reducing the overall computation time significantly. We present a new coarse-grained modified version of the next subvolume method that allows the user to consider both diffusion and reaction events in relatively long simulation time spans as compared with the original method and other commonly used fully stochastic computational methods. Benchmarking of the simulation algorithm was performed through comparison with the next subvolume method and well-mixed models (MATLAB), as well as stochastic particle reaction and transport simulations (CHEMCELL, Sandia National Laboratories). Additionally, we construct a model based on a set of chemical reactions in the epidermal growth factor receptor pathway. For this particular application and a bistable chemical system example, we analyze and outline the advantages of our presented binomial tau-leap spatial stochastic simulation algorithm, in terms of efficiency and accuracy, in scenarios of both molecular homogeneity and heterogeneity.
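A minimal single-channel illustration of the binomial tau-leap idea. The paper's algorithm is spatial and also handles diffusion between subvolumes; this sketch covers only the core sampling step that distinguishes binomial from Poisson tau-leaping:

```python
import random

def binomial_tau_leap_firings(propensity, tau, n_limit, rng=random):
    """Binomial tau-leaping: draw the number of reaction firings in [t, t+tau)
    from Binomial(n_limit, p) instead of Poisson(propensity * tau), so the
    sampled count can never exceed the n_limit reactant molecules actually
    available and populations cannot go negative."""
    if n_limit <= 0:
        return 0
    p = min(1.0, propensity * tau / n_limit)
    return sum(1 for _ in range(n_limit) if rng.random() < p)
```

For small propensity*tau the binomial and Poisson draws agree closely, but the binomial draw remains valid in the low-copy-number regime the abstract emphasizes, where a Poisson leap could fire a reaction more times than there are molecules.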
Modeling the stepping mechanism in negative lightning leaders
NASA Astrophysics Data System (ADS)
Iudin, Dmitry; Syssoev, Artem; Davydenko, Stanislav; Rakov, Vladimir
2017-04-01
It is well known that negative leaders develop in a stepwise manner via a mechanism of so-called space leaders, in contrast to positive leaders, which propagate continuously. Although this fact has been known for about a hundred years, no plausible model explaining this asymmetry had been developed until now. In this study we suggest a model of the stepped development of the negative lightning leader that, for the first time, allows numerical simulation of its evolution. The model is based on a probabilistic approach and a description of the temporal evolution of the discharge channels. One of the key features of our model is that it accounts for the presence of so-called space streamers/leaders, which play a fundamental role in the formation of the negative leader's steps. Their appearance becomes possible because the model accounts for the potential influence of the space charge injected into the discharge gap by the streamer corona. The model takes into account an asymmetry between the properties of negative and positive streamers, based on the fact, well known from numerous laboratory measurements, that positive streamers need an electric field about twice as weak as negative ones to appear and propagate. Extinction of the conducting channel as a possible path of its evolution is also taken into account, which allows us to describe the formation of the leader channel's sheath. To verify the morphology and characteristics of the model discharge, we use the results of high-speed video observations of natural negative stepped leaders. We conclude that the key properties of the model and natural negative leaders are very similar.
Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.
Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique
2015-05-01
The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.
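The hyper-Poisson pmf in the Bardwell-Crow form (stated here as an assumption about the parameterization the authors use) makes the over/underdispersion behavior easy to check numerically:

```python
def hyper_poisson_pmf(x, lam, gamma, tail=100):
    """Hyper-Poisson pmf (assumed Bardwell-Crow form):
    P(X = x) = lam^x / ((gamma)_x * 1F1(1; gamma; lam)),
    where (gamma)_x is the rising factorial. gamma = 1 recovers the Poisson
    distribution; gamma > 1 gives over- and gamma < 1 underdispersion."""
    def rising(g, k):
        out = 1.0
        for i in range(k):
            out *= g + i
        return out
    # normalizing constant 1F1(1; gamma; lam), truncated at `tail` terms
    norm = sum(lam**k / rising(gamma, k) for k in range(tail))
    return (lam**x / rising(gamma, x)) / norm
```

With gamma = 1 the rising factorial reduces to x!, so the pmf collapses to the Poisson, which is the boundary case separating the over- and underdispersed regimes the abstract exploits for the Toronto and Korea data sets.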
Multilevel Models for Binary Data
ERIC Educational Resources Information Center
Powers, Daniel A.
2012-01-01
The methods and models for categorical data analysis cover considerable ground, ranging from regression-type models for binary and binomial data, count data, to ordered and unordered polytomous variables, as well as regression models that mix qualitative and continuous data. This article focuses on methods for binary or binomial data, which are…
Covering Resilience: A Recent Development for Binomial Checkpointing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walther, Andrea; Narayanan, Sri Hari Krishna
In terms of computing time, adjoint methods offer a very attractive alternative for computing gradient information required, e.g., for optimization purposes. However, this very favorable temporal complexity comes with a memory requirement that is essentially proportional to the operation count of the underlying function, e.g., when algorithmic differentiation is used to provide the adjoints. For this reason, checkpointing approaches in many variants have become popular. This paper analyzes an extension of the so-called binomial approach to also cover possible failures of the computing system. Such a precaution is of special interest for massively parallel simulations and adjoint calculations where the mean time between failures of the large-scale computing system is smaller than the time needed to complete the calculation of the adjoint information. We describe the extensions of standard checkpointing approaches required for such resilience, provide a corresponding implementation, and discuss first numerical results.
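The binomial approach mentioned above rests on Griewank's classic result that with s checkpoints and at most r repeated forward sweeps one can reverse beta(s, r) = C(s + r, s) time steps. A small sketch of that bound (the failure-resilient extension analyzed in the paper modifies the schedule and is not reproduced here):

```python
from math import comb

def max_steps(snaps: int, reps: int) -> int:
    """Maximal number of forward steps whose adjoint can be computed with
    `snaps` checkpoints and at most `reps` forward sweeps (binomial schedule)."""
    return comb(snaps + reps, snaps)

# 10 checkpoints and 10 sweeps already cover 184756 time steps.
print(max_steps(10, 10))
```

The combinatorial growth of this bound is what makes binomial checkpointing so memory-efficient in practice.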
Diwan, Sadhna; Jonnalagadda, Satya S; Balaswamy, Shantha
2004-10-01
Using the life stress model of psychological well-being, in this study we examined risks and resources predicting the occurrence of both positive and negative affect among older Asian Indian immigrants who experienced stressful life events. We collected data through a telephone survey of 226 respondents (aged 50 years and older) in the Southeastern United States. We used hierarchical, negative binomial regression analyses to examine correlates of positive and negative affect. Different coping resources influenced positive and negative affect when stressful life events were controlled for. Being female was a common risk factor for poorer positive and increased negative affect. Satisfaction with friendships and a cultural or ethnic identity that is either bicultural or more American were predictive of greater positive affect. Greater religiosity and increased mastery were resources predicting less negative affect. Cognitive and structural interventions that increase opportunities for social integration, increasing mastery, and addressing spiritual concerns are discussed as ways of coping with stress to improve the well-being of individuals in this immigrant community.
NASA Astrophysics Data System (ADS)
Rajakaruna, Harshana; VandenByllaardt, Julie; Kydd, Jocelyn; Bailey, Sarah
2018-03-01
The International Maritime Organization (IMO) has set limits on allowable plankton concentrations in ballast water discharge to minimize aquatic invasions globally. Previous guidance on ballast water sampling and compliance decision thresholds was based on the assumption that probability distributions of plankton are Poisson when spatially homogeneous, or negative binomial when heterogeneous. We propose a hierarchical probability model, which incorporates distributions at the level of particles (i.e., discrete individuals plus colonies per unit volume) and within particles (i.e., individuals per particle), to estimate the average plankton concentration in ballast water. We examined the performance of the models using data for plankton in the size class ≥ 10 μm and < 50 μm, collected from five different depths of a ballast tank of a commercial ship in three independent surveys. We show that the data fit the negative binomial and hierarchical probability models equally well, with both models performing better than the Poisson model at the scale of our sampling. The hierarchical probability model, which accounts for both the individuals and the colonies in a sample, reduces the uncertainty of the concentration estimate and improves the power to reject a ship's compliance when the ship does not truly comply with the standard. We show examples of how to test ballast water compliance using the above models.
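The hierarchical structure described above can be sketched as a compound count model: particles per unit volume follow a negative binomial, and each particle carries one or more individuals. The parameter values below are purely illustrative, not estimates from the surveys:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_totals(n_samples, particle_mean=5.0, particle_size=1.5, extra_mean=0.3):
    """Total individuals per sample volume: particles ~ NB(size, mu);
    each particle holds 1 + Poisson(extra_mean) individuals (colonies)."""
    p = particle_size / (particle_size + particle_mean)  # scipy/numpy (n, p) form
    particles = rng.negative_binomial(particle_size, p, size=n_samples)
    return np.array([k + rng.poisson(extra_mean, k).sum() for k in particles])

totals = sample_totals(20_000)
print(totals.mean())  # close to particle_mean * (1 + extra_mean) = 6.5
```

Because colonies inflate counts beyond the particle-level distribution, fitting a plain Poisson to `totals` would understate the variance, which mirrors the paper's argument for the hierarchical model.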
Sigma models with negative curvature
Alonso, Rodrigo; Jenkins, Elizabeth E.; Manohar, Aneesh V.
2016-03-16
Here, we construct Higgs Effective Field Theory (HEFT) based on the scalar manifold Hn, which is a hyperbolic space of constant negative curvature. The Lagrangian has a non-compact O(n, 1) global symmetry group, but it gives a unitary theory as long as only a compact subgroup of the global symmetry is gauged. Whether the HEFT manifold has positive or negative curvature can be tested by measuring the S-parameter, and the cross sections for longitudinal gauge boson and Higgs boson scattering, since the curvature (including its sign) determines deviations from Standard Model values.
Distribution pattern of public transport passenger in Yogyakarta, Indonesia
NASA Astrophysics Data System (ADS)
Narendra, Alfa; Malkhamah, Siti; Sopha, Bertha Maya
2018-03-01
The arrival and departure distribution pattern of Trans Jogja bus passengers is one of the fundamental models for simulation. The purpose of this paper is to build models of passenger flows. This research used passenger data from January to May 2014; no policy changing the operating system has since altered the nature of this pattern, and the roads, buses, land uses, schedule, and people are relatively unchanged. The data were categorized by direction, day, and location, and each category was fitted to several well-known discrete distributions, which were compared by their AIC and BIC values. The negative binomial distribution had the smallest AIC and BIC values. Probability mass function (PMF) plots of the categorical negative binomial models were compared to derive a generic model, whose accepted parameter values are a size of 0.7064 and a mu of 1.4504. The minimum and maximum passenger counts of the distribution are 0 and 41.
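Using the generic parameter values quoted in the abstract (0.7064 and mu of 1.4504, read here as a size/mu pair), the model can be reproduced with scipy's negative binomial, which uses an (n, p) parametrization; converting from (size, mu) is one line. A hedged sketch:

```python
from scipy.stats import nbinom

size, mu = 0.7064, 1.4504          # values reported in the abstract
p = size / (size + mu)             # scipy's (n, p) from the (size, mu) form

dist = nbinom(size, p)
print(dist.mean())                 # recovers mu = 1.4504
print(dist.pmf(0))                 # modelled probability of zero passengers
```

This small size parameter implies heavy overdispersion (variance mu + mu^2/size, far above the mean), consistent with the rejection of simpler discrete distributions.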
Hosseinpour, Mehdi; Yahaya, Ahmad Shukri; Sadullah, Ahmad Farhan
2014-01-01
Head-on crashes are among the most severe collision types and are of great concern to road safety authorities, which justifies greater effort to reduce both their frequency and severity. To this end, it is necessary first to identify the factors associated with crash occurrence, which can be done by developing crash prediction models that relate crash outcomes to a set of contributing factors. This study identifies the factors affecting both the frequency and severity of head-on crashes that occurred on 448 segments of five federal roads in Malaysia. Data on road characteristics and crash history were collected on the study segments during a 4-year period between 2007 and 2010. The frequency of head-on crashes was fitted by developing and comparing seven count-data models: Poisson, standard negative binomial (NB), random-effect negative binomial, hurdle Poisson, hurdle negative binomial, zero-inflated Poisson, and zero-inflated negative binomial. To model crash severity given that a head-on crash had occurred, a random-effect generalized ordered probit model (REGOPM) was used. With respect to crash frequency, the random-effect negative binomial (RENB) model outperformed the other models according to goodness-of-fit measures. Based on its results, the variables horizontal curvature, terrain type, heavy-vehicle traffic, and access points were positively related to the frequency of head-on crashes, while posted speed limit and shoulder width decreased crash frequency. With regard to crash severity, the REGOPM results showed that horizontal curvature, paved shoulder width, terrain type, and side friction were associated with more severe crashes, whereas land use, access points, and presence of a median reduced the probability of severe crashes. Based on the results of this study, some potential countermeasures are proposed to minimize the risk of head-on crashes.
Zipkin, Elise F.; Leirness, Jeffery B.; Kinlan, Brian P.; O'Connell, Allan F.; Silverman, Emily D.
2014-01-01
Determining appropriate statistical distributions for modeling animal count data is important for accurate estimation of abundance, distribution, and trends. In the case of sea ducks along the U.S. Atlantic coast, managers want to estimate local and regional abundance to detect and track population declines, to define areas of high and low use, and to predict the impact of future habitat change on populations. In this paper, we used a modified marked point process to model survey data that recorded flock sizes of Common eiders, Long-tailed ducks, and Black, Surf, and White-winged scoters. The data come from an experimental aerial survey, conducted by the United States Fish & Wildlife Service (USFWS) Division of Migratory Bird Management, during which east-west transects were flown along the Atlantic Coast from Maine to Florida during the winters of 2009–2011. To model the number of flocks per transect (the points), we compared the fit of four statistical distributions (zero-inflated Poisson, zero-inflated geometric, zero-inflated negative binomial and negative binomial) to data on the number of species-specific sea duck flocks that were recorded for each transect flown. To model the flock sizes (the marks), we compared the fit of flock size data for each species to seven statistical distributions: positive Poisson, positive negative binomial, positive geometric, logarithmic, discretized lognormal, zeta and Yule–Simon. Akaike’s Information Criterion and Vuong’s closeness tests indicated that the negative binomial and discretized lognormal were the best distributions for all species for the points and marks, respectively. These findings have important implications for estimating sea duck abundances as the discretized lognormal is a more skewed distribution than the Poisson and negative binomial, which are frequently used to model avian counts; the lognormal is also less heavy-tailed than the power law distributions (e.g., zeta and Yule–Simon), which are
Chen, Connie; Gribble, Matthew O; Bartroff, Jay; Bay, Steven M; Goldstein, Larry
2017-05-01
Section 303(d) of the United States' Clean Water Act stipulates that states must identify impaired water bodies, for which total maximum daily loads (TMDLs) of pollution inputs are then developed. Decision-making procedures about how to list, or delist, water bodies as impaired per Clean Water Act 303(d) differ across states. In states such as California, whether or not a particular monitoring sample suggests that water quality is impaired can be regarded as a binary outcome variable, and California's current regulatory framework invokes a version of the exact binomial test to consolidate evidence across samples and assess whether the overall water body complies with the Clean Water Act. Here, we contrast the performance of California's exact binomial test with one potential alternative, the Sequential Probability Ratio Test (SPRT). The SPRT uses a sequential testing framework, testing samples as they become available and evaluating evidence as it emerges, rather than measuring all the samples and calculating a test statistic at the end of the data collection process. Through simulations and theoretical derivations, we demonstrate that the SPRT on average requires fewer samples while achieving Type I and Type II error rates comparable to those of the current fixed-sample binomial test. Policymakers might consider efficient alternatives such as the SPRT to the current procedure. Copyright © 2017 Elsevier Ltd. All rights reserved.
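For concreteness, Wald's SPRT on a stream of binary impairment indicators can be sketched as follows; `p0`, `p1`, and the error rates below are illustrative placeholders, not California's regulatory values:

```python
import math

def sprt(samples, p0, p1, alpha=0.05, beta=0.20):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on a stream of 0/1 samples.

    Returns ('accept H0' | 'accept H1' | 'continue', n_samples_used).
    Thresholds use Wald's approximations A = (1-beta)/alpha, B = beta/(1-alpha).
    """
    upper = math.log((1 - beta) / alpha)   # cross above -> accept H1 (impaired)
    lower = math.log(beta / (1 - alpha))   # cross below -> accept H0 (compliant)
    llr = 0.0
    for i, x in enumerate(samples, start=1):
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "accept H1", i
        if llr <= lower:
            return "accept H0", i
    return "continue", len(samples)

print(sprt([1, 1, 1, 1, 1], p0=0.1, p1=0.5))
```

The sequential stopping rule is exactly what gives the SPRT its sample-size advantage over the fixed-sample binomial test: strong early evidence ends data collection immediately.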
Bayesian network model of crowd emotion and negative behavior
NASA Astrophysics Data System (ADS)
Ramli, Nurulhuda; Ghani, Noraida Abdul; Hatta, Zulkarnain Ahmad; Hashim, Intan Hashimah Mohd; Sulong, Jasni; Mahudin, Nor Diana Mohd; Rahman, Shukran Abd; Saad, Zarina Mat
2014-12-01
The effects of overcrowding have become a major concern for event organizers, in part because overcrowding can increase the occurrence of serious incidents during events. As one of the largest Muslim religious gatherings, attended by pilgrims from all over the world, the Hajj has become extremely overcrowded, with many incidents being reported. The purpose of this study is to analyze the nature of human emotion and negative behavior resulting from overcrowding during Hajj events, using data gathered in the 2013 Malaysian Hajj Experience Survey. The sample comprised 147 Malaysian pilgrims (70 males and 77 females). Utilizing a probabilistic model called a Bayesian network, this paper models the dependence structure between different emotions and negative behaviors of pilgrims in the crowd. The model included the emotion variables negative, negative comfortable, positive, positive comfortable, and positive spiritual, and the negative-behavior variables aggressive acts and hazardous acts. The study demonstrated that the negative, negative comfortable, positive spiritual, and positive emotions have a direct influence on aggressive behavior, whereas the negative comfortable, positive spiritual, and positive emotions have a direct influence on hazardous acts. The sensitivity analysis showed that low levels of the negative and negative comfortable emotions lead to lower levels of aggressive and hazardous behavior. The findings can be further developed to identify the exact causes and risk factors of crowd-related incidents, helping to prevent crowd disasters during mass gathering events.
NASA Astrophysics Data System (ADS)
Barle, Stanko
In this dissertation, two dynamical systems with many degrees of freedom are analyzed. One is the system of highly correlated electrons in the two-impurity Kondo problem; the other deals with building a realistic model of the diffusion underlying financial markets. The simplest mean-field theory capable of mimicking the non-Fermi-liquid behavior of the critical point in the two-impurity Kondo problem is presented. In this approach, Landau's adiabaticity assumption, of a one-to-one correspondence between the low-energy excitations of the interacting and noninteracting systems, is violated through the presence of decoupled local degrees of freedom. These do not couple directly to external fields but appear indirectly in the physical properties, leading, for example, to the log(T, omega) behavior of the staggered magnetic susceptibility. Also, as observed previously, the correlation function <S_1 · S_2> = -1/4 is a consequence of the equal weights of the singlet and triplet impurity configurations at the critical point. In the second problem, a numerical model is developed to describe the diffusion of prices in the market. Implied binomial (or multinomial) trees are constructed to enable practical pricing of derivative securities consistent with the existing market. The method developed here is capable of accounting for both the strike-price and term structure of the implied volatility. It includes the correct treatment of interest rates and dividends, which proves robust even if these quantities are unusually large. The method is explained both as a set of individual innovations and, from a different perspective, as a consequence of a single plausible transformation from the tree of spot prices to the tree of futures prices.
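The implied trees described above build on the standard Cox-Ross-Rubinstein binomial tree; only the plain constant-volatility version is sketched below (the dissertation's method instead calibrates node probabilities to market-implied volatilities, which is not reproduced here):

```python
import math

def crr_call_price(S, K, r, sigma, T, steps=200):
    """European call via a Cox-Ross-Rubinstein binomial tree (constant volatility)."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))          # up factor
    d = 1.0 / u                                  # down factor
    q = (math.exp(r * dt) - d) / (u - d)         # risk-neutral up probability
    disc = math.exp(-r * dt)
    # payoffs at maturity, indexed by the number of up moves
    values = [max(S * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    # backward induction to the root
    for _ in range(steps):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

print(crr_call_price(S=100, K=100, r=0.05, sigma=0.2, T=1.0))
```

With these inputs the tree price converges to the Black-Scholes value of about 10.45; an implied tree replaces the constant u, d, q with node-dependent values fitted to observed option prices.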
A statistical model to estimate the impact of a hepatitis A vaccination programme.
Oviedo, Manuel; Pilar Muñoz, M; Domínguez, Angela; Borras, Eva; Carmona, Gloria
2008-11-11
A program of routine hepatitis A+B vaccination of preadolescents was introduced in 1998 in Catalonia, a region in the northeast of Spain. The objective of this study was to quantify the reduction in the incidence of hepatitis A, in order to differentiate the natural decline in incidence from that produced by the vaccination programme, and to predict the evolution of the disease in forthcoming years. A generalized linear model (GLM) using negative binomial regression was used to estimate the incidence rates of hepatitis A in Catalonia by year, age group, and vaccination. Introduction of the vaccine reduced cases by 5.5 per year (p-value<0.001), but there was a significant interaction between the year of report and vaccination that smoothed this reduction (p-value<0.001). The reduction was not equal across age groups, being greatest in the 12-18 years age group, which fell from a mean rate of 8.15 per 100,000 person-years in the pre-vaccination period (1992-1998) to 1.4 in the vaccination period (1999-2005). The model predicts the evolution accurately for the group of vaccinated subjects. Negative binomial regression is more appropriate than Poisson regression when the observed variance exceeds the observed mean (overdispersed count data); ignoring this overdispersion can make a variable appear to contribute more to the model than it really does.
Briët, Olivier J T; Amerasinghe, Priyanie H; Vounatsou, Penelope
2013-01-01
With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions' impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during "consolidation" and "pre-elimination" phases. Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non Gaussian, non stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low.
Zhang, Xin; Liu, Pan; Chen, Yuguang; Bai, Lu; Wang, Wei
2014-01-01
The primary objective of this study was to identify whether the frequency of traffic conflicts at signalized intersections can be modeled. The opposing left-turn conflicts were selected for the development of conflict predictive models. Using data collected at 30 approaches at 20 signalized intersections, the underlying distributions of the conflicts under different traffic conditions were examined. Different conflict-predictive models were developed to relate the frequency of opposing left-turn conflicts to various explanatory variables. The models considered include a linear regression model, a negative binomial model, and separate models developed for four traffic scenarios, and their prediction performance was compared. The frequency of traffic conflicts follows a negative binomial distribution, and the linear regression model is not appropriate for the conflict frequency data. In addition, drivers behaved differently under different traffic conditions; accordingly, the effects of conflicting traffic volumes on conflict frequency vary across traffic conditions. The occurrence of traffic conflicts at signalized intersections can be modeled using generalized linear regression models, and the use of conflict predictive models has the potential to expand the uses of surrogate safety measures in safety estimation and evaluation.
Tang, Wan; Lu, Naiji; Chen, Tian; Wang, Wenjuan; Gunzler, Douglas David; Han, Yu; Tu, Xin M
2015-10-30
Zero-inflated Poisson (ZIP) and negative binomial (ZINB) models are widely used to model zero-inflated count responses. These models extend the Poisson and negative binomial (NB) to address excessive zeros in the count response. By adding a degenerate distribution centered at 0 and interpreting it as describing a non-risk group in the population, the ZIP (ZINB) models a two-component population mixture. As in applications of Poisson and NB, the key difference between ZIP and ZINB is the allowance for overdispersion by the ZINB in its NB component in modeling the count response for the at-risk group. Overdispersion arising in practice too often does not follow the NB, and applications of ZINB to such data yield invalid inference. If sources of overdispersion are known, other parametric models may be used to directly model the overdispersion. Such models too are subject to assumed distributions. Further, this approach may not be applicable if information about the sources of overdispersion is unavailable. In this paper, we propose a distribution-free alternative and compare its performance with these popular parametric models as well as a moment-based approach proposed by Yu et al. [Statistics in Medicine 2013; 32: 2390-2405]. Like the generalized estimating equations, the proposed approach requires no elaborate distribution assumptions. Compared with the approach of Yu et al., it is more robust to overdispersed zero-inflated responses. We illustrate our approach with both simulated and real study data. Copyright © 2015 John Wiley & Sons, Ltd.
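As a concrete reference point for the parametric side of this comparison, a ZIP model can be fit by maximizing its log-likelihood directly; the sketch below uses simulated data with illustrative true values, not the study's data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_negloglik(params, counts):
    """Negative log-likelihood of the zero-inflated Poisson (minimal sketch)."""
    pi = 1 / (1 + np.exp(-params[0]))   # logit transform keeps pi in (0, 1)
    lam = np.exp(params[1])             # log transform keeps lambda > 0
    counts = np.asarray(counts)
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))          # structural + chance zeros
    ll_pos = np.log(1 - pi) - lam + counts * np.log(lam) - gammaln(counts + 1)
    return -np.sum(np.where(counts == 0, ll_zero, ll_pos))

rng = np.random.default_rng(1)
true_pi, true_lam, n = 0.4, 3.0, 5000
zeros = rng.random(n) < true_pi                     # non-risk group
counts = np.where(zeros, 0, rng.poisson(true_lam, n))

fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(counts,))
pi_hat, lam_hat = 1 / (1 + np.exp(-fit.x[0])), np.exp(fit.x[1])
print(pi_hat, lam_hat)   # close to 0.4 and 3.0
```

Swapping the Poisson term for a negative binomial one gives the ZINB likelihood; the distribution-free alternative proposed in the paper avoids specifying this component altogether.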
The magnetisation distribution of the Ising model - a new approach
NASA Astrophysics Data System (ADS)
Hakan Lundow, Per; Rosengren, Anders
2010-03-01
A completely new approach to the Ising model in 1 to 5 dimensions is developed. We employ a generalisation of the binomial coefficients to describe the magnetisation distributions of the Ising model. For the complete graph this distribution is exact. For simple lattices of dimensions d=1 and d=5 the magnetisation distributions are remarkably well fitted by the generalised binomial distributions. For d=4 we are only slightly less successful, while for d=2,3 we see some deviations (with exceptions!) between the generalised binomial and the Ising distributions. The results speak in favour of the generalised binomial distributions' correctness regarding their general behaviour in comparison to the Ising model. A theoretical analysis of the distributions' moments also lends support to their being correct asymptotically, including the logarithmic corrections in d=4. The full extent to which they correctly model the Ising distribution, and for which graph families, is not yet settled.
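For orientation, the ordinary binomial case that the paper generalises is the non-interacting (J = 0) limit, where the magnetisation of N free spins has P(M) = C(N, (N+M)/2) / 2^N. A minimal sketch:

```python
from math import comb

def free_spin_pmf(N):
    """P(M) for N independent +/-1 spins: ordinary binomial coefficients.
    This is only the trivial non-interacting limit that the paper generalises."""
    return {2 * k - N: comb(N, k) / 2 ** N for k in range(N + 1)}

print(free_spin_pmf(4))
```

Interactions skew and broaden this distribution; the generalised binomial coefficients are designed to capture exactly those deviations.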
Jang, Seon-Kyeong; Choi, Hye-Im; Park, Soohyun; Jaekal, Eunju; Lee, Ga-Young; Cho, Young Il; Choi, Kee-Hong
2016-01-01
Acknowledging separable factors underlying negative symptoms may lead to better understanding and treatment of negative symptoms in individuals with schizophrenia. The current study aimed to test whether the negative symptoms factor (NSF) of the Positive and Negative Syndrome Scale (PANSS) would be better represented by expressive and experiential deficit factors, rather than by a single factor model, using confirmatory factor analysis (CFA). Two hundred and twenty individuals with schizophrenia spectrum disorders completed the PANSS; subsamples additionally completed the Brief Negative Symptom Scale (BNSS) and the Motivation and Pleasure Scale-Self-Report (MAP-SR). CFA results indicated that the two-factor model fit the data better than the one-factor model; however, latent variables were closely correlated. The two-factor model's fit was significantly improved by accounting for correlated residuals between N2 (emotional withdrawal) and N6 (lack of spontaneity and flow of conversation), and between N4 (passive social withdrawal) and G16 (active social avoidance), possibly reflecting common method variance. The two NSF factors exhibited differential patterns of correlation with subdomains of the BNSS and MAP-SR. These results suggest that the PANSS NSF would be better represented by a two-factor model than by a single-factor one, and support the two-factor model's adequate criterion-related validity. Common method variance among several items may be a potential source of measurement error under a two-factor model of the PANSS NSF.
Phase transition and information cascade in a voting model
NASA Astrophysics Data System (ADS)
Hisakado, M.; Mori, S.
2010-08-01
In this paper, we introduce a voting model that is similar to a Keynesian beauty contest and analyse it from a mathematical point of view. There are two types of voters—copycat and independent—and two candidates. Our voting model is a binomial distribution (independent voters) doped in a beta binomial distribution (copycat voters). We find that the phase transition in this system is at the upper limit of t, where t is the time (or the number of the votes). Our model contains three phases. If copycats constitute a majority or even half of the total voters, the voting rate converges more slowly than it would in a binomial distribution. If independents constitute the majority of voters, the voting rate converges at the same rate as it would in a binomial distribution. We also study why it is difficult to estimate the conclusion of a Keynesian beauty contest when there is an information cascade.
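The copycat dynamics described above is a Pólya urn: a copycat effectively copies a uniformly chosen earlier vote, so in the all-copycat limit the final vote share does not converge to a constant but remains random (Beta-distributed in the limit). A small simulation sketch with illustrative parameters, not the paper's exact model:

```python
import random

def copycat_election(n_votes, p_independent_share=0.0, q=0.5, seed=None):
    """One simulated election: independents vote 'yes' w.p. q; copycats copy
    the current vote share (a Polya urn). Starts from one yes and one no."""
    rng = random.Random(seed)
    yes, total = 1, 2
    for _ in range(n_votes):
        if rng.random() < p_independent_share:
            vote = rng.random() < q             # independent voter
        else:
            vote = rng.random() < yes / total   # copycat follows the crowd
        yes += vote
        total += 1
    return yes / total

# All-copycat elections do not settle near 1/2: the final share is itself random.
shares = [copycat_election(2000, p_independent_share=0.0, seed=s) for s in range(200)]
print(min(shares), max(shares))
```

The wide spread of final shares across runs is the slow, non-self-averaging convergence the abstract contrasts with the plain binomial (independent-voter) case.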
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2014-01-01
Unknown risks are introduced into failure critical systems when probability of detection (POD) capabilities are accepted without a complete understanding of the statistical method applied and the interpretation of the statistical results. The presence of this risk in the nondestructive evaluation (NDE) community is revealed in common statements about POD. These statements are often interpreted in a variety of ways and therefore, the very existence of the statements identifies the need for a more comprehensive understanding of POD methodologies. Statistical methodologies have data requirements to be met, procedures to be followed, and requirements for validation or demonstration of adequacy of the POD estimates. Risks are further enhanced due to the wide range of statistical methodologies used for determining the POD capability. Receiver/Relative Operating Characteristics (ROC) Display, simple binomial, logistic regression, and Bayes' rule POD methodologies are widely used in determining POD capability. This work focuses on Hit-Miss data to reveal the framework of the interrelationships between Receiver/Relative Operating Characteristics Display, simple binomial, logistic regression, and Bayes' Rule methodologies as they are applied to POD. Knowledge of these interrelationships leads to an intuitive and global understanding of the statistical data, procedural and validation requirements for establishing credible POD estimates.
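For the simple binomial POD methodology mentioned above, the standard computation is a one-sided Clopper-Pearson lower confidence bound on POD from hit/miss counts; the familiar "29 of 29" demonstration recovers at least 90% POD with 95% confidence. A sketch:

```python
from scipy.stats import beta

def pod_lower_bound(hits, trials, confidence=0.95):
    """One-sided Clopper-Pearson lower confidence bound on POD from hit/miss data.
    When hits == trials this reduces to the closed form (1-confidence)**(1/trials)."""
    return beta.ppf(1 - confidence, hits, trials - hits + 1)

# 29 hits out of 29 trials: lower 95% bound on POD just above 0.90.
print(pod_lower_bound(29, 29))
```

Logistic-regression and Bayesian POD methods replace this single pooled bound with a flaw-size-dependent POD curve, which is where the interrelationships studied in this work come in.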
Buckland, Steeves; Cole, Nik C; Aguirre-Gutiérrez, Jesús; Gallagher, Laura E; Henshaw, Sion M; Besnard, Aurélien; Tucker, Rachel M; Bachraz, Vishnu; Ruhomaun, Kevin; Harris, Stephen
2014-01-01
The invasion of the giant Madagascar day gecko Phelsuma grandis has increased the threats to the four endemic Mauritian day geckos (Phelsuma spp.) that have survived on mainland Mauritius. We had two main aims: (i) to predict the spatial distribution and overlap of P. grandis and the endemic geckos at a landscape level; and (ii) to investigate the effects of P. grandis on the abundance and risks of extinction of the endemic geckos at a local scale. An ensemble forecasting approach was used to predict the spatial distribution and overlap of P. grandis and the endemic geckos. We used hierarchical binomial mixture models and repeated visual estimate surveys to calculate the abundance of the endemic geckos in sites with and without P. grandis. The predicted range of each species varied from 85 km2 to 376 km2. Sixty percent of the predicted range of P. grandis overlapped with the combined predicted ranges of the four endemic geckos; 15% of the combined predicted ranges of the four endemic geckos overlapped with P. grandis. Levin's niche breadth varied from 0.140 to 0.652 between P. grandis and the four endemic geckos. The abundance of endemic geckos was 89% lower in sites with P. grandis compared to sites without P. grandis, and the endemic geckos had been extirpated at four of ten sites we surveyed with P. grandis. Species Distribution Modelling, together with the breadth metrics, predicted that P. grandis can partly share the equivalent niche with endemic species and survive in a range of environmental conditions. We provide strong evidence that smaller endemic geckos are unlikely to survive in sympatry with P. grandis. This is a cause of concern in both Mauritius and other countries with endemic species of Phelsuma.
Exact tests using two correlated binomial variables in contemporary cancer clinical trials.
Yu, Jihnhee; Kepner, James L; Iyer, Renuka
2009-12-01
New therapy strategies for the treatment of cancer are rapidly emerging because of recent technology advances in genetics and molecular biology. Although newer targeted therapies can improve survival without measurable changes in tumor size, clinical trial conduct has remained nearly unchanged. When potentially efficacious therapies are tested, current clinical trial design and analysis methods may not be suitable for detecting therapeutic effects. We propose an exact method with respect to testing cytostatic cancer treatment using correlated bivariate binomial random variables to simultaneously assess two primary outcomes. The method is easy to implement. It does not increase the sample size over that of the univariate exact test and in most cases reduces the sample size required. Sample size calculations are provided for selected designs.
A review on models for count data with extra zeros
NASA Astrophysics Data System (ADS)
Zamri, Nik Sarah Nik; Zamzuri, Zamira Hasanah
2017-04-01
Zero-inflated models are typically used in modelling count data with excess zeros. The extra zeros may be structural, or they may be random zeros that occur by chance. These types of data are commonly found in various disciplines such as finance, insurance, biomedicine, econometrics, ecology, and the health sciences. As found in the literature, the most popular zero-inflated models are the zero-inflated Poisson and the zero-inflated negative binomial. Recently, more complex models have been developed to account for overdispersion and unobserved heterogeneity, and more extended distributions have also been considered for modelling data with this feature. In this paper, we review the related literature and provide a summary of recent developments in models for count data with extra zeros.
A crash-prediction model for multilane roads.
Caliendo, Ciro; Guida, Maurizio; Parisi, Alessandra
2007-07-01
Considerable research has been carried out in recent years to establish relationships between crashes and traffic flow, geometric infrastructure characteristics and environmental factors for two-lane rural roads. Crash-prediction models focused on multilane rural roads, however, have rarely been investigated. In addition, most research has paid little attention to the safety effects of variables such as stopping sight distance and pavement surface characteristics. Moreover, the statistical approaches have generally included Poisson and Negative Binomial regression models, whilst the Negative Multinomial regression model has been used to a lesser extent. Finally, as far as the authors are aware, prediction models involving all the above-mentioned factors have still not been developed in Italy for multilane roads, such as motorways. Thus, in this paper crash-prediction models for a four-lane median-divided Italian motorway were set up on the basis of accident data observed during a 5-year monitoring period between 1999 and 2003. The Poisson, Negative Binomial and Negative Multinomial regression models, applied separately to tangents and curves, were used to model the frequency of accident occurrence. Model parameters were estimated by the Maximum Likelihood Method, and the Generalized Likelihood Ratio Test was applied to detect the significant variables to be included in the model equation. Goodness-of-fit was measured by means of both the explained fraction of total variation and the explained fraction of systematic variation. The Cumulative Residuals Method was also used to test the adequacy of a regression model throughout the range of each variable. The candidate set of explanatory variables was: length (L), curvature (1/R), annual average daily traffic (AADT), sight distance (SD), side friction coefficient (SFC), longitudinal slope (LS) and the presence of a junction (J). Separate prediction models for total crashes and for fatal and injury crashes
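A minimal sketch of the overdispersion check that motivates choosing the Negative Binomial over the Poisson (on simulated counts with assumed parameters, not the motorway data):

```python
# Simulate overdispersed counts, then compare Poisson and NB fits.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
k_disp, mu = 0.8, 3.0                 # NB dispersion and mean (assumed)
p = k_disp / (k_disp + mu)
y = rng.negative_binomial(k_disp, p, size=500)

lam = y.mean()                        # Poisson MLE is the sample mean
var = y.var()
k_hat = lam**2 / (var - lam)          # NB dispersion by method of moments
p_hat = k_hat / (k_hat + lam)

ll_pois = stats.poisson.logpmf(y, lam).sum()
ll_nb = stats.nbinom.logpmf(y, k_hat, p_hat).sum()
print(var > lam)        # overdispersion present
print(ll_nb > ll_pois)  # NB log-likelihood beats Poisson on these counts
```

In practice the comparison would use full maximum likelihood and a likelihood-ratio or AIC criterion, as in the paper; the moment estimate above is only a shortcut for illustration.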
A comparison of LMC and SDL complexity measures on binomial distributions
NASA Astrophysics Data System (ADS)
Piqueira, José Roberto C.
2016-02-01
The concept of complexity has been widely discussed over the last forty years, with contributions from many areas of human knowledge, including philosophy, linguistics, history, biology, physics and chemistry, and with mathematicians trying to give it a rigorous formulation. In this sense, thermodynamics meets information theory: using the entropy definition, López-Ruiz, Mancini and Calbet proposed a definition of complexity that is referred to as the LMC measure. Shiner, Davison and Landsberg, by slightly changing the LMC definition, proposed the SDL measure, and both LMC and SDL measure complexity satisfactorily for many problems. Here, the SDL and LMC measures are applied to the case of a binomial probability distribution, clarifying how the length of the data set and the success probability of the repeated trials determine how complex the whole set is.
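A small sketch of the two measures on a binomial distribution, assuming the usual normalized Shannon entropy and quadratic disequilibrium for LMC, and the simplest order-times-disorder product for SDL:

```python
# LMC = H * D (entropy times disequilibrium); SDL here uses the simplest
# form H * (1 - H). Normalizations are assumptions for illustration.
import numpy as np
from scipy import stats

def lmc_sdl(n, p):
    probs = stats.binom.pmf(np.arange(n + 1), n, p)
    N = n + 1
    nz = probs[probs > 0]
    H = -(nz * np.log(nz)).sum() / np.log(N)   # normalized Shannon entropy
    D = ((probs - 1.0 / N) ** 2).sum()         # disequilibrium from uniformity
    return H * D, H * (1 - H)                  # LMC, simple SDL

lmc_half, sdl_half = lmc_sdl(20, 0.5)
lmc_sure, sdl_sure = lmc_sdl(20, 0.0)  # degenerate: all mass on zero successes
print(lmc_half > 0 and sdl_half > 0)   # True: intermediate order is complex
print(lmc_sure, sdl_sure)              # both ~0: certainty is not complex
```

Both measures vanish at the extremes (fully ordered or fully random) and peak in between, which is the behavior the paper explores as n and p vary.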
The five-factor model of impulsivity-like traits and emotional lability in aggressive behavior.
Dvorak, Robert D; Pearson, Matthew R; Kuvaas, Nicholas J
2013-01-01
Factors that increase automatic psychological processes may result in impulsive action and, consequently, aggressive behavior. The current cross-sectional study examined the association between the five-factor model of impulsivity-like traits (negative urgency, positive urgency, premeditation, perseverance, and sensation seeking), emotional lability, and physically aggressive behaviors among college students (n = 481) in a negative binomial hurdle model. In the logistic portion of the model, emotional lability was related to a higher likelihood of engaging in aggressive acts in the past 6 months. The association between emotional lability and the likelihood of aggressive behavior was moderated by two impulsivity-like traits: negative urgency and positive urgency. Specifically, emotional lability was related to engaging in aggressive acts among those with high negative urgency, and among those with low positive urgency. In the count portion of the model, emotional lability was uniquely related to the number of aggressive acts in the past 6 months. Our results indicate that emotional lability and facets of impulsivity interactively relate to engagement in aggressive behavior, suggesting that these variables be integrated into models of aggression. © 2013 Wiley Periodicals, Inc.
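A sketch of the count machinery behind a hurdle model of this kind: a binary part decides zero versus nonzero, and positive counts follow a zero-truncated negative binomial (parameters below are illustrative, not the study's estimates):

```python
# Negative binomial hurdle pmf: P(0) comes from the hurdle (logistic) part;
# positive counts are renormalized over k >= 1 (zero-truncated NB).
import numpy as np
from scipy import stats

def hurdle_nb_pmf(k, pi_pos, n, p):
    """pi_pos = P(count > 0); positive counts are zero-truncated NB."""
    k = np.asarray(k)
    nb = stats.nbinom.pmf(k, n, p)
    trunc = nb / (1.0 - stats.nbinom.pmf(0, n, p))  # renormalize over k >= 1
    return np.where(k == 0, 1.0 - pi_pos, pi_pos * trunc)

k = np.arange(0, 200)
pmf = hurdle_nb_pmf(k, pi_pos=0.3, n=1.5, p=0.4)
print(pmf.sum())   # ~1.0
print(pmf[0])      # exactly 1 - pi_pos = 0.7
```

The separation is what allows predictors such as emotional lability to act differently on the likelihood of any aggression (logistic part) and on the number of aggressive acts (count part).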
Relaxed Poisson cure rate models.
Rodrigues, Josemar; Cordeiro, Gauss M; Cancho, Vicente G; Balakrishnan, N
2016-03-01
The purpose of this article is to make the standard promotion cure rate model (Yakovlev and Tsodikov, ) more flexible by assuming that the number of lesions or altered cells after a treatment follows a fractional Poisson distribution (Laskin, ). It is proved that the well-known Mittag-Leffler relaxation function (Berberan-Santos, ) is a simple way to obtain a new cure rate model that is a compromise between the promotion and geometric cure rate models allowing for superdispersion. So, the relaxed cure rate model developed here can be considered as a natural and less restrictive extension of the popular Poisson cure rate model at the cost of an additional parameter, but a competitor to negative-binomial cure rate models (Rodrigues et al., ). Some mathematical properties of a proper relaxed Poisson density are explored. A simulation study and an illustration of the proposed cure rate model from the Bayesian point of view are finally presented. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Addiction Motivation Reformulated: An Affective Processing Model of Negative Reinforcement
ERIC Educational Resources Information Center
Baker, Timothy B.; Piper, Megan E.; McCarthy, Danielle E.; Majeskie, Matthew R.; Fiore, Michael C.
2004-01-01
This article offers a reformulation of the negative reinforcement model of drug addiction and proposes that the escape and avoidance of negative affect is the prepotent motive for addictive drug use. The authors posit that negative affect is the motivational core of the withdrawal syndrome and argue that, through repeated cycles of drug use and…
Garrido-Balsells, José María; Jurado-Navas, Antonio; Paris, José Francisco; Castillo-Vazquez, Miguel; Puerta-Notario, Antonio
2015-03-09
In this paper, a novel and deeper physical interpretation of the recently published Málaga or ℳ statistical distribution is provided. This distribution, which has gained wide acceptance in the scientific community, models the optical irradiance scintillation induced by atmospheric turbulence. Here, the analytical expressions previously published are modified in order to express them by a mixture of the known Generalized-K and discrete Binomial and Negative Binomial distributions. In particular, the probability density function (pdf) of the ℳ model is now obtained as a linear combination of Generalized-K pdfs, in which the coefficients depend directly on the parameters of the ℳ distribution. In this way, the Málaga model can be physically interpreted as a superposition of different optical sub-channels, each of them described by the corresponding Generalized-K fading model and weighted by the ℳ-dependent coefficients. The expressions proposed here are simpler than the equations of the original ℳ model and are validated by means of numerical simulations, by generating ℳ-distributed random sequences and their associated histogram. This novel interpretation of the Málaga statistical distribution provides a valuable tool for analyzing the performance of atmospheric optical channels under every turbulence condition.
Categorical Data Analysis Using a Skewed Weibull Regression Model
NASA Astrophysics Data System (ADS)
Caron, Renault; Sinha, Debajyoti; Dey, Dipak; Polpo, Adriano
2018-03-01
In this paper, we present a Weibull link (skewed) model for categorical response data arising from binomial as well as multinomial models. We show that, for such types of categorical data, the most commonly used models (logit, probit and complementary log-log) can be obtained as limiting cases. We further compare the proposed model with some other asymmetrical models. The Bayesian as well as frequentist estimation procedures for binomial and multinomial data responses are presented in detail. Two data sets are analyzed to show the efficiency of the proposed model.
Modeling abundance using multinomial N-mixture models
Royle, Andy
2016-01-01
Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 to allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols such as multiple observer sampling, removal sampling, and capture-recapture produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as model Mb, Mh and other classes of models that are only possible to describe within the multinomial N-mixture framework.
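For a removal-sampling protocol, for example, the multinomial cell probabilities follow directly from a per-pass detection probability; a minimal sketch with assumed design values:

```python
# Removal sampling with J passes and per-pass detection probability p:
# an individual is first caught on pass j with probability p * (1-p)^(j-1),
# and never caught with probability (1-p)^J. These cells define the
# multinomial observation model used in multinomial N-mixture models.
import numpy as np

def removal_cell_probs(p, J):
    j = np.arange(1, J + 1)
    pi = p * (1 - p) ** (j - 1)
    return np.append(pi, (1 - p) ** J)   # last cell: never captured

pi = removal_cell_probs(p=0.4, J=3)
print(pi)          # [0.4, 0.24, 0.144, 0.216]
print(pi.sum())    # 1.0
```

Because the cells carry direct information about p, estimates of abundance are typically more precise than those from simple binomial counts, as the chapter notes.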
Zhang, Changsheng; Cai, Hongmin; Huang, Jingying; Song, Yan
2016-09-17
Variations in DNA copy number make an important contribution to the development of several diseases, including autism, schizophrenia and cancer. Single-cell sequencing technology allows the dissection of genomic heterogeneity at the single-cell level, thereby providing important evolutionary information about cancer cells. In contrast to traditional bulk sequencing, single-cell sequencing requires the amplification of the whole genome of a single cell to accumulate enough material for sequencing. However, the amplification process inevitably introduces amplification bias, resulting in an over-dispersed portion of the sequencing data. A recent study has shown that the over-dispersed portion of single-cell sequencing data can be well modelled by negative binomial distributions. We developed a read-depth based method, nbCNV, to detect copy number variants (CNVs). The nbCNV method uses two constraints, sparsity and smoothness, to fit the CNV patterns under the assumption that the read signals follow a negative binomial distribution. The problem of CNV detection was formulated as a quadratic optimization problem, and was solved by an efficient numerical solution based on the classical alternating direction minimization method. Extensive experiments comparing nbCNV with existing benchmark models were conducted on both simulated data and empirical single-cell sequencing data. The results of those experiments demonstrate that nbCNV achieves superior performance and high robustness in the detection of CNVs in single-cell sequencing data.
Bisseleua, D H B; Vidal, Stefan
2011-02-01
The spatio-temporal distribution of Sahlbergella singularis Haglund, a major pest of cacao trees (Theobroma cacao) (Malvaceae), was studied for 2 yr in traditional cacao forest gardens in the humid forest area of southern Cameroon. The first objective was to analyze the dispersion of this insect on cacao trees. The second objective was to develop sampling plans based on fixed levels of precision for estimating S. singularis populations. The following models were used to analyze the data: Taylor's power law, Iwao's patchiness regression, the Nachman model, and the negative binomial distribution. Our results document that Taylor's power law was a better fit for the data than the Iwao and Nachman models. Taylor's b and Iwao's β were both significantly >1, indicating that S. singularis aggregated on specific trees. This result was further supported by the calculated common k of 1.75444. Iwao's α was significantly <0, indicating that the basic distribution component of S. singularis was the individual insect. Comparison of negative binomial (NBD) and Nachman models indicated that the NBD model was appropriate for studying S. singularis distribution. Optimal sample sizes for fixed precision levels of 0.10, 0.15, and 0.25 were estimated with Taylor's regression coefficients. Required sample sizes increased dramatically with increasing levels of precision. This is the first study on S. singularis dispersion in cacao plantations. Sampling plans, presented here, should be a tool for research on population dynamics and pest management decisions of mirid bugs on cacao. © 2011 Entomological Society of America
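A sketch of the two computational steps described above: fitting Taylor's power law by log-log regression and converting the coefficients into a fixed-precision sample size (synthetic mean-variance pairs, not the cacao data):

```python
# Taylor's power law: s^2 = a * m^b, fitted as log s^2 = log a + b log m.
# Fixed-precision sample size (one common form): n = (a / D^2) * m^(b-2),
# where D is the target ratio of standard error to mean.
import numpy as np

# Synthetic (mean, variance) pairs obeying s^2 = 2 * m^1.5 exactly
m = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
s2 = 2.0 * m ** 1.5

b, log_a = np.polyfit(np.log(m), np.log(s2), 1)
a = np.exp(log_a)
print(round(a, 6), round(b, 6))   # recovers a = 2.0, b = 1.5

def sample_size(mean, a, b, D=0.10):
    """Quadrats needed so that SE/mean = D at the given density."""
    return a * mean ** (b - 2) / D**2

print(sample_size(1.0, a, b))     # 200 quadrats at m = 1, D = 0.10
```

The steep growth of n as D shrinks is exactly the "sample sizes increased dramatically with increasing levels of precision" observed in the study.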
Evaluation of surrogate measures for pedestrian safety in various road and roadside environments.
DOT National Transportation Integrated Search
2012-10-01
This report presents an investigation of pedestrian conflicts and crash count models to learn which exposure measures and roadway or roadside characteristics significantly influence pedestrian safety at road crossings. Negative binomial models were e...
Model of a Negatively Curved Two-Dimensional Space.
ERIC Educational Resources Information Center
Eckroth, Charles A.
1995-01-01
Describes the construction of models of two-dimensional surfaces with negative curvature that are used to illustrate differences in the triangle sum rule for the various Big Bang Theories of the universe. (JRH)
Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.
Rochon, K; Scoles, G A; Lysyk, T J
2012-03-01
A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles) based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance < 0.04 ticks per 10 m2 were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
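A minimal sketch of the moment estimate of the negative binomial k and of an Iwao-type fixed-precision sample size; the parameter values and the particular Iwao formula (with t = 1) are illustrative assumptions, not the tick-survey estimates:

```python
# NB variance is var = m + m^2 / k, so a mean-variance pair gives
# k = m^2 / (var - m). Iwao's plan: n = ((alpha + 1)/m + beta - 1) / D^2.
def k_moment(mean, var):
    """Moment estimate of NB k (requires var > mean)."""
    return mean**2 / (var - mean)

m, k = 2.0, 0.375
var = m + m**2 / k
print(k_moment(m, var))   # recovers k = 0.375

def iwao_n(mean, alpha, beta, D=0.15):
    return ((alpha + 1.0) / mean + beta - 1.0) / D**2

print(round(iwao_n(2.0, alpha=0.1, beta=1.4), 2))  # ~42.22 quadrats
```

Whether an Iwao- or Taylor-based plan over- or under-estimates the required effort depends on how well the fitted mean-variance relationship tracks the data, which is the comparison the paper reports.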
Void probability as a function of the void's shape and scale-invariant models
NASA Technical Reports Server (NTRS)
Elizalde, E.; Gaztanaga, E.
1991-01-01
The dependence of counts in cells on the shape of the cell for the large-scale galaxy distribution is studied. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability that a cell is occupied is higher for some elongated cells. A phenomenological scale-invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
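Under a negative binomial counts-in-cells model, the void probability (the chance a cell is empty) has a closed form, P0 = (1 + N̄/k)^(−k), which recovers the Poisson value exp(−N̄) as k → ∞; a quick sketch:

```python
# Void probability for negative binomial counts in cells.
# N̄ is the mean count per cell; k controls clustering (small k = clustered).
import math

def void_prob_nb(nbar, k):
    return (1.0 + nbar / k) ** (-k)

nbar = 2.0
print(void_prob_nb(nbar, k=1.0))                            # 1/3 for k = 1
print(abs(void_prob_nb(nbar, k=1e6) - math.exp(-nbar)))     # ~0: Poisson limit
```

Clustering (small k) raises the void probability above the Poisson value at the same mean density, which is why void statistics discriminate between clustering models.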
The binomial work-health in the transit of Curitiba city.
Tokars, Eunice; Moro, Antonio Renato Pereira; Cruz, Roberto Moraes
2012-01-01
Working in the traffic of big cities involves complex interaction with an environment that is often unsafe and unhealthy, upsetting the balance of the work-health binomial. The aim of this paper was to analyze the relationship between work and health for taxi drivers in Curitiba, Brazil. This cross-sectional observational study of 206 individuals used a questionnaire on the organization's profile and perception of the environment, together with direct observation of work. It was found that the majority are male, aged between 26 and 49 years, and have a high school degree. They are sedentary and typically work shifts of 8 to 12 hours. They consider the profession stressful, report low back pain, and are concerned about safety and accidents. Forty percent are smokers and consume alcoholic drink, and 65% do not have or do not use comfort devices. Risk factors present in the daily routine impose physical, cognitive and organizational constraints that can affect drivers' performance. It is concluded that taxi drivers must change their unhealthy lifestyle, and that more efficient management by government authorities is required for this work to be healthy and safe for all involved.
Temporary disaster debris management site identification using binomial cluster analysis and GIS.
Grzeda, Stanislaw; Mazzuchi, Thomas A; Sarkani, Shahram
2014-04-01
An essential component of disaster planning and preparation is the identification and selection of temporary disaster debris management sites (DMS). However, since DMS identification is a complex process involving numerous variable constraints, many regional, county and municipal jurisdictions initiate this process during the post-disaster response and recovery phases, typically a period of severely stressed resources. Hence, a pre-disaster approach in identifying the most likely sites based on the number of locational constraints would significantly contribute to disaster debris management planning. As disasters vary in their nature, location and extent, an effective approach must facilitate scalability, flexibility and adaptability to variable local requirements, while also being generalisable to other regions and geographical extents. This study demonstrates the use of binomial cluster analysis in potential DMS identification in a case study conducted in Hamilton County, Indiana. © 2014 The Author(s). Disasters © Overseas Development Institute, 2014.
Measuring the emergence of tobacco dependence: the contribution of negative reinforcement models.
Eissenberg, Thomas
2004-06-01
This review of negative reinforcement models of drug dependence is part of a series that takes the position that a complete understanding of current concepts of dependence will facilitate the development of reliable and valid measures of the emergence of tobacco dependence. Other reviews within the series consider models that emphasize positive reinforcement and social learning/cognitive models. This review summarizes negative reinforcement in general and then presents four current negative reinforcement models that emphasize withdrawal, classical conditioning, self-medication and opponent-processes. For each model, the paper outlines central aspects of dependence, conceptualization of dependence development and influences that the model might have on current and future measures of dependence. Understanding how drug dependence develops will be an important part of future successful tobacco dependence measurement, prevention and treatment strategies.
Wells-Parker, Elisabeth; Mann, Robert E; Dill, Patricia L; Stoduto, Gina; Shuggi, Rania; Cross, Ginger W
2009-05-01
This review summarizes evidence on negative affect among drinking drivers. Elevations in negative affect, including depressed mood, anxiety and hostility, have long been noted in convicted drinking drivers, and recent evidence suggests an association between negative affect and driving after drinking in the general population. Previous efforts to understand the significance of this negative affective state have ranged from suggestions that it may play a causal role in drinking driving to suggestions that it may interfere with response to treatment and remedial interventions. Recent studies have uncovered an important paradox involving negative affect among convicted drinking drivers (hereafter DUI offenders). DUI offenders with high levels of negative affect recidivated more frequently following a DUI program than did those reporting no or minimal negative affect. However, when a brief supportive motivational intervention was added to the program, offenders with high negative affect levels showed lower recidivism rates than did those with no or minimal negative affect. The review includes studies from the general literature on alcohol treatment in which the same negative affect paradox was reported. In an attempt to understand this paradox, we present a conceptual model involving well-established psychological processes, with a focus on salient discrepancy, the crucial component of cognitive dissonance. In this model, negative affect plays an important role in motivating both continued high-risk drinking as well as therapeutic change. This model suggests that links between motivational states and negative affective processes may be more complex than previously thought. Implications for intervention with DUI offenders are discussed.
ERIC Educational Resources Information Center
Abrahamson, Dor
2009-01-01
This article reports on a case study from a design-based research project that investigated how students make sense of the disciplinary tools they are taught to use, and specifically, what personal, interpersonal, and material resources support this process. The probability topic of binomial distribution was selected due to robust documentation of…
Lord, Dominique; Washington, Simon P; Ivan, John N
2005-01-01
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate
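The Poisson-trials view can be sketched directly: for independent Bernoulli events with unequal probabilities, the count has mean Σpᵢ and variance Σpᵢ(1−pᵢ), which is strictly below the mean for any fixed set of probabilities (the probability values below are assumptions for illustration):

```python
# Poisson trials (Poisson-binomial count): independent Bernoulli events
# with unequal probabilities. For a fixed probability set the count is
# slightly UNDER-dispersed relative to Poisson; the overdispersion seen in
# crash data arises from heterogeneity ACROSS entities, not within one.
import numpy as np

rng = np.random.default_rng(7)
p = rng.uniform(0.001, 0.05, size=10_000)  # per-event crash probabilities

mean = p.sum()            # E[count] = sum of p_i
var = (p * (1 - p)).sum() # Var[count] = sum of p_i * (1 - p_i)
print(var < mean)         # True
```

This is the theoretical point the paper builds on: zero-inflated fits can succeed statistically even when no literal "perfectly safe" state exists in the data-generating process.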
Modeling species-abundance relationships in multi-species collections
Peng, S.; Yin, Z.; Ren, H.; Guo, Q.
2003-01-01
Species-abundance relationship is one of the most fundamental aspects of community ecology. Since Motomura first developed the geometric series model to describe the feature of community structure, ecologists have developed many other models to fit the species-abundance data in communities. These models can be classified into empirical and theoretical ones, including (1) statistical models, i.e., negative binomial distribution (and its extension), log-series distribution (and its extension), geometric distribution, lognormal distribution, Poisson-lognormal distribution, (2) niche models, i.e., geometric series, broken stick, overlapping niche, particulate niche, random assortment, dominance pre-emption, dominance decay, random fraction, weighted random fraction, composite niche, Zipf or Zipf-Mandelbrot model, and (3) dynamic models describing community dynamics and restrictive function of environment on community. These models have different characteristics and fit species-abundance data in various communities or collections. Among them, log-series distribution, lognormal distribution, geometric series, and broken stick model have been most widely used.
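Two of the niche models listed above have simple closed-form expected abundances; a sketch with assumed species number and parameter values:

```python
# Expected relative abundances under two classic niche models.
import numpy as np

def geometric_series(S, c):
    """Motomura's geometric series: each species takes fraction c of the
    remaining niche; normalized over S species."""
    p = c * (1 - c) ** np.arange(S)
    return p / p.sum()

def broken_stick(S):
    """MacArthur's broken stick: E[p_i] = (1/S) * sum_{j=i}^{S} 1/j."""
    inv = 1.0 / np.arange(1, S + 1)
    return np.cumsum(inv[::-1])[::-1] / S

gs = geometric_series(10, 0.4)
bs = broken_stick(10)
print(gs.sum(), bs.sum())                                 # both 1.0
print(np.all(np.diff(gs) < 0), np.all(np.diff(bs) < 0))   # both decreasing
```

The geometric series declines much more steeply than the broken stick, which is why the two models fit communities of very different evenness.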
Motion sickness: a negative reinforcement model.
Bowins, Brad
2010-01-15
Theories pertaining to the "why" of motion sickness are in short supply relative to those detailing the "how." Considering the profoundly disturbing and dysfunctional symptoms of motion sickness, it is difficult to conceive of why this condition is so strongly biologically based in humans and most other mammalian and primate species. It is posited that motion sickness evolved as a potent negative reinforcement system designed to terminate motion involving sensory conflict or postural instability. During our evolution and that of many other species, motion of this type would have impaired evolutionary fitness via injury and/or signaling weakness and vulnerability to predators. The symptoms of motion sickness strongly motivate the individual to terminate the offending motion by early avoidance, cessation of movement, or removal of oneself from the source. The motion sickness negative reinforcement mechanism functions much like pain to strongly motivate evolutionary fitness preserving behavior. Alternative why theories focusing on the elimination of neurotoxins and the discouragement of motion programs yielding vestibular conflict suffer from several problems, foremost that neither can account for the rarity of motion sickness in infants and toddlers. The negative reinforcement model proposed here readily accounts for the absence of motion sickness in infants and toddlers, in that providing strong motivation to terminate aberrant motion does not make sense until a child is old enough to act on this motivation.
Negative frequencies in wave propagation: A microscopic model
NASA Astrophysics Data System (ADS)
Horsley, S. A. R.; Bugler-Lamb, S.
2016-06-01
A change in the sign of the frequency of a wave between two inertial reference frames corresponds to a reversal of the phase velocity. Yet from the point of view of the relation E =ℏ ω , a positive quantum of energy apparently becomes a negative-energy one. This is physically distinct from a change in the sign of the wave vector and can be associated with various effects such as Cherenkov radiation, quantum friction, and the Hawking effect. In this work we provide a more detailed understanding of these negative-frequency modes based on a simple microscopic model of a dielectric medium as a lattice of scatterers. We calculate the classical and quantum mechanical radiation damping of an oscillator moving through such a lattice and find that the modes where the frequency has changed sign contribute negatively. In terms of the lattice of scatterers we find that this negative radiation damping arises due to the phase of the periodic force experienced by the oscillator due to the relative motion of the lattice.
12 CFR Appendix B to Part 222 - Model Notices of Furnishing Negative Information
Code of Federal Regulations, 2010 CFR
2018-01-01
... your credit report. Model Notice B-2 We have told a credit bureau about a late payment, missed payment... 12 Banks and Banking 3 2018-01-01 2018-01-01 false Model Notices of Furnishing Negative... Appendix B to Part 222—Model Notices of Furnishing Negative Information a. Although use of the model...
Negative Integer Understanding: Characterizing First Graders' Mental Models
ERIC Educational Resources Information Center
Bofferding, Laura
2014-01-01
This article presents results of a research study. Sixty-one first graders' responses to interview questions about negative integer values and order and directed magnitudes were examined to characterize the students' mental models. The models reveal that initially, students overrelied on various combinations of whole-number principles as…
Bakbergenuly, Ilyas; Morgenthaler, Stephan
2016-01-01
We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group‐level studies or in meta‐analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log‐odds and arcsine transformations of the estimated probability p^, both for single‐group studies and in combining results from several groups or studies in meta‐analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta‐analysis and result in abysmal coverage of the combined effect for large K. We also propose bias‐correction for the arcsine transformation. Our simulations demonstrate that this bias‐correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta‐analyses of prevalence. PMID:27192062
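The bias of the arcsine (angular) transform under overdispersion can be computed exactly by summing over a beta-binomial pmf; a sketch with assumed n, p, and intracluster correlation ρ:

```python
# Exact bias of arcsin(sqrt(p-hat)) under a beta-binomial model.
# rho parameterizes the beta via a + b = (1 - rho) / rho (an assumption
# matching the usual intracluster-correlation convention).
import numpy as np
from scipy import stats

n, p, rho = 20, 0.2, 0.2
a = p * (1 - rho) / rho
b = (1 - p) * (1 - rho) / rho
k = np.arange(n + 1)
pmf = stats.betabinom.pmf(k, n, a, b)

e_transform = (pmf * np.arcsin(np.sqrt(k / n))).sum()
bias = e_transform - np.arcsin(np.sqrt(p))
print((pmf @ k) / n)  # mean of p-hat is still p = 0.2
print(bias)           # nonzero: the transform is biased under overdispersion
```

Because the expectation is computed exactly from the pmf rather than by simulation, the nonzero bias here is not sampling noise, mirroring the paper's point that the bias persists regardless of sample size.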
Estimation of the cure rate in Iranian breast cancer patients.
Rahimzadeh, Mitra; Baghestani, Ahmad Reza; Gohari, Mahmood Reza; Pourhoseingholi, Mohamad Amin
2014-01-01
Although Cox's proportional hazards model is the popular approach in survival analysis for investigating significant risk factors for cancer patient survival, it is not appropriate in the case of long-term disease-free survival. Recently, cure rate models have been introduced to distinguish between clinical determinants of cure and variables associated with the time to the event of interest. The aim of this study was to use a cure rate model to determine the clinical factors associated with cure rates of patients with breast cancer (BC). This prospective cohort study covered 305 patients with BC, admitted at Shahid Faiazbakhsh Hospital, Tehran, during 2006 to 2008 and followed until April 2012. Cases of patient death were confirmed by telephone contact. For data analysis, non-mixture cure rate models with Poisson and negative binomial distributions were employed. All analyses were carried out using a Macro developed in WinBUGS. The deviance information criterion (DIC) was employed to find the best model. The overall 1-year, 3-year and 5-year relative survival rates were 97%, 89% and 74%. Metastasis and stage of BC were the significant factors, but age was significant only in the negative binomial model. The DIC also showed that the negative binomial model had the better fit. This study indicated that metastasis and stage of BC were identified as the clinical criteria for cure rates. There are limited studies on BC survival that have employed these cure rate models to identify the clinical factors associated with cure. These models are better than the Cox model in the case of long-term survival.
12 CFR Appendix B to Part 1022 - Model Notices of Furnishing Negative Information
Code of Federal Regulations, 2010 CFR
2018-01-01
... 12 Banks and Banking 8 2018-01-01 2018-01-01 false Model Notices of Furnishing Negative... REPORTING (REGULATION V) Pt. 1022, App. B Appendix B to Part 1022—Model Notices of Furnishing Negative Information a. Although use of the model notices is not required, a financial institution that is subject to...
Development of enhanced pavement deterioration curves.
DOT National Transportation Integrated Search
2016-10-01
This report describes the research performed by the Center for Sustainable Transportation Infrastructure (CSTI) at the Virginia Tech Transportation Institute (VTTI) to develop a pavement condition prediction model, using (negative binomial) regressio...
Lockwood, Penelope; Marshall, Tara C; Sadler, Pamela
2005-03-01
In two studies, cross-cultural differences in reactions to positive and negative role models were examined. The authors predicted that individuals from collectivistic cultures, who have a stronger prevention orientation, would be most motivated by negative role models, who highlight a strategy of avoiding failure; individuals from individualistic cultures, who have a stronger promotion focus, would be most motivated by positive role models, who highlight a strategy of pursuing success. In Study 1, the authors examined participants' reported preferences for positive and negative role models. Asian Canadian participants reported finding negative models more motivating than did European Canadians; self-construals and regulatory focus mediated cultural differences in reactions to role models. In Study 2, the authors examined the impact of role models on the academic motivation of Asian Canadian and European Canadian participants. Asian Canadians were motivated only by a negative model, and European Canadians were motivated only by a positive model.
NASA Astrophysics Data System (ADS)
Teyssedre, G.; Vu, T. T. N.; Laurent, C.
2015-12-01
Among the features observed in polyethylene materials under relatively high fields, space charge packets, consisting of a pulse of net charge that remains in the form of a pulse as it crosses the insulation, are repeatedly observed but without a complete theory explaining their formation and propagation. Positive charge packets are more often reported, and models based on negative differential mobility (NDM) for the transport of holes could account for some of the charge packet phenomenology. Conversely, NDM for electron transport has never been reported so far. The present contribution reports space charge measurements by the pulsed electroacoustic method on miniature cables that are models of HVDC cables. The measurements were realized at room temperature or with a temperature gradient of 10 °C through the insulation under DC fields on the order of 30-60 kV/mm. Space charge results reveal the systematic occurrence of a negative front of charges generated at the inner electrode that moves toward the outer electrode at the beginning of the polarization step. It is observed that the transit time of the front of negative charge increases, and therefore the mobility decreases, with the applied voltage. Further, the estimated mobility, in the range 10⁻¹⁴-10⁻¹³ m² V⁻¹ s⁻¹ for the present results, increases when the temperature increases for the same condition of applied voltage. The features substantiate the hypothesis of negative differential mobility used for modelling space charge packets.
Mallick, Himel; Tiwari, Hemant K
2016-01-01
Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain enormous number of zeroes due to the presence of excessive zero counts in majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely encountered in practice.
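The zero-inflation phenomenon described above can be illustrated with a short simulation (pure NumPy; all parameter values are illustrative, not taken from the study): a ZINB phenotype mixes structural zeros with ordinary negative binomial counts, producing far more zeros than a Poisson model with the same mean would predict.

```python
import numpy as np

rng = np.random.default_rng(42)
n_obs = 5000
pi_zero = 0.4      # probability of a structural zero (illustrative)
r, p = 3.0, 0.4    # negative binomial size and success probability (illustrative)

# Zero-inflated draws: a structural zero with probability pi_zero,
# otherwise an ordinary negative binomial count.
structural = rng.random(n_obs) < pi_zero
counts = np.where(structural, 0, rng.negative_binomial(r, p, size=n_obs))

obs_zero_frac = np.mean(counts == 0)
# Zero fraction that a Poisson model with the same mean would imply.
poisson_zero_frac = np.exp(-counts.mean())
print(obs_zero_frac, poisson_zero_frac)
```

The large gap between the observed and Poisson-implied zero fractions is exactly the signal that motivates ZIP/ZINB modeling over plain Poisson regression.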
Decision-support models for empiric antibiotic selection in Gram-negative bloodstream infections.
MacFadden, D R; Coburn, B; Shah, N; Robicsek, A; Savage, R; Elligsen, M; Daneman, N
2018-04-25
Early empiric antibiotic therapy in patients can improve clinical outcomes in Gram-negative bacteraemia. However, the widespread prevalence of antibiotic-resistant pathogens compromises our ability to provide adequate therapy while minimizing use of broad antibiotics. We sought to determine whether readily available electronic medical record data could be used to develop predictive models for decision support in Gram-negative bacteraemia. We performed a multi-centre cohort study, in Canada and the USA, of hospitalized patients with Gram-negative bloodstream infection from April 2010 to March 2015. We analysed multivariable models for prediction of antibiotic susceptibility at two empiric windows: Gram-stain-guided and pathogen-guided treatment. Decision-support models for empiric antibiotic selection were developed based on three clinical decision thresholds of acceptable adequate coverage (80%, 90% and 95%). A total of 1832 patients with Gram-negative bacteraemia were evaluated. Multivariable models showed good discrimination across countries and at both Gram-stain-guided (12 models, areas under the curve (AUCs) 0.68-0.89, optimism-corrected AUCs 0.63-0.85) and pathogen-guided (12 models, AUCs 0.75-0.98, optimism-corrected AUCs 0.64-0.95) windows. Compared to antibiogram-guided therapy, decision-support models of antibiotic selection incorporating individual patient characteristics and prior culture results have the potential to increase use of narrower-spectrum antibiotics (in up to 78% of patients) while reducing inadequate therapy. Multivariable models using readily available epidemiologic factors can be used to predict antimicrobial susceptibility in infecting pathogens with reasonable discriminatory ability. Implementation of sequential predictive models for real-time individualized empiric antibiotic decision-making has the potential to both optimize adequate coverage for patients while minimizing overuse of broad-spectrum antibiotics, and therefore requires
Analyzing crash frequency in freeway tunnels: A correlated random parameters approach.
Hou, Qinzhong; Tarko, Andrew P; Meng, Xianghai
2018-02-01
The majority of past road safety studies focused on open road segments while only a few focused on tunnels. Moreover, the past tunnel studies produced some inconsistent results about the safety effects of the traffic patterns, the tunnel design, and the pavement conditions. The effects of these conditions therefore remain unknown, especially for freeway tunnels in China. The study presented in this paper investigated the safety effects of these various factors utilizing a four-year period (2009-2012) of data as well as three models: 1) a random effects negative binomial model (RENB), 2) an uncorrelated random parameters negative binomial model (URPNB), and 3) a correlated random parameters negative binomial model (CRPNB). Of these three, the results showed that the CRPNB model provided better goodness-of-fit and offered more insights into the factors that contribute to tunnel safety. The CRPNB was not only able to allocate the part of the otherwise unobserved heterogeneity to the individual model parameters but also was able to estimate the cross-correlations between these parameters. Furthermore, the study results showed that traffic volume, tunnel length, proportion of heavy trucks, curvature, and pavement rutting were associated with higher frequencies of traffic crashes, while the distance to the tunnel wall, distance to the adjacent tunnel, distress ratio, International Roughness Index (IRI), and friction coefficient were associated with lower crash frequencies. In addition, the effects of the heterogeneity of the proportion of heavy trucks, the curvature, the rutting depth, and the friction coefficient were identified and their inter-correlations were analyzed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Negative emotionality across diagnostic models: RDoC, DSM-5 Section III, and FFM.
Gore, Whitney L; Widiger, Thomas A
2018-03-01
The research domain criteria (RDoC) were established in an effort to explore underlying dimensions that cut across many existing disorders and to provide an alternative to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5). One purpose of the present study was to suggest a potential alignment of RDoC negative valence with 2 other dimensional models of negative emotionality: five-factor model (FFM) neuroticism and the DSM-5 Section III negative affectivity. A second purpose of the study, though, was to compare their coverage of negative emotionality, more specifically with respect to affective instability. Participants were adult community residents (N = 90) currently in mental health treatment. Participants received self-report measures of RDoC negative valence, FFM neuroticism, and DSM-5 Section III negative affectivity, along with measures of affective instability, borderline personality disorder, and impairment. Findings suggested that RDoC negative valence is commensurate with FFM neuroticism and DSM-5 Section III negative affectivity, and it would be beneficial if it was expanded to include affective instability. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Technical Reports Server (NTRS)
Elizalde, E.; Gaztanaga, E.
1992-01-01
The dependence of counts in cells on the shape of the cell for the large-scale galaxy distribution is studied. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell being occupied is larger for some elongated cells. A phenomenological scale-invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
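The void probability of the plain negative binomial count-in-cells model (the paper's extension is not reproduced here) has a simple closed form; a minimal sketch with illustrative cell parameters, cross-checked against SciPy's parameterisation:

```python
import numpy as np
from scipy.stats import nbinom, poisson

mu, alpha = 4.0, 0.8   # mean count per cell and NB dispersion (illustrative)

# Void probability of the negative binomial model: P(N = 0) = (1 + alpha*mu)^(-1/alpha).
p0_nb = (1.0 + alpha * mu) ** (-1.0 / alpha)

# Cross-check against scipy's (size r, success probability p) parameterisation.
r = 1.0 / alpha
p = r / (r + mu)
assert np.isclose(p0_nb, nbinom.pmf(0, r, p))

# A Poisson model with the same mean predicts far fewer empty cells.
p0_poisson = poisson.pmf(0, mu)
print(p0_nb, p0_poisson)
```

The excess of empty cells relative to Poisson is what makes the NB family a natural phenomenological model for clustered galaxy counts.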
Crans, Gerald G; Shuster, Jonathan J
2008-08-15
The debate as to which statistical methodology is most appropriate for the analysis of the two-sample comparative binomial trial has persisted for decades. Practitioners who favor the conditional methods of Fisher, Fisher's exact test (FET), claim that only experimental outcomes containing the same amount of information should be considered when performing analyses. Hence, the total number of successes should be fixed at its observed level in hypothetical repetitions of the experiment. Using conditional methods in clinical settings can pose interpretation difficulties, since results are derived using conditional sample spaces rather than the set of all possible outcomes. Perhaps more importantly from a clinical trial design perspective, this test can be too conservative, resulting in greater resource requirements and more subjects exposed to an experimental treatment. The actual significance level attained by FET (the size of the test) has not been reported in the statistical literature. Berger (J. R. Statist. Soc. D (The Statistician) 2001; 50:79-85) proposed assessing the conservativeness of conditional methods using p-value confidence intervals. In this paper we develop a numerical algorithm that calculates the size of FET for sample sizes, n, up to 125 per group at the two-sided significance level, alpha = 0.05. Additionally, this numerical method is used to define new significance levels alpha* = alpha + epsilon, where epsilon is a small positive number, for each n, such that the size of the test is as close as possible to the pre-specified alpha (0.05 for the current work) without exceeding it. Lastly, a sample size and power calculation example is presented, which demonstrates the statistical advantages of implementing the adjustment to FET (using alpha* instead of alpha) in the two-sample comparative binomial trial. © 2008 John Wiley & Sons, Ltd.
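The size computation described above can be sketched by brute-force enumeration (shown for n = 10 per group rather than the paper's n up to 125; the grid search over the common null success probability p is a simplification of the exact supremum):

```python
import numpy as np
from scipy.stats import binom, fisher_exact

n, alpha = 10, 0.05   # per-group sample size and nominal level

# Rejection region: all (x1, x2) outcomes with a two-sided FET p-value <= alpha.
reject = [(x1, x2)
          for x1 in range(n + 1)
          for x2 in range(n + 1)
          if fisher_exact([[x1, n - x1], [x2, n - x2]])[1] <= alpha]

def rejection_prob(p):
    """Probability of rejecting when both groups share success probability p."""
    return sum(binom.pmf(x1, n, p) * binom.pmf(x2, n, p) for x1, x2 in reject)

# Size = supremum over the common null p (approximated on a grid).
size = max(rejection_prob(p) for p in np.linspace(0.01, 0.99, 99))
print(size)
```

The computed size falls well below the nominal 0.05, which is the conservativeness the paper's alpha* adjustment is designed to recover.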
Investigation on financial crises with the negative-information-propagation-induced model
NASA Astrophysics Data System (ADS)
Fan, Feng-Hua; Deng, Yanbin; Huang, Yong-Chang
2017-03-01
We first argue the similarity between the propagation of negative information about a potential deterioration of the economic situation in a group of investors and the propagation of an infectious disease in a crowd. Applying the negative-information-propagation-induced model built on the above argument, we investigate the relationship between the generation of financial crises and the propagation effects of negative information. We introduce a discrimination parameter to distinguish whether or not negative information will be propagated extensively in the group of investors. We also introduce a target critical value for financial crises. By comparing the theoretically predicted ratio of the long-term projected number of total investors to the total number of investors at some time taken as the initial time with the target critical value, the model can provide real-time monitoring of whether the curve of the total number of investors is progressing toward the generation of a financial crisis or running on a track of financial-market safety. If at some time this ratio is computed to be less than the target critical value, governments can take relevant measures to prevent the generation of a financial crisis in advance. Government interference helps to recover the confidence of investors so that they will no longer believe the negative information and will continue their investment. Results from theoretical and numerical analysis show that the number of investors who hold the belief of a potential deterioration of the economic situation, and the number of investors who withdraw capital and depart from financial markets to avoid business losses, are lowered when governments make appropriate interference compared to the case without it. The results show the effectiveness of governments in preventing financial crises from the viewpoint of the negative-information-propagation-induced model, namely governments
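The analogy with infectious-disease propagation can be caricatured with a standard SIR system (this is a generic epidemic sketch, not the paper's model; compartment names and rate values are assumed for illustration):

```python
import numpy as np
from scipy.integrate import solve_ivp

# S: investors who have not heard the negative information, I: investors
# currently spreading it, R: investors who have withdrawn from the market.
# beta and gamma are illustrative contact and withdrawal rates.
beta, gamma = 0.4, 0.1

def spread(t, y):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

sol = solve_ivp(spread, (0.0, 100.0), [0.99, 0.01, 0.0],
                rtol=1e-8, atol=1e-10)
s_end, i_end, r_end = sol.y[:, -1]
print(s_end, i_end, r_end)
```

With beta/gamma > 1 the information "outbreak" reaches most of the population, mirroring the regime in which the discrimination parameter signals extensive propagation; lowering beta (e.g., by government intervention) keeps s_end large.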
A cardiovascular system model for lower-body negative pressure response
NASA Technical Reports Server (NTRS)
Mitchell, B. A., Jr.; Giese, R. P.
1971-01-01
Mathematical models used to study complex physiological control systems are discussed. Efforts were made to modify a model of the cardiovascular system for use in studying lower body negative pressure. A computer program was written which allows orderly, straightforward expansion to include exercise, metabolism (thermal stress), respiration, and other body functions.
How Do Negative Emotions Impair Self-Control? A Neural Model of Negative Urgency
Chester, David S.; Lynam, Donald R.; Milich, Richard; Powell, David K.; Andersen, Anders H.; DeWall, C. Nathan
2016-01-01
Self-control often fails when people experience negative emotions. Negative urgency represents the dispositional tendency to experience such self-control failure in response to negative affect. The neural underpinnings of negative urgency are not fully understood, nor is the more general phenomenon of self-control failure in response to negative emotions. Previous theorizing suggests that an insufficient, inhibitory response from the prefrontal cortex may be the culprit behind such self-control failure. However, we entertained an alternative hypothesis: negative emotions lead to self-control failure because they excessively tax inhibitory regions of the prefrontal cortex. Using fMRI, we compared the neural activity of people high in negative urgency with controls on an emotional, inhibitory Go/No-Go task. While experiencing negative (but not positive or neutral) emotions, participants high in negative urgency showed greater recruitment of inhibitory brain regions than controls. Suggesting a compensatory function, inhibitory accuracy among participants high in negative urgency was associated with greater prefrontal recruitment. Greater activity in the anterior insula on negatively-valenced, inhibitory trials predicted greater substance abuse one month and one year after the MRI scan among individuals high in negative urgency. These results suggest that, among people whose negative emotions often lead to self-control failure, excessive reactivity of the brain’s regulatory resources may be the culprit. PMID:26892861
Bakbergenuly, Ilyas; Kulinskaya, Elena; Morgenthaler, Stephan
2016-07-01
We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose bias-correction for the arcsine transformation. Our simulations demonstrate that this bias-correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. © 2016 The Authors. Biometrical Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.
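The transformation bias described above is easy to reproduce by Monte Carlo: simulate overdispersed (beta-binomial) proportions and compare the mean of the arcsine-transformed estimate with the transform of the true probability. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def arcsine_bias(p, rho, n=50, n_sim=200_000):
    """Monte Carlo bias of arcsin(sqrt(p_hat)) when X ~ beta-binomial(n)
    with mean p and intracluster correlation rho (rho = 0 gives binomial)."""
    if rho == 0.0:
        x = rng.binomial(n, p, size=n_sim)
    else:
        a = p * (1.0 - rho) / rho            # beta parameters giving ICC rho
        b = (1.0 - p) * (1.0 - rho) / rho
        x = rng.binomial(n, rng.beta(a, b, size=n_sim))
    return np.arcsin(np.sqrt(x / n)).mean() - np.arcsin(np.sqrt(p))

bias_binomial = arcsine_bias(0.1, 0.0)
bias_icc = arcsine_bias(0.1, 0.3)
print(bias_binomial, bias_icc)
```

The bias is already negative for plain binomial sampling (the transform is concave below p = 0.5), and it grows substantially in magnitude once the intracluster correlation inflates the variance of the estimated proportion.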
Pricing American Asian options with higher moments in the underlying distribution
NASA Astrophysics Data System (ADS)
Lo, Keng-Hsin; Wang, Kehluh; Hsu, Ming-Feng
2009-01-01
We develop a modified Edgeworth binomial model with higher-moment consideration for pricing American Asian options. With a lognormal underlying distribution for benchmark comparison, our algorithm is as precise as that of Chalasani et al. [P. Chalasani, S. Jha, F. Egriboyun, A. Varikooty, A refined binomial lattice for pricing American Asian options, Rev. Derivatives Res. 3 (1) (1999) 85-105] as the number of time steps increases. If the underlying distribution displays negative skewness and leptokurtosis, as often observed for stock index returns, our estimates can work better than those in Chalasani et al. and are very similar to the benchmarks in Hull and White [J. Hull, A. White, Efficient procedures for valuing European and American path-dependent options, J. Derivatives 1 (Fall) (1993) 21-31]. The numerical analysis shows that our modified Edgeworth binomial model can value American Asian options with greater accuracy and speed given higher moments in their underlying distribution.
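The Edgeworth adjustment and the Asian-option path averaging are beyond a short sketch, but the backward-induction backbone shared by all such lattice methods can be shown with a plain Cox-Ross-Rubinstein binomial pricer for a vanilla American put (all parameters illustrative):

```python
import numpy as np

def crr_put(s0, k, r, sigma, t, steps, american=True):
    """Cox-Ross-Rubinstein lattice for a vanilla put (illustrative only;
    not the paper's Edgeworth-adjusted Asian-option lattice)."""
    dt = t / steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    q = (np.exp(r * dt) - d) / (u - d)    # risk-neutral up-move probability
    disc = np.exp(-r * dt)

    j = np.arange(steps + 1)
    v = np.maximum(k - s0 * u**j * d**(steps - j), 0.0)    # terminal payoffs
    for i in range(steps - 1, -1, -1):
        j = np.arange(i + 1)
        v = disc * (q * v[1:] + (1.0 - q) * v[:-1])        # roll back one step
        if american:
            v = np.maximum(v, k - s0 * u**j * d**(i - j))  # early exercise check
    return float(v[0])

euro = crr_put(100.0, 100.0, 0.05, 0.2, 1.0, 200, american=False)
amer = crr_put(100.0, 100.0, 0.05, 0.2, 1.0, 200, american=True)
print(euro, amer)
```

The early-exercise comparison at every node is the only difference between the European and American valuations, and it is the step the refined lattices above make path-dependent.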
Sauzet, Odile; Peacock, Janet L
2017-07-20
The analysis of perinatal outcomes often involves datasets with some multiple births. These are datasets mostly formed of independent observations and a limited number of clusters of size two (twins) and maybe of size three or more. This non-independence needs to be accounted for in the statistical analysis. Using simulated data based on a dataset of preterm infants we have previously investigated the performance of several approaches to the analysis of continuous outcomes in the presence of some clusters of size two. Mixed models have been developed for binomial outcomes but very little is known about their reliability when only a limited number of small clusters are present. Using simulated data based on a dataset of preterm infants we investigated the performance of several approaches to the analysis of binomial outcomes in the presence of some clusters of size two. Logistic models, several methods of estimation for the logistic random intercept models and generalised estimating equations were compared. The presence of even a small percentage of twins means that a logistic regression model will underestimate all parameters but a logistic random intercept model fails to estimate the correlation between siblings if the percentage of twins is too small and will provide similar estimates to logistic regression. The method which seems to provide the best balance between estimation of the standard error and the parameter for any percentage of twins is the generalised estimating equations. This study has shown that the number of covariates or the level two variance do not necessarily affect the performance of the various methods used to analyse datasets containing twins but when the percentage of small clusters is too small, mixed models cannot capture the dependence between siblings.
DOT National Transportation Integrated Search
2011-03-01
This report documents the calibration of the Highway Safety Manual (HSM) safety performance function (SPF) : for rural two-lane two-way roadway segments in Utah and the development of new models using negative : binomial and hierarchical Bayesian mod...
Venkataraman, Narayan; Ulfarsson, Gudmundur F; Shankar, Venky N
2013-10-01
A nine-year (1999-2007) continuous panel of crash histories on interstates in Washington State, USA, was used to estimate random parameter negative binomial (RPNB) models for various aggregations of crashes. A total of 21 different models were assessed in terms of four ways to aggregate crashes: by (a) severity, (b) number of vehicles involved, (c) crash type, and (d) location characteristics. The models within these aggregations include specifications for all severities (property damage only, possible injury, evident injury, disabling injury, and fatality), number of vehicles involved (one-vehicle to five-or-more-vehicle), crash type (sideswipe, same direction, overturn, head-on, fixed object, rear-end, and other), and location types (urban interchange, rural interchange, urban non-interchange, rural non-interchange). A total of 1153 directional road segments comprising the seven Washington State interstates were analyzed, yielding statistical models of crash frequency based on 10,377 observations. These results suggest that in general there was a significant improvement in log-likelihood when using RPNB compared to a fixed parameter negative binomial baseline model. Heterogeneity effects are most noticeable for lighting type, road curvature, and traffic volume (ADT). Median lighting or right-side lighting is linked to increased crash frequencies in many models for more than half of the road segments compared to both-sides lighting. Both-sides lighting thereby appears to generally lead to a safety improvement. Traffic volume has a random parameter but the effect is always toward increasing crash frequencies, as expected. However, the fact that the effect is random shows that the effect of traffic volume on crash frequency is complex and varies by road segment. The number of lanes has a random parameter effect only in the interchange type models. The results show that road segment-specific insights into crash frequency occurrence can lead to improved design policy and
Finite mixture modeling for vehicle crash data with application to hotspot identification.
Park, Byung-Jung; Lord, Dominique; Lee, Chungwon
2014-10-01
The application of finite mixture regression models has recently gained an interest from highway safety researchers because of its considerable potential for addressing unobserved heterogeneity. Finite mixture models assume that the observations of a sample arise from two or more unobserved components with unknown proportions. Both fixed and varying weight parameter models have been shown to be useful for explaining the heterogeneity and the nature of the dispersion in crash data. Given the superior performance of the finite mixture model, this study, using observed and simulated data, investigated the relative performance of the finite mixture model and the traditional negative binomial (NB) model in terms of hotspot identification. For the observed data, rural multilane segment crash data for divided highways in California and Texas were used. The results showed that the difference measured by the percentage deviation in ranking orders was relatively small for this dataset. Nevertheless, the ranking results from the finite mixture model were considered more reliable than the NB model because of the better model specification. This finding was also supported by the simulation study which produced a high number of false positives and negatives when a mis-specified model was used for hotspot identification. Regarding an optimal threshold value for identifying hotspots, another simulation analysis indicated that there is a discrepancy between false discovery (increasing) and false negative rates (decreasing). Since the costs associated with false positives and false negatives are different, it is suggested that the selected optimal threshold value should be decided by considering the trade-offs between these two costs so that unnecessary expenses are minimized. Copyright © 2014 Elsevier Ltd. All rights reserved.
Application of simple negative feedback model for avalanche photodetectors investigation
NASA Astrophysics Data System (ADS)
Kushpil, V. V.
2009-10-01
A simple negative feedback model based on Miller's formula is used to investigate the properties of Avalanche Photodetectors (APDs). The proposed method can be applied to study classical APDs as well as new types of devices operating in the Internal Negative Feedback (INF) regime. The method shows good sensitivity to technological APD parameters, making it possible to use it as a tool to analyse various APD parameters. It also allows a better understanding of APD operation conditions. The simulations and experimental data analysis for different types of APDs are presented.
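Miller's empirical formula, on which the model is based, gives the avalanche multiplication factor as M = 1/(1 − (V/VB)^n); a minimal sketch with illustrative device parameters (the internal-negative-feedback regime itself is not modelled here):

```python
import numpy as np

def miller_gain(v, v_breakdown, n):
    """Avalanche multiplication factor from Miller's empirical formula,
    M(V) = 1 / (1 - (V / VB)^n), valid for V < VB."""
    v = np.asarray(v, dtype=float)
    return 1.0 / (1.0 - (v / v_breakdown) ** n)

vb, n_exp = 60.0, 3.0                 # illustrative breakdown voltage and exponent
v = np.linspace(0.0, 0.95 * vb, 50)
m = miller_gain(v, vb, n_exp)
print(m[0], m[-1])
```

The gain rises from unity at zero bias and diverges as V approaches VB, which is the steep region where feedback effects on the effective junction voltage become important.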
Design of Ultra-Wideband Tapered Slot Antenna by Using Binomial Transformer with Corrugation
NASA Astrophysics Data System (ADS)
Chareonsiri, Yosita; Thaiwirot, Wanwisa; Akkaraekthalin, Prayoot
2017-05-01
In this paper, the tapered slot antenna (TSA) with corrugation is proposed for UWB applications. The multi-section binomial transformer is used to design the taper profile of the proposed TSA without resorting to time-consuming optimization. A step-by-step procedure for synthesis of the step impedance values related to the step slot widths of the taper profile is presented. A smooth taper can be achieved by fitting a smoothing curve to the entire step slot profile. The design of the TSA based on this method yields a quite flat gain and a wide impedance bandwidth covering the UWB spectrum from 3.1 GHz to 10.6 GHz. To further improve the radiation characteristics, corrugation is added on both edges of the proposed TSA. The effects of different corrugation shapes on the improvement of antenna gain and front-to-back ratio (F-to-B ratio) are investigated. To demonstrate the validity of the design, prototypes of the TSA without and with corrugation are fabricated and measured. The results show good agreement between simulation and measurement.
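The multi-section binomial transformer synthesis mentioned above follows the standard maximally flat design rule ln(Z_{m+1}/Z_m) = 2^(−N) C(N, m) ln(ZL/Z0); a small sketch (the subsequent mapping of step impedances to slot widths is not reproduced):

```python
import math

def binomial_transformer(z0, zl, n):
    """Junction-by-junction impedances of an N-section binomial (maximally
    flat) transformer: ln(Z_{m+1}/Z_m) = 2^(-N) * C(N, m) * ln(ZL/Z0)."""
    z = [z0]
    log_ratio = math.log(zl / z0)
    for m in range(n + 1):
        z.append(z[-1] * math.exp(2.0 ** (-n) * math.comb(n, m) * log_ratio))
    return z   # z[1:-1] are the N section impedances; z[-1] recovers ZL

z = binomial_transformer(50.0, 100.0, 3)
print([round(zi, 3) for zi in z])
```

Because the binomial coefficients sum to 2^N, the logarithmic steps telescope exactly from Z0 to ZL, which is why the synthesis needs no numerical optimization.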
Lee, J-H; Han, G; Fulp, W J; Giuliano, A R
2012-06-01
The Poisson model can be applied to the count of events occurring within a specific time period. The main feature of the Poisson model is the assumption that the mean and variance of the count data are equal. However, this equal mean-variance relationship rarely occurs in observational data. In most cases, the observed variance is larger than the assumed variance, which is called overdispersion. Further, when the observed data involve excessive zero counts, the problem of overdispersion results in underestimating the variance of the estimated parameter, and thus produces a misleading conclusion. We illustrated the use of four models for overdispersed count data that may be attributed to excessive zeros. These are Poisson, negative binomial, zero-inflated Poisson and zero-inflated negative binomial models. The example data in this article deal with the number of incidents involving human papillomavirus infection. The four models resulted in differing statistical inferences. The Poisson model, which is widely used in epidemiology research, underestimated the standard errors and overstated the significance of some covariates.
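The Poisson-versus-negative-binomial comparison can be sketched with SciPy alone (simulated overdispersed counts; the NB likelihood is profiled over the dispersion with the mean fixed at the sample mean, which is its MLE in this parameterisation):

```python
import numpy as np
from scipy.stats import nbinom, poisson
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
y = rng.negative_binomial(2, 0.3, size=1000)   # overdispersed counts (illustrative)

mu = y.mean()                     # MLE of the mean under both models
ll_pois = poisson.logpmf(y, mu).sum()

def neg_ll_nb(log_r):
    """Profile NB log-likelihood over the size r, mean held at the sample mean."""
    r = np.exp(log_r)
    return -nbinom.logpmf(y, r, r / (r + mu)).sum()

res = minimize_scalar(neg_ll_nb, bounds=(-5.0, 5.0), method="bounded")
ll_nb = -res.fun

aic_pois = 2 * 1 - 2 * ll_pois    # one parameter (mean)
aic_nb = 2 * 2 - 2 * ll_nb        # two parameters (mean, dispersion)
print(aic_pois, aic_nb)
```

When the sample variance clearly exceeds the sample mean, as here, the extra dispersion parameter buys a large likelihood gain and the NB model wins on AIC, echoing the abstract's point about misleading Poisson standard errors.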
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teyssedre, G., E-mail: gilbert.teyssedre@laplace.univ-tlse.fr; Laurent, C.; CNRS, LAPLACE, F-31062 Toulouse
Among the features observed in polyethylene materials under relatively high fields, space charge packets, consisting of a pulse of net charge that retains its pulse form as it crosses the insulation, are repeatedly observed, but no complete theory explains their formation and propagation. Positive charge packets are more often reported, and models based on negative differential mobility (NDM) for the transport of holes could account for some of the charge packet phenomenology. Conversely, NDM for electron transport has never been reported so far. The present contribution reports space charge measurements by the pulsed electroacoustic method on miniature cables that are models of HVDC cables. The measurements were performed at room temperature or with a temperature gradient of 10 °C across the insulation, under DC fields on the order of 30–60 kV/mm. Space charge results reveal the systematic occurrence of a negative front of charge, generated at the inner electrode, that moves toward the outer electrode at the beginning of the polarization step. It is observed that the transit time of the negative charge front increases, and therefore the mobility decreases, with the applied voltage. Further, the estimated mobility, in the range 10⁻¹⁴–10⁻¹³ m² V⁻¹ s⁻¹ for the present results, increases when the temperature increases for the same applied voltage. These features substantiate the hypothesis of negative differential mobility used for modelling space charge packets.
An analytical framework for estimating aquatic species density from environmental DNA
Chambert, Thierry; Pilliod, David S.; Goldberg, Caren S.; Doi, Hideyuki; Takahara, Teruhiko
2018-01-01
Environmental DNA (eDNA) analysis of water samples is on the brink of becoming a standard monitoring method for aquatic species. This method has improved detection rates over conventional survey methods and thus has demonstrated effectiveness for estimation of site occupancy and species distribution. The frontier of eDNA applications, however, is to infer species density. Building upon previous studies, we present and assess a modeling approach that aims at inferring animal density from eDNA. The modeling combines eDNA and animal count data from a subset of sites to estimate species density (and associated uncertainties) at other sites where only eDNA data are available. As a proof of concept, we first perform a cross-validation study using experimental data on carp in mesocosms. In these data, fish densities are known without error, which allows us to test the performance of the method with known data. We then evaluate the model using field data from a study on a stream salamander species to assess the potential of this method to work in natural settings, where density can never be known with absolute certainty. Two alternative distributions (Normal and Negative Binomial) to model variability in eDNA concentration data are assessed. Assessment based on the proof of concept data (carp) revealed that the Negative Binomial model provided much more accurate estimates than the model based on a Normal distribution, likely because eDNA data tend to be overdispersed. Greater imprecision was found when we applied the method to the field data, but the Negative Binomial model still provided useful density estimates. We call for further model development in this direction, as well as further research targeted at sampling design optimization. It will be important to assess these approaches on a broad range of study systems.
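The overdispersion that favoured the Negative Binomial here has a simple generative reading: a negative binomial count is a Poisson count whose rate is itself gamma-distributed, which suits eDNA concentrations that fluctuate between samples. A minimal standard-library sketch (parameter values are illustrative, not from the carp or salamander data):

```python
import math
import random
import statistics

random.seed(1)

def poisson_draw(lam):
    """Poisson draw by inversion through the CDF."""
    u, k, p = random.random(), 0, math.exp(-lam)
    cdf = p
    while u > cdf:
        k += 1
        p *= lam / k
        cdf += p
    return k

def sample_negbin(n, mu, r):
    """Negative binomial via its Poisson-gamma mixture: a gamma-distributed
    rate (mean mu, shape r) feeds a Poisson count, giving Var = mu + mu^2/r."""
    return [poisson_draw(random.gammavariate(r, mu / r)) for _ in range(n)]

mu, r = 4.0, 2.0
counts = sample_negbin(4000, mu, r)
m, v = statistics.mean(counts), statistics.variance(counts)
# Quadratic mean-variance law of the NB: Var = mu + mu^2 / r (= 12 here),
# three times the Poisson's Var = mu = 4 -- the overdispersion seen in eDNA data.
print(f"mean={m:.2f}  variance={v:.2f}")
```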
Dorazio, Robert M.; Martin, Juulien; Edwards, Holly H.
2013-01-01
The class of N-mixture models allows abundance to be estimated from repeated, point count surveys while adjusting for imperfect detection of individuals. We developed an extension of N-mixture models to account for two commonly observed phenomena in point count surveys: rarity and lack of independence induced by unmeasurable sources of variation in the detectability of individuals. Rarity increases the number of locations with zero detections in excess of those expected under simple models of abundance (e.g., Poisson or negative binomial). Correlated behavior of individuals and other phenomena, though difficult to measure, increases the variation in detection probabilities among surveys. Our extension of N-mixture models includes a hurdle model of abundance and a beta-binomial model of detectability that accounts for additional (extra-binomial) sources of variation in detections among surveys. As an illustration, we fit this model to repeated point counts of the West Indian manatee, which was observed in a pilot study using aerial surveys. Our extension of N-mixture models provides increased flexibility. The effects of different sets of covariates may be estimated for the probability of occurrence of a species, for its mean abundance at occupied locations, and for its detectability.
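The beta-binomial ingredient of this extended N-mixture model can be illustrated directly: letting the detection probability itself vary between surveys inflates the variance of detection counts beyond the binomial's. A hedged standard-library sketch (survey sizes and Beta parameters are invented for illustration, not taken from the manatee study):

```python
import random
import statistics

random.seed(7)
N, n_surveys = 20, 6000          # trials per survey, number of surveys
p_bar = 0.3                      # mean detection probability

def binom(n, p):
    """Binomial draw as a sum of Bernoulli trials."""
    return sum(random.random() < p for _ in range(n))

# Plain binomial: constant detection probability across surveys.
plain = [binom(N, p_bar) for _ in range(n_surveys)]

# Beta-binomial: detection probability itself varies survey to survey
# (Beta(a, b) with mean a/(a+b) = 0.3), inducing extra-binomial variation.
a, b = 1.5, 3.5
beta = [binom(N, random.betavariate(a, b)) for _ in range(n_surveys)]

v_plain, v_beta = statistics.variance(plain), statistics.variance(beta)
# Same mean count (N * 0.3 = 6) but a much larger variance under the
# beta-binomial -- the correlated-detection effect the model accounts for.
print(f"binomial var={v_plain:.2f}  beta-binomial var={v_beta:.2f}")
```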
Moineddin, Rahim; Meaney, Christopher; Agha, Mohammad; Zagorski, Brandon; Glazier, Richard Henry
2011-08-19
department utilization. Six different multiple regression models for count data were fitted to assess the influence of predictors on demand for emergency department services: Poisson, Negative Binomial, Zero-Inflated Poisson, Zero-Inflated Negative Binomial, Hurdle Poisson, and Hurdle Negative Binomial. Competing models were compared using the Vuong test statistic. The CCHS cycle 2.1 respondents were a roughly equal mix of males (50.4%) and females (49.6%). The majority (86.2%) were young-to-middle-aged adults between the ages of 20-64, living in predominantly urban environments (85.9%), with mid-to-high household incomes (92.2%) and well educated, having received at least a high-school diploma (84.1%). Many participants reported no chronic disease (51.9%), fell into a small number (0-5) of ambulatory diagnostic groups (62.3%), and perceived their health status as good/excellent (88.1%); however, they were projected to have high Resource Utilization Band levels of health resource utilization (68.2%). These factors were largely stable for CCHS cycle 3.1 respondents. Factors influencing demand for emergency department services varied according to the severity of triage scores at initial presentation. For example, although a non-significant predictor of the odds of emergency department utilization in high severity cases, access to a primary care physician was a statistically significant predictor of the likelihood of emergency department utilization (OR: 0.69; 95% CI OR: 0.63-0.75) and the rate of emergency department utilization (RR: 0.57; 95% CI RR: 0.50-0.66) in low severity cases. Using a theoretically appropriate hurdle negative binomial regression model, this unique study illustrates that access to a primary care physician is an important predictor of both the odds and rate of emergency department utilization in Ontario. Restructuring primary care services, with the aim of increasing access for undersupplied populations, may result in decreased emergency department
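The hurdle structure used in the final model separates the zero process from the positive counts, which is what lets the odds of any utilization and the rate of utilization carry different covariate effects. A minimal sketch with a Poisson count part for simplicity (the study used a negative binomial count part; all parameter values here are illustrative):

```python
import math
import random
import statistics

random.seed(3)

def poisson_draw(lam):
    """Poisson draw by inversion through the CDF."""
    u, k, p = random.random(), 0, math.exp(-lam)
    cdf = p
    while u > cdf:
        k += 1
        p *= lam / k
        cdf += p
    return k

def sample_hurdle(n, p_zero, lam):
    """Hurdle model: a Bernoulli hurdle fixes the zero probability directly;
    values that clear the hurdle come from a zero-truncated Poisson(lam),
    drawn here by simple rejection of zeros."""
    out = []
    for _ in range(n):
        if random.random() < p_zero:
            out.append(0)
        else:
            k = 0
            while k == 0:               # truncate away the zeros
                k = poisson_draw(lam)
            out.append(k)
    return out

y = sample_hurdle(5000, p_zero=0.55, lam=2.0)
zero_frac = sum(v == 0 for v in y) / len(y)
pos_mean = statistics.mean([v for v in y if v > 0])
# The two parts are modelled separately, so covariates can act differently
# on the odds of any visit (the hurdle) and on the rate of repeat visits.
print(f"zero fraction={zero_frac:.2f}  mean of positives={pos_mean:.2f}")
```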
A new zero-inflated negative binomial methodology for latent category identification.
Blanchard, Simon J; DeSarbo, Wayne S
2013-04-01
We introduce a new statistical procedure for the identification of unobserved categories that vary between individuals and in which objects may span multiple categories. This procedure can be used to analyze data from a proposed sorting task in which individuals may simultaneously assign objects to multiple piles. The results of a synthetic example and a consumer psychology study involving categories of restaurant brands illustrate how the application of the proposed methodology to the new sorting task can account for a variety of categorization phenomena including multiple category memberships and for heterogeneity through individual differences in the saliency of latent category structures.
Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun
2017-01-01
In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties, including standard error formulae, are provided. A simulation study shows that the new algorithm not only has more accurate, or at least comparable, estimation but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze health care demand in Germany using the open-source R package mpath. PMID:26059498
Independent-particle models for light negative atomic ions
NASA Technical Reports Server (NTRS)
Ganas, P. S.; Talman, J. D.; Green, A. E. S.
1980-01-01
For the purposes of astrophysical, aeronomical, and laboratory applications, a precise independent-particle model for electrons in negative atomic ions of the second and third periods is discussed. The optimum-potential model (OPM) of Talman et al. (1979) is first used to generate numerical potentials for eight of these ions. Results for total energies and electron affinities are found to be very close to Hartree-Fock (HF) solutions. However, the OPM and HF electron affinities both depart significantly from experimental affinities. For this reason, two analytic potentials are developed whose inner energy levels are very close to the OPM and HF levels but whose last-electron eigenvalues are adjusted precisely to the magnitudes of experimental affinities. These models are: (1) a four-parameter analytic characterization of the OPM potential and (2) a two-parameter potential model of the Green, Sellin, Zachor type. The system O(−), or e-O, which is important in upper atmospheric physics, is examined in some detail.
Human Commercial Models' Eye Colour Shows Negative Frequency-Dependent Selection.
Forti, Isabela Rodrigues Nogueira; Young, Robert John
2016-01-01
In this study, we investigated the eye colour of human commercial models registered in the UK (400 female and 400 male) and Brazil (400 female and 400 male) to test the hypothesis that model eye colour frequency was the result of negative frequency-dependent selection. The eye colours of the models were classified as blue, brown, or intermediate. Chi-square analyses of the data for each country, separated by sex, showed that in the United Kingdom brown eyes and intermediate colours were significantly more frequent than expected in comparison to the general United Kingdom population (P<0.001). In Brazil, the most frequent eye colour, brown, was significantly less frequent than expected in comparison to the general Brazilian population. These results support the hypothesis that model eye colour is the result of negative frequency-dependent selection. This could be the result of people using eye colour as a marker of genetic diversity and finding rarer eye colours more attractive because of the potential advantage of the more genetically diverse offspring that could result from such a choice. Eye colour may be important because, in comparison to many other physical traits (e.g., hair colour), it is hard to modify, hide, or disguise, and it is highly polymorphic.
Generalized site occupancy models allowing for false positive and false negative errors
Royle, J. Andrew; Link, W.A.
2006-01-01
Site occupancy models have been developed that allow for imperfect species detection or "false negative" observations. Such models have become widely adopted in surveys of many taxa. The most fundamental assumption underlying these models is that "false positive" errors are not possible. That is, one cannot detect a species where it does not occur. However, such errors are possible in many sampling situations for a number of reasons, and even low false positive error rates can induce extreme bias in estimates of site occupancy when they are not accounted for. In this paper, we develop a model for site occupancy that allows for both false negative and false positive error rates. This model can be represented as a two-component finite mixture model and can be easily fitted using freely available software. We provide an analysis of avian survey data using the proposed model and present results of a brief simulation study evaluating the performance of the maximum-likelihood estimator and the naive estimator in the presence of false positive errors.
Rowse, Georgina; Webb, Thomas L.
2017-01-01
Background: A growing body of evidence points to relationships between insomnia, negative affect, and paranoid thinking. However, studies are needed to examine (i) whether negative affect mediates the relation between insomnia and paranoid thinking, (ii) whether different types of insomnia exert different effects on paranoia, and (iii) to compare the impact of objective and self-reported sleeping difficulties. Method: Structural equation modelling was therefore used to test competing models of the relationships between self-reported insomnia, negative affect, and paranoia. n = 348 participants completed measures of insomnia, negative affect, and paranoia. A subset of these participants (n = 91) went on to monitor their sleep objectively (using a portable sleep monitor made by Zeo) for seven consecutive nights. Associations between objectively recorded sleep, negative affect, and paranoia were explored using linear regression. Results: The findings supported a fully mediated model in which self-reported delayed sleep onset, but not self-reported problems with sleep maintenance or objective measures of sleep, was directly associated with negative affect that, in turn, was associated with paranoia. There was no evidence of a direct association between delayed sleep onset or sleep maintenance problems and paranoia. Conclusions: Taken together, the findings point to an association between perceived (but not objective) difficulties initially falling asleep (but not maintaining sleep) and paranoid thinking; a relationship that is fully mediated by negative affect. Future research should seek to disentangle the causal relationships between sleep, negative affect, and paranoia (e.g., by examining the effect of an intervention using prospective designs that incorporate experience sampling). Indeed, interventions might profitably target (i) perceived sleep quality, (ii) sleep onset, and/or (iii) emotion regulation as a route to reducing negative affect and, thus, paranoid thinking
Kinetic modeling of particle dynamics in H- negative ion sources (invited)
NASA Astrophysics Data System (ADS)
Hatayama, A.; Shibata, T.; Nishioka, S.; Ohta, M.; Yasumoto, M.; Nishida, K.; Yamamoto, T.; Miyamoto, K.; Fukano, A.; Mizuno, T.
2014-02-01
Progress in the kinetic modeling of particle dynamics in H- negative ion source plasmas and their comparisons with experiments are reviewed, and discussed with some new results. Main focus is placed on the following two topics, which are important for the research and development of large negative ion sources and high power H- ion beams: (i) Effects of non-equilibrium features of EEDF (electron energy distribution function) on H- production, and (ii) extraction physics of H- ions and beam optics.
The Distancing-Embracing model of the enjoyment of negative emotions in art reception.
Menninghaus, Winfried; Wagner, Valentin; Hanich, Julian; Wassiliwizky, Eugen; Jacobsen, Thomas; Koelsch, Stefan
2017-01-01
Why are negative emotions so central in art reception far beyond tragedy? Revisiting classical aesthetics in the light of recent psychological research, we present a novel model to explain this much discussed (apparent) paradox. We argue that negative emotions are an important resource for the arts in general, rather than a special license for exceptional art forms only. The underlying rationale is that negative emotions have been shown to be particularly powerful in securing attention, intense emotional involvement, and high memorability, which is precisely what artworks strive for. Two groups of processing mechanisms are identified that conjointly adopt the particular powers of negative emotions for art's purposes. The first group consists of psychological distancing mechanisms that are activated along with the cognitive schemata of art, representation, and fiction. These schemata imply personal safety and control over continuing or discontinuing exposure to artworks, thereby preventing negative emotions from becoming outright incompatible with expectations of enjoyment. This distancing sets the stage for a second group of processing components that allow art recipients to positively embrace the experiencing of negative emotions, thereby rendering art reception more intense, more interesting, more emotionally moving, more profound, and occasionally even more beautiful. These components include compositional interplays of positive and negative emotions, the effects of aesthetic virtues of using the media of (re)presentation (musical sound, words/language, color, shapes) on emotion perception, and meaning-making efforts. Moreover, our Distancing-Embracing model proposes that concomitant mixed emotions often help integrate negative emotions into altogether pleasurable trajectories.
Particle model of full-size ITER-relevant negative ion source.
Taccogna, F; Minelli, P; Ippolito, N
2016-02-01
This work represents the first attempt to model the full-size ITER-relevant negative ion source including the expansion, extraction, and part of the acceleration regions while keeping the mesh size fine enough to resolve every single aperture. The model consists of a 2.5D particle-in-cell Monte Carlo collision representation of the plane perpendicular to the filter field lines. The magnetic filter and electron deflection field have been included, and a negative ion current density of j(H⁻) = 660 A/m² from the plasma grid (PG) is used as a parameter for the neutral conversion. The driver is not yet included, and a fixed ambipolar flux is emitted from the driver exit plane. Results show a strong asymmetry along the PG driven by the electron Hall (E × B and diamagnetic) drift perpendicular to the filter field. This asymmetry creates an important inhomogeneity in the electron current extracted from the different apertures. A steady state is not yet reached after 15 μs.
Chen, Feng; Chen, Suren; Ma, Xiaoxiang
2016-01-01
Traffic and environmental conditions (e.g., weather conditions), which frequently change with time, have a significant impact on crash occurrence. Traditional crash frequency models with large temporal scales and aggregated variables are not sufficient to capture the time-varying nature of driving environmental factors, causing significant loss of critical information in crash frequency modeling. This paper aims at developing crash frequency models with refined temporal scales for complex driving environments, with such an effort providing more detailed and accurate crash risk information which can allow for more effective and proactive traffic management and law enforcement intervention. Zero-inflated negative binomial (ZINB) models with site-specific random effects are developed with unbalanced panel data to analyze hourly crash frequency on highway segments. The real-time driving environment information, including traffic, weather and road surface condition data, sourced primarily from the Road Weather Information System, is incorporated into the models along with site-specific road characteristics. The estimation results of the unbalanced panel data ZINB models suggest there are a number of factors influencing crash frequency, including time-varying factors (e.g., visibility and hourly traffic volume) and site-varying factors (e.g., speed limit). The study confirms the unique significance of the real-time weather, road surface condition and traffic data to crash frequency modeling. PMID:27322306
Jung, Yoon Suk; Park, Chan Hyuk; Kim, Nam Hee; Park, Jung Ho; Park, Dong Il; Sohn, Chong Il
2018-01-01
The fecal immunochemical test (FIT) has low sensitivity for detecting advanced colorectal neoplasia (ACRN); thus, a considerable portion of FIT-negative persons may have ACRN. We aimed to develop a risk-scoring model for predicting ACRN in FIT-negative persons. We reviewed the records of participants aged ≥40 years who underwent a colonoscopy and FIT during a health check-up. We developed a risk-scoring model for predicting ACRN in FIT-negative persons. Of 11,873 FIT-negative participants, 255 (2.1%) had ACRN. On the basis of the multivariable logistic regression model, point scores were assigned as follows among FIT-negative persons: age (per year from 40 years old), 1 point; current smoker, 10 points; overweight, 5 points; obese, 7 points; hypertension, 6 points; old cerebrovascular attack (CVA), 15 points. Although the proportion of ACRN in FIT-negative persons increased as risk scores increased (from 0.6% in the group with 0-4 points to 8.1% in the group with 35-39 points), it was significantly lower than that in FIT-positive persons (14.9%). However, there was no statistical difference between the proportion of ACRN in FIT-negative persons with ≥40 points and in FIT-positive persons (10.5% vs. 14.9%, P = 0.321). FIT-negative persons may need to undergo screening colonoscopy if they clinically have a high risk of ACRN. The scoring model based on age, smoking habits, overweight or obesity, hypertension, and old CVA may be useful in selecting and prioritizing FIT-negative persons for screening colonoscopy.
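The published point scheme lends itself to a direct implementation. The sketch below transcribes the point values stated in the abstract; the function name and category encoding are our own, and the age handling (one point per year over 40) is our reading of "per year from 40 years old":

```python
def acrn_risk_score(age, smoker, bmi_cat, hypertension, old_cva):
    """Point score for predicting ACRN in FIT-negative persons, following
    the values stated in the abstract: 1 point per year of age over 40,
    +10 current smoker, +5 overweight, +7 obese, +6 hypertension,
    +15 previous cerebrovascular attack (CVA)."""
    score = max(0, age - 40)
    if smoker:
        score += 10
    if bmi_cat == "overweight":
        score += 5
    elif bmi_cat == "obese":
        score += 7
    if hypertension:
        score += 6
    if old_cva:
        score += 15
    return score

# A 62-year-old obese smoker with hypertension: 22 + 10 + 7 + 6 = 45 points,
# placing them in the >=40-point group whose ACRN rate matched FIT-positives.
print(acrn_risk_score(62, True, "obese", True, False))
```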
Is “Hit and Run” a Single Word? The Processing of Irreversible Binomials in Neglect Dyslexia
Arcara, Giorgio; Lacaita, Graziano; Mattaloni, Elisa; Passarini, Laura; Mondini, Sara; Benincà, Paola; Semenza, Carlo
2012-01-01
The present study is the first neuropsychological investigation into the problem of the mental representation and processing of irreversible binomials (IBs), i.e., word pairs linked by a conjunction (e.g., “hit and run,” “dead or alive”). In order to test their lexical status, the phenomenon of neglect dyslexia is explored. People with left-sided neglect dyslexia show a clear lexical effect: they can read IBs better (i.e., by dropping the leftmost words less frequently) when their components are presented in their correct order. This may be taken as an indication that they treat these constructions as lexical, not decomposable, elements. This finding therefore constitutes strong evidence that IBs tend to be stored in the mental lexicon as a whole and that this whole form is preferably addressed in the retrieval process. PMID:22347199
A unified engineering model of the first stroke in downward negative lightning
NASA Astrophysics Data System (ADS)
Nag, Amitabh; Rakov, Vladimir A.
2016-03-01
Each stroke in a negative cloud-to-ground lightning flash is composed of downward leader and upward return stroke processes, which are usually modeled individually. The first stroke leader is stepped and starts with preliminary breakdown (PB) which is often viewed as a separate process. We present the first unified engineering model for computing the electric field produced by a sequence of PB, stepped leader, and return stroke processes, serving to transport negative charge to ground. We assume that a negatively charged channel extends downward in a stepped fashion during both the PB and leader stages. Each step involves a current wave that propagates upward along the newly formed channel section. Once the leader attaches to ground, an upward propagating return stroke neutralizes the charge deposited along the channel. Model-predicted electric fields are in reasonably good agreement with simultaneous measurements at both near (hundreds of meters, electrostatic field component is dominant) and far (tens of kilometers, radiation field component is dominant) distances from the lightning channel. Relations between the features of computed electric field waveforms and model input parameters are examined. It appears that peak currents associated with PB pulses are similar to return stroke peak currents, and the observed variation of electric radiation field peaks produced by leader steps at different heights above ground is influenced by the ground corona space charge.
Extended Poisson process modelling and analysis of grouped binary data.
Faddy, Malcolm J; Smith, David M
2012-05-01
A simple extension of the Poisson process results in binomially distributed counts of events in a time interval. A further extension generalises this to probability distributions under- or over-dispersed relative to the binomial distribution. Substantial levels of under-dispersion are possible with this modelling, but only modest levels of over-dispersion, up to Poisson-like variation. Although simple analytical expressions for the moments of these probability distributions are not available, approximate expressions for the mean and variance are derived, and used to re-parameterise the models. The modelling is applied in the analysis of two published data sets, one showing under-dispersion and the other over-dispersion. More appropriate assessment of the precision of estimated parameters and reliable model checking diagnostics follow from this more general modelling of these data sets. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
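The opening claim, that a simple extension of the Poisson process yields binomially distributed counts, can be checked by simulation: a pure birth process whose rate decreases linearly with the count produces exactly binomial counts in a fixed interval, which are under-dispersed relative to Poisson. A standard-library sketch (the rate and horizon values are illustrative, not from the paper's data sets):

```python
import math
import random
import statistics

random.seed(11)

def birth_count(n, b, t_end=1.0):
    """Pure birth process with rate (n - i) * b when the count is i.
    The count at t_end is exactly Binomial(n, 1 - exp(-b * t_end))."""
    t, i = 0.0, 0
    while i < n:
        t += random.expovariate((n - i) * b)   # exponential waiting time
        if t > t_end:
            break
        i += 1
    return i

n, b = 10, 0.8
draws = [birth_count(n, b) for _ in range(6000)]
p = 1 - math.exp(-b)                           # implied binomial success prob
m, v = statistics.mean(draws), statistics.variance(draws)
# Binomial counts are under-dispersed relative to Poisson: the sample
# variance sits near np(1-p), clearly below the mean np.
print(f"mean={m:.2f} (np={n * p:.2f})  var={v:.2f} (npq={n * p * (1 - p):.2f})")
```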
NASA Astrophysics Data System (ADS)
Chen, Wen; Wang, Fajie
Based on the implicit calculus equation modeling approach, this paper proposes a speculative concept of the potential and wave operators on negative dimensionality. Unlike standard partial differential equation (PDE) modeling, the implicit calculus modeling approach does not require an explicit expression of the governing PDE. Instead, the fundamental solution of the physical problem is used to implicitly define the differential operator and to carry out the simulation in conjunction with the appropriate boundary conditions. In this study, we conjecture an extension of the fundamental solutions of the standard Laplace and Helmholtz equations to negative dimensionality. Then, using the singular boundary method, a recent boundary discretization technique, we investigate potential and wave problems using the fundamental solution on negative dimensionality. Numerical experiments reveal that the physical behaviors on negative dimensionality may differ from those on positive dimensionality. This speculative study might open unexplored territory for research.
Football goal distributions and extremal statistics
NASA Astrophysics Data System (ADS)
Greenhough, J.; Birch, P. C.; Chapman, S. C.; Rowlands, G.
2002-12-01
We analyse the distributions of the number of goals scored by home teams, away teams, and the total scored in the match, in domestic football games from 169 countries between 1999 and 2001. The probability density functions (PDFs) of goals scored are too heavy-tailed to be fitted over their entire ranges by Poisson or negative binomial distributions which would be expected for uncorrelated processes. Log-normal distributions cannot include zero scores and here we find that the PDFs are consistent with those arising from extremal statistics. In addition, we show that it is sufficient to model English top division and FA Cup matches in the seasons of 1970/71-2000/01 on Poisson or negative binomial distributions, as reported in analyses of earlier seasons, and that these are not consistent with extremal statistics.
Xiao, Yundan; Zhang, Xiongqing; Ji, Ping
2015-01-01
Forest fires can cause catastrophic damage to natural resources and can also have serious economic and social impacts. Meteorological factors play a critical role in establishing conditions favorable for a forest fire, so effective prediction of forest fire occurrences could prevent or minimize losses. This paper uses count data models to analyze fire occurrence data, which are likely to be overdispersed and frequently contain an excess of zero counts (no fire occurrence). Such data have commonly been analyzed using count data models such as the Poisson model, the negative binomial (NB) model, zero-inflated models, and hurdle models. The data used in this paper were collected from Qiannan autonomous prefecture of Guizhou province in China. Using the fire occurrence data from January to April (the spring fire season) for the years 1996 through 2007, we introduced random effects to the count data models. The results indicated that the NB model provided a more compelling and credible inferential basis for fitting actual forest fire occurrence, and mixed-effects models performed better than the corresponding fixed-effects models in forest fire forecasting. Among the meteorological factors, we found that relative humidity and wind speed are highly correlated with fire occurrence.
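The NB model's advantage for data with many fire-free months comes from its quadratic mean-variance relationship and its heavier mass at zero. A small standard-library check of both properties (parameter values are illustrative, not fitted to the Qiannan data):

```python
import math

def negbin_pmf(y, mu, r):
    """NB2 pmf with mean mu and dispersion r, computed in log space
    (lgamma) to stay stable for large y. Var = mu + mu^2 / r."""
    p = r / (r + mu)
    return math.exp(math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
                    + r * math.log(p) + y * math.log(1 - p))

mu, r = 2.5, 1.2
pmf = [negbin_pmf(y, mu, r) for y in range(200)]
mean = sum(y * q for y, q in enumerate(pmf))
var = sum((y - mean) ** 2 * q for y, q in enumerate(pmf))
print(f"total={sum(pmf):.6f}  mean={mean:.3f}  var={var:.3f}")
# The NB places far more mass at zero than a Poisson with the same mean,
# which is why it copes better with months of no fire occurrence:
print(f"P(Y=0): NB={pmf[0]:.3f}  Poisson={math.exp(-mu):.3f}")
```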
REJEKI, Dwi Sarwani Sri; NURHAYATI, Nunung; AJI, Budi; MURHANDARWATI, E. Elsa Herdiana; KUSNANTO, Hari
2018-01-01
Background: Climatic and weather factors are important determinants of the transmission of vector-borne diseases such as malaria. This study aimed to establish relationships between weather factors, human migration, previous case findings, and malaria cases in endemic areas of Purworejo during 2005–2014. Methods: This study employed ecological time series analysis of monthly data. The independent variables were maximum temperature, minimum temperature, maximum humidity, minimum humidity, precipitation, human migration, and previous malaria cases, while the dependent variable was positive malaria cases. Three count data regression models, the Poisson, quasi-Poisson, and negative binomial models, were applied to measure the relationships, and the model with the lowest Akaike Information Criterion (AIC) was selected; negative binomial regression was considered the best model. Results: The model showed that humidity (lag 2), precipitation (lag 3), precipitation (lag 12), migration (lag 1) and previous malaria cases (lag 12) had a significant relationship with malaria cases. Conclusion: Weather, migration and previous malaria case factors need to be considered as prominent indicators for projecting increases in malaria cases. PMID:29900134
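The AIC-based choice between Poisson and negative binomial fits can be sketched with stdlib Python only. This is a toy illustration, not the paper's analysis: the counts and the NB dispersion value `alpha` are invented, and the mean is plugged in rather than fitted by maximum likelihood.

```python
import math

def poisson_logpmf(k, lam):
    # log P(K = k) for a Poisson with mean lam
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def nb_logpmf(k, mu, alpha):
    # NB2 parameterization: mean mu, variance mu + alpha * mu**2
    r = 1.0 / alpha
    p = r / (r + mu)
    return (math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
            + r * math.log(p) + k * math.log(1.0 - p))

def aic(loglik, n_params):
    return 2 * n_params - 2 * loglik

counts = [0, 0, 1, 0, 2, 7, 0, 3, 0, 12]      # overdispersed toy data
mu = sum(counts) / len(counts)                # 2.5
aic_pois = aic(sum(poisson_logpmf(k, mu) for k in counts), 1)
aic_nb = aic(sum(nb_logpmf(k, mu, 1.5) for k in counts), 2)
print(aic_pois, aic_nb)  # the NB's extra dispersion parameter buys a much better fit
```

On overdispersed data such as this, the lower AIC of the NB model mirrors the selection rule the study applies.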
Modeling regional variation in riverine fish biodiversity in the Arkansas-White-Red River basin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schweizer, Peter E; Jager, Yetta
The patterns of biodiversity in freshwater systems are shaped by biogeography, environmental gradients, and human-induced factors. In this study, we developed empirical models to explain fish species richness in subbasins of the Arkansas-White-Red River basin as a function of discharge, elevation, climate, land cover, water quality, dams, and longitudinal position. We used information-theoretic criteria to compare generalized linear mixed models and identified well-supported models. Subbasin attributes that were retained as predictors included discharge, elevation, number of downstream dams, percent forest, percent shrubland, nitrate, total phosphorus, and sediment. The random component of our models, which assumed a negative binomial distribution, included spatial correlation within larger river basins and overdispersed residual variance. This study differs from previous biodiversity modeling efforts in several ways. First, obtaining likelihoods for negative binomial mixed models, and thereby avoiding reliance on quasi-likelihoods, has only recently become practical. We found the ranking of models based on these likelihood estimates to be more believable than that produced using quasi-likelihoods. Second, because we had access to a regional-scale watershed model for this river basin, we were able to include model-estimated water quality attributes as predictors. Thus, the resulting models have potential value as tools with which to evaluate the benefits of water quality improvements to fish.
De Bruyn, Sara; Wouters, Edwin; Ponnet, Koen; Van Damme, Joris; Maes, Lea; Van Hal, Guido
2018-02-12
Although alcohol is socially accepted in most Western societies, studies are clear about its associated negative consequences, especially among university and college students. Studies on the relationship between alcohol-related consequences and both beverage type and drinking onset, however, are scarce, especially in a European context. The aim of this research was therefore twofold: (1) What is the relationship between beverage type and the negative consequences experienced by students? and (2) Are these consequences determined by early drinking onset? We examine these questions within the context of a wide range of alcohol-related consequences. The analyses are based on data collected by the inter-university project 'Head in the clouds?', measuring alcohol use among students in Flanders (Belgium). In total, a large dataset consisting of information from 19,253 anonymously participating students was available. Negative consequences were measured using a shortened version of the Core Alcohol and Drug Survey (CADS_D). Data were analysed using negative binomial regression. Results vary depending on the type of alcohol-related consequence: personal negative consequences occur frequently among daily beer drinkers, while a high rate of social negative consequences was recorded for both daily beer drinkers and daily spirits drinkers. Finally, early drinking onset was significantly associated with both personal and social negative consequences, and this association was especially strong between beer and spirits drinking onset and social negative consequences. Numerous negative consequences, both personal and social, are related to frequent beer and spirits drinking. Our findings indicate a close association between drinking beer and personal negative consequences, as well as between drinking beer and/or spirits and social negative consequences. Similarly, early drinking onset has a major influence on the rates of both personal and social negative consequences.
Yes, the GIGP Really Does Work--And Is Workable!
ERIC Educational Resources Information Center
Burrell, Quentin L.; Fenton, Michael R.
1993-01-01
Discusses the generalized inverse Gaussian-Poisson (GIGP) process for informetric modeling. Negative binomial distribution is discussed, construction of the GIGP process is explained, zero-truncated GIGP is considered, and applications of the process with journals, library circulation statistics, and database index terms are described. (50…
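The mixed-Poisson construction behind the GIGP (and behind the negative binomial, which arises when the mixing distribution is a gamma) can be demonstrated by simulation. A sketch under invented parameters; the Poisson sampler uses Knuth's multiplication method and is only suitable for small means.

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's multiplication method; fine for small lam
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

rng = random.Random(42)
counts = []
for _ in range(2000):
    lam = rng.gammavariate(2.0, 2.5)   # gamma-distributed Poisson mean (mean 5)
    counts.append(poisson_sample(lam, rng))

mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
print(mean, var / mean)  # dispersion index well above 1: overdispersed counts
```

Letting the Poisson mean itself vary across sources is exactly what produces the long-tailed count distributions informetric data exhibit.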
Crash data modeling with a generalized estimator.
Ye, Zhirui; Xu, Yueru; Lord, Dominique
2018-08-01
The investigation of relationships between traffic crashes and relevant factors is important in traffic safety management. Various methods have been developed for modeling crash data. In real-world scenarios, crash data often display over-dispersion. However, on occasion, some crash datasets have exhibited under-dispersion, especially in cases where the data are conditioned upon the mean. The commonly used models (such as the Poisson and the NB regression models) have limitations in coping with various degrees of dispersion. In light of this, a generalized event count (GEC) model, which can handle over-, equi-, and under-dispersed data, is proposed in this study. This model was first applied to case studies using data from Toronto, characterized by over-dispersion, and then to crash data from railway-highway crossings in Korea, characterized by under-dispersion. The results from the GEC model were compared with those from the negative binomial and the hyper-Poisson models. The case studies show that the proposed model provides good performance for crash data characterized by over- and under-dispersion. Moreover, the proposed model simplifies the modeling process and the prediction of crash data. Copyright © 2018 Elsevier Ltd. All rights reserved.
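A quick diagnostic for which dispersion regime a crash dataset falls into is the sample variance-to-mean ratio; a sketch with invented counts:

```python
def dispersion_index(counts):
    # variance-to-mean ratio: > 1 over-, < 1 under-, ~1 equi-dispersed
    n = len(counts)
    mean = sum(counts) / n
    var = sum((k - mean) ** 2 for k in counts) / (n - 1)
    return var / mean

over = [0, 0, 0, 1, 2, 10]    # toy over-dispersed counts
under = [2, 3, 2, 3, 2, 3]    # toy under-dispersed counts
print(dispersion_index(over), dispersion_index(under))
```

Values far from 1 in either direction are what motivate generalized estimators like the GEC over the Poisson.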
Li, Jun; Tibshirani, Robert
2015-01-01
We discuss the identification of features that are associated with an outcome in RNA-Sequencing (RNA-Seq) and other sequencing-based comparative genomic experiments. RNA-Seq data takes the form of counts, so models based on the normal distribution are generally unsuitable. The problem is especially challenging because different sequencing experiments may generate quite different total numbers of reads, or ‘sequencing depths’. Existing methods for this problem are based on Poisson or negative binomial models: they are useful but can be heavily influenced by ‘outliers’ in the data. We introduce a simple, nonparametric method with resampling to account for the different sequencing depths. The new method is more robust than parametric methods. It can be applied to data with quantitative, survival, two-class or multiple-class outcomes. We compare our proposed method to Poisson and negative binomial-based methods in simulated and real data sets, and find that our method discovers more consistent patterns than competing methods. PMID:22127579
Zero-inflated count models for longitudinal measurements with heterogeneous random effects.
Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M
2017-08-01
Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (i.e., differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention and covariate-specific heterogeneity can produce biased estimates of covariate and random effects. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States, with 1383 individuals.
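The zero-inflated NB likelihood itself (without the random effects the paper adds) is compact. A minimal stdlib sketch with invented parameter values:

```python
import math

def nb_pmf(k, mu, alpha):
    # NB2: mean mu, variance mu + alpha * mu**2
    r = 1.0 / alpha
    p = r / (r + mu)
    log_pmf = (math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
               + r * math.log(p) + k * math.log(1.0 - p))
    return math.exp(log_pmf)

def zinb_pmf(k, pi, mu, alpha):
    # a zero arises either from the "structural zero" state (prob pi)
    # or from the NB count process itself
    nb = nb_pmf(k, mu, alpha)
    return pi + (1.0 - pi) * nb if k == 0 else (1.0 - pi) * nb
```

Here `zinb_pmf(0, 0.3, 4.0, 0.8)` exceeds `nb_pmf(0, 4.0, 0.8)`: the inflation component adds mass at zero on top of what the NB already allows.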
Mendez, Bomar Rojas
2017-01-01
Background: Improving access to delivery services does not guarantee access to quality obstetric care and better survival, and therefore concerns have been raised about the quality of maternal and newborn care in low- and middle-income countries. Our study explored characteristics associated with the quality of initial assessment, intrapartum, and immediate postpartum and newborn care, and further assessed the relationships along the continuum of care. Methods: The 2010 Service Provision Assessment data of Kenya for 627 routine deliveries of women aged 15–49 were used. Quality of care was assessed using recently validated quality of care measures for the initial assessment, intrapartum, and postpartum periods. Data were analyzed with negative binomial regression and structural equation modeling techniques. Results: The negative binomial regression results identified a number of determinants of quality, such as the level of health facility, managing authority, presence of a delivery fee, central electricity supply, and clinical guidelines for maternal and neonatal care. Our structural equation modeling (SEM) further demonstrated that facility characteristics were important determinants of quality for initial assessment and postpartum care, while characteristics at the provider level became more important in shaping the quality of intrapartum care. Furthermore, we noted that quality of initial assessment had a positive association with quality of intrapartum care (β = 0.71, p < 0.001), which in turn was positively associated with the quality of newborn and immediate postpartum care (β = 1.29, p = 0.004). Conclusions: A continued focus on quality of care along the continuum of maternity care is important not only for mothers but also for their newborns. Policymakers should therefore ensure that the required resources, as well as adequate supervision and an emphasis on the quality of obstetric care, are available. PMID:28520771
Predicting Children's Asthma Hospitalizations: Rural and Urban Differences in Texas
ERIC Educational Resources Information Center
Grineski, Sara E.
2009-01-01
Asthma is the number one chronic health condition facing children today; however, little is known about rural-urban inequalities in asthma. This "area effects on health" study examines rural-urban differences in childhood asthma hospitalizations within the state of Texas using negative binomial regression models. Effects associated with…
Naish, Suchithra; Hu, Wenbiao; Nicholls, Neville; Mackenzie, John S; Dale, Pat; McMichael, Anthony J; Tong, Shilu
2009-02-01
To assess the socio-environmental predictors of Barmah forest virus (BFV) transmission in coastal areas, Queensland, Australia. Data on BFV notified cases, climate, tidal levels and socioeconomic index for area (SEIFA) in six coastal cities, Queensland, for the period 1992-2001 were obtained from the relevant government agencies. Negative binomial regression models were used to assess the socio-environmental predictors of BFV transmission. The results show that maximum and minimum temperature, rainfall, relative humidity, high and low tide were statistically significantly associated with BFV incidence at lags 0-2 months. The fitted negative binomial regression models indicate a significant independent association of each of maximum temperature (beta = 0.139, P = 0.000), high tide (beta = 0.005, P = 0.000) and SEIFA index (beta = -0.010, P = 0.000) with BFV transmission after adjustment for confounding variables. The transmission of BFV disease in Queensland coastal areas seemed to be determined by a combination of local social and environmental factors. The model developed in this study may have applications in the control and prevention of BFV disease in these areas.
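With a log link, NB coefficients like these exponentiate to rate ratios. A sketch using the maximum-temperature estimate quoted above (the per-unit scale is an assumption; the abstract does not state the units):

```python
import math

beta_max_temp = 0.139  # coefficient reported for maximum temperature
rate_ratio = math.exp(beta_max_temp)
# each one-unit rise in maximum temperature multiplies expected
# BFV incidence by this factor, other covariates held fixed
print(f"rate ratio: {rate_ratio:.3f}")
```

Exponentiating the SEIFA coefficient (−0.010) the same way gives a ratio just below 1, consistent with the reported negative association.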
A 1D ion species model for an RF driven negative ion source
NASA Astrophysics Data System (ADS)
Turner, I.; Holmes, A. J. T.
2017-08-01
A one-dimensional model for an RF driven negative ion source has been developed based on an inductive discharge. The RF source differs from traditional filament and arc ion sources because there are no primary electrons present, and is simply composed of an antenna region (driver) and a main plasma discharge region. However the model does still make use of the classical plasma transport equations for particle energy and flow, which have previously worked well for modelling DC driven sources. The model has been developed primarily to model the Small Negative Ion Facility (SNIF) ion source at CCFE, but may be easily adapted to model other RF sources. Currently the model considers the hydrogen ion species, and provides a detailed description of the plasma parameters along the source axis, i.e. plasma temperature, density and potential, as well as current densities and species fluxes. The inputs to the model are currently the RF power, the magnetic filter field and the source gas pressure. Results from the model are presented and where possible compared to existing experimental data from SNIF, with varying RF power, source pressure.
12 CFR Appendix B to Part 222 - Model Notices of Furnishing Negative Information
Code of Federal Regulations, 2010 CFR
2010-01-01
Model Notice B-1: “We may report information about your account to credit bureaus. Late payments, missed payments, or other defaults on your account may be reflected in your credit report.” (12 Banks and Banking, Part 222, Appendix B: Model Notices of Furnishing Negative Information.)
Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun
2015-09-01
In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider maximum likelihood function plus a penalty including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinated descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only has more accurate or at least comparable estimation, but also is more robust than the traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
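The coordinate descent step for the LASSO penalty mentioned here reduces to a soft-thresholding operator; a minimal sketch (SCAD and MCP use different, nonconvex thresholding rules):

```python
def soft_threshold(z, gamma):
    # S(z, gamma) = sign(z) * max(|z| - gamma, 0):
    # shrinks a coordinate-wise estimate toward zero and sets small
    # coefficients exactly to zero, which performs variable selection
    if z > gamma:
        return z - gamma
    if z < -gamma:
        return z + gamma
    return 0.0
```

For example, `soft_threshold(3.0, 1.0)` shrinks the coefficient to 2.0, while `soft_threshold(0.4, 1.0)` drops it to 0.0 and removes the variable.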
NASA Technical Reports Server (NTRS)
Dahl, Roy W.; Keating, Karen; Salamone, Daryl J.; Levy, Laurence; Nag, Barindra; Sanborn, Joan A.
1987-01-01
This paper presents an algorithm (WHAMII) designed to solve the Artificial Intelligence Design Challenge at the 1987 AIAA Guidance, Navigation and Control Conference. The problem under consideration is a stochastic generalization of the traveling salesman problem in which travel costs can incur a penalty with a given probability. The variability in travel costs leads to a probability constraint with respect to violating the budget allocation. Given the small size of the problem (eleven cities), an approach is considered that combines partial tour enumeration with a heuristic city insertion procedure. For computational efficiency during both the enumeration and insertion procedures, precalculated binomial probabilities are used to determine an upper bound on the actual probability of violating the budget constraint for each tour. The actual probability is calculated for the final best tour, and additional insertions are attempted until the actual probability exceeds the bound.
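The precalculated binomial bound can be reproduced directly: with n independent tour legs each incurring a penalty with probability p, the chance of exceeding a budget that absorbs at most m − 1 penalties is a binomial tail. A sketch with invented numbers, not the contest data:

```python
import math

def binom_tail(n, p, m):
    # P(X >= m) for X ~ Binomial(n, p)
    return sum(math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)
               for k in range(m, n + 1))

# an 11-city tour has 11 legs; suppose each incurs a penalty with
# probability 0.1 and the budget absorbs at most 2 penalties
risk = binom_tail(11, 0.1, 3)
print(f"probability of violating the budget: {risk:.4f}")
```

Screening candidate tours against such a tail bound, and computing the exact probability only for the final tour, is the efficiency trick the abstract describes.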
Adaptive object tracking via both positive and negative models matching
NASA Astrophysics Data System (ADS)
Li, Shaomei; Gao, Chao; Wang, Yawen
2015-03-01
To mitigate the tracking drift that often occurs in adaptive tracking, an algorithm based on the fusion of tracking and detection is proposed in this paper. Firstly, object tracking is posed as a binary classification problem and is modeled by partial least squares (PLS) analysis. Secondly, the object is tracked frame by frame via particle filtering. Thirdly, tracking reliability is validated by matching against both positive and negative models. Finally, when drift occurs, the object is relocated by SIFT feature matching and voting, and the object appearance model is updated at the same time. The algorithm can not only sense tracking drift but also relocate the object whenever needed. Experimental results demonstrate that this algorithm outperforms state-of-the-art algorithms on many challenging sequences.
Computational Aspects of N-Mixture Models
Dennis, Emily B; Morgan, Byron JT; Ridout, Martin S
2015-01-01
The N-mixture model is widely used to estimate the abundance of a population in the presence of unknown detection probability from only a set of counts subject to spatial and temporal replication (Royle, 2004, Biometrics 60, 105–115). We explain and exploit the equivalence of N-mixture and multivariate Poisson and negative-binomial models, which provides powerful new approaches for fitting these models. We show that particularly when detection probability and the number of sampling occasions are small, infinite estimates of abundance can arise. We propose a sample covariance as a diagnostic for this event, and demonstrate its good performance in the Poisson case. Infinite estimates may be missed in practice, due to numerical optimization procedures terminating at arbitrarily large values. It is shown that the use of a bound, K, for an infinite summation in the N-mixture likelihood can result in underestimation of abundance, so that default values of K in computer packages should be avoided. Instead we propose a simple automatic way to choose K. The methods are illustrated by analysis of data on Hermann's tortoise Testudo hermanni. PMID:25314629
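The role of the bound K is easy to see in code: the N-mixture likelihood for one site marginalizes the latent abundance N over a truncated range, and truncating too low discards probability mass. A stdlib sketch with invented values (`lam`, `p`, and the counts are not from the paper):

```python
import math

def pois_pmf(n, lam):
    return math.exp(n * math.log(lam) - lam - math.lgamma(n + 1))

def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1.0 - p) ** (n - k) if k <= n else 0.0

def nmixture_site_lik(counts, lam, p, K):
    # marginal likelihood of repeated counts at one site: sum over the
    # latent abundance N, truncated at the bound K
    return sum(pois_pmf(N, lam) * math.prod(binom_pmf(y, N, p) for y in counts)
               for N in range(max(counts), K + 1))

counts = [2, 1, 3]           # three sampling occasions at one site
lo = nmixture_site_lik(counts, 20.0, 0.1, K=25)
hi = nmixture_site_lik(counts, 20.0, 0.1, K=300)
print(lo, hi)                # too small a K loses likelihood mass
```

With small detection probability much of the Poisson prior on N sits above a low K, which is exactly the underestimation mechanism the paper warns about.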
Characterization of the ITER model negative ion source during long pulse operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemsworth, R.S.; Boilson, D.; Crowley, B.
2006-03-15
It is foreseen to operate the neutral beam system of the International Thermonuclear Experimental Reactor (ITER) for pulse lengths extending up to 1 h. The performance of the KAMABOKO III negative ion source, which is a model of the source designed for ITER, is being studied on the MANTIS test bed at Cadarache. This article reports the latest results from the characterization of the ion source, in particular electron energy distribution measurements and the comparison between positive ion and negative ion extraction from the source.
Silva, Ivair R
2018-01-15
Type I error probability spending functions are commonly used for designing sequential analyses of binomial data in clinical trials, and they are also quickly emerging for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that, for clinical trials, when the null hypothesis is not rejected it is still important to minimize the sample size. In post-market drug and vaccine safety surveillance, by contrast, that is not important: especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to be minimized is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is more suitable for post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis. Copyright © 2017 John Wiley & Sons, Ltd.
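The convex-versus-concave contrast can be illustrated with the common power family of spending functions, alpha(t) = alpha · t^rho; the rho values are free choices for illustration, not values from the paper:

```python
def power_spending(alpha, t, rho):
    # cumulative Type I error spent by information fraction t in (0, 1]:
    # rho < 1 gives a concave shape (spends error early),
    # rho > 1 a convex one (saves error for late looks)
    return alpha * t ** rho

alpha = 0.05
early_t = 0.2                      # 20% of the surveillance horizon observed
concave = power_spending(alpha, early_t, 0.5)
convex = power_spending(alpha, early_t, 2.0)
print(concave, convex)             # the concave shape spends far more error early
```

Spending more error early shortens the expected time to signal when the null is false, which is the performance measure the paper targets.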
Modelling road accident blackspots data with the discrete generalized Pareto distribution.
Prieto, Faustino; Gómez-Déniz, Emilio; Sarabia, José María
2014-10-01
This study shows how road traffic network events, in particular road accidents at blackspots, can be modelled with simple probabilistic distributions. We considered the number of crashes and the number of fatalities at Spanish blackspots in the period 2003-2007, from the Spanish General Directorate of Traffic (DGT). We modelled those datasets, respectively, with the discrete generalized Pareto distribution (a discrete parametric model with three parameters) and with the discrete Lomax distribution (a discrete parametric model with two parameters, and a particular case of the previous model). To that end, we analyzed the basic properties of both parametric models: cumulative distribution, survival, probability mass, quantile and hazard functions, genesis and rth-order moments; applied two methods of parameter estimation: the μ and (μ+1) frequency method and the maximum likelihood method; used two goodness-of-fit tests: the Chi-square test and the discrete Kolmogorov-Smirnov test based on bootstrap resampling; and compared them with the classical negative binomial distribution in terms of absolute probabilities and in models including covariates. We found that these probabilistic models can be useful for describing the road accident blackspot datasets analyzed. Copyright © 2014 Elsevier Ltd. All rights reserved.
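Discretizing a continuous survival function is the standard construction for such models; a sketch for a discrete Lomax (parameter names are generic, not the paper's notation):

```python
def lomax_sf(x, shape, scale):
    # survival function of the continuous Lomax evaluated at integer x
    return (1.0 + x / scale) ** (-shape)

def discrete_lomax_pmf(x, shape, scale):
    # P(X = x) = S(x) - S(x + 1): differencing the survival function
    return lomax_sf(x, shape, scale) - lomax_sf(x + 1, shape, scale)
```

The pmf telescopes, so partial sums equal 1 − S(N); this makes the heavy right tail, which motivates such models for blackspot data, easy to inspect.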
Sahin, Ceren; Doostdar, Nazanin; Neill, Joanna C
2016-10-01
Negative symptoms in schizophrenia remain an unmet clinical need. There is no licensed treatment specifically for this debilitating aspect of the disorder and effect sizes of new therapies are too small to make an impact on quality of life and function. Negative symptoms are multifactorial but often considered in terms of two domains, expressive deficit incorporating blunted affect and poverty of speech and avolition incorporating asociality and lack of drive. There is a clear need for improved understanding of the neurobiology of negative symptoms which can be enabled through the use of carefully validated animal models. While there are several tests for assessing sociability in animals, tests for blunted affect in schizophrenia are currently lacking. Two paradigms have recently been developed for assessing negative affect of relevance to depression in rats. Here we assess their utility for studying negative symptoms in schizophrenia using our well validated model for schizophrenia of sub-chronic (sc) treatment with Phencyclidine (PCP) in adult female rats. Results demonstrate that sc PCP treatment produces a significant negative affect bias in response to a high value reward in the optimistic and affective bias tests. Our results are not easily explained by the known cognitive deficits induced by sc PCP and support the hypothesis of a negative affective bias in this model. We suggest that further refinement of these two tests will provide a means to investigate the neurobiological basis of negative affect in schizophrenia, thus supporting the assessment of efficacy of new targets for this currently untreated symptom domain. Copyright © 2016 Elsevier B.V. All rights reserved.
Self-affirmation model for football goal distributions
NASA Astrophysics Data System (ADS)
Bittner, E.; Nußbaumer, A.; Janke, W.; Weigel, M.
2007-06-01
Analyzing football score data with statistical techniques, we investigate how the highly co-operative nature of the game is reflected in averaged properties such as the distributions of goals scored by the home and away teams. It turns out that in particular the tails of the distributions are not well described by independent Bernoulli trials, but are rather well modeled by negative binomial or generalized extreme value distributions. To understand this behavior from first principles, we suggest modifying the Bernoulli random process to include a simple component of self-affirmation, which seems to describe the data surprisingly well and allows us to interpret the observed deviation from Gaussian statistics. The phenomenological distributions used before can be understood as special cases within this framework. We analyzed historical football score data from many leagues in Europe as well as from international tournaments and found the proposed models to be applicable rather universally. In particular, here we compare men's and women's leagues and the separate German leagues during the cold-war period and find some remarkable differences.
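The self-affirmation modification is simple to simulate: each goal already scored raises the scoring probability for the rest of the match, which fattens the tail relative to plain Bernoulli trials. A sketch with invented parameter values, not the fitted ones from the paper:

```python
import random

def goals_plain(p, trials, rng):
    # independent Bernoulli trials: one potential scoring chance per minute
    return sum(rng.random() < p for _ in range(trials))

def goals_self_affirming(p, kappa, trials, rng):
    # each goal raises the per-trial scoring probability by kappa
    goals = 0
    for _ in range(trials):
        if rng.random() < min(1.0, p + kappa * goals):
            goals += 1
    return goals

def dispersion(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1) / m

rng = random.Random(7)
plain = [goals_plain(0.04, 90, rng) for _ in range(5000)]
boosted = [goals_self_affirming(0.04, 0.02, 90, rng) for _ in range(5000)]
print(dispersion(plain), dispersion(boosted))
```

The plain process stays slightly under-dispersed (binomial), while the self-affirming one becomes over-dispersed, in the direction of the negative binomial tails the paper reports.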
An INAR(1) Negative Multinomial Regression Model for Longitudinal Count Data.
ERIC Educational Resources Information Center
Bockenholt, Ulf
1999-01-01
Discusses a regression model for the analysis of longitudinal count data in a panel study by adapting an integer-valued first-order autoregressive (INAR(1)) Poisson process to represent time-dependent correlation between counts. Derives a new negative multinomial distribution by combining INAR(1) representation with a random effects approach.…
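The INAR(1) Poisson process the abstract adapts can be sketched in a few lines: the next count is a binomial thinning of the current one plus a Poisson innovation, which induces AR(1)-like serial correlation between counts. Parameter values are invented:

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's method; adequate for small lam
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def inar1_series(alpha, lam, n, rng):
    # X_t = alpha ∘ X_{t-1} + e_t, where ∘ is binomial thinning
    x = poisson_sample(lam / (1.0 - alpha), rng)   # start near the stationary mean
    series = [x]
    for _ in range(n - 1):
        survivors = sum(rng.random() < alpha for _ in range(x))  # thinning
        x = survivors + poisson_sample(lam, rng)
        series.append(x)
    return series

def lag1_autocorr(xs):
    m = sum(xs) / len(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

rng = random.Random(3)
series = inar1_series(0.7, 1.0, 500, rng)
print(lag1_autocorr(series))   # close to the thinning parameter alpha
```

The lag-1 autocorrelation of a stationary INAR(1) equals the thinning parameter, which is what makes it a natural count-data analogue of the Gaussian AR(1).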
Image Analysis of a Negatively Curved Graphitic Sheet Model for Amorphous Carbon
NASA Astrophysics Data System (ADS)
Bursill, L. A.; Bourgeois, Laure N.
High-resolution electron micrographs are presented which show essentially curved single sheets of graphitic carbon. Image calculations are then presented for the random surface schwarzite-related model of Townsend et al. (Phys. Rev. Lett. 69, 921-924, 1992). Comparison with experimental images does not rule out the contention that such models, containing surfaces of negative curvature, may be useful for predicting some physical properties of specific forms of nanoporous carbon. Some difficulties of the model predictions, when compared with the experimental images, are pointed out. The range of application of this model, as well as competing models, is discussed briefly.
Some considerations for excess zeroes in substance abuse research.
Bandyopadhyay, Dipankar; DeSantis, Stacia M; Korte, Jeffrey E; Brady, Kathleen T
2011-09-01
Count data collected in substance abuse research often come with an excess of "zeroes," which are typically handled using zero-inflated regression models. However, the design aspects of those studies need to be considered before using such a statistical model to ascertain the sources of the zeroes. We sought to illustrate hurdle models as alternatives to zero-inflated models for validating a two-stage decision-making process in situations of "excess zeroes." We use data from a study of 45 cocaine-dependent subjects in which the primary scientific question was whether study participation influences drug-seeking behavior. The outcome, the frequency (count) of cocaine-use days per week, is bounded (ranging from 0 to 7). We fit and compare binomial, Poisson, negative binomial, and the hurdle versions of these models to study the effect of gender, age, time, and study participation on cocaine use. The hurdle binomial model provides the best fit. Gender and time are not predictive of use. Higher odds of use versus no use are associated with age; however, once use is experienced, the odds of further use decrease with increasing age. Participation was associated with higher odds of no cocaine use; once there is use, participation reduced the odds of further use. Age and study participation are significantly predictive of cocaine-use behavior. The two-stage decision process as modeled by a hurdle binomial model (appropriate for bounded count data with excess zeroes) provides interesting insights into the study of covariate effects on count responses of substance use, when all enrolled subjects are believed to be "at risk" of use.
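The hurdle binomial structure described here is easy to write down: one parameter governs the zero hurdle, and a zero-truncated Binomial(7, p) governs the days of use per week once the hurdle is crossed. A sketch with invented parameter values:

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)

def hurdle_binom_pmf(k, pi_zero, n, p):
    # stage 1: a Bernoulli hurdle decides "no use" vs "any use";
    # stage 2: positive counts follow a zero-truncated Binomial(n, p)
    if k == 0:
        return pi_zero
    return (1.0 - pi_zero) * binom_pmf(k, n, p) / (1.0 - binom_pmf(0, n, p))

# days of cocaine use per week: bounded count, 0..7
probs = [hurdle_binom_pmf(k, 0.45, 7, 0.35) for k in range(8)]
print(probs[0], sum(probs))
```

Unlike a zero-inflated model, every zero here comes from the hurdle stage, which is what lets covariates act separately on the two decisions.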
Emotional Intelligence and Negative Feelings: A Gender Specific Moderated Mediation Model
ERIC Educational Resources Information Center
Karakus, Mehmet
2013-01-01
This study aims to clarify the effect of emotional intelligence (EI) on negative feelings (stress, anxiety, burnout and depression) in a gender specific model. Four hundred and twenty-five primary school teachers (326 males, 99 females) completed the measures of EI, stress, anxiety, burnout and depression. The multi-group analysis was performed…
McGraw, Benjamin A; Koppenhöfer, Albrecht M
2009-06-01
Binomial sequential sampling plans were developed to forecast larval damage to golf course turfgrass by the weevil Listronotus maculicollis Kirby (Coleoptera: Curculionidae) and to aid in the development of integrated pest management programs for the weevil. Populations of emerging overwintered adults were sampled over a 2-yr period to determine the relationship between adult counts, larval density, and turfgrass damage. Larval density and the composition of preferred host plants (Poa annua L.) significantly affected the expression of turfgrass damage. Multiple regression indicates that damage may occur in moderately mixed P. annua stands with as few as 10 larvae per 0.09 m2. However, > 150 larvae were required before damage became apparent in pure Agrostis stolonifera L. plots. Adult counts during peaks in emergence, as well as cumulative counts across the emergence period, were significantly correlated with future densities of larvae. Eight binomial sequential sampling plans based on two tally thresholds for classifying infestation (T = 1 and T = 2 adults) and four adult density thresholds (0.5, 0.85, 1.15, and 1.35 per 3.34 m2) were developed to forecast the likelihood of turfgrass damage using adult counts during peak emergence. Resampling for Validation of Sample Plans software was used to validate the sampling plans with field-collected data sets. All sampling plans were found to deliver accurate classifications (correct decisions were made between 84.4 and 96.8% of the time) in a practical timeframe (average sampling cost < 22.7 min).
NASA Astrophysics Data System (ADS)
Ma, Xiao; Zheng, Wei-Fan; Jiang, Bao-Shan; Zhang, Ji-Ye
2016-10-01
With the development of traffic systems, issues such as traffic jams become more and more serious. Efficient traffic flow theory is needed to guide the overall controlling, organizing and management of traffic systems. On the basis of the cellular automata model and the traffic flow model with look-ahead potential, a new cellular automata traffic flow model with negative exponential weighted look-ahead potential is presented in this paper. By introducing the negative exponential weighting coefficient into the look-ahead potential, so that vehicles closer to the driver contribute more to the potential, the model better matches the driver's random decision-making based on the traffic environment the driver is facing. The fundamental diagrams for different weighting parameters are obtained by numerical simulations, which show that the negative exponential weighting coefficient has an obvious effect on high-density traffic flux. The complex high-density nonlinear traffic behavior is also reproduced by the numerical simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 11572264, 11172247, 11402214, and 61373009).
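The weighting scheme can be sketched in a few lines: the look-ahead potential sums occupancy over the cells ahead with negative-exponential weights, so nearer vehicles dominate the driver's decision. Function and parameter names are illustrative, not the paper's notation:

```python
import math

def lookahead_potential(occupancy_ahead, decay):
    # occupancy_ahead[j-1] is 1 if the j-th cell ahead holds a vehicle;
    # nearer cells receive larger weights exp(-decay * j)
    return sum(math.exp(-decay * j) * occ
               for j, occ in enumerate(occupancy_ahead, start=1))

near = lookahead_potential([1, 0, 0, 0], 0.5)   # vehicle immediately ahead
far = lookahead_potential([0, 0, 0, 1], 0.5)    # same vehicle, farther away
print(near, far)
```

A vehicle immediately ahead produces a larger potential than the same vehicle farther away, which is the intended effect of the exponential weighting.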
Models for Cometary Comae Containing Negative Ions
NASA Technical Reports Server (NTRS)
Cordiner, M. A.; Charnley, S. B.
2012-01-01
The presence of negative ions (anions) in cometary comae is known from Giotto mass spectrometry of 1P/Halley. The anions O(-), OH(-), C(-), CH(-) and CN(-) have been detected, as well as unidentified anions with masses 22-65 and 85-110 amu [1]. Organic molecular anions such as C4H(-) and C6H(-) are known to have a significant impact on the charge balance of interstellar clouds and circumstellar envelopes and have been shown to act as catalysts for the gas-phase synthesis of larger hydrocarbon molecules in the ISM, but their importance in cometary comae has not yet been fully explored. We present details of our new models for the chemistry of cometary comae that include atomic and molecular anions. We calculate the impact of these anions on the charge balance and examine their importance for cometary coma chemistry.
Marginalized zero-altered models for longitudinal count data.
Tabb, Loni Philip; Tchetgen, Eric J Tchetgen; Wellenius, Greg A; Coull, Brent A
2016-10-01
Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias.
Martin, Julien; Royle, J. Andrew; MacKenzie, Darryl I.; Edwards, Holly H.; Kery, Marc; Gardner, Beth
2011-01-01
Summary: 1. Binomial mixture models use repeated count data to estimate abundance. They are becoming increasingly popular because they provide a simple and cost-effective way to account for imperfect detection. However, these models assume that individuals are detected independently of each other. This assumption may often be violated in the field. For instance, manatees (Trichechus manatus latirostris) may surface in turbid water (i.e. become available for detection during aerial surveys) in a correlated manner (i.e. in groups). Correlated behaviour leading to non-independent detection of individuals may also be relevant in other systems (e.g. correlated patterns of singing in birds and amphibians). 2. We extend binomial mixture models to account for correlated behaviour and therefore for non-independent detection of individuals. We simulated correlated behaviour using beta-binomial random variables. Our approach can be used to simultaneously estimate abundance, detection probability and a correlation parameter. 3. Fitting binomial mixture models to data that followed a beta-binomial distribution resulted in an overestimation of abundance even for moderate levels of correlation. In contrast, the beta-binomial mixture model performed considerably better in our simulation scenarios. We also present a goodness-of-fit procedure to evaluate the fit of beta-binomial mixture models. 4. We illustrate our approach by fitting both binomial and beta-binomial mixture models to aerial survey data of manatees in Florida. We found that the binomial mixture model did not fit the data, whereas there was no evidence of lack of fit for the beta-binomial mixture model. This example helps illustrate the importance of using simulations and assessing goodness-of-fit when analysing ecological data with N-mixture models. Indeed, both the simulations and the goodness-of-fit procedure highlighted the limitations of the standard binomial mixture model for aerial
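The core statistical point above — that correlated detections inflate the variance of repeated counts beyond what the binomial allows — can be seen directly from the beta-binomial distribution. A minimal sketch with scipy; the abundance, mean detection probability, and correlation values are illustrative only.

```python
import numpy as np
from scipy import stats

N = 50          # true abundance at a site
p = 0.4         # mean detection probability
rho = 0.3       # intraclass correlation among individual detections

# Beta(a, b) mixing distribution chosen so that mean = p and ICC = rho
a = p * (1 - rho) / rho
b = (1 - p) * (1 - rho) / rho

binom = stats.binom(N, p)
betabin = stats.betabinom(N, a, b)

print(f"means:     {binom.mean():.2f} vs {betabin.mean():.2f}")   # equal
print(f"variances: {binom.var():.2f} vs {betabin.var():.2f}")     # inflated
```

A binomial mixture model applied to such data attributes the extra spread to abundance variation, which is why it overestimates abundance in the simulations described above.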
NASA Astrophysics Data System (ADS)
Spaggiari, Andrea; Dragoni, Eugenio; Tuissi, Ausonio
2014-07-01
This work aims at the experimental characterization and model validation of shape memory alloy (SMA) Negator springs. According to classic engineering books on springs, a Negator spring is a spiral spring made of a strip of metal wound on the flat with an inherent curvature such that, in repose, each coil wraps tightly on its inner neighbor. The main feature of a Negator spring is its nearly constant force-displacement behavior during unwinding of the strip. Moreover, the stroke is very long, theoretically infinite, as it depends only on the length of the initial strip. A Negator spring made of SMA was built and experimentally tested to demonstrate the feasibility of this actuator. The shape memory Negator spring behavior can be modeled with an analytical procedure, which is in good agreement with the experimental tests and can be used for design purposes. In both cases, the material is modeled as elastic in the austenitic range, while an exponential continuum law is used to describe the martensitic behavior. The experimental results confirm the applicability of this kind of geometry to shape memory alloy actuators, and the analytical model is confirmed to be a powerful design tool to dimension the spring and predict its behavior in both the martensitic and austenitic ranges.
Choi, Seung Hoan; Labadorf, Adam T; Myers, Richard H; Lunetta, Kathryn L; Dupuis, Josée; DeStefano, Anita L
2017-02-06
Next generation sequencing provides a count of RNA molecules in the form of short reads, yielding discrete, often highly non-normally distributed gene expression measurements. Although Negative Binomial (NB) regression has been generally accepted in the analysis of RNA sequencing (RNA-Seq) data, its appropriateness has not been exhaustively evaluated. We explore logistic regression as an alternative method for RNA-Seq studies designed to compare cases and controls, where disease status is modeled as a function of RNA-Seq reads using simulated and Huntington disease data. We evaluate the effect of adjusting for covariates that have an unknown relationship with gene expression. Finally, we incorporate the data adaptive method in order to compare false positive rates. When the sample size is small or the expression levels of a gene are highly dispersed, the NB regression shows inflated Type-I error rates but the Classical logistic and Bayes logistic (BL) regressions are conservative. Firth's logistic (FL) regression performs well or is slightly conservative. Large sample size and low dispersion generally make Type-I error rates of all methods close to nominal alpha levels of 0.05 and 0.01. However, Type-I error rates are controlled after applying the data adaptive method. The NB, BL, and FL regressions gain increased power with large sample size, large log2 fold-change, and low dispersion. The FL regression has comparable power to NB regression. We conclude that implementing the data adaptive method appropriately controls Type-I error rates in RNA-Seq analysis. Firth's logistic regression provides a concise statistical inference process and reduces spurious associations from inaccurately estimated dispersion parameters in the negative binomial framework.
ERIC Educational Resources Information Center
Sevigny, Eric L.; Zhang, Gary
2018-01-01
This study investigates how barriers to school-based crime prevention programming moderate the effects of situational crime prevention (SCP) policies on levels of violent crime in U.S. public high schools. Using data from the 2008 School Survey on Crime and Safety, we estimate a series of negative binomial regression models with interactions to…
A statistical model of false negative and false positive detection of phase singularities.
Jacquemet, Vincent
2017-10-01
The complexity of cardiac fibrillation dynamics can be assessed by analyzing the distribution of phase singularities (PSs) observed using mapping systems. Interelectrode distance, however, limits the accuracy of PS detection. To investigate in a theoretical framework the PS false negative and false positive rates in relation to the characteristics of the mapping system and fibrillation dynamics, we propose a statistical model of phase maps with controllable number and locations of PSs. In this model, phase maps are generated from randomly distributed PSs with physiologically plausible directions of rotation. Noise and distortion of the phase are added. PSs are detected using topological charge contour integrals on regular grids of varying resolutions. Over 100 × 10^6 realizations of the random field process are used to estimate average false negative and false positive rates using a Monte-Carlo approach. The false detection rates are shown to depend on the average distance between neighboring PSs expressed in units of interelectrode distance, following approximately a power law with exponents in the range of 1.14 to 2 for false negatives and around 2.8 for false positives. In the presence of noise or distortion of phase, false detection rates at high resolution tend to a non-zero noise-dependent lower bound. This model provides an easy-to-implement tool for benchmarking PS detection algorithms over a broad range of configurations with multiple PSs.
Stamm, John W.; Long, D. Leann; Kincade, Megan E.
2012-01-01
Over the past five to ten years, zero-inflated count regression models have been increasingly applied to the analysis of dental caries indices (e.g., DMFT, dfms, etc.). The main reason is the broad decline in children's caries experience, such that dmf and DMF indices more frequently generate low or even zero counts. This article specifically reviews the application of zero-inflated Poisson and zero-inflated negative binomial regression models to dental caries, with emphasis on the description of the models and the interpretation of fitted model results given the study goals. The review finds that interpretations provided in the published caries research are often imprecise or inadvertently misleading, particularly with respect to failing to discriminate between inference for the class of susceptible persons defined by such models and inference for the sampled population in terms of overall exposure effects. Recommendations are provided to enhance the use as well as the interpretation and reporting of results of count regression models when applied to epidemiological studies of dental caries. PMID:22710271
Impact of negation salience and cognitive resources on negation during attitude formation.
Boucher, Kathryn L; Rydell, Robert J
2012-10-01
Because of the increased cognitive resources required to process negations, past research has shown that explicit attitude measures are more sensitive to negations than implicit attitude measures. The current work demonstrated that the differential impact of negations on implicit and explicit attitude measures was moderated by (a) the extent to which the negation was made salient and (b) the amount of cognitive resources available during attitude formation. When negations were less visually salient, explicit but not implicit attitude measures reflected the intended valence of the negations. When negations were more visually salient, both explicit and implicit attitude measures reflected the intended valence of the negations, but only when perceivers had ample cognitive resources during encoding. Competing models of negation processing, schema-plus-tag and fusion, were examined to determine how negation salience impacts the processing of negations.
The Role of Implicit Negative Feedback in SLA: Models and Recasts in Japanese and Spanish.
ERIC Educational Resources Information Center
Long, Michael; Inagaki, Shunji; Ortega, Lourdes
1998-01-01
Two experiments were conducted to assess relative utility of models and recasts in second-language (L2) Japanese and Spanish. Using pretest, posttest, control group design, each study provided evidence of adults' ability to learn from implicit negative feedback; in one case, support for notion that reactive implicit negative feedback can be more…
Modeling the Adaptive Role of Negative Signaling in Honey Bee Intraspecific Competition.
Johnson, Brian R; Nieh, James C
2010-11-01
Collective decision making in the social insects often proceeds via feedback cycles based on positive signaling. Negative signals have, however, been found in a few contexts in which costs exist for paying attention to no longer useful information. Here we incorporate new research on the specificity and context of the negative stop signal into an agent based model of honey bee foraging to explore the adaptive basis of negative signaling in the dance language. Our work suggests that the stop signal, by acting as a counterbalance to the waggle dance, allows colonies to rapidly shut down attacks on other colonies. This could be a key adaptation, as the costs of attacking a colony strong enough to defend itself are significant. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s10905-010-9229-5) contains supplementary material, which is available to authorized users.
A quantile count model of water depth constraints on Cape Sable seaside sparrows
Cade, B.S.; Dong, Q.
2008-01-01
1. A quantile regression model for counts of breeding Cape Sable seaside sparrows Ammodramus maritimus mirabilis (L.) as a function of water depth and previous year abundance was developed based on extensive surveys, 1992-2005, in the Florida Everglades. The quantile count model extends linear quantile regression methods to discrete response variables, providing a flexible alternative to discrete parametric distributional models, e.g. Poisson, negative binomial and their zero-inflated counterparts. 2. Estimates from our multiplicative model demonstrated that negative effects of increasing water depth in breeding habitat on sparrow numbers were dependent on recent occupation history. Upper 10th percentiles of counts (one to three sparrows) decreased with increasing water depth from 0 to 30 cm when sites were not occupied in previous years. However, upper 40th percentiles of counts (one to six sparrows) decreased with increasing water depth for sites occupied in previous years. 3. The greatest decreases (-50% to -83%) in upper quantiles of sparrow counts occurred as water depths increased from 0 to 15 cm when previous year counts were 1, but a small proportion of sites (5-10%) held at least one sparrow even as water depths increased to 20 or 30 cm. 4. A zero-inflated Poisson regression model provided estimates of conditional means that also decreased with increasing water depth, but rates of change were lower and decreased with increasing previous year counts compared with the quantile count model. Quantiles computed for the zero-inflated Poisson model enhanced interpretation of this model but showed greater lack-of-fit for water depths > 0 cm and previous year counts of 1, conditions where the negative effect of water depth was readily apparent and was fitted better by the quantile count model.
Sentential Negation in English
ERIC Educational Resources Information Center
Mowarin, Macaulay
2009-01-01
This paper undertakes a detailed analysis of sentential negation in the English language with Chomsky's Government-Binding theory of Transformational Grammar as theoretical model. It distinguishes between constituent and sentential negation in English. The essay identifies the exact position of Negation phrase in an English clause structure. It…
Flood, Nicola; Page, Andrew; Hooke, Geoff
2018-05-03
Routine outcome monitoring benefits treatment by identifying potential no-change and deterioration. The present study compared two methods of identifying early change and their ability to predict negative outcomes on self-report symptom and wellbeing measures. A total of 1467 voluntary day patients participated in a 10-day group Cognitive Behaviour Therapy (CBT) program and completed the symptom and wellbeing measures daily. Early change, as defined by (a) the clinical significance method and (b) longitudinal modelling, was compared on each measure. Early change as defined by the simpler clinical significance method was superior to longitudinal modelling at predicting negative outcomes. The longitudinal modelling method failed to detect a group of deteriorated patients, and agreement between the early change methods and the final unchanged outcome was higher for the clinical significance method. Therapists could use the clinical significance early change method during treatment to alert them to patients at risk for negative outcomes, which in turn could allow therapists to prevent those negative outcomes from occurring.
Modeling of surface-dominated plasmas: from electric thruster to negative ion source.
Taccogna, F; Schneider, R; Longo, S; Capitelli, M
2008-02-01
This contribution presents two important applications of the particle-in-cell/Monte Carlo technique to ion sources: modeling of the Hall thruster SPT-100 for space propulsion and of the rf negative ion source for ITER neutral beam injection. In the first case translational degrees of freedom are involved, while in the second case internal degrees of freedom (vibrational levels) are excited. Computational results show that in both cases plasma-wall and gas-wall interactions play a dominant role: secondary electron emission from the lateral ceramic wall of the SPT-100, and electron capture from caesiated surfaces by positive ions and atoms in the rf negative ion source.
Mental health status and healthcare utilization among community dwelling older adults.
Adepoju, Omolola; Lin, Szu-Hsuan; Mileski, Michael; Kruse, Clemens Scott; Mask, Andrew
2018-04-27
Shifts in mental health utilization patterns are necessary to allow meaningful access to care for vulnerable populations. There have been long-standing issues in how mental health care is provided, which have limited its efficacy for those seeking it. This study assesses the relationship between mental health status and healthcare utilization among adults ≥65 years. A negative binomial regression model was used to assess the relationship between mental health status and healthcare utilization related to office-based physician visits, while a two-part model, consisting of logistic regression and negative binomial regression, was used to separately model emergency visits and inpatient services. The receipt of care in office-based settings was marginally higher for subjects with mental health difficulties. Both probabilities and counts of inpatient hospitalizations were similar across mental health categories. The count of ER visits was similar across mental health categories; however, the probability of having an emergency department visit was marginally higher for older adults who reported mental health difficulties in 2012. These findings are encouraging and lend promise to recent initiatives addressing gaps in mental healthcare services.
Park, Annie D.; Farrahi, Layla N.; Pang, Raina D.; Guillot, Casey R.; Aguirre, Claudia G.; Leventhal, Adam M.
2016-01-01
Objective: Negative urgency—the tendency to act rashly during negative affective states—is a risk factor for regular cigarette smoking. This human laboratory study tested a novel theoretical model of the underlying mechanisms linking negative urgency and smoking motivation, which purports that smokers with high negative urgency are at increased susceptibility to abstinence-induced increases in negative affect, which, in turn, provokes the urge to smoke to suppress negative affect. Method: Smokers (N = 180, >10 cigarettes/day) attended a baseline session at which they completed self-report measures of negative urgency and other co-factors and subsequently attended two counterbalanced within-subject experimental sessions (i.e., 16 hours of smoking abstinence or smoking as usual). At both experimental sessions, self-reported tobacco withdrawal symptoms, affect, and smoking urge were assessed. Results: Negative urgency was associated with larger abstinence-induced increases in tobacco withdrawal symptoms, negative affect, and urge to smoke to alleviate negative affect, both with and without controlling for anxiety, depression, tobacco dependence, and sensation seeking (βs > .18, ps < .05). The association between negative urgency and abstinence-induced increases in urge to smoke to alleviate negative affect was mediated by greater abstinence-induced increases in negative affect (βs > .062, ps = .01). Conclusions: These results provide initial support of this model by providing evidence that smokers with higher (vs. lower) negative urgency may be more prone to greater negative affect during withdrawal, which in turn may promote urge to smoke to suppress negative emotion. Research extending this model to other settings, measures, and methodological approaches may be fruitful. PMID:27588535
Wilbaux, M; Tod, M; De Bono, J; Lorente, D; Mateo, J; Freyer, G; You, B; Hénin, E
2015-01-01
Assessment of treatment efficacy in metastatic castration-resistant prostate cancer (mCRPC) is limited by frequent nonmeasurable bone metastases. The count of circulating tumor cells (CTCs) is a promising surrogate marker that may replace the widely used prostate-specific antigen (PSA). The purpose of this study was to quantify the dynamic relationships between the longitudinal kinetics of these markers during treatment in patients with mCRPC. Data from 223 patients with mCRPC treated by chemotherapy and/or hormonotherapy were analyzed for up to 6 months of treatment. A semimechanistic model was built, combining the following several pharmacometric advanced features: (1) Kinetic-Pharmacodynamic (K-PD) compartments for treatments (chemotherapy and hormonotherapy); (2) a latent variable linking both marker kinetics; (3) modeling of CTC kinetics with a cell lifespan model; and (4) a negative binomial distribution for the CTC random sampling. Linked with survival, this model would potentially be useful for predicting treatment efficacy during drug development or for therapeutic adjustment in treated patients. PMID:26225253
Technical and biological variance structure in mRNA-Seq data: life in the real world
2012-01-01
Background: mRNA expression data from next-generation sequencing platforms is obtained in the form of counts per gene or exon. Counts have classically been assumed to follow a Poisson distribution, in which the variance is equal to the mean. The Negative Binomial distribution, which allows for over-dispersion, i.e., for the variance to be greater than the mean, is commonly used to model count data as well. Results: In mRNA-Seq data from 25 subjects, we found technical variation to generally follow a Poisson distribution, as has been reported previously, while biological variability was over-dispersed relative to the Poisson model. The mean-variance relationship across all genes was quadratic, in keeping with a Negative Binomial (NB) distribution. Over-dispersed Poisson and NB distributional assumptions demonstrated marked improvements in goodness-of-fit (GOF) over the standard Poisson model assumptions, but with evidence of over-fitting in some genes. Modeling of experimental effects improved GOF for high-variance genes but increased the over-fitting problem. Conclusions: These conclusions will guide development of analytical strategies for accurate modeling of variance structure in these data and for sample size determination, which in turn will aid in the identification of true biological signals that inform our understanding of biological systems. PMID:22769017
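The quadratic mean-variance relationship reported here is easy to check on simulated data: under an NB model, var = mu + phi*mu^2, so regressing (variance − mean) on mean^2 across genes recovers the dispersion phi. All settings below (gene count, subject count, dispersion) are illustrative, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n_genes, n_subj, phi = 2000, 25, 0.2
mu = rng.uniform(5, 500, size=n_genes)

# NB counts per gene across subjects (numpy's size/prob parameterization)
r = 1.0 / phi
counts = rng.negative_binomial(r, r / (r + mu[:, None]),
                               size=(n_genes, n_subj))
m = counts.mean(axis=1)
v = counts.var(axis=1, ddof=1)

# Least squares through the origin: (var - mean) ~ phi * mean^2
phi_hat = np.sum((v - m) * m**2) / np.sum(m**4)
print(f"estimated dispersion: {phi_hat:.3f}")   # close to the true 0.2
```

Under a pure Poisson model the fitted phi would be near zero; a clearly positive phi is the signature of the quadratic (NB-like) mean-variance structure the abstract describes.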
Brian S. Cade; Barry R. Noon; Rick D. Scherer; John J. Keane
2017-01-01
Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical...
Is the negative glow plasma of a direct current glow discharge negatively charged?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogdanov, E. A.; Saifutdinov, A. I.; Demidov, V. I., E-mail: Vladimir.Demidov@mail.wvu.edu
A classic problem in gas discharge physics is discussed: what is the sign of the charge density in the negative glow region of a glow discharge? It is shown that the traditional interpretation in textbooks on gas discharge physics, which states that the negative glow plasma carries a negative charge, is based on analogies with a simple one-dimensional model of the discharge. Because real glow discharges with a positive column are always two-dimensional, the transversal (radial) term in the divergence of the electric field can produce a non-monotonic axial profile of the charge density in the plasma while maintaining a positive sign. A numerical calculation of a glow discharge is presented, showing a positive space charge in the negative glow under conditions where a one-dimensional model of the discharge would predict a negative space charge.
Lincoln, Karen D.; Taylor, Robert Joseph; Bullard, Kai McKeever; Chatters, Linda M.; Himle, Joseph A.; Woodward, Amanda Toler; Jackson, James S.
2010-01-01
Objectives: Both emotional support and negative interaction with family members have been linked to mental health. However, few studies have examined the associations between emotional support, negative interaction, and psychiatric disorders in late life. This study investigated the relationship of emotional support and negative interaction to the lifetime prevalence of mood and anxiety disorders among older African Americans. Design: The analyses utilized the National Survey of American Life. Methods: Logistic regression and negative binomial regression analyses were used to examine the effect of emotional support and negative interaction with family members on the prevalence of lifetime DSM-IV mood and anxiety disorders. Participants: Data from 786 African Americans aged 55 years and older were used. Measurement: The DSM-IV World Mental Health Composite International Diagnostic Interview (WMH-CIDI) was used to assess mental disorders. Three dependent variables were investigated: the prevalence of lifetime mood disorders, the prevalence of lifetime anxiety disorders, and the total number of lifetime mood and anxiety disorders. Results: Multivariate analysis found that emotional support was not associated with any of the three dependent variables. Negative interaction was significantly and positively associated with the odds of having a lifetime mood disorder, a lifetime anxiety disorder, and with the number of lifetime mood and anxiety disorders. Conclusions: This is the first study to investigate the relationship between emotional support, negative interaction with family members, and psychiatric disorders among older African Americans. Negative interaction was a risk factor for mood and anxiety disorders among older African Americans, whereas emotional support was not significant. PMID:20157904
Shi, Qi; Abdel-Aty, Mohamed; Yu, Rongjie
2016-03-01
In traffic safety studies, crash frequency modeling of total crashes is the cornerstone before proceeding to more detailed safety evaluation. The relationship between crash occurrence and factors such as traffic flow and roadway geometric characteristics has been extensively explored for a better understanding of crash mechanisms. In this study, a multi-level Bayesian framework has been developed in an effort to identify the crash contributing factors on an urban expressway in the Central Florida area. Two types of traffic data from the Automatic Vehicle Identification system, the processed data capped at the speed limit and the unprocessed data retaining the original speed, were incorporated in the analysis along with road geometric information. The model framework was proposed to account for the hierarchical data structure and the heterogeneity among the traffic and roadway geometric data. Multi-level and random parameters models were constructed and compared with the Negative Binomial model under the Bayesian inference framework. Results showed that the unprocessed traffic data was superior. Both multi-level models and random parameters models outperformed the Negative Binomial model, and the models with random parameters achieved the best model fit. The contributing factors identified imply that on the urban expressway lower speed and higher speed variation could significantly increase the crash likelihood. Other significant geometric factors included auxiliary lanes and horizontal curvature.
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2011-01-01
The capability of an inspection system is established by applying various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that, for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both Hit-Miss and signal amplitude testing, where signal amplitudes are reduced to Hit-Miss by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are executed sequentially in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture-critical inspection are established.
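As a concrete illustration of the binomial estimation underlying the 90/95 POD criterion, the sketch below computes an exact (Clopper-Pearson-style) lower confidence bound on detection probability by bisection. The function name and structure are mine, not part of DOEPOD itself.

```python
from math import comb

def binomial_lower_bound(hits, trials, confidence=0.95, tol=1e-9):
    """Exact lower confidence bound on detection probability, given
    `hits` detections in `trials` demonstrations at one flaw size."""
    if hits == 0:
        return 0.0
    alpha = 1.0 - confidence
    # Find p such that P(X >= hits | trials, p) = alpha by bisection;
    # this upper-tail probability increases monotonically with p.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        p = (lo + hi) / 2.0
        tail = sum(comb(trials, i) * p**i * (1 - p)**(trials - i)
                   for i in range(hits, trials + 1))
        if tail < alpha:
            lo = p  # p too small: the observed hit count is too unlikely
        else:
            hi = p
    return lo

# Classic result: 29 hits in 29 trials demonstrates 90/95 POD,
# since 0.05 ** (1 / 29) is roughly 0.902.
lb = binomial_lower_bound(29, 29)
```

The 29-of-29 case reproduces the familiar rule of thumb: a flaw size with 29 detections in 29 attempts just clears the 0.90 lower bound at 95% confidence.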
On Statistical Modeling of Sequencing Noise in High Depth Data to Assess Tumor Evolution
NASA Astrophysics Data System (ADS)
Rabadan, Raul; Bhanot, Gyan; Marsilio, Sonia; Chiorazzi, Nicholas; Pasqualucci, Laura; Khiabanian, Hossein
2018-07-01
One cause of cancer mortality is tumor evolution to therapy-resistant disease. First-line therapy often targets the dominant clone, and drug resistance can emerge from preexisting clones that gain fitness through therapy-induced natural selection. Such mutations may be identified using targeted sequencing assays by analysis of noise in high-depth data. Here, we develop a comprehensive, unbiased model for the sequencing error background. We find that noise in sufficiently deep DNA sequencing data can be approximated by aggregating negative binomial distributions. Mutations with frequencies above noise may have prognostic value. We evaluate our model with simulated exponentially expanded populations as well as data from cell line and patient sample dilution experiments, demonstrating its utility in prognosticating tumor progression. Our approach may thus identify significant mutations that can cause recurrence. These results are relevant in the pretreatment clinical setting to determine appropriate therapy and prepare for potential recurrence.
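The noise model above treats deep-sequencing background counts as negative binomial (overdispersed Poisson). The sketch below simulates such counts and recovers the NB parameters by the method of moments; the parameter values are assumed for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate background error counts at 2000 positions. NumPy's NB uses
# (n, p) with mean n*(1-p)/p, so n=5, p=0.1 gives mean 45, variance 450.
n_true, p_true = 5.0, 0.1
counts = rng.negative_binomial(n_true, p_true, size=2000)

def nb_moments(x):
    """Method-of-moments NB estimates: with sample mean m and variance
    v > m, p = m/v and n = m**2 / (v - m)."""
    m, v = x.mean(), x.var(ddof=1)
    if v <= m:
        raise ValueError("no overdispersion; a Poisson model is adequate")
    return m * m / (v - m), m / v

n_hat, p_hat = nb_moments(counts)
```

Positions whose observed counts fall far in the upper tail of the fitted background would then be candidate true mutations rather than noise.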
Investigating Individual Differences in Toddler Search with Mixture Models
ERIC Educational Resources Information Center
Berthier, Neil E.; Boucher, Kelsea; Weisner, Nina
2015-01-01
Children's performance on cognitive tasks is often described in categorical terms in that a child is described as either passing or failing a test, or knowing or not knowing some concept. We used binomial mixture models to determine whether individual children could be classified as passing or failing two search tasks, the DeLoache model room…
Comparison of measured and modelled negative hydrogen ion densities at the ECR-discharge HOMER
NASA Astrophysics Data System (ADS)
Rauner, D.; Kurutz, U.; Fantz, U.
2015-04-01
As the negative hydrogen ion density nH- is a key parameter for the investigation of negative ion sources, its diagnostic quantification is essential in source development and operation as well as for fundamental research. By utilizing the photodetachment process of negative ions, two different diagnostic methods can generally be applied: via laser photodetachment, the density of negative ions is measured locally, but only relative to the electron density. To obtain absolute densities, the electron density has to be measured additionally, which introduces further uncertainties. Via cavity ring-down spectroscopy (CRDS), the absolute density of H- is measured directly, however LOS-averaged over the plasma length. At the ECR-discharge HOMER, where H- is produced in the plasma volume, laser photodetachment is applied as the standard method to measure nH-. The additional application of CRDS provides the possibility to directly obtain absolute values of nH-, thereby successfully benchmarking the laser photodetachment system, as both diagnostics are in good agreement. In the investigated pressure range from 0.3 to 3 Pa, the measured negative hydrogen ion density shows a maximum at 1 to 1.5 Pa and an approximately linear response to increasing input microwave powers from 200 up to 500 W. Additionally, the volume production of negative ions is modelled zero-dimensionally by balancing H- production and destruction processes. The modelled densities are adapted to the absolute measurements of nH- via CRDS, allowing collisions of H- with hydrogen atoms (associative and non-associative detachment) to be identified as the dominant loss process of H- in the plasma volume at HOMER. Furthermore, the characteristic peak of nH- observed at 1 to 1.5 Pa is attributed to a comparable behaviour of the electron density with varying pressure, as ne determines the volume production rate via dissociative electron attachment to vibrationally excited hydrogen molecules.
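The zero-dimensional balance described above can be illustrated by equating one production channel (dissociative electron attachment) with one loss channel (detachment on hydrogen atoms). All rate coefficients and densities below are placeholder values for the arithmetic, not measurements from HOMER.

```python
# Steady-state 0-D balance for volume-produced H-:
#   k_da * n_e * n_h2v  =  k_det * n_h * n_hminus
# All values are illustrative placeholders (SI units).
k_da  = 3.0e-15   # m^3/s, dissociative attachment to H2(v) (assumed)
k_det = 1.5e-15   # m^3/s, detachment in collisions with H (assumed)
n_e   = 1.0e17    # m^-3, electron density (assumed)
n_h2v = 5.0e17    # m^-3, vibrationally excited H2 density (assumed)
n_h   = 2.0e18    # m^-3, atomic hydrogen density (assumed)

# Solve the balance for the negative ion density:
n_hminus = k_da * n_e * n_h2v / (k_det * n_h)
```

Because production scales with n_e, this form also shows why nH- tracks the electron density as pressure varies, as noted in the abstract.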
Davies, Patrick T; Coe, Jesse L; Hentges, Rochelle F; Sturge-Apple, Melissa L; van der Kloet, Erika
2018-03-01
This study examined the transactional interplay among children's negative family representations, visual processing of negative emotions, and externalizing symptoms in a sample of 243 preschool children (M age = 4.60 years). Children participated in three annual measurement occasions. Cross-lagged autoregressive models were conducted with multimethod, multi-informant data to identify mediational pathways. Consistent with schema-based top-down models, negative family representations were associated with children's attention to negative faces in an eye-tracking task and with their externalizing symptoms. Children's negative representations of family relationships specifically predicted decreases in their attention to negative emotions, which, in turn, was associated with subsequent increases in their externalizing symptoms. Follow-up analyses indicated that the mediational role of diminished attention to negative emotions was particularly pronounced for angry faces. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.
Diffusion Modelling Reveals the Decision Making Processes Underlying Negative Judgement Bias in Rats
Hales, Claire A.; Robinson, Emma S. J.; Houghton, Conor J.
2016-01-01
Human decision making is modified by emotional state. Rodents exhibit similar biases during interpretation of ambiguous cues that can be altered by affective state manipulations. In this study, the impact of negative affective state on judgement bias in rats was measured using an ambiguous-cue interpretation task. Acute treatment with an anxiogenic drug (FG7142), and chronic restraint stress and social isolation both induced a bias towards more negative interpretation of the ambiguous cue. The diffusion model was fit to behavioural data to allow further analysis of the underlying decision making processes. To uncover the way in which parameters vary together in relation to affective state manipulations, independent component analysis was conducted on rate of information accumulation and distances to decision threshold parameters for control data. Results from this analysis were applied to parameters from negative affective state manipulations. These projected components were compared to control components to reveal the changes in decision making processes that are due to affective state manipulations. Negative affective bias in rodents induced by either FG7142 or chronic stress is due to a combination of more negative interpretation of the ambiguous cue, reduced anticipation of the high reward and increased anticipation of the low reward. PMID:27023442
Stress enhances model-free reinforcement learning only after negative outcome.
Park, Heyeon; Lee, Daeyeol; Chey, Jeanyung
2017-01-01
Previous studies found that stress shifts behavioral control by promoting habits while decreasing goal-directed behaviors during reward-based decision-making. It is, however, unclear how stress disrupts the relative contribution of the two systems controlling reward-seeking behavior, i.e. model-free (or habit) and model-based (or goal-directed). Here, we investigated whether stress biases the contribution of model-free and model-based reinforcement learning processes differently depending on the valence of outcome, and whether stress alters the learning rate, i.e., how quickly information from the new environment is incorporated into choices. Participants were randomly assigned to either a stress or a control condition, and performed a two-stage Markov decision-making task in which the reward probabilities underwent periodic reversals without notice. We found that stress increased the contribution of model-free reinforcement learning only after negative outcome. Furthermore, stress decreased the learning rate. The results suggest that stress diminishes one's ability to make adaptive choices in multiple aspects of reinforcement learning. This finding has implications for understanding how stress facilitates maladaptive habits, such as addictive behavior, and other dysfunctional behaviors associated with stress in clinical and educational contexts.
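The abstract's two findings, a shifted model-free contribution and a lowered learning rate, both rest on the delta-rule value update used in model-free reinforcement learning. A minimal sketch with hypothetical reward and learning-rate values:

```python
def q_update(q, reward, alpha):
    """Model-free (delta-rule) value update: move the estimate q toward
    the observed reward by a fraction alpha, the learning rate."""
    return q + alpha * (reward - q)

# After a reward-probability reversal the agent suddenly observes
# reward = 1.0; a lower learning rate incorporates the change more
# slowly, as the stressed group did in this study.
q_fast, q_slow = 0.0, 0.0
for _ in range(5):
    q_fast = q_update(q_fast, 1.0, alpha=0.5)  # high learning rate
    q_slow = q_update(q_slow, 1.0, alpha=0.1)  # low learning rate
```

After five rewarded trials the high-alpha value has nearly converged while the low-alpha value is still far from 1.0, which is why a reduced learning rate impairs adaptation to reversals.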
Population heterogeneity in the salience of multiple risk factors for adolescent delinquency.
Lanza, Stephanie T; Cooper, Brittany R; Bray, Bethany C
2014-03-01
To present mixture regression analysis as an alternative to more standard regression analysis for predicting adolescent delinquency. We demonstrate how mixture regression analysis allows for the identification of population subgroups defined by the salience of multiple risk factors. We identified population subgroups (i.e., latent classes) of individuals based on their coefficients in a regression model predicting adolescent delinquency from eight previously established risk indices drawn from the community, school, family, peer, and individual levels. The study included N = 37,763 10th-grade adolescents who participated in the Communities That Care Youth Survey. Standard, zero-inflated, and mixture Poisson and negative binomial regression models were considered. Standard and mixture negative binomial regression models were selected as optimal. The five-class regression model was interpreted based on the class-specific regression coefficients, indicating that risk factors had varying salience across classes of adolescents. Standard regression showed that all risk factors were significantly associated with delinquency. Mixture regression provided more nuanced information, suggesting a unique set of risk factors that were salient for different subgroups of adolescents. Implications for the design of subgroup-specific interventions are discussed. Copyright © 2014 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Chen, Chen; Xie, Yuanchang
2016-06-01
Annual Average Daily Traffic (AADT) is often considered as a main covariate for predicting crash frequencies at urban and suburban intersections. A linear functional form is typically assumed for the Safety Performance Function (SPF) to describe the relationship between the natural logarithm of expected crash frequency and covariates derived from AADTs. Such a linearity assumption has been questioned by many researchers. This study applies Generalized Additive Models (GAMs) and Piecewise Linear Negative Binomial (PLNB) regression models to fit intersection crash data. Various covariates derived from minor- and major-approach AADTs are considered. Three different dependent variables are modeled, which are total multiple-vehicle crashes, rear-end crashes, and angle crashes. The modeling results suggest that a nonlinear functional form may be more appropriate. Also, the results show that it is important to take into consideration the joint safety effects of multiple covariates. Additionally, it is found that the ratio of minor- to major-approach AADT has a varying impact on intersection safety and deserves further investigation. Copyright © 2016 Elsevier Ltd. All rights reserved.
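A piecewise linear specification lets the slope of a covariate change at chosen knots. A minimal sketch of building such hinge design columns from log-AADT, with hypothetical knot locations (the study's actual knots are not given here):

```python
import numpy as np

def hinge_basis(x, knots):
    """Design columns for a piecewise linear fit: x itself plus
    max(0, x - k) for each knot k, so the slope can change at each knot
    while the fitted function stays continuous."""
    cols = [x] + [np.maximum(0.0, x - k) for k in knots]
    return np.column_stack(cols)

# Hypothetical minor-approach AADTs, log-transformed as in a typical SPF.
log_aadt = np.log(np.array([500.0, 2000.0, 8000.0, 20000.0]))
X = hinge_basis(log_aadt, knots=[np.log(1000.0), np.log(10000.0)])
```

These columns would then enter a negative binomial regression in place of a single log-AADT term, which is the essence of the PLNB model.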
NASA Astrophysics Data System (ADS)
Heberling, Matthew T.; Templeton, Joshua J.
2009-04-01
We estimate an individual travel cost model for Great Sand Dunes National Park and Preserve (GSD) in Colorado using on-site, secondary data. The purpose of the on-site survey was to help the National Park Service better understand the visitors of GSD; it was not intended for a travel cost model. Variables such as travel cost and income were estimated based on respondents’ Zip Codes. Following approaches found in the literature, a negative binomial model corrected for truncation and endogenous stratification fit the data the best. We estimate a recreational benefit of US$89/visitor/year or US$54/visitor/24-h recreational day (in 2002 US$). Based on the approach presented here, there are other data sets for national parks, preserves, and battlefields where travel cost models could be estimated and used to support National Park Service management decisions.
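In semi-log count-data travel cost models of this kind, per-trip consumer surplus is commonly computed as -1 divided by the fitted travel-cost coefficient. The coefficient below is hypothetical, chosen only to show the arithmetic, not a value from this study:

```python
def consumer_surplus_per_trip(beta_tc):
    """Per-trip consumer surplus for a semi-log count travel cost model:
    CS = -1 / beta_tc, where beta_tc is the (negative) coefficient on
    travel cost in the trip-demand equation."""
    if beta_tc >= 0:
        raise ValueError("travel-cost coefficient should be negative")
    return -1.0 / beta_tc

beta_tc = -0.0112  # assumed slope on travel cost, per US$ (hypothetical)
cs_per_trip = consumer_surplus_per_trip(beta_tc)
```

Multiplying this per-trip surplus by predicted annual trips gives the kind of per-visitor annual benefit the authors report.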
Creswell, Kasey G.; Chung, Tammy; Wright, Aidan G. C.; Clark, Duncan B.; Black, Jessica J.; Martin, Christopher S.
2015-01-01
Aims This study examined the personality traits of negative emotionality and constraint and the ability to resist drinking during negative affective states as correlates of solitary drinking in adolescence. We hypothesized that higher levels of negative emotionality and lower levels of constraint would predict solitary drinking and that these relationships would be mediated by the ability to resist drinking in response to negative emotions. Design Structural equation modeling was used to fit a path model from the personality traits of negative emotionality and constraint to solitary drinking status through intermediate effects on the ability to resist drinking during negative emotions using cross-sectional data. Setting Clinical and community settings in Pennsylvania, USA. Participants The sample included 761 adolescent drinkers (mean age = 17.1). Measurements Adolescents completed the Lifetime Drinking History, the Multidimensional Personality Questionnaire, the Constructive Thinking Inventory and the Situational Confidence Questionnaire. Findings The path model provided a good fit to the data. The association between trait negative emotionality and solitary drinking was fully mediated by adolescents' ability to resist drinking during negative affective states (b = 0.05, P = 0.01). In contrast, constraint had a direct effect on solitary drinking (odds ratio (OR) = 0.79, b = –0.23, P<0.01), as well as an indirect effect through the ability to resist drinking during negative affective states (b = –0.03, P = 0.02). Conclusions The ability to resist drinking while experiencing negative feelings or emotions may be an important underlying mechanism linking trait negative emotionality (a tendency toward depression, anxiety and poor reaction to stress) and constraint (lack of impulsiveness) to adolescent solitary drinking. PMID:25664806
Chen, Ping; Harrington, Peter B
2008-02-01
A new method coupling multivariate self-modeling mixture analysis and pattern recognition has been developed to identify toxic industrial chemicals using fused positive and negative ion mobility spectra (dual scan spectra). A Smiths lightweight chemical detector (LCD), which can measure positive and negative ion mobility spectra simultaneously, was used to acquire the data. Simple-to-use interactive self-modeling mixture analysis (SIMPLISMA) was used to separate the analytical peaks in the ion mobility spectra from the background reactant ion peaks (RIP). The SIMPLISMA analytical components of the positive and negative ion peaks were combined together in a butterfly representation (i.e., negative spectra are reported with negative drift times and reflected with respect to the ordinate and juxtaposed with the positive ion mobility spectra). Temperature constrained cascade-correlation neural network (TCCCN) models were built to classify the toxic industrial chemicals. Seven common toxic industrial chemicals were used in this project to evaluate the performance of the algorithm. Ten bootstrapped Latin partitions demonstrated that the classification of neural networks using the SIMPLISMA components was statistically better than neural network models trained with fused ion mobility spectra (IMS).
A semi-nonparametric Poisson regression model for analyzing motor vehicle crash data.
Ye, Xin; Wang, Ke; Zou, Yajie; Lord, Dominique
2018-01-01
This paper develops a semi-nonparametric Poisson regression model to analyze motor vehicle crash frequency data collected from rural multilane highway segments in California, US. Motor vehicle crash frequency on rural highway is a topic of interest in the area of transportation safety due to higher driving speeds and the resultant severity level. Unlike the traditional Negative Binomial (NB) model, the semi-nonparametric Poisson regression model can accommodate an unobserved heterogeneity following a highly flexible semi-nonparametric (SNP) distribution. Simulation experiments are conducted to demonstrate that the SNP distribution can well mimic a large family of distributions, including normal distributions, log-gamma distributions, bimodal and trimodal distributions. Empirical estimation results show that such flexibility offered by the SNP distribution can greatly improve model precision and the overall goodness-of-fit. The semi-nonparametric distribution can provide a better understanding of crash data structure through its ability to capture potential multimodality in the distribution of unobserved heterogeneity. When estimated coefficients in empirical models are compared, SNP and NB models are found to have a substantially different coefficient for the dummy variable indicating the lane width. The SNP model with better statistical performance suggests that the NB model overestimates the effect of lane width on crash frequency reduction by 83.1%.
Variable selection for distribution-free models for longitudinal zero-inflated count responses.
Chen, Tian; Wu, Pan; Tang, Wan; Zhang, Hui; Feng, Changyong; Kowalski, Jeanne; Tu, Xin M
2016-07-20
Zero-inflated count outcomes arise quite often in research and practice. Parametric models such as the zero-inflated Poisson and zero-inflated negative binomial are widely used to model such responses. Like most parametric models, they are quite sensitive to departures from assumed distributions. Recently, new approaches have been proposed to provide distribution-free, or semi-parametric, alternatives. These methods extend the generalized estimating equations to provide robust inference for population mixtures defined by zero-inflated count outcomes. In this paper, we propose methods to extend smoothly clipped absolute deviation (SCAD)-based variable selection methods to these new models. Variable selection has been gaining popularity in modern clinical research studies, as determining differential treatment effects of interventions for different subgroups has become the norm, rather than the exception, in the era of patient-centered outcome research. Such moderation analysis in general creates many explanatory variables in regression analysis, and the advantages of SCAD-based methods over their traditional counterparts render them a great choice for addressing this important and timely issue in clinical research. We illustrate the proposed approach with both simulated and real study data. Copyright © 2016 John Wiley & Sons, Ltd.
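The SCAD penalty that such selection methods build on (Fan and Li's formulation) is piecewise: linear near zero like the lasso, quadratic in a transition region, and constant beyond it so large coefficients are not shrunk. A minimal sketch with the conventional a = 3.7:

```python
def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty: lasso-like for |beta| <= lam, quadratic blend for
    lam < |beta| <= a*lam, constant beyond a*lam."""
    b = abs(beta)
    if b <= lam:
        return lam * b
    if b <= a * lam:
        return (2 * a * lam * b - b * b - lam * lam) / (2 * (a - 1))
    return lam * lam * (a + 1) / 2.0
```

Because the penalty flattens out, SCAD selects variables (shrinking small coefficients to zero) while leaving large effects nearly unbiased, unlike the lasso's constant shrinkage.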
Singh, Gyanendra; Sachdeva, S N; Pal, Mahesh
2016-11-01
This work examines the application of the M5 model tree and the conventionally used fixed/random effect negative binomial (FENB/RENB) regression models for accident prediction on non-urban sections of highway in Haryana (India). Road accident data for a period of 2-6 years on different sections of 8 National and State Highways in Haryana were collected from police records. Data related to road geometry, traffic and road environment variables were collected through field studies. A total of two hundred and twenty-two data points were gathered by dividing highways into sections with certain uniform geometric characteristics. For prediction of accident frequencies using fifteen input parameters, two modeling approaches were used: FENB/RENB regression and the M5 model tree. Results suggest that both models perform comparably well in terms of correlation coefficient and root mean square error values. The M5 model tree provides simple linear equations that are easy to interpret and provide better insight, indicating that this approach can effectively be used as an alternative to the RENB approach if the sole purpose is to predict motor vehicle crashes. Sensitivity analysis using the M5 model tree also suggests that its results reflect the physical conditions. Both models clearly indicate that to improve safety on Indian highways, minor accesses to the highways need to be properly designed and controlled, the service roads should be made functional, and the dispersion of speeds should be brought down. Copyright © 2016 Elsevier Ltd. All rights reserved.
Study of negative hydrogen ion beam optics using the 3D3V PIC model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyamoto, K., E-mail: kmiyamot@naruto-u.ac.jp; Nishioka, S.; Goto, I.
The mechanism of negative ion extraction under real conditions with a complex magnetic field is studied by using a 3D PIC simulation code. The extraction region of the negative ion source for the negative ion based neutral beam injection system in fusion reactors is modelled. It is shown that an E x B drift of electrons is caused by the magnetic filter and the electron suppression magnetic field, resulting in an asymmetry of the plasma meniscus. Furthermore, it is indicated that the asymmetry of the plasma meniscus results in an asymmetry of the negative ion beam profile, including the beam halo. It could be demonstrated theoretically that the E x B drift is not significantly weakened by elastic collisions of the electrons with neutral particles.
NASA Astrophysics Data System (ADS)
Handayani, Dewi; Cahyaning Putri, Hera; Mahmudah, AMH
2017-12-01
The Solo-Ngawi toll road project is part of the mega project of the Trans Java toll road development initiated by the government and is still under construction. PT Solo Ngawi Jaya (SNJ), as the Solo-Ngawi toll management company, needs to determine a toll fare that is in accordance with its business plan. The determination of appropriate toll rates will affect regional economic sustainability and decrease traffic congestion; such policy instruments are crucial for achieving environmentally sustainable transport. Therefore, the objective of this research is to determine the toll fare sensitivity of the Solo-Ngawi toll road based on Willingness To Pay (WTP). Primary data were obtained by distributing stated preference questionnaires to four-wheeled vehicle users on the Kartasura-Palang Joglo arterial road segment. The data were then analysed with logit and probit models. Based on the analysis, it is found that the effect of fare changes on the amount of WTP in the binomial logit model is more sensitive than in the probit model under the same travel conditions. The range of tariff changes against values of WTP in the binomial logit model is 20% greater than the range of values in the probit model. On the other hand, the probabilities from the binomial logit model and the binary probit model show no significant difference (less than 1%).
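The binomial logit and binary probit models compared above differ only in the link function applied to the same fare-dependent utility. A sketch with hypothetical utility coefficients (not the study's estimates):

```python
from math import exp, erf, sqrt

def p_accept_logit(fare, a, b):
    """Binomial logit probability that a traveller is willing to pay."""
    return 1.0 / (1.0 + exp(-(a + b * fare)))

def p_accept_probit(fare, a, b):
    """Binary probit probability: standard normal CDF of the utility."""
    z = a + b * fare
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical coefficients: willingness to pay falls as the fare rises.
a, b = 2.0, -0.004  # intercept and fare slope (assumed units)
p_lo = p_accept_logit(500, a, b)   # lower fare
p_hi = p_accept_logit(1000, a, b)  # higher fare
```

Sweeping the fare through both functions and comparing the resulting WTP curves is the kind of sensitivity comparison the study reports between the two links.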
Gray, B.R.; Haro, R.J.; Rogala, J.T.; Sauer, J.S.
2005-01-01
1. Macroinvertebrate count data often exhibit nested or hierarchical structure. Examples include multiple measurements along each of a set of streams, and multiple synoptic measurements from each of a set of ponds. With data exhibiting hierarchical structure, outcomes at both sampling (e.g. within-stream) and aggregated (e.g. stream) scales are often of interest. Unfortunately, methods for modelling hierarchical count data have received little attention in the ecological literature. 2. We demonstrate the use of hierarchical count models using fingernail clam (Family: Sphaeriidae) count data and habitat predictors derived from sampling and aggregated spatial scales. The sampling scale corresponded to that of a standard Ponar grab (0.052 m(2)) and the aggregated scale to impounded and backwater regions within 38-197 km reaches of the Upper Mississippi River. Impounded and backwater regions were resampled annually for 10 years. Consequently, measurements on clams were nested within years. Counts were treated as negative binomial random variates, and means from each resampling event as random departures from the impounded and backwater region grand means. 3. Clam models were improved by the addition of covariates that varied at both the sampling and regional scales. Substrate composition varied at the sampling scale and was associated with model improvements, and reductions (for a given mean) in variance at the sampling scale. Inorganic suspended solids (ISS) levels, measured in the summer preceding sampling, also yielded model improvements and were associated with reductions in variances at the regional rather than sampling scales. ISS levels were negatively associated with mean clam counts. 4. Hierarchical models allow hierarchically structured data to be modelled without ignoring information specific to levels of the hierarchy. In addition, information at each hierarchical level may be modelled as functions of covariates that themselves vary by and within levels.
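The two-level structure described in point 2, regional grand means, year-level random departures, and negative binomial counts at the grab scale, can be sketched as a gamma-Poisson simulation. All means and dispersion values below are illustrative, not estimates from the clam data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two-level count simulation: each region-year draws its mean from a
# lognormal random effect around the regional grand mean; grab-scale
# counts within a region-year are then gamma-Poisson (negative binomial).
grand_means = {"impounded": 3.0, "backwater": 8.0}  # clams per grab (assumed)
dispersion = 2.0  # NB shape: smaller values mean more overdispersion

counts = {}
for region, mu in grand_means.items():
    # 10 annual resampling events, each with its own random mean.
    mu_year = mu * rng.lognormal(mean=0.0, sigma=0.3, size=10)
    # 50 Ponar grabs per year: gamma-distributed rates, then Poisson counts.
    lam = rng.gamma(shape=dispersion,
                    scale=mu_year[:, None] / dispersion,
                    size=(10, 50))
    counts[region] = rng.poisson(lam)
```

The resulting counts show variance well above the mean, the overdispersion that motivates the negative binomial treatment, with part of that extra variance attributable to the year level.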
Zhou, Shiyi; Da, Shu; Guo, Heng; Zhang, Xichao
2018-01-01
After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work–family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work–family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work–family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women’s perceptions of both work-to-family conflict and family-to-work conflict were significantly negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated the sequential mediating effect of negative affect and perceived stress in the relationship between work–family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work–family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress. PMID:29719522
A Taxonomic Reduced-Space Pollen Model for Paleoclimate Reconstruction
NASA Astrophysics Data System (ADS)
Wahl, E. R.; Schoelzel, C.
2010-12-01
Paleoenvironmental reconstruction from fossil pollen often attempts to take advantage of the rich taxonomic diversity in such data. Here, a taxonomically "reduced-space" reconstruction model is explored that would be parsimonious in introducing parameters needing to be estimated within a Bayesian Hierarchical Modeling context. This work involves a refinement of the traditional pollen ratio method. This method is useful when one (or a few) dominant pollen type(s) in a region have a strong positive correlation with a climate variable of interest and another (or a few) dominant pollen type(s) have a strong negative correlation. When, e.g., counts of pollen taxa a and b (r > 0) are combined with pollen types c and d (r < 0) to form ratios of the form (a + b) / (a + b + c + d), an appropriate estimation form is the binomial logistic generalized linear model (GLM). The GLM can readily model this relationship in the forward form, pollen = g(climate), which is more physically realistic than inverse models often used in paleoclimate reconstruction [climate = f(pollen)]. The specification of the model is: r_num ~ Bin(n, p), where E(r|T) = p = exp(η)/[1 + exp(η)] and η = α + βT; r is the pollen ratio formed as above, r_num is the ratio numerator, n is the ratio denominator (i.e., the sum of pollen counts), the denominator-specific count is (n - r_num), and T is the temperature at each site corresponding to a specific value of r. Ecological and empirical screening identified the model (Spruce+Birch) / (Spruce+Birch+Oak+Hickory) for use in temperate eastern N. America. α and β were estimated using both "traditional" and Bayesian GLM algorithms (in R). Although it includes only four pollen types, the ratio model yields more explained variation (~80%) in the pollen-temperature relationship of the study region than a 64-taxon modern analog technique (MAT). Thus, the new pollen ratio method represents an information-rich, reduced space data model that can be efficiently employed in
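The forward model above — r_num ~ Bin(n, p) with logit link η = α + βT — can be fitted by Newton-Raphson (the IRLS scheme that "traditional" GLM routines such as R's glm use). The sketch below does this on synthetic data; the values of α, β, the temperature range, and the count sizes are all assumptions for illustration, not the eastern North America estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the pollen-ratio data: T is site temperature,
# n the total pollen count, r_num the numerator count (alpha, beta assumed)
alpha_true, beta_true = -1.0, 0.5
T = rng.uniform(-5, 15, size=200)
n = rng.integers(50, 300, size=200)
p_true = 1.0 / (1.0 + np.exp(-(alpha_true + beta_true * T)))
r_num = rng.binomial(n, p_true)

# Newton-Raphson (IRLS) for the binomial logistic GLM: eta = a + b*T
X = np.column_stack([np.ones_like(T), T])
coef = np.zeros(2)
for _ in range(25):
    eta = X @ coef
    p = 1.0 / (1.0 + np.exp(-eta))
    grad = X.T @ (r_num - n * p)       # score vector
    W = n * p * (1.0 - p)              # binomial IRLS weights
    hess = X.T @ (X * W[:, None])      # Fisher information
    coef = coef + np.linalg.solve(hess, grad)

print(coef)  # should be close to (alpha_true, beta_true)
```

Because the link is canonical, Newton-Raphson and Fisher scoring coincide here, which is why the loop converges in a handful of iterations.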
Gold, James M.; Waltz, James A.; Matveeva, Tatyana M.; Kasanova, Zuzana; Strauss, Gregory P.; Herbener, Ellen S.; Collins, Anne G.E.; Frank, Michael J.
2015-01-01
Context Negative symptoms are a core feature of schizophrenia, but their pathophysiology remains unclear. Objective Negative symptoms are defined by the absence of normal function. However, there must be a productive mechanism that leads to this absence. Here, we test a reinforcement learning account suggesting that negative symptoms result from a failure to represent the expected value of rewards coupled with preserved loss avoidance learning. Design Subjects performed a probabilistic reinforcement learning paradigm involving stimulus pairs in which choices resulted in either reward or avoidance of loss. Following training, subjects indicated their valuation of the stimuli in a transfer task. Computational modeling was used to distinguish between alternative accounts of the data. Setting A tertiary care research outpatient clinic. Patients A total of 47 clinically stable patients with a diagnosis of schizophrenia or schizoaffective disorder and 28 healthy volunteers participated. Patients were divided into high and low negative symptom groups. Main Outcome Measures 1) The number of choices leading to reward or loss avoidance and 2) performance in the transfer phase. Quantitative fits from three different models were examined. Results High negative symptom patients demonstrated impaired learning from rewards but intact loss avoidance learning, and failed to distinguish rewarding stimuli from loss-avoiding stimuli in the transfer phase. Model fits revealed that high negative symptom patients were better characterized by an “actor-critic” model, learning stimulus-response associations, whereas controls and low negative symptom patients incorporated expected value of their actions (“Q-learning”) into the selection process. Conclusions Negative symptoms are associated with a specific reinforcement learning abnormality: high negative symptom patients do not represent the expected value of rewards when making decisions but learn to avoid punishments through the
Abar, Caitlin; Abar, Beau; Turrisi, Rob
2009-01-01
This study examined the impact of parental modeled behavior and permissibility of alcohol use in late high school on the alcohol use and experienced negative drinking consequences of college students. Two-hundred ninety college freshmen at a large university were assessed for perceptions of their parents’ permissibility of alcohol use, parents’ alcohol-related behavior, and own experienced negative consequences associated with alcohol use. Results indicate that parental permissibility of alcohol use is a consistent predictor of teen drinking behaviors, which was strongly associated with experienced negative consequences. Parental modeled use of alcohol was also found to be a risk factor, with significant differences being seen across the gender of the parents and teens. Discussion focuses on risk factors and avenues for prevention research. PMID:19398278
Modelling infant mortality rate in Central Java, Indonesia use generalized poisson regression method
NASA Astrophysics Data System (ADS)
Prahutama, Alan; Sudarno
2018-05-01
The infant mortality rate is the number of deaths under one year of age occurring among the live births in a given geographical area during a given year, per 1,000 live births occurring among the population of the same area during that year. This problem needs to be addressed because it is an important element of a country’s economic development: a high infant mortality rate disrupts the stability of a country, as it relates to the sustainability of its population. One regression model that can be used to analyze the relationship between a discrete dependent variable Y and independent variables X is the Poisson regression model. Regression models for discrete dependent variables include, among others, Poisson regression, negative binomial regression, and generalized Poisson regression. In this research, generalized Poisson regression modeling gives a better AIC value than Poisson regression. The most significant variable is the number of health facilities (X1), while the variable with the most influence on the infant mortality rate is the average breastfeeding (X9).
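The model comparison above rests on AIC = 2k − 2 ln L, which penalizes the extra dispersion parameter a generalized Poisson or negative binomial model carries. A minimal helper makes the bookkeeping concrete; the counts below are hypothetical, not the Central Java data.

```python
import math

def aic(log_likelihood, n_params):
    """Akaike information criterion: lower is better."""
    return 2 * n_params - 2 * log_likelihood

def poisson_loglik(counts, mu):
    """Log-likelihood of counts under a single Poisson(mu) model."""
    return sum(y * math.log(mu) - mu - math.lgamma(y + 1) for y in counts)

# Illustrative comparison (numbers are hypothetical, not from the study):
counts = [0, 1, 0, 2, 9, 0, 1, 12, 0, 3]   # overdispersed-looking data
mu = sum(counts) / len(counts)
ll_poisson = poisson_loglik(counts, mu)
print(aic(ll_poisson, 1))
# A generalized Poisson or NB fit adds a dispersion parameter (k = 2) and is
# preferred whenever its AIC comes out lower despite the extra-parameter penalty.
```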
Quantifying the Negative Feedback of Vegetation to Greenhouse Warming: A Modeling Approach
NASA Technical Reports Server (NTRS)
Bounous, L.; Hall, F. G.; Sellers, P. J.; Kumar, A.; Collatz, G. J.; Tucker, C. J.; Imhoff, M. L.
2010-01-01
Several climate models indicate that in a 2 x CO2 environment, temperature and precipitation would increase and runoff would increase faster than precipitation. These models, however, did not allow the vegetation to increase its leaf density as a response to the physiological effects of increased CO2 and the consequent changes in climate. Other assessments included these interactions but did not account for vegetation down-regulation, which reduces plants' photosynthetic activity, and as such produced a weak negative vegetation response. When we combine these interactions in climate simulations with 2 x CO2, the associated increase in precipitation contributes primarily to increased evapotranspiration rather than surface runoff, consistent with observations, and results in an additional cooling effect not fully accounted for in previous simulations with elevated CO2. By accelerating the water cycle, this feedback slows but does not alleviate the projected warming, reducing the land surface warming by 0.6 °C. Compared to previous studies, these results imply that long-term negative feedback from CO2-induced increases in vegetation density could reduce temperature following a stabilization of CO2 concentration.
Models of Workplace Incivility: The Relationships to Instigated Incivility and Negative Outcomes
Holm, Kristoffer; Torkelson, Eva; Bäckström, Martin
2015-01-01
The aim of the study was to investigate workplace incivility as a social process, examining its components and relationships to both instigated incivility and negative outcomes in the form of well-being, job satisfaction, turnover intentions, and sleeping problems. The different components of incivility that were examined were experienced and witnessed incivility from coworkers as well as supervisors. In addition, the organizational factors, social support, control, and job demands, were included in the models. A total of 2871 (2058 women and 813 men) employees who were connected to the Swedish Hotel and Restaurant Workers Union completed an online questionnaire. Overall, the results from structural equation modelling indicate that whereas instigated incivility to a large extent was explained by witnessing coworker incivility, negative outcomes were to a high degree explained by experienced supervisor incivility via mediation through perceived low social support, low control, and high job demands. Unexpectedly, the relationships between incivility (experienced coworker and supervisor incivility, as well as witnessed supervisor incivility) and instigated incivility were moderated by perceived high control and high social support. The results highlight the importance of including different components of workplace incivility and organizational factors in future studies of the area. PMID:26557714
Information Filtering Based on Users' Negative Opinions
NASA Astrophysics Data System (ADS)
Guo, Qiang; Li, Yang; Liu, Jian-Guo
2013-05-01
The process of heat conduction (HC) has recently found application in information filtering [Zhang et al., Phys. Rev. Lett. 99, 154301 (2007)], which is of high diversity but low accuracy. The classical HC model predicts users' potentially interesting objects based on their interesting objects, regardless of negative opinions. In terms of the users' rating scores, we present an improved user-based HC (UHC) information model by taking into account users' positive and negative opinions. Firstly, the objects rated by users are divided into positive and negative categories, then the predicted interesting and disliked object lists are generated by the UHC model. Finally, the recommendation lists are constructed by filtering out the disliked objects from the interesting lists. By implementing the new model based on nine similarity measures, the experimental results for the MovieLens and Netflix datasets show that the new model considering negative opinions could greatly enhance the accuracy, measured by the average ranking score, from 0.049 to 0.036 for Netflix and from 0.1025 to 0.0570 for MovieLens, reductions of 26.53% and 44.39%, respectively. Since users prefer to give positive ratings rather than negative ones, negative opinions contain much more information than positive ones; they are therefore very important for understanding users' online collective behaviors and improving the performance of the HC model.
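The split-generate-filter pipeline of the UHC model can be sketched in a few lines. Everything below is an illustrative assumption: the ratings, the like/dislike threshold, and the simple opinion-overlap rule standing in for the two heat-conduction ranking passes on the user-object bipartite network.

```python
# Hypothetical ratings: user -> {object: score on a 1-5 scale}
ratings = {
    "u1": {"a": 5, "b": 2, "c": 4},
    "u2": {"a": 4, "b": 1, "d": 5},
    "u3": {"b": 2, "c": 5, "d": 4},
}
LIKE = 3  # scores >= LIKE count as positive opinions (assumed threshold)

def split_opinions(user):
    """Step 1: divide a user's rated objects into positive/negative sets."""
    rated = ratings[user]
    pos = {o for o, s in rated.items() if s >= LIKE}
    neg = {o for o, s in rated.items() if s < LIKE}
    return pos, neg

def recommend(user):
    """Steps 2-3: build interesting and disliked lists, then filter."""
    pos, neg = split_opinions(user)
    interesting, dislike = set(), set()
    for other in ratings:
        if other == user:
            continue
        opos, oneg = split_opinions(other)
        if opos & pos:          # shared positive opinions propagate likes
            interesting |= opos
        if oneg & neg:          # shared negative opinions propagate dislikes
            dislike |= oneg
    seen = pos | neg
    # final UHC step: drop predicted dislikes (and already-rated objects)
    return (interesting - dislike) - seen

print(sorted(recommend("u1")))
```

The real model ranks candidates by diffusion scores and evaluates with the average ranking score; the set logic above only mirrors the negative-opinion filtering idea.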
Wolz, Ines; Granero, Roser; Fernández-Aranda, Fernando
2017-04-01
Food addiction has been widely researched in past years. However, there is a debate on the mechanisms underlying addictive eating, and a better understanding of the processes associated with these behaviors is needed. Previous studies have found characteristic psychological correlates of food addiction, such as high negative urgency, emotion regulation difficulties and low self-directedness, in different samples of adults with addictive eating patterns. Still, it seems difficult to disentangle effects independent from general eating disorder psychopathology. Therefore, this study aimed to test a comprehensive model under control of eating disorder severity, in order to find independent predictors of food addiction. 315 patients with eating disorder diagnoses on the binge-eating spectrum were assessed in personality, emotion regulation, negative urgency, eating disorder symptomatology, and food addiction by self-report. Hypothesis-driven structural equation modeling was conducted to test the comprehensive model. The only independent predictor found for food addiction was negative urgency, while self-directedness and emotion regulation predicted negative urgency and were highly related to eating disorder symptomatology, but not to food addiction. Altogether the model suggests that low self-directedness and difficulties in emotion regulation are related to higher eating disorder symptomatology in general. Those patients who, in addition to these traits, tend to act impulsively when in negative mood states, are at risk for developing addictive eating patterns. Urgency-based treatments are therefore recommended for this subgroup of patients. Copyright © 2017 Elsevier Inc. All rights reserved.
Crash Frequency Analysis Using Hurdle Models with Random Effects Considering Short-Term Panel Data
Chen, Feng; Ma, Xiaoxiang; Chen, Suren; Yang, Lin
2016-01-01
Random effect panel data hurdle models are established to research the daily crash frequency on a mountainous section of highway I-70 in Colorado. Real-time traffic, weather, and road surface condition data from the Road Weather Information System (RWIS) are merged into the models, incorporating road characteristics. The random effect hurdle negative binomial (REHNB) model is developed to study the daily crash frequency along with three other competing models. The proposed model considers the serial correlation of observations, the unbalanced panel-data structure, and dominating zeroes. Based on several statistical tests, the REHNB model is identified as the most appropriate one among the four candidate models for a typical mountainous highway. The results show that: (1) the presence of over-dispersion in the short-term crash frequency data is due to both excess zeros and unobserved heterogeneity in the crash data; and (2) the REHNB model is suitable for this type of data. Moreover, time-varying variables including weather conditions, road surface conditions and traffic conditions are found to play important roles in crash frequency. Besides the methodological advancements, the proposed technology bears great potential for engineering applications to develop short-term crash frequency models by utilizing detailed data from field monitoring systems such as RWIS, which are becoming more accessible around the world. PMID:27792209
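A hurdle model's two parts — a point mass at zero and a zero-truncated count distribution for the positives — can be made concrete with a forward simulation. The parameters below are illustrative, not the fitted REHNB values, and the random-effect and covariate structure of the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def hurdle_nb_sample(p_zero, mean, dispersion, size):
    """Draw counts from a hurdle model: zeros with probability p_zero,
    otherwise a zero-truncated negative binomial draw."""
    out = np.zeros(size, dtype=int)
    positive = rng.random(size) >= p_zero        # days crossing the hurdle
    n_pos = positive.sum()
    p = dispersion / (dispersion + mean)         # (mean, size) -> (n, p)
    draws = rng.negative_binomial(dispersion, p, size=n_pos)
    while (zeros := draws == 0).any():           # truncate: redraw any zeros
        draws[zeros] = rng.negative_binomial(dispersion, p, size=zeros.sum())
    out[positive] = draws
    return out

daily_crashes = hurdle_nb_sample(p_zero=0.7, mean=2.0, dispersion=1.5, size=365)
# dominating zeros come from the hurdle; overdispersion from the NB part
print((daily_crashes == 0).mean(), daily_crashes.var() / daily_crashes.mean())
```

This separation is exactly why hurdle models can attribute over-dispersion to excess zeros and to heterogeneity in the positive counts independently, as in finding (1) above.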
Gaillard, F O; Boudin, C; Chau, N P; Robert, V; Pichon, G
2003-11-01
Previous experimental gametocyte infections of Anopheles arabiensis on 3 volunteers naturally infected with Plasmodium falciparum were conducted in Senegal. They showed that gametocyte counts in the mosquitoes are, like macroparasite intakes, heterogeneous (overdispersed). They followed a negative binomial distribution, the overdispersion coefficient seeming constant (k = 3.1). To try to explain this heterogeneity, we used an individual-based model (IBM), simulating the behaviour of gametocytes in the human blood circulation and their ingestion by mosquitoes. The hypothesis was that there exists a clustering of the gametocytes in the capillaries. From a series of simulations, in the case of clustering the following results were obtained: (i) the distribution of the gametocytes ingested by the mosquitoes followed a negative binomial, (ii) the k coefficient significantly increased with the density of circulating gametocytes. To validate this model result, 2 more experiments were conducted in Cameroon. Pooled experiments showed a distinct density dependency of the k-values. The simulation results and the experimental results were thus in agreement and suggested that an aggregation process at the microscopic level might produce the density-dependent overdispersion at the macroscopic level. Simulations also suggested that the clustering of gametocytes might facilitate fertilization of gametes.
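The density-dependence result above hinges on estimating the aggregation parameter k from count data. A minimal method-of-moments sketch (on simulated counts, not the Senegal or Cameroon experiments) recovers k from the sample mean and variance, since for a negative binomial the variance is m + m²/k.

```python
import numpy as np

rng = np.random.default_rng(5)

def moment_k(counts):
    """Method-of-moments estimate of the negative binomial aggregation
    parameter k (small k = strongly overdispersed/aggregated)."""
    m, v = counts.mean(), counts.var(ddof=1)
    return m * m / (v - m)   # valid only when v > m (overdispersion)

# Simulate gametocyte counts per mosquito at k = 3.1, as reported above;
# the mean count of 20 is an illustrative assumption
k_true, mean_count = 3.1, 20.0
p = k_true / (k_true + mean_count)
counts = rng.negative_binomial(k_true, p, size=5000)
print(moment_k(counts))  # should land near k_true
```

Repeating this at several simulated densities is the kind of check that reveals whether k stays constant or, as the IBM clustering hypothesis predicts, increases with gametocyte density.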
Tremblay, Marlène; Crim, Stacy M; Cole, Dana J; Hoekstra, Robert M; Henao, Olga L; Döpfer, Dörte
2017-10-01
The Foodborne Diseases Active Surveillance Network (FoodNet) is currently using a negative binomial (NB) regression model to estimate temporal changes in the incidence of Campylobacter infection. FoodNet active surveillance in 483 counties collected data on 40,212 Campylobacter cases between years 2004 and 2011. We explored models that disaggregated these data to allow us to account for demographic, geographic, and seasonal factors when examining changes in incidence of Campylobacter infection. We hypothesized that modeling structural zeros and including demographic variables would increase the fit of FoodNet's Campylobacter incidence regression models. Five different models were compared: NB without demographic covariates, NB with demographic covariates, hurdle NB with covariates in the count component only, hurdle NB with covariates in both zero and count components, and zero-inflated NB with covariates in the count component only. Of the models evaluated, the nonzero-augmented NB model with demographic variables provided the best fit. Results suggest that even though zero inflation was not present at this level, individualizing the level of aggregation and using different model structures and predictors per site might be required to correctly distinguish between structural and observational zeros and account for risk factors that vary geographically.
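The distinction between structural and observational zeros comes down to how much zero mass each model component supplies. Under zero inflation, P(Y = 0) = π + (1 − π)·P_NB(0), where P_NB(0) = (k/(k + m))^k. The values below are illustrative, not the FoodNet estimates.

```python
def nb_zero_prob(mean, k):
    """P(Y = 0) under a negative binomial with the given mean and size k."""
    return (k / (k + mean)) ** k

def zinb_zero_prob(pi, mean, k):
    """P(Y = 0) with zero inflation: structural zeros with probability pi,
    plus the observational zeros the NB component already produces."""
    return pi + (1.0 - pi) * nb_zero_prob(mean, k)

# With a modest mean, the NB component alone accounts for a large share of
# the zeros -- which is why zero inflation can turn out to be unnecessary
# at a given level of aggregation (parameter values are assumed)
print(nb_zero_prob(mean=0.5, k=1.2))
print(zinb_zero_prob(0.10, mean=0.5, k=1.2))
```

Comparing the observed zero fraction against nb_zero_prob at the fitted mean is a quick diagnostic for whether a zero-augmented model is worth the extra parameters.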
Vigil, Jacob M; Strenth, Chance
2014-06-01
Self-reported opinions and judgments may be more rooted in expressive biases than in cognitive processing biases, and ultimately operate within a broader behavioral style for advertising the capacity - versus the trustworthiness - dimension of human reciprocity potential. Our analyses of facial expression judgments of likely voters are consistent with this thesis, and directly contradict one major prediction from the authors' "negativity-bias" model.
NASA Astrophysics Data System (ADS)
McKean, John R.; Johnson, Donn; Taylor, R. Garth
2003-04-01
An alternate travel cost model is applied to an on-site sample to estimate the value of flat water recreation on the impounded lower Snake River. Four contiguous reservoirs would be eliminated if the dams are breached to protect endangered Pacific salmon and steelhead trout. The empirical method applies truncated negative binomial regression with adjustment for endogenous stratification. The two-stage decision model assumes that recreationists allocate their time between work and leisure prior to deciding among consumer goods. The allocation of time and money among goods in the second stage is conditional on the predetermined work time and income. The second stage is a disequilibrium labor market, which also applies if employers set work hours or if recreationists are not in the labor force. When work time is either predetermined, fixed by contract, or nonexistent, recreationists must consider separate prices and budgets for time and money.
Modelling Trial-by-Trial Changes in the Mismatch Negativity
Lieder, Falk; Daunizeau, Jean; Garrido, Marta I.; Friston, Karl J.; Stephan, Klaas E.
2013-01-01
The mismatch negativity (MMN) is a differential brain response to violations of learned regularities. It has been used to demonstrate that the brain learns the statistical structure of its environment and predicts future sensory inputs. However, the algorithmic nature of these computations and the underlying neurobiological implementation remain controversial. This article introduces a mathematical framework with which competing ideas about the computational quantities indexed by MMN responses can be formalized and tested against single-trial EEG data. This framework was applied to five major theories of the MMN, comparing their ability to explain trial-by-trial changes in MMN amplitude. Three of these theories (predictive coding, model adjustment, and novelty detection) were formalized by linking the MMN to different manifestations of the same computational mechanism: approximate Bayesian inference according to the free-energy principle. We thereby propose a unifying view on three distinct theories of the MMN. The relative plausibility of each theory was assessed against empirical single-trial MMN amplitudes acquired from eight healthy volunteers in a roving oddball experiment. Models based on the free-energy principle provided more plausible explanations of trial-by-trial changes in MMN amplitude than models representing the two more traditional theories (change detection and adaptation). Our results suggest that the MMN reflects approximate Bayesian learning of sensory regularities, and that the MMN-generating process adjusts a probabilistic model of the environment according to prediction errors. PMID:23436989
Subgraph augmented non-negative tensor factorization (SANTF) for modeling clinical narrative text
Xin, Yu; Hochberg, Ephraim; Joshi, Rohit; Uzuner, Ozlem; Szolovits, Peter
2015-01-01
Objective Extracting medical knowledge from electronic medical records requires automated approaches to combat scalability limitations and selection biases. However, existing machine learning approaches are often regarded by clinicians as black boxes. Moreover, training data for these automated approaches are often sparsely annotated at best. The authors target unsupervised learning for modeling clinical narrative text, aiming at improving both accuracy and interpretability. Methods The authors introduce a novel framework named subgraph augmented non-negative tensor factorization (SANTF). In addition to relying on atomic features (e.g., words in clinical narrative text), SANTF automatically mines higher-order features (e.g., relations of lymphoid cells expressing antigens) from clinical narrative text by converting sentences into a graph representation and identifying important subgraphs. The authors compose a tensor using patients, higher-order features, and atomic features as its respective modes, and then apply non-negative tensor factorization to cluster patients and simultaneously identify latent groups of higher-order features that link to patient clusters, as in clinical guidelines where a panel of immunophenotypic features and laboratory results is used to specify diagnostic criteria. Results and Conclusion SANTF demonstrated over 10% improvement in averaged F-measure on patient clustering compared to widely used non-negative matrix factorization (NMF) and k-means clustering methods. Multiple baselines were established by modeling patient data using patient-by-feature matrices with different feature configurations and then performing NMF or k-means to cluster patients. Feature analysis identified latent groups of higher-order features that lead to medical insights. The latent groups of atomic features were also found to help better correlate the latent groups of higher-order features. PMID:25862765
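The NMF baseline mentioned above conveys the core factorization idea in the two-mode case (SANTF's tensor version adds a third mode). The sketch below uses Lee-Seung multiplicative updates on a toy patient-by-feature matrix; the data and rank are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, rank, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates for non-negative matrix
    factorization, V ~ W @ H; W, H stay elementwise non-negative."""
    W = rng.random((V.shape[0], rank)) + 0.1
    H = rng.random((rank, V.shape[1])) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy patient-by-feature matrix with two latent groups (hypothetical data)
V = np.array([[5, 4, 0, 0],
              [4, 5, 1, 0],
              [0, 1, 5, 4],
              [0, 0, 4, 5]], dtype=float)
W, H = nmf(V, rank=2)
# cluster each patient by its dominant latent factor
print(W.argmax(axis=1))
```

Non-negativity is what makes the factors readable as additive parts — here, each patient loads on interpretable feature groups rather than on signed components, which is the interpretability argument the paper makes for the tensor case.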
Random trinomial tree models and vanilla options
NASA Astrophysics Data System (ADS)
Ganikhodjaev, Nasir; Bayram, Kamola
2013-09-01
In this paper we introduce and study the random trinomial model. The usual trinomial model is prescribed by a triple of numbers (u, d, m). We call the triple (u, d, m) the environment of the trinomial model. A triple (Un, Dn, Mn), where {Un}, {Dn} and {Mn} are sequences of independent, identically distributed random variables with 0 < Dn < 1 < Un and Mn = 1 for all n, is called a random environment, and a trinomial tree model with a random environment is called a random trinomial model. The random trinomial model is considered to produce more accurate results than the random binomial model or the usual trinomial model.
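A random environment as defined above can be explored by Monte Carlo: at each step a fresh triple (U_n, D_n, M_n) is drawn with 0 < D_n < 1 < U_n and M_n = 1, and the price moves by one of the three factors. The equal move probabilities, the ranges of U_n and D_n, and the absence of discounting or risk-neutral calibration are all illustrative assumptions, not the paper's construction.

```python
import random

random.seed(11)

def random_trinomial_price(S0, K, steps, n_paths):
    """Monte Carlo sketch of a vanilla call under a random trinomial
    environment (uncalibrated; for illustration only)."""
    total_payoff = 0.0
    for _ in range(n_paths):
        S = S0
        for _ in range(steps):
            u = random.uniform(1.05, 1.15)   # U_n > 1, freshly drawn
            d = random.uniform(0.85, 0.95)   # 0 < D_n < 1, freshly drawn
            S *= random.choice((u, d, 1.0))  # M_n = 1 for all n
        total_payoff += max(S - K, 0.0)      # vanilla call payoff
    return total_payoff / n_paths

print(random_trinomial_price(S0=100.0, K=100.0, steps=20, n_paths=5000))
```

Because the environment changes at every step, the tree does not recombine, which is why simulation (rather than backward induction on a lattice) is the natural way to sketch it.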
Michael, E; Grenfell, B T; Isham, V S; Denham, D A; Bundy, D A
1998-01-22
A striking feature of lymphatic filariasis is the considerable heterogeneity in infection burden observed between hosts, which greatly complicates the analysis of the population dynamics of the disease. Here, we describe the first application of the moment closure equation approach to model the sources and the impact of this heterogeneity for macrofilarial population dynamics. The analysis is based on the closest laboratory equivalent of the life cycle and immunology of infection in humans--cats chronically infected with the filarial nematode Brugia pahangi. Two sets of long-term experiments are analysed: hosts given either single primary infections or given repeat infections. We begin by quantifying changes in the mean and aggregation of adult parasites (inversely measured by the negative binomial parameter, kappa) in cohorts of hosts using generalized linear models. We then apply simple stochastic models to interpret observed patterns. The models and empirical data indicate that parasite aggregation tracks the decline in the mean burden with host age in primary infections. Conversely, in repeat infections, aggregation increases as the worm burden declines with experience of infection. The results show that the primary infection variability is consistent with heterogeneities in parasite survival between hosts. By contrast, the models indicate that the reduction in parasite variability with time in repeat infections is most likely due to the 'filtering' effect of a strong, acquired immune response, which gradually acts to remove the initial variability generated by heterogeneities in larval mortality. We discuss this result in terms of the homogenizing effect of host immunity-driven density-dependence on macrofilarial burden in older hosts.
Negative Symptom Dimensions of the Positive and Negative Syndrome Scale Across Geographical Regions
Liharska, Lora; Harvey, Philip D.; Atkins, Alexandra; Ulshen, Daniel; Keefe, Richard S.E.
2017-01-01
Objective: Recognizing the discrete dimensions that underlie negative symptoms in schizophrenia and how these dimensions are understood across localities might result in better understanding and treatment of these symptoms. To this end, the objectives of this study were to 1) identify the Positive and Negative Syndrome Scale negative symptom dimensions of expressive deficits and experiential deficits and 2) analyze performance on these dimensions over 15 geographical regions to determine whether the items defining them manifest similar reliability across these regions. Design: Data were obtained for the baseline Positive and Negative Syndrome Scale visits of 6,889 subjects across 15 geographical regions. Using confirmatory factor analysis, we examined whether a two-factor negative symptom structure that is found in schizophrenia (experiential deficits and expressive deficits) would be replicated in our sample, and using differential item functioning, we tested the degree to which specific items from each negative symptom subfactor performed across geographical regions in comparison with the United States. Results: The two-factor negative symptom solution was replicated in this sample. Most geographical regions showed moderate-to-large differential item functioning for Positive and Negative Syndrome Scale expressive deficit items, especially N3 Poor Rapport, as compared with Positive and Negative Syndrome Scale experiential deficit items, showing that these items might be interpreted or scored differently in different regions. Across countries, except for India, the differential item functioning values did not favor raters in the United States. Conclusion: These results suggest that the Positive and Negative Syndrome Scale negative symptom factor can be better represented by a two-factor model than by a single-factor model. Additionally, the results show significant differences in responses to items representing the Positive and Negative Syndrome Scale expressive
Driven tracer with absolute negative mobility
NASA Astrophysics Data System (ADS)
Cividini, J.; Mukamel, D.; Posch, H. A.
2018-02-01
Instances of negative mobility, where a system responds to a perturbation in a way opposite to naive expectation, have been studied theoretically and experimentally in numerous nonequilibrium systems. In this work we show that absolute negative mobility (ANM), whereby current is produced in a direction opposite to the drive, can occur around equilibrium states. This is demonstrated with a simple one-dimensional lattice model with a driven tracer. We derive analytical predictions in the linear response regime and elucidate the mechanism leading to ANM by studying the high-density limit. We also study numerically a model of hard Brownian disks in a narrow planar channel, for which the lattice model can be viewed as a toy model. We find that the model exhibits negative differential mobility (NDM), but no ANM.
Modeling work zone crash frequency by quantifying measurement errors in work zone length.
Yang, Hong; Ozbay, Kaan; Ozturk, Ozgur; Yildirimoglu, Mehmet
2013-06-01
Work zones are temporary traffic control zones that can potentially cause safety problems. Maintaining safety while implementing necessary changes on roadways is an important challenge that traffic engineers and researchers have to confront. In this study, the risk factors in work zone safety evaluation were identified through the estimation of a crash frequency (CF) model. Measurement errors in the explanatory variables of a CF model can lead to unreliable estimates of certain parameters. Among these, work zone length raises a major concern in this analysis because it may change as the construction schedule progresses, generally without being properly documented. This paper proposes an improved modeling and estimation approach that integrates a measurement error (ME) model with the traditional negative binomial (NB) model. The proposed approach was compared with the traditional NB approach. Both models were estimated using a large dataset of 60 work zones in New Jersey. Results showed that the proposed approach outperformed the traditional approach in terms of goodness-of-fit statistics. Moreover, it is shown that using the traditional NB approach in this context can lead to overestimation of the effect of work zone length on crash occurrence. Copyright © 2013 Elsevier Ltd. All rights reserved.
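The overdispersion that motivates NB crash-frequency models like this one can be illustrated with a small simulation (a hypothetical sketch, not the authors' data or code): counts drawn from a Gamma-Poisson mixture follow an NB2 distribution with Var(y) = μ + αμ², and the dispersion parameter α can be recovered by the method of moments.

```python
import numpy as np

# Hypothetical illustration: crash counts generated as a Gamma-Poisson
# (negative binomial) mixture, then the NB2 dispersion parameter alpha
# recovered from the moment relation Var(y) = mu + alpha * mu**2.
rng = np.random.default_rng(42)
n, mu, alpha = 200_000, 5.0, 0.5

# Gamma heterogeneity with mean 1 and variance alpha yields NB2 counts
rates = mu * rng.gamma(shape=1.0 / alpha, scale=alpha, size=n)
y = rng.poisson(rates)

mean, var = y.mean(), y.var()
alpha_hat = (var - mean) / mean**2   # moment estimator of dispersion

print(f"mean={mean:.2f} var={var:.2f} alpha_hat={alpha_hat:.3f}")
```

With these parameters the variance (about 17.5) far exceeds the mean (about 5), which is exactly the overdispersion a plain Poisson CF model cannot absorb.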
Numerical modelling of needle-grid electrodes for negative surface corona charging system
NASA Astrophysics Data System (ADS)
Zhuang, Y.; Chen, G.; Rotaru, M.
2011-08-01
Surface potential decay measurement is a simple and low cost tool to examine electrical properties of insulation materials. During the corona charging stage, a needle-grid electrodes system is often used to achieve uniform charge distribution on the surface of the sample. In this paper, a model using COMSOL Multiphysics has been developed to simulate the gas discharge. A well-known hydrodynamic drift-diffusion model was used. The model consists of a set of continuity equations accounting for the movement, generation and loss of charge carriers (electrons, positive and negative ions) coupled with Poisson's equation to take into account the effect of space and surface charges on the electric field. Four models with the grid electrode in different positions and several mesh sizes are compared with a model that only has the needle electrode. The results for impulse current and surface charge density on the sample clearly show the effect of the extra grid electrode with various positions.
[Spatial epidemiological study on malaria epidemics in Hainan province].
Wen, Liang; Shi, Run-He; Fang, Li-Qun; Xu, De-Zhong; Li, Cheng-Yi; Wang, Yong; Yuan, Zheng-Quan; Zhang, Hui
2008-06-01
To better understand the spatial distribution of malaria epidemics in Hainan province, to explore the relationship between malaria epidemics and environmental factors, and to develop a prediction model for malaria epidemics. Data on malaria and meteorological factors were collected for all 19 counties in Hainan province from May to October 2000, and the proportions of land use types in these counties over this period were extracted from a digital land use map of Hainan province. Land surface temperatures (LST) were extracted from MODIS images, and elevations of these counties were extracted from a DEM of Hainan province. Correlation coefficients between malaria incidence and these environmental factors were calculated with SPSS 13.0, and negative binomial regression analyses were performed using SAS 9.0. Malaria incidence showed (1) positive correlations with elevation and the proportions of forest land and grassland area; (2) negative correlations with the proportions of cultivated area, urban and rural residential and industrial enterprise area, and with LST; and (3) no correlation with meteorological factors or the proportions of water area and unused land. The prediction model obtained from the negative binomial regression analysis was: I (monthly, per 1,000,000) = exp(−1.672 − 0.399 × LST). The spatial distribution of malaria epidemics was associated with certain environmental factors, and a prediction model of malaria epidemics could be developed with indices extracted from satellite remote sensing images.
ERIC Educational Resources Information Center
Magno, Carlo
2010-01-01
The present study investigated the composition of negative affect and its function as inhibitory to thought processes such as self-regulation. Negative affect in the present study were composed of anxiety, worry, thought suppression, and fear of negative evaluation. These four factors were selected based on the criteria of negative affect by…
A new approach to modelling schistosomiasis transmission based on stratified worm burden.
Gurarie, D; King, C H; Wang, X
2010-11-01
Multiple factors affect schistosomiasis transmission in distributed meta-population systems including age, behaviour, and environment. The traditional approach to modelling macroparasite transmission often exploits the 'mean worm burden' (MWB) formulation for human hosts. However, typical worm distribution in humans is overdispersed, and classic models either ignore this characteristic or make ad hoc assumptions about its pattern (e.g., by assuming a negative binomial distribution). Such oversimplifications can give wrong predictions for the impact of control interventions. We propose a new modelling approach to macro-parasite transmission by stratifying human populations according to worm burden, and replacing MWB dynamics with that of 'population strata'. We developed proper calibration procedures for such multi-component systems, based on typical epidemiological and demographic field data, and implemented them using Wolfram Mathematica. Model programming and calibration proved to be straightforward. Our calibrated system provided good agreement with the individual level field data from the Msambweni region of eastern Kenya. The Stratified Worm Burden (SWB) approach offers many advantages, in that it accounts naturally for overdispersion and accommodates other important factors and measures of human infection and demographics. Future work will apply this model and methodology to evaluate innovative control intervention strategies, including expanded drug treatment programmes proposed by the World Health Organization and its partners.
NASA Astrophysics Data System (ADS)
Fubiani, G.; Boeuf, J. P.
2013-11-01
Results from a 3D self-consistent Particle-In-Cell Monte Carlo Collisions (PIC MCC) model of a high power fusion-type negative ion source are presented for the first time. The model is used to calculate the plasma characteristics of the ITER prototype BATMAN ion source developed in Garching. Special emphasis is put on the production of negative ions on the plasma grid surface. The question of the relative roles of the impact of neutral hydrogen atoms and positive ions on the cesiated grid surface has attracted much attention recently and the 3D PIC MCC model is used to address this question. The results show that the production of negative ions by positive ion impact on the plasma grid is small with respect to the production by atomic hydrogen or deuterium bombardment (less than 10%).
Comparison of measured and modelled negative hydrogen ion densities at the ECR-discharge HOMER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauner, D.; Kurutz, U.; Fantz, U.
2015-04-08
As the negative hydrogen ion density n(H⁻) is a key parameter for the investigation of negative ion sources, its diagnostic quantification is essential in source development and operation as well as for fundamental research. By utilizing the photodetachment process of negative ions, two different diagnostic methods can generally be applied: via laser photodetachment, the density of negative ions is measured locally, but only relative to the electron density. To obtain absolute densities, the electron density has to be measured additionally, which introduces further uncertainties. Via cavity ring-down spectroscopy (CRDS), the absolute density of H⁻ is measured directly, however LOS-averaged over the plasma length. At the ECR discharge HOMER, where H⁻ is produced in the plasma volume, laser photodetachment is applied as the standard method to measure n(H⁻). The additional application of CRDS provides the possibility to directly obtain absolute values of n(H⁻), thereby successfully benchmarking the laser photodetachment system, as both diagnostics are in good agreement. In the investigated pressure range from 0.3 to 3 Pa, the measured negative hydrogen ion density shows a maximum at 1 to 1.5 Pa and an approximately linear response to increasing input microwave power from 200 up to 500 W. Additionally, the volume production of negative ions is modelled zero-dimensionally by balancing H⁻ production and destruction processes. The modelled densities are adapted to the absolute measurements of n(H⁻) via CRDS, allowing collisions of H⁻ with hydrogen atoms (associative and non-associative detachment) to be identified as the dominant loss process of H⁻ in the plasma volume at HOMER. Furthermore, the characteristic peak of n(H⁻) observed at 1 to 1.5 Pa is identified to be caused by a comparable behaviour of the electron density with varying pressure, as n_e
Rasbash, Jon; Jenkins, Jennifer; O'Connor, Thomas G; Tackett, Jennifer; Reiss, David
2011-03-01
The goal of this study was to investigate individual and relationship influences on expressions of negativity and positivity in families. Parents and adolescents were observed in a round-robin design in a sample of 687 families. Data were analyzed using a multilevel social relations model. In addition, genetic contributions were estimated for actor effects. Children showed higher mean levels of negativity and lower mean levels of positivity as actors than did parents. Mothers were found to express and elicit higher mean levels of positivity and negativity than fathers. Actor effects were much stronger than partner effects, accounting for between 18% and 39% of the variance depending on the actor and the outcome. Genetic (35%) and shared environmental (19%) influences explained a substantial proportion of the actor effect variance for negativity. Dyadic reciprocities were lowest in dyads with a high power differential (i.e., parent-child dyads) and highest for dyads with equal power (sibling and marital dyads). (c) 2011 APA, all rights reserved
Stucki, A; Cottagnoud, M; Acosta, F; Egerman, U; Läuffer, J; Cottagnoud, P
2012-02-01
Ceftobiprole medocaril, a new cephalosporin, is highly active against a broad spectrum of Gram-positive and Gram-negative clinical pathogens, including methicillin-resistant Staphylococcus aureus (MRSA) and penicillin-resistant pneumococci. In this study, we tested ceftobiprole against various Gram-negative pathogens in a rabbit meningitis model and determined its penetration into the cerebrospinal fluid (CSF). In this animal model, ceftobiprole produced an antibacterial activity similar to that of cefepime against an Escherichia coli strain, a Klebsiella pneumoniae strain, and a β-lactamase-negative Haemophilus influenzae strain. Against a β-lactamase-positive H. influenzae strain, ceftobiprole was significantly superior. The penetration of ceftobiprole through inflamed meninges reached about 16% of serum levels compared to about 2% of serum levels through uninflamed meninges.
McConville, Anna; Law, Bradley S.; Mahony, Michael J.
2013-01-01
Habitat modelling and predictive mapping are important tools for conservation planning, particularly for lesser known species such as many insectivorous bats. However, the scale at which modelling is undertaken can affect the predictive accuracy and restrict the use of the model at different scales. We assessed the validity of existing regional-scale habitat models at a local-scale and contrasted the habitat use of two morphologically similar species with differing conservation status (Mormopterus norfolkensis and Mormopterus species 2). We used negative binomial generalised linear models created from indices of activity and environmental variables collected from systematic acoustic surveys. We found that habitat type (based on vegetation community) best explained activity of both species, which were more active in floodplain areas, with most foraging activity recorded in the freshwater wetland habitat type. The threatened M. norfolkensis avoided urban areas, which contrasts with M. species 2 which occurred frequently in urban bushland. We found that the broad habitat types predicted from local-scale models were generally consistent with those from regional-scale models. However, threshold-dependent accuracy measures indicated a poor fit and we advise caution be applied when using the regional models at a fine scale, particularly when the consequences of false negatives or positives are severe. Additionally, our study illustrates that habitat type classifications can be important predictors and we suggest they are more practical for conservation than complex combinations of raw variables, as they are easily communicated to land managers. PMID:23977296
Wierońska, Joanna M; Kłeczek, Natalia; Woźniak, Monika; Gruca, Piotr; Łasoń-Tyburkiewicz, Magdalena; Papp, Mariusz; Brański, Piotr; Burnat, Grzegorz; Pilc, Andrzej
2015-09-01
Diverse preclinical studies exploiting the modulation of the GABAergic and/or glutamatergic systems in the brain via metabotropic receptors suggest their potential therapeutic utility. GS39783 and CDPPB, positive allosteric modulators of GABAB and mGlu5 receptors, were previously shown to reverse behavioral phenotypes in animal models mimicking selected (predominantly positive) symptoms of schizophrenia. In the present study we investigated the activity of selected GABAB (GS39783 and CGP7930) and mGlu5 (CDPPB) positive allosteric modulators, focusing mainly on their efficacy in models of the negative and cognitive symptoms of schizophrenia. We used the modified swim test and social interactions (models of negative symptoms) and novel object recognition (a model of cognitive disturbances). The activity of the compounds was also tested in the haloperidol-induced catalepsy test, and the mutual interaction between GABAB/mGlu5 ligands was investigated as well. In the second part of the study, DHPG-induced PI hydrolysis in the presence of a GABAB receptor agonist (SKF97541), and SKF97541-induced inhibition of cAMP formation in the presence of DHPG, were measured. Both mGlu5 and GABAB receptor modulators effectively reversed MK-801-induced deficits in behavioral models of schizophrenia. Moreover, the concomitant administration of sub-effective doses of CDPPB and GS39783 induced a clear antipsychotic-like effect in all the procedures used, except DOI-induced head twitches. The concomitant administration of group I mGlu and GABAB agonists did not display any synergistic effects in vitro. Summing up, activation of both types of receptor may be a promising mechanism for the development of novel antipsychotic drugs efficacious against positive, negative and cognitive symptoms. Copyright © 2015 Elsevier Ltd. All rights reserved.
Grennan, J Troy; Loutfy, Mona R; Su, DeSheng; Harrigan, P Richard; Cooper, Curtis; Klein, Marina; Machouf, Nima; Montaner, Julio S G; Rourke, Sean; Tsoukas, Christos; Hogg, Bob; Raboud, Janet
2012-04-15
The importance of human immunodeficiency virus (HIV) blip magnitude on virologic rebound has been raised in clinical guidelines relating to viral load assays. Antiretroviral-naive individuals initiating combination antiretroviral therapy (cART) after 1 January 2000 and achieving virologic suppression were studied. Negative binomial models were used to identify blip correlates. Recurrent event models were used to determine the association between blips and rebound by incorporating multiple periods of virologic suppression per individual. 3550 participants (82% male; median age, 40 years) were included. In a multivariable negative binomial regression model, the Amplicor assay was associated with a lower blip rate than branched DNA (rate ratio, 0.69; P < .01), controlling for age, sex, region, baseline HIV-1 RNA and CD4 count, AIDS-defining illnesses, year of cART initiation, cART type, and HIV-1 RNA testing frequency. In a multivariable recurrent event model controlling for age, sex, intravenous drug use, cART start year, cART type, assay type, and HIV-1 RNA testing frequency, blips of 500-999 copies/mL were associated with virologic rebound (hazard ratio, 2.70; P = .002), whereas blips of 50-499 were not. HIV-1 RNA assay was an important determinant of blip rates and should be considered in clinical guidelines. Blips ≥500 copies/mL were associated with increased rebound risk.
Maraldo, Toni M; Zhou, Wanni; Dowling, Jessica; Vander Wal, Jillon S
2016-12-01
The dual pathway model, a theoretical model of eating disorder development, suggests that thin ideal internalization leads to body dissatisfaction which leads to disordered eating via the dual pathways of negative affect and dietary restraint. While the dual pathway model has been a valuable guide for eating disorder prevention, greater knowledge of characteristics that predict thin ideal internalization is needed. The present study replicated and extended the dual pathway model by considering the addition of fear of negative evaluation, suggestibility, rumination, and self-compassion in a sample of community women and female university students. Results showed that fear of negative evaluation and suggestibility predicted thin ideal internalization whereas rumination and self-compassion (inversely) predicted body dissatisfaction. Negative affect was predicted by fear of negative evaluation, rumination, and self-compassion (inversely). The extended model fit the data well in both samples. Analogue and longitudinal study of these constructs is warranted in future research. Copyright © 2016 Elsevier Ltd. All rights reserved.
Modeling of negative ion transport in a plasma source
NASA Astrophysics Data System (ADS)
Riz, David; Paméla, Jérôme
1998-08-01
A code called NIETZSCHE has been developed to simulate negative ion transport in a plasma source, from the ions' birth place to the extraction holes. The ion trajectory is calculated by numerically solving the 3-D equation of motion, while the atomic processes of destruction, elastic H-/H+ collisions and H-/H0 charge exchange are handled at each time step by a Monte-Carlo procedure. This code can be used to calculate the extraction probability of a negative ion produced at any location inside the source. Calculations performed with NIETZSCHE have explained, either quantitatively or qualitatively, several phenomena observed in negative ion sources, such as the isotopic H-/D- effect and the influence of the plasma grid bias or of the magnetic filter on negative ion extraction. The code has also shown that in the type of sources contemplated for ITER, which operate at large arc power densities (>1 W cm-3), negative ions can reach the extraction region provided they are produced either at a distance of less than 2 cm from the plasma grid, in the case of 'volume production' (dissociative attachment processes), or at the plasma grid surface, in the vicinity of the extraction holes.
Negative Stress Margins - Are They Real?
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.; Lee, Darlene S.; Mohaghegh, Michael
2011-01-01
Advances in modeling and simulation, new finite element software, modeling engines and powerful computers are providing opportunities to interrogate designs in a very different manner and in more detail than ever before. Margins of safety for various design concepts and design parameters are also often evaluated quickly, using local stresses, once analysis models are defined and developed. This paper suggests that not all of the negative margins of safety so evaluated are real. The structural areas where negative margins are frequently encountered are near stress concentrations, point loads and load discontinuities; near locations of stress singularities; in areas having large gradients but insufficient mesh density; in areas with modeling issues and modeling errors; at connections and interfaces; at two-dimensional (2D) to three-dimensional (3D) transitions; in bolts and bolt modeling; and at boundary conditions. Now, more than ever, structural analysts need to examine and interrogate their analysis results and perform basic sanity checks to determine whether these negative margins are real.
Castrejón-Pérez, Roberto Carlos; Borges-Yáñez, S Aída; Irigoyen-Camacho, Ma Esther; Cruz-Hervert, Luis Pablo
2017-05-01
Oral health in old persons is frequently poor; non-functional prostheses are common and negatively affect quality of life. The objective of this study was to estimate the impact of oral health problems on oral-health-related quality of life in a sample of home-dwelling Mexican elders. A household survey was carried out among 655 persons aged 70 years and over residing in one county in Mexico City. Measures included oral-health-related quality of life (the short version of the Oral Health Impact Profile validated in Mexico, OHIP-14-Sp), self-perception of general and oral health, xerostomia, utilization of dental services, utilization and functionality of removable dental prostheses, dental and periodontal conditions, age, gender, marital status, schooling, depression, cognitive impairment and independence in activities of daily living (ADL). A negative binomial regression model was fitted. Mean age was 79.2 ± 7.1 years; 54.2% were women. The mean OHIP-14-Sp score was 6.8 ± 8.7 (median 4). The final model showed that being a man (RR = 1.30); xerostomia (RR = 1.41); non-utilization of removable prostheses (RR = 1.55); utilization of non-functional removable prostheses (RR = 1.69); fair self-perception of general health (RR = 1.34); equal (RR = 1.43) or worse (RR = 2.32) self-perception of oral health compared with persons of the same age; and being dependent in at least one ADL (RR = 1.71) increased the probability of higher OHIP-14-Sp scores. Age, schooling, depression, cognitive impairment and periodontal conditions showed no association. Oral rehabilitation can improve quality of life; health education and health promotion for elders and their caregivers may reduce the risk of dental problems. Geriatr Gerontol Int 2017; 17: 744-752. © 2016 Japan Geriatrics Society.
Test of the hopelessness theory of depression: drawing negative inference from negative life events.
Kapçi, E G
1998-04-01
The hopelessness theory of depression, i.e., that drawing negative inference from the occurrence of negative life events culminates in depression, was examined. A total of 34 dysphoric and 36 nondepressed undergraduate students participated in a two-stage prospective study lasting three months. The subjects completed the Beck Depression Inventory and Hopelessness Scale at both sessions and the Life Events Experience List at the second session. It is concluded that the inference of negative characteristics about the self from negative life events, coupled with the experience of negative life events contributes to the development of depression through hopelessness. The findings are discussed in relation to the Abramson, et al. hopelessness model of depression.
Contraction Options and Optimal Multiple-Stopping in Spectrally Negative Lévy Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamazaki, Kazutoshi, E-mail: kyamazak@kansai-u.ac.jp
This paper studies the optimal multiple-stopping problem arising in the context of the timing option to withdraw from a project in stages. The profits are driven by a general spectrally negative Lévy process. This allows the model to incorporate sudden declines of the project values, greatly generalizing the classical geometric Brownian motion model. We solve the one-stage case as well as the extension to the multiple-stage case. The optimal stopping times are of threshold type and the value function admits an expression in terms of the scale function. A series of numerical experiments are conducted to verify the optimality and to evaluate the efficiency of the algorithm.
High-risk regions and outbreak modelling of tularemia in humans.
Desvars-Larrive, A; Liu, X; Hjertqvist, M; Sjöstedt, A; Johansson, A; Rydén, P
2017-02-01
Sweden reports large and variable numbers of human tularemia cases, but the high-risk regions are anecdotally defined and factors explaining annual variations are poorly understood. Here, high-risk regions were identified by spatial cluster analysis on disease surveillance data for 1984-2012. Negative binomial regression with five previously validated predictors (including predicted mosquito abundance and predictors based on local weather data) was used to model the annual number of tularemia cases within the high-risk regions. Seven high-risk regions were identified with annual incidences of 3.8-44 cases/100 000 inhabitants, accounting for 56.4% of the tularemia cases but only 9.3% of Sweden's population. For all high-risk regions, most cases occurred between July and September. The regression models explained the annual variation of tularemia cases within most high-risk regions and discriminated between years with and without outbreaks. In conclusion, tularemia in Sweden is concentrated in a few high-risk regions and shows high annual and seasonal variations. We present reproducible methods for identifying tularemia high-risk regions and modelling tularemia cases within these regions. The results may help health authorities to target populations at risk and lay the foundation for developing an early warning system for outbreaks.
Dorazio, R.M.; Royle, J. Andrew
2003-01-01
We develop a parameterization of the beta-binomial mixture that provides sensible inferences about the size of a closed population when probabilities of capture or detection vary among individuals. Three classes of mixture models (beta-binomial, logistic-normal, and latent-class) are fitted to recaptures of snowshoe hares for estimating abundance and to counts of bird species for estimating species richness. In both sets of data, rates of detection appear to vary more among individuals (animals or species) than among sampling occasions or locations. The estimates of population size and species richness are sensitive to model-specific assumptions about the latent distribution of individual rates of detection. We demonstrate using simulation experiments that conventional diagnostics for assessing model adequacy, such as deviance, cannot be relied on for selecting classes of mixture models that produce valid inferences about population size. Prior knowledge about sources of individual heterogeneity in detection rates, if available, should be used to help select among classes of mixture models that are to be used for inference.
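The heterogeneity the beta-binomial mixture is designed to capture can be sketched with simulated data (a hypothetical illustration, not the snowshoe hare or bird survey data): when individual detection probabilities vary according to a Beta distribution, capture counts across sampling occasions show variance well above the plain binomial value.

```python
import numpy as np

# Hypothetical sketch: individual detection probabilities drawn from a
# Beta distribution induce extra-binomial variation in capture counts,
# the situation the beta-binomial mixture is built to model.
rng = np.random.default_rng(7)
n_animals, T = 100_000, 10
a, b = 2.0, 6.0                      # Beta(2, 6): mean p = 0.25

p = rng.beta(a, b, size=n_animals)   # heterogeneous detection rates
captures = rng.binomial(T, p)        # captures per animal over T occasions

p_bar = a / (a + b)
binom_var = T * p_bar * (1 - p_bar)  # variance if p were constant
obs_var = captures.var()

print(f"binomial var={binom_var:.2f} observed var={obs_var:.2f}")
```

Here the observed variance is roughly double the constant-p binomial variance (the beta-binomial value T·p̄(1−p̄)·[1+(T−1)ρ] with ρ = 1/(a+b+1)), which is why ignoring individual heterogeneity biases abundance estimates.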
NASA Astrophysics Data System (ADS)
Rodríguez Miranda, Á.; Valle Melón, J. M.
2017-02-01
Three-dimensional models with photographic textures have become a usual product for the study and dissemination of elements of heritage. Interest in cultural heritage also extends to its evolution over time; therefore, apart from 3D models of the current state, it is interesting to be able to generate models representing how these elements were in the past. To that end, it is necessary to resort to archive information corresponding to the moments that we want to visualize. This text analyses the possibilities of generating 3D models of surfaces with photographic textures from old collections of analog negatives coming from terrestrial stereoscopic photogrammetry of historic buildings. The case studies presented refer to the geometric documentation of a small hermitage (done in 1996) and two sections of a wall (year 2000). The procedure starts with the digitization of the film negatives and the processing of the generated images, after which a combination of different methods for 3D reconstruction and texture wrapping is applied: techniques working simultaneously with several images (such as the algorithms of Structure from Motion, SfM) and single-image techniques (such as reconstruction based on vanishing points). The features of the obtained models are then described in terms of geometric accuracy, completeness and aesthetic quality. In this way, it is possible to establish the real applicability of the models for the aforementioned historical studies and dissemination purposes. The text also draws attention to the importance of preserving the documentary heritage available in collections of negatives in archival custody, and to the increasing difficulty of using them due to: (1) problems of access and physical conservation, (2) obsolescence of the equipment for scanning and stereoplotting, and (3) the fact that the software for processing digitized photographs has been discontinued.
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value <0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between methods. However, using flexible piecewise regression modelling with either quasi-likelihood or robust standard errors was the best approach, as it deals with both overdispersion due to model misspecification and true, or inherent, overdispersion.
Spatio-temporal modelling of dengue fever incidence in Malaysia
NASA Astrophysics Data System (ADS)
Che-Him, Norziha; Ghazali Kamardan, M.; Saifullah Rusiman, Mohd; Sufahani, Suliadi; Mohamad, Mahathir; @ Kamariah Kamaruddin, Nafisah
2018-04-01
Previous studies reported significant relationships between dengue incidence rate (DIR) and both climatic and non-climatic factors. Therefore, this study proposes a generalised additive model (GAM) framework for dengue risk in Malaysia using both climatic and non-climatic factors. The data used are monthly DIR for 12 states of Malaysia from 2001 to 2009. In this study, we considered an annual trend, seasonal effects, population, population density and lagged DIR, rainfall, temperature, number of rainy days and El Niño-Southern Oscillation (ENSO). Population density is found to be positively related to monthly DIR. There are generally weak relationships between monthly DIR and climate variables. A negative binomial GAM shows that there are statistically significant relationships between DIR and both climatic and non-climatic factors. These include mean rainfall and temperature, the number of rainy days, sea surface temperature and the interaction between mean temperature (lag 1 month) and sea surface temperature (lag 6 months), as well as DIR (lag 3 months) and population density.
ERIC Educational Resources Information Center
Rast, Philippe; Hofer, Scott M.; Sparks, Catharine
2012-01-01
A mixed effects location scale model was used to model and explain individual differences in within-person variability of negative and positive affect across 7 days (N=178) within a measurement burst design. The data come from undergraduate university students and are pooled from a study that was repeated at two consecutive years. Individual…
Anderson, Ariana E; Marder, Stephen; Reise, Steven P; Savitz, Adam; Salvadore, Giacomo; Fu, Dong Jing; Li, Qingqin; Turkoz, Ibrahim; Han, Carol; Bilder, Robert M
2018-02-06
Common genetic variation spans schizophrenia, schizoaffective and bipolar disorders, but historically, these syndromes have been distinguished categorically. A symptom dimension shared across these syndromes, if such a general factor exists, might provide a clearer target for understanding and treating mental illnesses that share core biological bases. We tested the hypothesis that a bifactor model of the Positive and Negative Syndrome Scale (PANSS), containing 1 general factor and 5 specific factors (positive, negative, disorganized, excited, anxiety), explains the cross-diagnostic structure of symptoms better than the traditional 5-factor model, and examined the extent to which a general factor reflects the overall severity of symptoms spanning diagnoses in 5094 total patients with a diagnosis of schizophrenia, schizoaffective, and bipolar disorder. The bifactor model provided superior fit across diagnoses, and was closer to the "true" model, compared to the traditional 5-factor model (Vuong test; P < .001). The general factor included high loadings on 28 of the 30 PANSS items, omitting symptoms associated with the excitement and anxiety/depression domains. The general factor had highest total loadings on symptoms that are often associated with the positive and disorganization syndromes, but there were also substantial loadings on the negative syndrome thus leading to the interpretation of this factor as reflecting generalized psychosis. A bifactor model derived from the PANSS can provide a stronger framework for measuring cross-diagnostic psychopathology than a 5-factor model, and includes a generalized psychosis dimension shared at least across schizophrenia, schizoaffective, and bipolar disorder. © The Author(s) 2018. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com
[Stress and attitudes toward negative emotions in adolescence].
Ozawa, Eiji
2010-12-01
This study investigated the relationship between stress and attitudes toward negative emotions in adolescents. Adolescent students (N=1500) completed a questionnaire that measured attitudes toward negative emotions, emotional-stress reactions, and stress coping. Analysis of the data yielded two factors of attitudes toward negative emotions: "Negative feelings about negative emotions" and "Capabilities of switching of negative emotions". In order to examine the theoretical relationships among attitudes toward negative emotions, emotional-stress reactions, and stress coping, a hypothetical model was tested by covariance structure analysis. This model predicted that students with a high level of attitudes toward negative emotions would report enhanced problem solving, which promoted stress coping. The results indicated that "Negative feelings about negative emotions" enhanced avoidable coping, and avoidable coping enhanced stress reactions. "Capabilities of switching of negative emotions" was related to a decrease in avoidable coping. Based on the results from the covariance structure analysis and a multiple population analysis, the clinical significance and developmental characteristics were discussed.
A Model for Negative Ion Chemistry in Titan’s Ionosphere
NASA Astrophysics Data System (ADS)
Mukundan, Vrinda; Bhardwaj, Anil
2018-04-01
We developed a one-dimensional photochemical model for the dayside ionosphere of Titan for calculating the density profiles of negative ions under steady-state photochemical equilibrium conditions. We concentrated on the T40 flyby of the Cassini orbiter and used the in situ measurements from instruments on board Cassini as input to the model. Using the latest available reaction rate coefficients and dissociative electron attachment cross sections, the densities of 10 anions are calculated. Our study shows CN‑ as the dominant anion, followed by C3N‑, which agrees with the results of previous calculations. We suggest that H‑ could be an important anion in Titan’s ionosphere and is the second most abundant anion at altitudes greater than 1200 km. The main production channel of the major ion CN‑ is the reaction of H‑ with HCN. H‑ also plays a major role in the production of the anions C2H‑, C6H‑, and OH‑. We present a comparison of the calculated ion density profiles with the relative density profiles derived from recently reported Cassini CAPS/ELS observations.
Safety models incorporating graph theory based transit indicators.
Quintero, Liliana; Sayed, Tarek; Wahba, Mohamed M
2013-01-01
There is a considerable need for tools to enable the evaluation of the safety of transit networks at the planning stage. One interesting approach for the planning of public transportation systems is the study of networks. Network techniques involve the analysis of systems by viewing them as a graph composed of a set of vertices (nodes) and edges (links). Once the transport system is visualized as a graph, various network properties can be evaluated based on the relationships between the network elements. Several indicators can be calculated, including connectivity, coverage, directness and complexity, among others. The main objective of this study is to investigate the relationship between network-based transit indicators and safety. The study develops macro-level collision prediction models that explicitly incorporate transit physical and operational elements and transit network indicators as explanatory variables. Several macro-level (zonal) collision prediction models were developed using a generalized linear regression technique, assuming a negative binomial error structure. The models were grouped into four main themes: transit infrastructure, transit network topology, transit route design, and transit performance and operations. The safety models showed that collisions were significantly associated with transit network properties such as connectivity, coverage, overlapping degree and the Local Index of Transit Availability. The models also showed a significant relationship between collisions and some transit physical and operational attributes such as the number of routes, frequency of routes, bus density, and length of bus and 3+ priority lanes. Copyright © 2012 Elsevier Ltd. All rights reserved.
Paige F.B. Ferguson; Michael J. Conroy; Jeffrey Hepinstall-Cymerman; Nigel Yoccoz
2015-01-01
False positive detections, such as species misidentifications, occur in ecological data, although many models do not account for them. Consequently, these models are expected to generate biased inference. The main challenge in an analysis of data with false positives is to distinguish false positive and false negative...
NEVER forget: negative emotional valence enhances recapitulation.
Bowen, Holly J; Kark, Sarah M; Kensinger, Elizabeth A
2018-06-01
A hallmark feature of episodic memory is that of "mental time travel," whereby an individual feels they have returned to a prior moment in time. Cognitive and behavioral neuroscience methods have revealed a neurobiological counterpart: Successful retrieval often is associated with reactivation of a prior brain state. We review the emerging literature on memory reactivation and recapitulation, and we describe evidence for the effects of emotion on these processes. Based on this review, we propose a new model: Negative Emotional Valence Enhances Recapitulation (NEVER). This model diverges from existing models of emotional memory in three key ways. First, it underscores the effects of emotion during retrieval. Second, it stresses the importance of sensory processing to emotional memory. Third, it emphasizes how emotional valence - whether an event is negative or positive - affects the way that information is remembered. The model specifically proposes that, as compared to positive events, negative events both trigger increased encoding of sensory detail and elicit a closer resemblance between the sensory encoding signature and the sensory retrieval signature. The model also proposes that negative valence enhances the reactivation and storage of sensory details over offline periods, leading to a greater divergence between the sensory recapitulation of negative and positive memories over time. Importantly, the model proposes that these valence-based differences occur even when events are equated for arousal, thus rendering an exclusively arousal-based theory of emotional memory insufficient. We conclude by discussing implications of the model and suggesting directions for future research to test the tenets of the model.
Continuous Negative Abdominal Pressure Reduces Ventilator-induced Lung Injury in a Porcine Model.
Yoshida, Takeshi; Engelberts, Doreen; Otulakowski, Gail; Katira, Bhushan; Post, Martin; Ferguson, Niall D; Brochard, Laurent; Amato, Marcelo B P; Kavanagh, Brian P
2018-04-27
In supine patients with acute respiratory distress syndrome, the lung typically partitions into regions of dorsal atelectasis and ventral aeration ("baby lung"). Positive airway pressure is often used to recruit atelectasis, but often overinflates ventral (already aerated) regions. A novel approach to selective recruitment of dorsal atelectasis is by "continuous negative abdominal pressure." A randomized laboratory study was performed in anesthetized pigs. Lung injury was induced by surfactant lavage followed by 1 h of injurious mechanical ventilation. Randomization (five pigs in each group) was to positive end-expiratory pressure (PEEP) alone or PEEP with continuous negative abdominal pressure (-5 cm H2O via a plexiglass chamber enclosing hindlimbs, pelvis, and abdomen), followed by 4 h of injurious ventilation (high tidal volume, 20 ml/kg; low expiratory transpulmonary pressure, -3 cm H2O). The level of PEEP at the start was ≈7 (vs. ≈3) cm H2O in the PEEP (vs. PEEP plus continuous negative abdominal pressure) groups. Esophageal pressure, hemodynamics, and electrical impedance tomography were recorded, and injury determined by lung wet/dry weight ratio and interleukin-6 expression. All animals survived, but cardiac output was decreased in the PEEP group. Addition of continuous negative abdominal pressure to PEEP resulted in greater oxygenation (PaO2/fractional inspired oxygen 316 ± 134 vs. 80 ± 24 mmHg at 4 h, P = 0.005), compliance (14.2 ± 3.0 vs. 10.3 ± 2.2 ml/cm H2O, P = 0.049), and homogeneity of ventilation, with less pulmonary edema (≈10% less) and interleukin-6 expression (≈30% less). Continuous negative abdominal pressure added to PEEP reduces ventilator-induced lung injury in a pig model compared with PEEP alone, despite targeting identical expiratory transpulmonary pressure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.
2015-11-19
Our study investigates the Multivariate Poisson-lognormal (MVPLN) model, which jointly models crash frequency and severity while accounting for their correlations. Ordinary univariate count models analyze crashes of each severity level separately, ignoring the correlations among severity levels. The MVPLN model can incorporate a general correlation structure and accounts for the overdispersion in the data, leading to a superior fit. However, the traditional estimation approach for the MVPLN model is computationally expensive, which often limits its use in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach of the MVPLN model, significantly reducing the model estimation time. Two MVPLN models are developed using pedestrian-vehicle crash data collected in New York City from 2002 to 2006 and highway-injury data from Washington State (5-year data from 1990 to 1994). The Deviance Information Criterion (DIC) is used to evaluate model fit. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and Negative Binomial models. Moreover, the correlations among the latent effects of different severity levels are found to be significant in both datasets, which justifies the importance of jointly modeling crash frequency and severity while accounting for correlations.
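The mechanism the MVPLN model exploits can be made concrete with a toy simulation (ours, not the authors' estimator): correlated lognormal latent effects shared across severity levels induce both overdispersion within each count series and correlation between them.

```python
# Toy MVPLN-style data generator: Poisson counts with correlated
# lognormal latent effects for two severity levels.
import numpy as np

rng = np.random.default_rng(2)
n = 20000
# Latent-effect covariance across severity levels (e.g. injury, fatal)
cov = np.array([[0.4, 0.3],
                [0.3, 0.4]])
eps = rng.multivariate_normal([0.0, 0.0], cov, size=n)
mu = np.exp(np.log([5.0, 0.5]) + eps)     # lognormal-mixed Poisson means
y = rng.poisson(mu)                        # MVPLN counts

ratio = y.var(axis=0) / y.mean(axis=0)     # > 1 in each column: overdispersed
corr = np.corrcoef(y[:, 0], y[:, 1])[0, 1]
print("variance/mean:", ratio)
print("cross-severity correlation:", corr)
```

A univariate Poisson model assumes the variance/mean ratio is 1 and the cross-severity correlation is 0; both assumptions fail here, which is the paper's motivation for the joint model.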
ERIC Educational Resources Information Center
Bergee, Martin J.; Westfall, Claude R.
2005-01-01
This is the third study in a line of inquiry whose purpose has been to develop a theoretical model of selected extramusical variables' influence on solo and small-ensemble festival ratings. Authors of the second of these (Bergee & McWhirter, 2005) had used binomial logistic regression as the basis for their model-formulation strategy. Their…
Etcheverry, Paul E; Waters, Andrew J; Lam, Cho; Correa-Fernandez, Virmarie; Vidrine, Jennifer Irvin; Cinciripini, Paul M; Wetter, David W
2016-08-01
To examine whether initial orienting (IO) and inability to disengage (ITD) attention from negative affective stimuli moderate the association of negative affect with smoking abstinence during a quit attempt. Data were from a longitudinal cohort study of smoking cessation (N = 424). A negative affect modified Stroop task was administered 1 week before and on quit day to measure IO and ITD. Ecological Momentary Assessments were used to create negative affect intercepts and linear slopes for the week before quitting and on quit day. Quit day and long-term abstinence measures were collected. Continuation ratio logit model analyses found significant interactions for prequit negative affect slope with prequit ITD, odds ratio (OR) = 0.738 (0.57, 0.96), p = .02, and for quit day negative affect intercept with quit day ITD, OR = 0.62 (0.41, 0.95), p = .03, predicting abstinence. The Prequit Negative Affect Intercept × Prequit IO interaction predicting quit day abstinence was significant, OR = 1.42 (1.06, 1.90), p = .02, as was the Quit Day Negative Affect Slope × Quit Day IO interaction predicting long-term abstinence, OR = 1.45 (1.02, 2.08), p = .04. The hypothesis that the association of negative affect with smoking abstinence would be moderated by ITD was generally supported. Among individuals with high ITD, negative affect was inversely related to abstinence, but unrelated to abstinence among individuals with lower levels of ITD. Unexpectedly, among individuals with low IO, negative affect was inversely related to abstinence, but unrelated to abstinence among individuals with higher levels of IO. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Dissociable roles of dopamine and serotonin transporter function in a rat model of negative urgency.
Yates, Justin R; Darna, Mahesh; Gipson, Cassandra D; Dwoskin, Linda P; Bardo, Michael T
2015-09-15
Negative urgency is a facet of impulsivity that reflects mood-based rash action and is associated with various maladaptive behaviors in humans. However, the underlying neural mechanisms of negative urgency are not fully understood. Several brain regions within the mesocorticolimbic pathway, as well as the neurotransmitters dopamine (DA) and serotonin (5-HT), have been implicated in impulsivity. Extracellular DA and 5-HT concentrations are regulated by DA transporters (DAT) and 5-HT transporters (SERT); thus, these transporters may be important molecular mechanisms underlying individual differences in negative urgency. The current study employed a reward omission task to model negative urgency in rats. During reward trials, a cue light signaled the non-contingent delivery of one sucrose pellet; immediately following the non-contingent reward, rats responded on a lever to earn sucrose pellets (operant phase). Omission trials were similar to reward trials, except that non-contingent sucrose was omitted following the cue light prior to the operant phase. As expected, contingent responding was higher following omission of expected reward than following delivery of expected reward, thus reflecting negative urgency. Upon completion of behavioral training, Vmax and Km were obtained from kinetic analysis of [(3)H]DA and [(3)H]5-HT uptake using synaptosomes prepared from nucleus accumbens (NAc), dorsal striatum (Str), medial prefrontal cortex (mPFC), and orbitofrontal cortex (OFC) isolated from individual rats. Vmax for DAT in NAc and for SERT in OFC were positively correlated with negative urgency scores. The current findings suggest that mood-based impulsivity (negative urgency) is associated with enhanced DAT function in NAc and SERT function in OFC. Copyright © 2015 Elsevier B.V. All rights reserved.
Rocheleau, J P; Michel, P; Lindsay, L R; Drebot, M; Dibernardo, A; Ogden, N H; Fortin, A; Arsenault, J
2017-10-01
The identification of specific environments sustaining emerging arbovirus amplification and transmission to humans is a key component of public health intervention planning. This study aimed at identifying environmental factors associated with West Nile virus (WNV) infections in southern Quebec, Canada, by modelling and jointly interpreting aggregated clinical data in humans and serological data in pet dogs. Environmental risk factors were estimated in humans by negative binomial regression based on a dataset of 191 human WNV clinical cases reported in the study area between 2011 and 2014. Risk factors for infection in dogs were evaluated by logistic and negative binomial models based on a dataset including WNV serological results from 1442 dogs sampled from the same geographical area in 2013. Forested lands were identified as low-risk environments in humans. Agricultural lands represented higher risk environments for dogs. Environments identified as impacting risk in the current study were somewhat different from those identified in other studies conducted in north-eastern USA, which reported higher risk in suburban environments. In the context of the current study, combining human and animal data allowed a more comprehensive and possibly a more accurate view of environmental WNV risk factors to be obtained than by studying aggregated human data alone.
Effectiveness on Early Childhood Caries of an Oral Health Promotion Program for Medical Providers
Widmer-Racich, Katina; Sevick, Carter; Starzyk, Erin J.; Mauritson, Katya; Hambidge, Simon J.
2017-01-01
Objectives. To assess the impact on early childhood caries (ECC) of an oral health promotion (OHP) intervention for medical providers. Methods. We implemented a quasi-experimental OHP intervention in 8 federally qualified health centers that trained medical providers on ECC risk assessment, oral examination and instruction, dental referral, and fluoride varnish applications (FVAs). We measured OHP delivery by FVA count at medical visits. We measured the intervention’s impact on ECC in 3 unique cohorts of children aged 3 to 4 years in 2009 (preintervention; n = 202), 2011 (midintervention; n = 420), and 2015 (≥ 4 FVAs; n = 153). We compared numbers of decayed, missing, and filled tooth surfaces using adjusted zero-inflated negative binomial models. Results. Across the 3 unique cohorts, the FVA mean (range) count was 0.0 (0), 1.1 (0–7), and 4.5 (4–7) in 2009, 2011, and 2015, respectively. In adjusted zero-inflated negative binomial model analyses, children in the 2015 cohort had significantly fewer decayed, missing, and filled tooth surfaces than did children in previous cohorts. Conclusions. An OHP intervention targeting medical providers reduced ECC when children received 4 or more FVAs at a medical visit by age 3 years. PMID:28661802
Holsclaw, Tracy; Hallgren, Kevin A; Steyvers, Mark; Smyth, Padhraic; Atkins, David C
2015-12-01
Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased Type I and Type II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in online supplemental materials. (c) 2016 APA, all rights reserved.
Jaya, E S; Ascone, L; Lincoln, T M
2018-06-01
Cognitive models postulate that negative-self-schemas (NSS) cause and maintain positive symptoms and that negative affect mediates this link. However, only a few studies have tested the temporal mediation claim systematically using an appropriate design. A longitudinal cohort design in an online community sample (N = 962) from Germany, Indonesia, and the USA was used. NSS, negative affect and positive symptoms were measured at four time-points (T0-T3) over a 1-year period. Cross-lagged panel and longitudinal mediation analyses with structural equation modeling were used to test the temporal mediation. Independent cross-lagged panel models showed a significant unidirectional longitudinal path from NSS to positive symptoms (T2-T3, β = 0.18, p < 0.01) and bidirectional longitudinal associations from NSS to negative affect (T0-T1, γ = 0.14, p < 0.01) and vice versa (T0-T1, γ = 0.19, p < 0.01). There was also a significant indirect pathway from NSS at baseline via negative affect at T1 and T2 to positive symptoms at T3 (unstandardized indirect effect coefficient = 0.020, p < 0.05, BCa CI 0.004-0.035), indicating mediation. Our findings support the postulated affective pathway from NSS to positive symptoms via negative affect. Specifically, our data indicate that NSS and negative affect influence each other and build up over the course of several months before leading to positive symptoms. We conclude that interrupting this process by targeting NSS and negative affect early in the process could be a promising strategy to prevent the exacerbation of positive symptoms.
Revisiting negative selection algorithms.
Ji, Zhou; Dasgupta, Dipankar
2007-01-01
This paper reviews the progress of negative selection algorithms, an anomaly/change detection approach in Artificial Immune Systems (AIS). Following its initial model, we try to identify the fundamental characteristics of this family of algorithms and summarize their diversities. There exist various elements in this method, including data representation, coverage estimate, affinity measure, and matching rules, which are discussed for different variations. The various negative selection algorithms are categorized by different criteria as well. The relationship and possible combinations with other AIS or other machine learning methods are discussed. Prospective development and applicability of negative selection algorithms and their influence on related areas are then speculated based on the discussion.
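Of the matching rules the review discusses, r-contiguous-symbols matching is among the most common. A minimal negative selection sketch under that rule follows; the binary alphabet, string length, and parameter values are illustrative choices, not prescriptions from the paper:

```python
# Sketch of a negative selection algorithm: generate random detectors that
# match no "self" string, then flag samples any detector matches as anomalous.
# Matching rule: r contiguous symbols agree at the same positions.
import random

def train_detectors(self_set, n_detectors, r, alphabet="01", length=8, seed=0):
    rng = random.Random(seed)
    def matches(a, b):
        return any(a[i:i + r] == b[i:i + r] for i in range(length - r + 1))
    detectors = []
    while len(detectors) < n_detectors:
        cand = "".join(rng.choice(alphabet) for _ in range(length))
        # Censor candidates that match any self string (negative selection)
        if not any(matches(cand, s) for s in self_set):
            detectors.append(cand)
    return detectors

def is_anomalous(sample, detectors, r):
    length = len(sample)
    return any(
        any(sample[i:i + r] == d[i:i + r] for i in range(length - r + 1))
        for d in detectors)

self_set = {"00000000", "00000001", "00000011"}
dets = train_detectors(self_set, n_detectors=20, r=4)
print(is_anomalous("00000000", dets, r=4))  # self is never flagged
print(is_anomalous("11111111", dets, r=4))  # non-self is likely flagged
```

By construction, no detector matches a self string, so self samples are never flagged; detection coverage of non-self depends on the number of detectors and the matching threshold r, which is the coverage-estimate issue the review discusses.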
Negative capacitance in multidomain ferroelectric superlattices
NASA Astrophysics Data System (ADS)
Zubko, Pavlo; Wojdeł, Jacek C.; Hadjimichael, Marios; Fernandez-Pena, Stéphanie; Sené, Anaïs; Luk'yanchuk, Igor; Triscone, Jean-Marc; Íñiguez, Jorge
2016-06-01
The stability of spontaneous electrical polarization in ferroelectrics is fundamental to many of their current applications, which range from the simple electric cigarette lighter to non-volatile random access memories. Research on nanoscale ferroelectrics reveals that their behaviour is profoundly different from that in bulk ferroelectrics, which could lead to new phenomena with potential for future devices. As ferroelectrics become thinner, maintaining a stable polarization becomes increasingly challenging. On the other hand, intentionally destabilizing this polarization can cause the effective electric permittivity of a ferroelectric to become negative, enabling it to behave as a negative capacitance when integrated in a heterostructure. Negative capacitance has been proposed as a way of overcoming fundamental limitations on the power consumption of field-effect transistors. However, experimental demonstrations of this phenomenon remain contentious. The prevalent interpretations based on homogeneous polarization models are difficult to reconcile with the expected strong tendency for domain formation, but the effect of domains on negative capacitance has received little attention. Here we report negative capacitance in a model system of multidomain ferroelectric-dielectric superlattices across a wide range of temperatures, in both the ferroelectric and paraelectric phases. Using a phenomenological model, we show that domain-wall motion not only gives rise to negative permittivity, but can also enhance, rather than limit, its temperature range. Our first-principles-based atomistic simulations provide detailed microscopic insight into the origin of this phenomenon, identifying the dominant contribution of near-interface layers and paving the way for its future exploitation.
van der Meulen, Miriam P; Lansdorp-Vogelaar, Iris; van Heijningen, Else-Mariëtte B; Kuipers, Ernst J; van Ballegooijen, Marjolein
2016-06-01
If some adenomas do not bleed over several years, they will cause systematic false-negative fecal immunochemical test (FIT) results. The long-term effectiveness of FIT screening has been estimated without accounting for such systematic false-negativity. There are now data with which to evaluate this issue. The authors developed one microsimulation model (MISCAN [MIcrosimulation SCreening ANalysis]-Colon) without systematic false-negative FIT results and one model that allowed a percentage of adenomas to be systematically missed in successive FIT screening rounds. Both variants were adjusted to reproduce the first-round findings of the Dutch CORERO FIT screening trial. The authors then compared simulated detection rates in the second screening round with those observed, and adjusted the simulated percentage of systematically missed adenomas to those data. Finally, the authors calculated the impact of systematic false-negative FIT results on the effectiveness of repeated FIT screening. The model without systematic false-negativity simulated higher detection rates in the second screening round than observed. These observed rates could be reproduced when assuming that FIT systematically missed 26% of advanced and 73% of nonadvanced adenomas. To reduce the false-positive rate in the second round to the observed level, the authors also had to assume that 30% of false-positive findings were systematically false-positive. Systematic false-negative FIT results limit the long-term reduction in colorectal cancer incidence (35.6% vs 40.9%) and mortality (55.2% vs 59.0%) achieved by biennial FIT screening in participants. The results of the current study provide convincing evidence, based on the combination of real-life and modeling data, that a percentage of adenomas are systematically missed by repeat FIT screening. This impairs the efficacy of FIT screening. Cancer 2016;122:1680-8. © 2016 American Cancer Society.
Valiente, Carlos; Eisenberg, Nancy; Shepard, Stephanie A; Fabes, Richard A; Cumberland, Amanda J; Losoya, Sandra H; Spinrad, Tracy L
2004-03-01
Guided by the heuristic model proposed by Eisenberg et al. [Psychol. Inq. 9 (1998) 241], we examined the relations of mothers' reported and observed negative expressivity to children's (N = 159; 74 girls; M age = 7.67 years) experience and expression of emotion. Children's experience and/or expression of emotion in response to a distressing film were measured with facial, heart rate, and self-report measures. Children's heart rate and facial distress were modestly positively related. Children's facial distress was significantly positively related to mothers' reports of negative (dominant and submissive) expressivity; the positive relation between children's facial distress and mothers' observed negative expressivity approached the conventional level of significance. Moreover, mothers' observed negative expressivity was significantly negatively related to children's heart rate reactivity during the conflict film. The positive relation between children's reported distress and mothers' observed negative expressivity approached the conventional level of significance. Several possible explanations for the pattern of findings are discussed.
Attentional Bias to Negative Affect Moderates Negative Affect’s Relationship with Smoking Abstinence
Etcheverry, Paul E.; Waters, Andrew J.; Lam, Cho; Correa-Fernandez, Virmarie; Vidrine, Jennifer Irvin; Cinciripini, Paul M.; Wetter, David W.
2016-01-01
Objective: To examine whether initial orienting (IO) and inability to disengage attention (ITD) from negative affective stimuli moderate the association of negative affect with smoking abstinence during a quit attempt. Methods: Data were from a longitudinal cohort study of smoking cessation (N=424). A negative affect modified Stroop was administered one week before and on quit day to measure IO and ITD. Ecological Momentary Assessments were used to create negative affect intercepts and linear slopes for the week before quitting and on quit day. Quit day and long-term abstinence measures were collected. Results: Continuation ratio (CR) logit model analyses found significant interactions of pre-quit negative affect slope with pre-quit ITD [OR = .738 (.57, .96), p = .02] and quit day negative affect intercept with quit day ITD [OR = .62 (.41, .95), p = .03] predicting abstinence. The interaction of pre-quit negative affect intercept and pre-quit IO predicting quit day abstinence was significant [OR = 1.42 (1.06, 1.90), p = .02], as was the interaction of quit day negative affect slope and quit day IO predicting long-term abstinence [OR = 1.45 (1.02, 2.08), p = .04]. Conclusions: The hypothesis that the association of negative affect with smoking abstinence would be moderated by ITD was generally supported. Among individuals with high ITD, negative affect was inversely related to abstinence, but unrelated to abstinence among individuals with lower levels of ITD. Unexpectedly, among individuals with low IO, negative affect was inversely related to abstinence, but unrelated to abstinence among individuals with higher levels of IO. PMID:27505211
Furr-Holden, C Debra M; Milam, Adam J; Nesoff, Elizabeth D; Johnson, Renee M; Fakunle, David O; Jennings, Jacky M; Thorpe, Roland J
2016-01-01
This research examined whether publicly funded drug treatment centers (DTCs) were associated with violent crime in excess of the violence happening around other commercial businesses. Violent crime data and locations of community entities were geocoded and mapped. DTCs and other retail outlets were matched based on a Neighborhood Disadvantage score at the census tract level. Street network buffers ranging from 100 to 1,400 feet were placed around each location. Negative binomial regression models were used to estimate the relationship between the count of violent crimes and the distance from each business type. Compared with the mean count of violent crime around drug treatment centers, the mean count of violent crime (M = 2.87) was significantly higher around liquor stores (M = 3.98; t test; p < .01) and corner stores (M = 3.78; t test; p < .01), and there was no statistically significant difference between the count around convenience stores (M = 2.65; t test; p = .32). In the adjusted negative binomial regression models, there was a negative and significant relationship between the count of violent crime and the distance from drug treatment centers (β = -.069, p < .01), liquor stores (β = -.081, p < .01), corner stores (β = -.116, p < .01), and convenience stores (β = -.154, p < .01). Violent crime associated with drug treatment centers is similar to that associated with liquor stores and is less frequent than that associated with convenience stores and corner stores.
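Negative binomial regression appears throughout these studies because crime and accident counts are over-dispersed (variance exceeds the mean), which a Poisson model cannot represent. As a minimal sketch, the NB2 parameterization can be written directly from log-gamma functions; the mean below echoes the reported crime count (2.87), but the dispersion value is invented for illustration:

```python
import math

def nb_pmf(y, mu, alpha):
    """Negative binomial pmf, NB2 parameterization: Var(Y) = mu + alpha * mu**2."""
    r = 1.0 / alpha            # "size" parameter
    p = r / (r + mu)           # success probability
    return math.exp(math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
                    + r * math.log(p) + y * math.log(1.0 - p))

mu, alpha = 2.87, 0.8          # mean echoes the reported count; alpha is invented
var_poisson = mu               # a Poisson model forces Var = mean
var_nb = mu + alpha * mu**2    # the NB model allows Var > mean (over-dispersion)
total = sum(nb_pmf(y, mu, alpha) for y in range(200))
assert var_nb > var_poisson
assert abs(total - 1.0) < 1e-9
```

Any alpha > 0 yields a variance strictly above the Poisson value, which is why the NB fit is preferred when count data show extra-Poisson spread.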
Furr-Holden, C. Debra M.; Milam, Adam J.; Nesoff, Elizabeth D.; Johnson, Renee M.; Fakunle, David O.; Jennings, Jacky M.; Thorpe, Roland J.
2016-01-01
Objective: This research examined whether publicly funded drug treatment centers (DTCs) were associated with violent crime in excess of the violence happening around other commercial businesses. Method: Violent crime data and locations of community entities were geocoded and mapped. DTCs and other retail outlets were matched based on a Neighborhood Disadvantage score at the census tract level. Street network buffers ranging from 100 to 1,400 feet were placed around each location. Negative binomial regression models were used to estimate the relationship between the count of violent crimes and the distance from each business type. Results: Compared with the mean count of violent crime around drug treatment centers, the mean count of violent crime (M = 2.87) was significantly higher around liquor stores (M = 3.98; t test; p < .01) and corner stores (M = 3.78; t test; p < .01), and there was no statistically significant difference between the count around convenience stores (M = 2.65; t test; p = .32). In the adjusted negative binomial regression models, there was a negative and significant relationship between the count of violent crime and the distance from drug treatment centers (β = -.069, p < .01), liquor stores (β = -.081, p < .01), corner stores (β = -.116, p < .01), and convenience stores (β = -.154, p < .01). Conclusions: Violent crime associated with drug treatment centers is similar to that associated with liquor stores and is less frequent than that associated with convenience stores and corner stores. PMID:26751351
The IDEA model: A single equation approach to the Ebola forecasting challenge.
Tuite, Ashleigh R; Fisman, David N
2018-03-01
Mathematical modeling is increasingly accepted as a tool that can inform disease control policy in the face of emerging infectious diseases, such as the 2014-2015 West African Ebola epidemic, but little is known about the relative performance of alternate forecasting approaches. The RAPIDD Ebola Forecasting Challenge (REFC) tested the ability of eight mathematical models to generate useful forecasts in the face of simulated Ebola outbreaks. We used a simple, phenomenological single-equation model (the "IDEA" model), which relies only on case counts, in the REFC. Model fits were performed using a maximum likelihood approach. We found that the model performed reasonably well relative to other more complex approaches, with performance metrics ranked on average 4th or 5th among participating models. IDEA appeared better suited to long- than short-term forecasts, and could be fit using nothing but reported case counts. Several limitations were identified, including difficulty in identifying epidemic peak (even retrospectively), unrealistically precise confidence intervals, and difficulty interpolating daily case counts when using a model scaled to epidemic generation time. More realistic confidence intervals were generated when case counts were assumed to follow a negative binomial, rather than Poisson, distribution. Nonetheless, IDEA represents a simple phenomenological model, easily implemented in widely available software packages that could be used by frontline public health personnel to generate forecasts with accuracy that approximates that which is achieved using more complex methodologies. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
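The abstract's finding that negative binomial case counts give more realistic confidence intervals than Poisson counts can be illustrated numerically. This sketch computes the upper 97.5% bound of each distribution by direct summation; the forecast mean and dispersion are hypothetical, not values from the REFC:

```python
import math

def poisson_pmf(k, mu):
    return math.exp(k * math.log(mu) - mu - math.lgamma(k + 1))

def nb_pmf(k, mu, alpha):
    r = 1.0 / alpha
    p = r / (r + mu)
    return math.exp(math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
                    + r * math.log(p) + k * math.log(1.0 - p))

def upper_quantile(pmf, q=0.975, support=5000):
    """Smallest k whose cumulative probability reaches q."""
    acc = 0.0
    for k in range(support):
        acc += pmf(k)
        if acc >= q:
            return k
    return support

mu, alpha = 100.0, 0.3   # hypothetical forecast mean and dispersion
poisson_hi = upper_quantile(lambda k: poisson_pmf(k, mu))
nb_hi = upper_quantile(lambda k: nb_pmf(k, mu, alpha))
# the NB bound is far wider: it reflects Var = mu + alpha*mu**2 instead of Var = mu
assert nb_hi > poisson_hi
```

Because the Poisson variance is pinned to the mean, its intervals shrink unrealistically as counts grow; the NB dispersion term keeps them appropriately wide.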
Effect of Divalent Cation Removal on the Structure of Gram-Negative Bacterial Outer Membrane Models
2014-01-01
The Gram-negative bacterial outer membrane (GNB-OM) is asymmetric in its lipid composition with a phospholipid-rich inner leaflet and an outer leaflet predominantly composed of lipopolysaccharides (LPS). LPS are polyanionic molecules, with numerous phosphate groups present in the lipid A and core oligosaccharide regions. The repulsive forces due to accumulation of the negative charges are screened and bridged by the divalent cations (Mg2+ and Ca2+) that are known to be crucial for the integrity of the bacterial OM. Indeed, chelation of divalent cations is a well-established method to permeabilize Gram-negative bacteria such as Escherichia coli. Here, we use X-ray and neutron reflectivity (XRR and NR, respectively) techniques to examine the role of calcium ions in the stability of a model GNB-OM. Using XRR we show that Ca2+ binds to the core region of the rough mutant LPS (RaLPS) films, producing more ordered structures in comparison to divalent cation free monolayers. Using recently developed solid-supported models of the GNB-OM, we study the effect of calcium removal on the asymmetry of DPPC:RaLPS bilayers. We show that without the charge screening effect of divalent cations, the LPS is forced to overcome the thermodynamically unfavorable energy barrier and flip across the hydrophobic bilayer to minimize the repulsive electrostatic forces, resulting in about 20% mixing of LPS and DPPC between the inner and outer bilayer leaflets. These results reveal for the first time the molecular details behind the well-known mechanism of outer membrane stabilization by divalent cations. This confirms the relevance of the asymmetric models for future studies of outer membrane stability and antibiotic penetration. PMID:25489959
The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples
ERIC Educational Resources Information Center
Avetisyan, Marianna; Fox, Jean-Paul
2012-01-01
In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…
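The beta-binomial model named above is a binomial whose success probability is itself Beta-distributed, which captures the extra between-respondent variability in randomized response counts. A self-contained sketch with hypothetical prior parameters (the item count and Beta(2, 5) prior are invented, not taken from the study):

```python
import math

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def betabinom_pmf(k, n, a, b):
    """Beta-binomial pmf: a binomial whose success rate is Beta(a, b)-distributed."""
    return math.exp(math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
                    + log_beta(k + a, n - k + b) - log_beta(a, b))

n, a, b = 10, 2.0, 5.0   # hypothetical: 10 binary RR items, Beta(2, 5) prior
probs = [betabinom_pmf(k, n, a, b) for k in range(n + 1)]
mean = sum(k * pk for k, pk in enumerate(probs))
assert abs(sum(probs) - 1.0) < 1e-9
assert abs(mean - n * a / (a + b)) < 1e-9   # E[K] = n * a / (a + b)
```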
Probst, Tahira M
2005-10-01
This study examined the effectiveness of increased organizational participative decision making in attenuating the negative consequences of job insecurity. Data were collected from 807 employees in 6 different companies. Analyses suggest that job insecurity is related to lower coworker, work, and supervisor satisfaction and higher turnover intentions and work withdrawal behaviors. However, employees with greater participative decision-making opportunities reported fewer negative consequences of job insecurity compared with employees with fewer participative decision-making opportunities. Results are interpreted using the demand-control model and suggest that organizations that allow greater employee participative decision making may experience fewer negative side effects from today's rising levels of employee job insecurity. Copyright (c) 2005 APA, all rights reserved.
Sato, Masanao; Tsuda, Kenichi; Wang, Lin; Coller, John; Watanabe, Yuichiro; Glazebrook, Jane; Katagiri, Fumiaki
2010-01-01
Biological signaling processes may be mediated by complex networks in which network components and network sectors interact with each other in complex ways. Studies of complex networks benefit from approaches in which the roles of individual components are considered in the context of the network. The plant immune signaling network, which controls inducible responses to pathogen attack, is such a complex network. We studied the Arabidopsis immune signaling network upon challenge with a strain of the bacterial pathogen Pseudomonas syringae expressing the effector protein AvrRpt2 (Pto DC3000 AvrRpt2). This bacterial strain feeds multiple inputs into the signaling network, allowing many parts of the network to be activated at once. mRNA profiles for 571 immune response genes of 22 Arabidopsis immunity mutants and wild type were collected 6 hours after inoculation with Pto DC3000 AvrRpt2. The mRNA profiles were analyzed as detailed descriptions of changes in the network state resulting from the genetic perturbations. Regulatory relationships among the genes corresponding to the mutations were inferred by recursively applying a non-linear dimensionality reduction procedure to the mRNA profile data. The resulting static network model accurately predicted 23 of 25 regulatory relationships reported in the literature, suggesting that predictions of novel regulatory relationships are also accurate. The network model revealed two striking features: (i) the components of the network are highly interconnected; and (ii) negative regulatory relationships are common between signaling sectors. Complex regulatory relationships, including a novel negative regulatory relationship between the early microbe-associated molecular pattern-triggered signaling sectors and the salicylic acid sector, were further validated. We propose that prevalent negative regulatory relationships among the signaling sectors make the plant immune signaling network a “sector-switching” network, which
Differential expression analysis for RNAseq using Poisson mixed models
Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny
2017-01-01
Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n <15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three data sets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. PMID:28369632
Differential expression analysis for RNAseq using Poisson mixed models.
Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang
2017-06-20
Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n <15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three data sets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
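The over-dispersion that motivates hierarchical Poisson models can be reproduced by simulation. The sketch below uses a gamma-Poisson mixture (the negative binomial construction mentioned in the abstract), not MACAU's actual random-effects formulation, and all parameter values are invented:

```python
import math
import random

def poisson_draw(rng, lam):
    """Knuth's multiplicative Poisson sampler (adequate for moderate lam)."""
    threshold, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            return k
        k += 1

rng = random.Random(1)
mu, alpha = 5.0, 0.5                       # invented mean and over-dispersion
shape, scale = 1.0 / alpha, alpha * mu     # gamma mixing keeps E = mu
counts = [poisson_draw(rng, rng.gammavariate(shape, scale)) for _ in range(10000)]
m = sum(counts) / len(counts)
v = sum((c - m) ** 2 for c in counts) / len(counts)
# the mixture variance is near mu + alpha*mu**2 = 17.5, well above the Poisson value 5
assert v > m
```

Sampling the rate from a gamma before each Poisson draw is exactly the latent-variable trick that turns an intractable marginal into simple conditional updates.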
FluBreaks: early epidemic detection from Google flu trends.
Pervaiz, Fahad; Pervaiz, Mansoor; Abdur Rehman, Nabeel; Saif, Umar
2012-10-04
The Google Flu Trends service was launched in 2008 to track changes in the volume of online search queries related to flu-like symptoms. Over the last few years, the trend data produced by this service have shown a consistent relationship with the actual number of flu reports collected by the US Centers for Disease Control and Prevention (CDC), often identifying increases in flu cases weeks in advance of CDC records. However, contrary to popular belief, Google Flu Trends is not an early epidemic detection system. Instead, it is designed as a baseline indicator of the trend, or changes, in the number of disease cases. Our objective was to evaluate whether these trends can be used as the basis for an early warning system for epidemics. We present the first detailed algorithmic analysis of how Google Flu Trends can be used as a basis for building a fully automated system for early warning of epidemics in advance of the methods used by the CDC. Based on our work, we present a novel early epidemic detection system, called FluBreaks (dritte.org/flubreaks), based on Google Flu Trends data. We compared the accuracy and practicality of three types of algorithms: normal distribution algorithms, Poisson distribution algorithms, and negative binomial distribution algorithms. We explored the relative merits of these methods, and related our findings to changes in Internet penetration and population size for the regions in Google Flu Trends providing data. Across our performance metrics of percentage true-positives (RTP), percentage false-positives (RFP), percentage overlap (OT), and percentage early alarms (EA), Poisson- and negative binomial-based algorithms performed better on all metrics except RFP. Poisson-based algorithms had average values of 99%, 28%, 71%, and 76% for RTP, RFP, OT, and EA, respectively, whereas negative binomial-based algorithms had average values of 97.8%, 17.8%, 60%, and 55% for RTP, RFP, OT, and EA, respectively. Moreover, the EA was also affected by the region's population size
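A distribution-based alarm of the kind compared above can be sketched in a few lines: estimate a baseline rate from a trailing window of counts and flag the current count if it exceeds a high quantile of the fitted distribution. This toy version uses a Poisson threshold and made-up query counts; it is an illustration of the idea, not the FluBreaks algorithm itself:

```python
import math

def poisson_cdf(k, lam):
    return sum(math.exp(i * math.log(lam) - lam - math.lgamma(i + 1))
               for i in range(k + 1))

def alarm(history, current, q=0.99):
    """Flag `current` as anomalous if it exceeds the Poisson q-quantile of a
    baseline rate estimated from the trailing window of counts."""
    lam = sum(history) / len(history)
    k = 0
    while poisson_cdf(k, lam) < q:
        k += 1
    return current > k

baseline = [12, 9, 11, 10, 13, 8, 11]   # made-up weekly flu-query counts
assert alarm(baseline, 30)              # a sharp rise trips the alarm
assert not alarm(baseline, 13)          # ordinary fluctuation does not
```

Swapping the Poisson quantile for a negative binomial one (as in the block above for the crime counts) raises the threshold when the baseline itself is over-dispersed, which is the mechanism behind the lower false-positive rates the study reports.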
NASA Astrophysics Data System (ADS)
Elkamash, I. S.; Kourakis, I.
2018-05-01
The criteria for occurrence and the dynamical features of electrostatic solitary waves in a homogeneous, unmagnetized ultradense plasma penetrated by a negative ion beam are investigated, relying on a quantum hydrodynamic model. The ionic components are modeled as inertial fluids, while the relativistic electrons obey Fermi-Dirac statistics. A new set of exact analytical conditions for localized solitary pulses to exist is obtained, in terms of plasma density. The algebraic analysis reveals that these depend sensitively on the negative ion beam characteristics, that is, the beam velocity and density. Particular attention is paid to the simultaneous occurrence of positive and negative potential pulses, identified by their respective distinct ambipolar electric field structure forms. It is shown that the coexistence of positive and negative potential pulses occurs in a certain interval of parameter values, where the ion beam inertia becomes significant.
Norton, Kerri-Ann; Jin, Kideok; Popel, Aleksander S
2018-05-08
A hallmark of breast tumors is their spatial heterogeneity, which includes the distribution of cancer stem cells and progenitor cells, but also heterogeneity in the tumor microenvironment. In this study we focus on the contributions of stromal cells, specifically macrophages, fibroblasts, and endothelial cells, to tumor progression. We develop a computational model of triple-negative breast cancer based on our previous work and expand it to include macrophage infiltration, fibroblasts, and angiogenesis. In vitro studies have shown that the secretomes of tumor-educated macrophages and fibroblasts increase both the migration and proliferation rates of triple-negative breast cancer cells. In vivo studies also demonstrated that blocking signaling of selected secreted factors inhibits tumor growth and metastasis in mouse xenograft models. We investigate the influences of increased migration and proliferation rates on tumor growth, the effect of the presence of fibroblasts or macrophages on growth and morphology, and the contributions of macrophage infiltration to tumor growth. We find that while the presence of macrophages increases overall tumor growth, increased macrophage infiltration does not substantially increase tumor growth and can even stifle it at excessive rates. Copyright © 2018. Published by Elsevier Ltd.
Modeling of negative ion transport in a plasma source (invited)
NASA Astrophysics Data System (ADS)
Riz, David; Paméla, Jérôme
1998-02-01
A code called NIETZSCHE has been developed to simulate negative ion transport in a plasma source, from the ions' birth place to the extraction holes. The H-/D- trajectory is calculated by numerically solving the 3D equation of motion, while the atomic processes of destruction, elastic collision with H+/D+, and charge exchange with H0/D0 are handled at each time step by a Monte Carlo procedure. This code can be used to calculate the extraction probability of a negative ion produced at any location inside the source. Calculations performed with NIETZSCHE have explained, either quantitatively or qualitatively, several phenomena observed in negative ion sources, such as the isotopic H-/D- effect and the influence of the plasma grid bias or of the magnetic filter on negative ion extraction. The code has also shown that, in the type of sources contemplated for ITER, which operate at large arc power densities (>1 W cm-3), negative ions can reach the extraction region provided they are produced at a distance of less than 2 cm from the plasma grid in the case of volume production (dissociative attachment processes), or if they are produced at the plasma grid surface in the vicinity of the extraction holes.
Analysis of dengue fever risk using geostatistics model in bone regency
NASA Astrophysics Data System (ADS)
Amran, Stang, Mallongi, Anwar
2017-03-01
The aim of this research is to analyze dengue fever risk using a geostatistical model in Bone Regency. Risk levels of dengue fever are represented by the parameter of a binomial distribution. The effects of temperature, rainfall, elevation, and larvae abundance are investigated through the geostatistical model, with a Bayesian hierarchical method used for estimation. Using dengue fever data from eleven locations, this research shows that temperature and rainfall have significant effects on dengue fever risk in Bone Regency.
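The binomial-risk building block of such a Bayesian hierarchical model can be illustrated with a conjugate update. The actual geostatistical model also includes covariates and spatial effects; the prior and the counts below are invented purely for illustration:

```python
# Conjugate beta-binomial update for a single location's infection risk.
# Prior Beta(a0, b0); observe y cases among n people at risk (all values invented).
a0, b0 = 1.0, 1.0                  # flat prior
y, n = 18, 450                     # hypothetical counts for one location
a_post, b_post = a0 + y, b0 + (n - y)
risk_mean = a_post / (a_post + b_post)          # posterior mean risk
risk_var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1.0))
assert abs(risk_mean - 19.0 / 452.0) < 1e-12    # near the raw rate 18/450
assert risk_var > 0.0
```

In the full hierarchical model this per-location binomial parameter is linked (e.g. through a logit) to temperature, rainfall, elevation, and larvae abundance, and the posterior is obtained by MCMC rather than in closed form.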
Modeling and design of a beam emission spectroscopy diagnostic for the negative ion source NIO1
NASA Astrophysics Data System (ADS)
Barbisan, M.; Zaniol, B.; Cavenago, M.; Pasqualotto, R.
2014-02-01
Consorzio RFX and INFN-LNL are building a flexible small ion source (Negative Ion Optimization 1, NIO1) capable of producing about 130 mA of H- ions accelerated to 60 keV. The aim of the experiment is to test and develop the instrumentation for SPIDER and MITICA, the prototypes, respectively, of the negative ion sources and of the whole neutral beam injectors which will operate in the ITER experiment. Like SPIDER and MITICA, NIO1 will be monitored with beam emission spectroscopy (BES), a non-invasive diagnostic based on the analysis of the spectrum of the Hα emission produced by the interaction of the energetic ions with the background gas. The aim of BES is to monitor the direction, divergence, and uniformity of the ion beam. The precision of these measurements depends on a number of factors related to the physics of production and acceleration of the negative ions, to the geometry of the beam, and to the collection optics. These elements were considered in a set of codes developed to identify the configuration of the diagnostic which minimizes the measurement errors. The model was already used to design the BES diagnostics for SPIDER and MITICA. The paper presents the model and describes its application to the design of the BES diagnostic in NIO1.
Tobit analysis of vehicle accident rates on interstate highways.
Anastasopoulos, Panagiotis Ch; Tarko, Andrew P; Mannering, Fred L
2008-03-01
There has been an abundance of research that has used Poisson models and their variants (negative binomial and zero-inflated models) to improve our understanding of the factors that affect accident frequencies on roadway segments. This study explores the application of an alternative method, tobit regression, by viewing vehicle accident rates directly (instead of frequencies) as a continuous variable that is left-censored at zero. Using data from vehicle accidents on Indiana interstates, the estimation results show that many factors relating to pavement condition, roadway geometrics, and traffic characteristics significantly affect vehicle accident rates.
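The tobit likelihood treats zero rates as censored observations of a latent normal variable: segments with no accidents contribute the probability mass below the censoring point, while positive rates contribute the normal density. A minimal sketch of the per-observation log-likelihood, with hypothetical rates and fixed (not estimated) parameters:

```python
import math

SQRT2 = math.sqrt(2.0)
SQRT2PI = math.sqrt(2.0 * math.pi)

def tobit_loglik(y, xb, sigma):
    """Per-observation log-likelihood for a tobit model left-censored at zero:
    censored observations contribute log P(y* <= 0), uncensored ones the
    log normal density of the observed rate."""
    if y <= 0.0:
        return math.log(0.5 * (1.0 + math.erf(-xb / (sigma * SQRT2))))
    z = (y - xb) / sigma
    return -0.5 * z * z - math.log(sigma * SQRT2PI)

# hypothetical accident rates (zeros are censored) and invented parameters
rates = [0.0, 0.0, 1.2, 0.4, 0.0, 2.1]
ll = sum(tobit_loglik(y, xb=0.5, sigma=1.0) for y in rates)
assert ll < 0.0
```

Maximizing this sum over the regression coefficients and sigma yields the tobit estimates; the censored term is what distinguishes it from ordinary least squares on the positive rates alone.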
Effect of Divalent Cation Removal on the Structure of Gram-Negative Bacterial Outer Membrane Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clifton, Luke A.; Skoda, Maximilian W. A.; Le Brun, Anton P.
The Gram-negative bacterial outer membrane (GNB-OM) is asymmetric in its lipid composition with a phospholipid-rich inner leaflet and an outer leaflet predominantly composed of lipopolysaccharides (LPS). LPS are polyanionic molecules, with numerous phosphate groups present in the lipid A and core oligosaccharide regions. The repulsive forces due to accumulation of the negative charges are screened and bridged by the divalent cations (Mg2+ and Ca2+) that are known to be crucial for the integrity of the bacterial OM. Indeed, chelation of divalent cations is a well-established method to permeabilize Gram-negative bacteria such as Escherichia coli. Here, we use X-ray and neutron reflectivity (XRR and NR, respectively) techniques to examine the role of calcium ions in the stability of a model GNB-OM. Using XRR we show that Ca2+ binds to the core region of the rough mutant LPS (RaLPS) films, producing more ordered structures in comparison to divalent cation free monolayers. Using recently developed solid-supported models of the GNB-OM, we study the effect of calcium removal on the asymmetry of DPPC:RaLPS bilayers. We show that without the charge screening effect of divalent cations, the LPS is forced to overcome the thermodynamically unfavorable energy barrier and flip across the hydrophobic bilayer to minimize the repulsive electrostatic forces, resulting in about 20% mixing of LPS and DPPC between the inner and outer bilayer leaflets. These results reveal for the first time the molecular details behind the well-known mechanism of outer membrane stabilization by divalent cations. This confirms the relevance of the asymmetric models for future studies of outer membrane stability and antibiotic penetration.
Effect of Divalent Cation Removal on the Structure of Gram-Negative Bacterial Outer Membrane Models
Clifton, Luke A.; Skoda, Maximilian W. A.; Le Brun, Anton P.; ...
2014-12-09
The Gram-negative bacterial outer membrane (GNB-OM) is asymmetric in its lipid composition with a phospholipid-rich inner leaflet and an outer leaflet predominantly composed of lipopolysaccharides (LPS). LPS are polyanionic molecules, with numerous phosphate groups present in the lipid A and core oligosaccharide regions. The repulsive forces due to accumulation of the negative charges are screened and bridged by the divalent cations (Mg2+ and Ca2+) that are known to be crucial for the integrity of the bacterial OM. Indeed, chelation of divalent cations is a well-established method to permeabilize Gram-negative bacteria such as Escherichia coli. Here, we use X-ray and neutron reflectivity (XRR and NR, respectively) techniques to examine the role of calcium ions in the stability of a model GNB-OM. Using XRR we show that Ca2+ binds to the core region of the rough mutant LPS (RaLPS) films, producing more ordered structures in comparison to divalent cation free monolayers. Using recently developed solid-supported models of the GNB-OM, we study the effect of calcium removal on the asymmetry of DPPC:RaLPS bilayers. We show that without the charge screening effect of divalent cations, the LPS is forced to overcome the thermodynamically unfavorable energy barrier and flip across the hydrophobic bilayer to minimize the repulsive electrostatic forces, resulting in about 20% mixing of LPS and DPPC between the inner and outer bilayer leaflets. These results reveal for the first time the molecular details behind the well-known mechanism of outer membrane stabilization by divalent cations. This confirms the relevance of the asymmetric models for future studies of outer membrane stability and antibiotic penetration.
Coe, J.A.; Michael, J.A.; Crovelli, R.A.; Savage, W.Z.; Laprade, W.T.; Nashem, W.D.
2004-01-01
Ninety years of historical landslide records were used as input to the Poisson and binomial probability models. Results from these models show that, for precipitation-triggered landslides, approximately 9 percent of the area of Seattle has annual exceedance probabilities of 1 percent or greater. Application of the Poisson model for estimating the future occurrence of individual landslides results in a worst-case scenario map, with a maximum annual exceedance probability of 25 percent on a hillslope near Duwamish Head in West Seattle. Application of the binomial model for estimating the future occurrence of a year with one or more landslides results in a map with a maximum annual exceedance probability of 17 percent (also near Duwamish Head). Slope and geology both play a role in localizing the occurrence of landslides in Seattle. A positive correlation exists between slope and mean exceedance probability, with probability tending to increase as slope increases. Sixty-four percent of all historical landslide locations are within 150 m (500 ft, horizontal distance) of the Esperance Sand/Lawton Clay contact, but within this zone, no positive or negative correlation exists between exceedance probability and distance to the contact.
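Both probability models used in the landslide study reduce to simple closed forms. A minimal sketch with hypothetical counts (the 26 landslides over 17 distinct years in a 90-year record below are illustrative numbers, not Seattle's actual data): under the Poisson model the annual exceedance probability is 1 − exp(−λ) for an estimated annual rate λ, while the binomial model uses the fraction of record years with at least one landslide.

```python
import math

def poisson_exceedance(event_count, record_years):
    """Annual probability of one or more events under a Poisson model,
    with the mean annual rate estimated from the historical record."""
    lam = event_count / record_years
    return 1.0 - math.exp(-lam)

def binomial_exceedance(years_with_events, record_years):
    """Annual probability of a year with one or more events (binomial model)."""
    return years_with_events / record_years

# Hypothetical hillslope: 26 landslides in a 90-year record,
# occurring in 17 distinct years
print(round(poisson_exceedance(26, 90), 3))   # 0.251
print(round(binomial_exceedance(17, 90), 3))  # 0.189
```

The Poisson figure exceeds the binomial one because it counts multiple landslides per year, which is why the abstract describes the Poisson map as a worst-case scenario.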
Cohen, Joseph R.; Hankin, Benjamin L.; Gibb, Brandon E.; Hammen, Constance; Hazel, Nicholas A.; Ma, Denise; Yao, Shuqiao; Zhu, Xiong Zhao; Abela, John R.Z.
2014-01-01
Objective The present study examined the relation between attachment cognitions, stressors, and emotional distress in a sample of Chinese adolescents. Specifically, it was examined whether negative attachment cognitions predicted depression and anxiety symptoms, and if a vulnerability-stress or stress generation model best explained the relation between negative attachment cognitions and internalizing symptoms. Method Participants included 558 adolescents (310 females and 248 males) from an urban school in Changsha, and 592 adolescents (287 females and 305 males) from a rural school in Liuyang, both in Hunan province located in mainland China. Participants completed self-report measures of negative attachment cognitions at baseline, and self-report measures of negative events, depression symptoms, and anxiety symptoms at baseline and at regular one month intervals for an overall 6-month follow-up (i.e., six follow-up assessments). Results Higher levels of negative attachment cognitions predicted prospective depression and anxiety symptoms. Furthermore, support was found for a stress generation model that partially mediated this longitudinal association. No support was found for a vulnerability-stress model. Conclusion Overall, these findings highlight new developmental pathways for development of depression and anxiety symptoms in mainland Chinese adolescents. PMID:23237030
Valiente, Carlos; Eisenberg, Nancy; Shepard, Stephanie A.; Fabes, Richard A.; Cumberland, Amanda J.; Losoya, Sandra H.; Spinrad, Tracy L.
2010-01-01
Guided by the heuristic model proposed by Eisenberg et al. [Psychol. Inq. 9 (1998) 241], we examined the relations of mothers’ reported and observed negative expressivity to children’s (N = 159; 74 girls; M age = 7.67 years) experience and expression of emotion. Children’s experience and/or expression of emotion in response to a distressing film were measured with facial, heart rate, and self-report measures. Children’s heart rate and facial distress were modestly positively related. Children’s facial distress was significantly positively related to mothers’ reports of negative (dominant and submissive) expressivity; the positive relation between children’s facial distress and mothers’ observed negative expressivity approached the conventional level of significance. Moreover, mothers’ observed negative expressivity was significantly negatively related to children’s heart rate reactivity during the conflict film. The positive relation between children’s reported distress and mothers’ observed negative expressivity approached the conventional level of significance. Several possible explanations for the pattern of findings are discussed. PMID:20617103
Pieper, Laura; Sorge, Ulrike S; DeVries, Trevor J; Godkin, Ann; Lissemore, Kerry; Kelton, David F
2015-10-01
Johne's disease (JD) is a production-limiting gastrointestinal disease in cattle. To minimize the effects of JD, the Ontario dairy industry launched the Ontario Johne's Education and Management Assistance Program in 2010. As part of the program, trained veterinarians conducted a risk assessment and management plan (RAMP), an on-farm questionnaire where high RAMP scores are associated with high risk of JD transmission. Subsequently, veterinarians recommended farm-specific management practices for JD prevention. Milk or serum ELISA results from the milking herd were used to determine the herd ELISA status (HES) and within-herd prevalence. After 3.5 yr of implementation of the program, the aim of this study was to evaluate the associations among RAMP scores, HES, and recommendations. Data from 2,103 herds were available for the analyses. A zero-inflated negative binomial model for the prediction of the number of ELISA-positive animals per farm was built. The model included individual RAMP questions about purchasing animals in the logistic portion, indicating risks for between-herd transmission, and purchasing bulls, birth of calves outside the designated calving area, colostrum and milk feeding management, and adult cow environmental hygiene in the negative binomial portion, indicating risk factors for within-herd transmission. However, farms which fed low-risk milk compared with milk replacer had fewer seropositive animals. The model additionally included the JD herd history in the negative binomial and the logistic portion, indicating that herds with a JD herd history were more likely to have at least 1 positive animal and to have a higher number of positive animals. Generally, a positive association was noted between RAMP scores and the odds of receiving a recommendation for the respective risk area; however, the relationship was not always linear. For general JD risk and calving area risk, seropositive herds had higher odds of receiving recommendations compared
Robinson, Brent M; Elias, Lorin J
2005-04-01
Repeated exposure to a nonreinforced stimulus results in an increased preference for that stimulus, a phenomenon known as the mere exposure effect. The present study repeatedly presented positive, negative, and neutrally affective faces to 48 participants while they made judgments about the emotional expression. Participants then rated the likeability of novel neutrally expressive faces and some of these previously presented faces, this time in their neutral expression. Faces originally presented as happy were rated as the most likeable, followed by faces originally presented as neutral. Negative and novel faces were not rated significantly differently from each other. These findings support the notion that the increase in preference towards repeatedly presented stimuli is the result of the reduction in negative affect, consistent with the modified two-factor uncertainty-reduction model and classical conditioning model of the mere exposure effect.
Distribution pattern of phthirapterans infesting certain common Indian birds.
Saxena, A K; Kumar, Sandeep; Gupta, Nidhi; Mitra, J D; Ali, S A; Srivastava, Roshni
2007-08-01
The prevalence and frequency distribution patterns of 10 phthirapteran species infesting house sparrows, Indian parakeets, common mynas, and white breasted kingfishers were recorded in the district of Rampur, India, during 2004-05. The sample mean abundances, mean intensities, range of infestations, variance to mean ratios, values of the exponent of the negative binomial distribution, and the indices of discrepancy were also computed. Frequency distribution patterns of all phthirapteran species were skewed, but the observed frequencies did not correspond to the negative binomial distribution. The adult-nymph ratios varied across species from 1:0.53 to 1:1.25. Sex ratios of different phthirapteran species ranged from 1:1.10 to 1:1.65 and were female biased.
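The variance-to-mean ratio and the moment estimate of the negative binomial exponent k reported in parasitological studies like this one can be computed directly from raw counts. A minimal sketch with hypothetical louse counts (k values well below 1, as here, indicate strong aggregation of parasites on a few hosts):

```python
from statistics import mean, pvariance

def aggregation_indices(counts):
    """Variance-to-mean ratio and the moment estimate of the negative binomial
    exponent k = m^2 / (s^2 - m); k is defined only for overdispersed
    (aggregated) counts, where the variance exceeds the mean."""
    m = mean(counts)
    s2 = pvariance(counts)
    vmr = s2 / m
    k = m * m / (s2 - m) if s2 > m else float("inf")
    return vmr, k

# Hypothetical louse counts from 8 sampled birds
vmr, k = aggregation_indices([0, 0, 0, 1, 1, 2, 5, 11])
print(round(vmr, 2), round(k, 2))  # 5.1 0.61 -- strongly aggregated
```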
Moraleda, Cinta; de Deus, Nilsa; Serna-Bolea, Celia; Renom, Montse; Quintó, Llorenç; Macete, Eusebio; Menéndez, Clara; Naniche, Denise
2014-02-01
Up to 30% of infants may be HIV-exposed noninfected (ENI) in countries with high HIV prevalence, but the impact of maternal HIV on the child's health remains unclear. One hundred fifty-eight HIV ENI and 160 unexposed (UE) Mozambican infants were evaluated at 1, 3, 9, and 12 months postdelivery. At each visit, a questionnaire was administered, and HIV DNA polymerase chain reaction and hematologic and CD4/CD8 determinations were performed. Linear mixed models were used to evaluate differences in hematologic parameters and T-cell counts between the study groups. All outpatient visits and admissions were registered. ENI infants received cotrimoxazole prophylaxis (CTXP). Negative binomial regression models were estimated to compare incidence rates of outpatient visits and admissions. Hematocrit was lower in ENI than in UE infants at 1, 3, and 9 months of age (P = 0.024, 0.025, and 0.012, respectively). Percentage of CD4 T cells was 3% lower (95% confidence interval: 0.86 to 5.15; P = 0.006) and percentage of CD8 T cells 1.15 times higher (95% confidence interval: 1.06 to 1.25; P = 0.001) in ENI vs. UE infants. ENI infants had a lower weight-for-age Z score (P = 0.049) but reduced incidence of outpatient visits, overall (P = 0.042), diarrhea (P = 0.001), and respiratory conditions (P = 0.042). ENI children were more frequently anemic, had poorer nutritional status, and alterations in some immunologic profiles compared with UE children. CTXP may explain their reduced mild morbidity. These findings may reinforce continuation of CTXP and the need to understand the consequences of maternal HIV exposure in this vulnerable group of children.
ERIC Educational Resources Information Center
Hutchinson, Delyse M.; Rapee, Ronald M.; Taylor, Alan
2010-01-01
This study tested five proposed models of the relationship of negative affect and peer factors in early adolescent body dissatisfaction, dieting, and bulimic behaviors. A large community sample of girls in early adolescence was assessed via questionnaire (mean age = 12.3 years). Structural equation modeling (SEM) indicated that negative…
Numerical renormalization group method for entanglement negativity at finite temperature
NASA Astrophysics Data System (ADS)
Shim, Jeongmin; Sim, H.-S.; Lee, Seung-Sup B.
2018-04-01
We develop a numerical method to compute the negativity, an entanglement measure for mixed states, between the impurity and the bath in quantum impurity systems at finite temperature. We construct a thermal density matrix by using the numerical renormalization group (NRG), and evaluate the negativity by implementing the NRG approximation that reduces computational cost exponentially. We apply the method to the single-impurity Kondo model and the single-impurity Anderson model. In the Kondo model, the negativity exhibits a power-law scaling at temperature much lower than the Kondo temperature and a sudden death at high temperature. In the Anderson model, the charge fluctuation of the impurity contributes to the negativity even at zero temperature when the on-site Coulomb repulsion of the impurity is finite, while at low temperature the negativity between the impurity spin and the bath exhibits the same power-law scaling behavior as in the Kondo model.
3D printing the pterygopalatine fossa: a negative space model of a complex structure.
Bannon, Ross; Parihar, Shivani; Skarparis, Yiannis; Varsou, Ourania; Cezayirli, Enis
2018-02-01
The pterygopalatine fossa is one of the most complex anatomical regions to understand. It is poorly visualized in cadaveric dissection and most textbooks rely on schematic depictions. We describe our approach to creating a low-cost, 3D model of the pterygopalatine fossa, including its associated canals and foramina, using an affordable "desktop" 3D printer. We used open source software to create a volume render of the pterygopalatine fossa from axial slices of a head computerised tomography scan. These data were then exported to a 3D printer to produce an anatomically accurate model. The resulting 'negative space' model of the pterygopalatine fossa provides a useful and innovative aid for understanding the complex anatomical relationships of the pterygopalatine fossa. This model was designed primarily for medical students; however, it will also be of interest to postgraduates in ENT, ophthalmology, neurosurgery, and radiology. The technical process described may be replicated by other departments wishing to develop their own anatomical models whilst incurring minimal costs.
Hospital nurses' attitudes, negative perceptions, and negative acts regarding workplace bullying.
Ma, Shu-Ching; Wang, Hsiu-Hung; Chien, Tsair-Wei
2017-01-01
Workplace bullying is a prevalent problem in today's workplaces that has adverse effects on both bullying victims and organizations. Identifying the predictors of workplace bullying is an important step toward preventing nurses in hospitals from becoming bullying victims. This study aims to explore the relationships among nurses' attitudes, negative perceptions, and negative acts regarding workplace bullying under the framework of the theory of planned behavior (TPB). A total of 811 nurses from three hospitals in Taiwan were surveyed. Nurses' responses to the 201 items of 10 scales were calibrated using Rasch analysis and then subjected to path analysis with partial least-squares structural equation modeling (PLS-SEM). Instrumental attitude was a significant predictor of nurses' negative perceptions of being bullied in the workplace. In contrast, the other TPB components, subjective norm and perceived behavioral control, were not effective predictors of nurses' negative acts regarding workplace bullying. The findings provide hospital nurse management with important implications for the prevention of bullying, particularly for those tasked with providing safer and more productive workplaces for hospital nurses. Raising awareness of workplace bullying in other kinds of workplaces is recommended for future studies.
Rescreening of persons with a negative colonoscopy result: results from a microsimulation model.
Knudsen, Amy B; Hur, Chin; Gazelle, G Scott; Schrag, Deborah; McFarland, Elizabeth G; Kuntz, Karen M
2012-11-06
Persons with a negative result on screening colonoscopy are recommended to repeat the procedure in 10 years. To assess the effectiveness and costs of colonoscopy versus other rescreening strategies after an initial negative colonoscopy result. Microsimulation model. Literature and data from the Surveillance, Epidemiology, and End Results program. Persons aged 50 years who had no adenomas or cancer detected on screening colonoscopy. Lifetime. Societal. No further screening or rescreening starting at age 60 years with colonoscopy every 10 years, annual highly sensitive guaiac fecal occult blood testing (HSFOBT), annual fecal immunochemical testing (FIT), or computed tomographic colonography (CTC) every 5 years. Lifetime cases of colorectal cancer, life expectancy, and lifetime costs per 1000 persons, assuming either perfect or imperfect adherence. Rescreening with any method substantially reduced the risk for colorectal cancer compared with no further screening (range, 7.7 to 12.6 lifetime cases per 1000 persons [perfect adherence] and 17.7 to 20.9 lifetime cases per 1000 persons [imperfect adherence] vs. 31.3 lifetime cases per 1000 persons with no further screening). In both adherence scenarios, the differences in life-years across rescreening strategies were small (range, 30 893 to 30 902 life-years per 1000 persons [perfect adherence] vs. 30 865 to 30 869 life-years per 1000 persons [imperfect adherence]). Rescreening with HSFOBT, FIT, or CTC had fewer complications and was less costly than continuing colonoscopy. Results were sensitive to test-specific adherence rates. Data on adherence to rescreening were limited. Compared with the currently recommended strategy of continuing colonoscopy every 10 years after an initial negative examination, rescreening at age 60 years with annual HSFOBT, annual FIT, or CTC every 5 years provides approximately the same benefit in life-years with fewer complications at a lower cost. Therefore, it is reasonable to use other
Topics in Bayesian Hierarchical Modeling and its Monte Carlo Computations
NASA Astrophysics Data System (ADS)
Tak, Hyung Suk
The first chapter addresses a Beta-Binomial-Logit model that is a Beta-Binomial conjugate hierarchical model with covariate information incorporated via a logistic regression. Various researchers in the literature have unknowingly used improper posterior distributions or have given incorrect statements about posterior propriety because checking posterior propriety can be challenging due to the complicated functional form of a Beta-Binomial-Logit model. We derive data-dependent necessary and sufficient conditions for posterior propriety within a class of hyper-prior distributions that encompass those used in previous studies. Frequency coverage properties of several hyper-prior distributions are also investigated to see when and whether Bayesian interval estimates of random effects meet their nominal confidence levels. The second chapter deals with a time delay estimation problem in astrophysics. When the gravitational field of an intervening galaxy between a quasar and the Earth is strong enough to split light into two or more images, the time delay is defined as the difference between their travel times. The time delay can be used to constrain cosmological parameters and can be inferred from the time series of brightness data of each image. To estimate the time delay, we construct a Gaussian hierarchical model based on a state-space representation for irregularly observed time series generated by a latent continuous-time Ornstein-Uhlenbeck process. Our Bayesian approach jointly infers model parameters via a Gibbs sampler. We also introduce a profile likelihood of the time delay as an approximation of its marginal posterior distribution. The last chapter specifies a repelling-attracting Metropolis algorithm, a new Markov chain Monte Carlo method to explore multi-modal distributions in a simple and fast manner. This algorithm is essentially a Metropolis-Hastings algorithm with a proposal that consists of a downhill move in density that aims to make local modes
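The repelling-attracting Metropolis algorithm in the last chapter extends plain random-walk Metropolis, whose tendency to stay trapped in one mode of a multi-modal target is the problem it addresses. A minimal baseline sketch (standard Metropolis-Hastings with a symmetric Gaussian proposal, not the repelling-attracting variant) on a bimodal mixture target:

```python
import math
import random

def metropolis(logpdf, x0, steps, scale, seed=0):
    """Plain random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, pi(proposal)/pi(current))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)
        if math.log(rng.random() + 1e-300) < logpdf(prop) - logpdf(x):
            x = prop
        samples.append(x)
    return samples

# Bimodal target: equal mixture of N(-3, 1) and N(3, 1)
def log_target(x):
    return math.log(0.5 * math.exp(-0.5 * (x + 3) ** 2)
                    + 0.5 * math.exp(-0.5 * (x - 3) ** 2))

draws = metropolis(log_target, 0.0, 20000, 2.5)
print(min(draws) < -2 and max(draws) > 2)  # whether both modes were visited
```

With a small proposal scale this sampler rarely crosses between modes; the repelling-attracting proposal's downhill-then-uphill moves are designed to make such crossings routine.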
Tarr, Gillian A M; Eickhoff, Jens C; Koepke, Ruth; Hopfensperger, Daniel J; Davis, Jeffrey P; Conway, James H
2013-07-15
Pertussis remains difficult to control. Imperfect sensitivity of diagnostic tests and lack of specific guidance regarding interpretation of negative test results among patients with compatible symptoms may contribute to its spread. In this study, we examined whether additional pertussis cases could be identified if persons with negative pertussis test results were routinely investigated. We conducted interviews among 250 subjects aged ≤18 years with pertussis polymerase chain reaction (PCR) results reported from 2 reference laboratories in Wisconsin during July-September 2010 to determine whether their illnesses met the Centers for Disease Control and Prevention's clinical case definition (CCD) for pertussis. PCR validity measures were calculated using the CCD as the standard for pertussis disease. Two Bayesian latent class models were used to adjust the validity measures for pertussis detectable by 1) culture alone and 2) culture and/or more sensitive measures such as serology. Among 190 PCR-negative subjects, 54 (28%) had illnesses meeting the CCD. In adjusted analyses, PCR sensitivity and the negative predictive value were 1) 94% and 99% and 2) 43% and 87% in the 2 types of models, respectively. The models suggested that public health follow-up of reported pertussis patients with PCR-negative results leads to the detection of more true pertussis cases than follow-up of PCR-positive persons alone. The results also suggest a need for a more specific pertussis CCD.
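The predictive values estimated by the Bayesian latent class models follow Bayes' rule given sensitivity, specificity, and prevalence. A sketch with hedged figures: the 43% sensitivity loosely echoes the value reported under the broader disease definition, but the specificity and prevalence below are assumptions for illustration only.

```python
def predictive_values(sens, spec, prev):
    """Positive and negative predictive values via Bayes' rule."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Assumed figures: 43% sensitivity, 98% specificity, 30% true-disease
# prevalence among tested children
ppv, npv = predictive_values(0.43, 0.98, 0.30)
print(round(npv, 2))  # 0.8 -- low sensitivity drags the NPV down
```

An NPV well below 1 is what makes follow-up of PCR-negative patients worthwhile: a substantial fraction of negatives are true cases.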
NASA Astrophysics Data System (ADS)
Chen, Xiaoyue; Lan, Lei; Lu, Hailiang; Wang, Yu; Wen, Xishan; Du, Xinyu; He, Wangling
2017-10-01
A numerical simulation method for negative direct current (DC) corona discharge based on a plasma chemical model is presented, using a coaxial cylindrical gap. The model considers 15 particle species, 61 kinds of collision reactions involving electrons, and 22 kinds of reactions between ions. Based on this method, continuous Trichel pulses are calculated on a timescale of about 100 μs, and the microscopic physicochemical processes of negative DC corona discharge in three different periods are discussed. The results show that the amplitude of the Trichel pulses is between 1 and 2 mA, and that the pulse interval is on the order of 10^-5 s. The positive ions produced by avalanche ionization enhance the electric field near the cathode at the beginning of the pulse and then disappear at the cathode surface; the electric field decreases and the pulse ceases to develop. The negative ions produced by attachment slowly move away from the cathode, and the electric field increases gradually until the next pulse begins to develop. The positive and negative ions with the highest density during the corona discharge process are O4+ and O3-, respectively.
Fitting Cure Rate Model to Breast Cancer Data of Cancer Research Center.
Baghestani, Ahmad Reza; Zayeri, Farid; Akbari, Mohammad Esmaeil; Shojaee, Leyla; Khadembashi, Naghmeh; Shahmirzalou, Parviz
2015-01-01
The Cox PH model is one of the most important statistical models for studying patient survival. However, for patients with long-term survival it may not be the most appropriate, and a cure rate model is more suitable. The purpose of this study was to determine clinical factors associated with the cure rate of patients with breast cancer. To find factors affecting the cure rate (response), a non-mixture cure rate model with a negative binomial distribution for the latent variable was used. The variables selected were cancer recurrence, HER2 status, estrogen receptor (ER) and progesterone receptor (PR) status, tumor size, cancer grade, cancer stage, type of surgery, age at diagnosis, and number of removed positive lymph nodes. All analyses were performed using the PROC MCMC procedure in SAS 9.2. The mean (SD) age of patients was 48.9 (11.1) years. For these patients, the 1-, 5-, and 10-year survival rates were 95, 79, and 50 percent, respectively. All of the mentioned variables affected the cure fraction. The Kaplan-Meier curve supported the use of a cure model. ER and PR positivity, unlike the other variables, increased the probability of cure. In the present study, a Weibull distribution was used to analyse survival times; fitting the model with other distributions such as the log-normal and log-logistic, and with other distributions for the latent variable, is recommended.
Kizilcik, Isilay N; Gregory, Bree; Baillie, Andrew J; Crome, Erica
2016-01-01
Cognitive-behavioural models propose that excessive fear of negative evaluation is central to social anxiety. Moscovitch (2009) instead proposes that perceived deficiencies in three self attributes: fears of showing signs of anxiety, deficits in physical appearance, or deficits in social competence are at the core of social anxiety. However, these attributes are likely to overlap with fear of negative evaluation. Responses to an online survey of 286 participants with a range of social anxiety severity were analysed using hierarchical multiple regression to identify the overall unique predictive value of Moscovitch's model. Altogether, Moscovitch's model provided improvements in the prediction of safety behaviours, types of fears and cognitions; however only the fear of showing anxiety subscale provided unique information. This research supports further investigations into the utility of this revised model, particularly related to utility of explicitly assessing and addressing fears of showing anxiety. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Eiswerth, Mark E.; Kashian, Russell D.; Skidmore, Mark
2008-11-01
We use contingent behavior (CB) analysis to examine the potential impacts of a hypothetical change in the clarity of a lake. We collect and use both CB and revealed preference data to estimate a pooled negative binomial count data travel cost model. From this analysis we calculate the consumer surplus per angling party day for our case study lake to be about $104, or a total annual consumer surplus of $1.4 million. Using this consumer surplus measure and changes in the intended number of visits obtained from the CB survey, we estimate the loss in consumer surplus associated with a decline in water clarity from 10 to 3 feet (1 foot = 0.3048 m) to be about $522,000 annually (a 38% decrease). Since this is the first such application of CB analysis to estimate the effects of a water clarity change, the study may illustrate a method well suited to analyzing changes in water quality attributes that are easily observable and well understood by recreators.
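In count-data travel cost models such as the pooled negative binomial above, the consumer surplus per trip is the negative reciprocal of the travel cost coefficient. A sketch with a hypothetical coefficient chosen to imply roughly the $104 per party day reported (the coefficient value is an illustration, not the study's estimate):

```python
def consumer_surplus_per_trip(beta_tc):
    """In a count-data (e.g. negative binomial) travel cost model, consumer
    surplus per trip equals -1 / beta_tc, where beta_tc is the (negative)
    coefficient on travel cost in the trip-demand equation."""
    return -1.0 / beta_tc

# Hypothetical coefficient implying about $104 per angling party day
print(round(consumer_surplus_per_trip(-0.00962)))  # 104
```

Multiplying this per-trip surplus by the change in predicted trips under the degraded water clarity scenario yields the aggregate welfare loss the study reports.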
ERIC Educational Resources Information Center
Hall, Kimberly R.; Rushing, Jeri Lynn; Khurshid, Ayesha
2011-01-01
Problem-focused interventions are considered to be one of the most effective group counseling strategies with adolescents. This article describes a problem-focused group counseling model, Solving Problems Together (SPT), that focuses on working with students who struggle with negative peer pressure. Adapted from the teaching philosophy of…
Carter, Evelene M; Potts, Henry W W
2014-04-04
To investigate whether factors can be identified that significantly affect hospital length of stay from those available in an electronic patient record system, using primary total knee replacements as an example. To investigate whether a model can be produced to predict the length of stay based on these factors, to aid resource planning and set patient expectations about their length of stay. Data were extracted from the electronic patient record system for discharges from primary total knee operations from January 2007 to December 2011 (n=2,130) at one UK hospital and analysed for their effect on length of stay using Mann-Whitney and Kruskal-Wallis tests for discrete data and Spearman's correlation coefficient for continuous data. Models for predicting length of stay for primary total knee replacements were tested using Poisson regression and negative binomial modelling techniques. Factors found to have a significant effect on length of stay were age, gender, consultant, discharge destination, deprivation and ethnicity. Applying a negative binomial model to these variables was successful. The model predicted the length of stay of those patients who stayed 4-6 days (~50% of admissions) with 75% accuracy within 2 days (model data). Overall, the model predicted the total days stayed over 5 years to be only 88 days more than actual, a 6.9% uplift (test data). Valuable information can be found about length of stay from the analysis of variables easily extracted from an electronic patient record system. Models can be successfully created to help improve resource planning, and from these a simple decision support system can be produced to help set patient expectations about their length of stay.
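A fitted negative binomial model with a log link predicts mean length of stay as the exponential of the linear predictor. A minimal sketch with illustrative coefficients (the intercept and effect sizes below are assumptions, not the study's fitted values):

```python
import math

def predict_los(intercept, coefs, features):
    """Mean length of stay under a log-link negative binomial model:
    mu = exp(b0 + sum(b_i * x_i))."""
    eta = intercept + sum(b * x for b, x in zip(coefs, features))
    return math.exp(eta)

# Illustrative (not fitted) coefficients: baseline log-stay, an age effect
# per decade over 50, and a female-gender indicator effect
mu = predict_los(1.5, [0.05, 0.10], [2.0, 1.0])
print(round(mu, 1))  # 5.5 -- expected stay in days
```

The negative binomial's extra dispersion parameter leaves this mean prediction unchanged but widens the prediction intervals relative to a Poisson model, which matters when planning bed capacity.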
Pedroza, Claudia; Truong, Van Thi Thanh
2017-11-02
Analyses of multicenter studies often need to account for center clustering to ensure valid inference. For binary outcomes, it is particularly challenging to properly adjust for center when the number of centers or total sample size is small, or when there are few events per center. Our objective was to evaluate the performance of generalized estimating equation (GEE) log-binomial and Poisson models, generalized linear mixed models (GLMMs) assuming binomial and Poisson distributions, and a Bayesian binomial GLMM to account for center effect in these scenarios. We conducted a simulation study with few centers (≤30) and 50 or fewer subjects per center, using both a randomized controlled trial and an observational study design to estimate relative risk. We compared the GEE and GLMM models with a log-binomial model without adjustment for clustering in terms of bias, root mean square error (RMSE), and coverage. For the Bayesian GLMM, we used informative neutral priors that are skeptical of large treatment effects that are almost never observed in studies of medical interventions. All frequentist methods exhibited little bias, and the RMSE was very similar across the models. The binomial GLMM had poor convergence rates, ranging from 27% to 85%, but performed well otherwise. The results show that both GEE models need to use small sample corrections for robust SEs to achieve proper coverage of 95% CIs. The Bayesian GLMM had similar convergence rates but resulted in slightly more biased estimates for the smallest sample sizes. However, it had the smallest RMSE and good coverage across all scenarios. These results were very similar for both study designs. For the analyses of multicenter studies with a binary outcome and few centers, we recommend adjustment for center with either a GEE log-binomial or Poisson model with appropriate small sample corrections or a Bayesian binomial GLMM with informative priors.
Holzman, Jacob B; Burt, Nicole M; Edwards, Erin S; Rosinski, Leanna D; Bridgett, David J
2018-01-01
Temperament by parenting interactions may reflect that individuals with greater risk are more likely to experience negative outcomes in adverse contexts (diathesis-stress) or that these individuals are more susceptible to contextual influences in a 'for better or for worse' pattern (differential susceptibility). Although such interactions have been identified for a variety of child outcomes, prior research has not examined approach characteristics - excitement and approach toward pleasurable activities - in the first year of life. Therefore, the current study investigated whether 6-month maternal-reported infant negative affect - a phenotypic marker of risk/susceptibility - interacted with 8-month observed parenting behaviors (positive parenting, negative parenting) to predict 12-month infant behavioral approach. Based on a sample of mothers and their infants (N = 150), results indicated that negative parenting was inversely associated with subsequent approach for infants with high, but not low, levels of early negative affect. Similar results did not occur regarding positive parenting. These findings better fit a diathesis-stress model than a differential susceptibility model. Implications and limitations of these findings are discussed.
Beyond Natural Numbers: Negative Number Representation in Parietal Cortex
Blair, Kristen P.; Rosenberg-Lee, Miriam; Tsang, Jessica M.; Schwartz, Daniel L.; Menon, Vinod
2012-01-01
Unlike natural numbers, negative numbers do not have natural physical referents. How does the brain represent such abstract mathematical concepts? Two competing hypotheses regarding representational systems for negative numbers are a rule-based model, in which symbolic rules are applied to negative numbers to translate them into positive numbers when assessing magnitudes, and an expanded magnitude model, in which negative numbers have a distinct magnitude representation. Using an event-related functional magnetic resonance imaging design, we examined brain responses in 22 adults while they performed magnitude comparisons of negative and positive numbers that were quantitatively near (difference <4) or far apart (difference >6). Reaction times (RTs) for negative numbers were slower than positive numbers, and both showed a distance effect whereby near pairs took longer to compare. A network of parietal, frontal, and occipital regions were differentially engaged by negative numbers. Specifically, compared to positive numbers, negative number processing resulted in greater activation bilaterally in intraparietal sulcus (IPS), middle frontal gyrus, and inferior lateral occipital cortex. Representational similarity analysis revealed that neural responses in the IPS were more differentiated among positive numbers than among negative numbers, and greater differentiation among negative numbers was associated with faster RTs. Our findings indicate that despite negative numbers engaging the IPS more strongly, the underlying neural representation are less distinct than that of positive numbers. We discuss our findings in the context of the two theoretical models of negative number processing and demonstrate how multivariate approaches can provide novel insights into abstract number representation. PMID:22363276
A Financial Market Model Incorporating Herd Behaviour.
Wray, Christopher M; Bishop, Steven R
2016-01-01
Herd behaviour in financial markets is a recurring phenomenon that exacerbates asset price volatility, and is considered a possible contributor to market fragility. While numerous studies investigate herd behaviour in financial markets, it is often considered without reference to the pricing of financial instruments or other market dynamics. Here, a trader interaction model based upon informational cascades in the presence of information thresholds is used to construct a new model of asset price returns that allows for both quiescent and herd-like regimes. Agent interaction is modelled using a stochastic pulse-coupled network, parametrised by information thresholds and a network coupling probability. Agents may possess either one or two information thresholds that, in each case, determine the number of distinct states an agent may occupy before trading takes place. In the case where agents possess two thresholds (labelled as the finite state-space model, corresponding to agents' accumulating information over a bounded state-space), and where coupling strength is maximal, an asymptotic expression for the cascade-size probability is derived and shown to follow a power law when a critical value of network coupling probability is attained. For a range of model parameters, a mixture of negative binomial distributions is used to approximate the cascade-size distribution. This approximation is subsequently used to express the volatility of model price returns in terms of the model parameter which controls the network coupling probability. In the case where agents possess a single pulse-coupling threshold (labelled as the semi-infinite state-space model corresponding to agents' accumulating information over an unbounded state-space), numerical evidence is presented that demonstrates volatility clustering and long-memory patterns in the volatility of asset returns. Finally, output from the model is compared to both the distribution of historical stock returns and the market
Pautassi, Ricardo M; Nizhnikov, Michael E; Spear, Norman E
2009-06-01
The motivational effects of drugs play a key role during the transition from casual use to abuse and dependence. Ethanol reinforcement has been successfully studied through Pavlovian and operant conditioning in adult rats and mice genetically selected for their ready acceptance of ethanol. Another model for studying ethanol reinforcement is the immature (preweanling) rat, which consumes ethanol and exhibits the capacity to process tactile, odor and taste cues and transfer information between different sensorial modalities. This review describes the motivational effects of ethanol in preweanling, heterogeneous non-selected rats. Preweanlings exhibit ethanol-mediated conditioned taste avoidance and conditioned place aversion. Ethanol's appetitive effects, however, are evident when using first- and second-order conditioning and operant procedures. Ethanol also devalues the motivational representation of aversive stimuli, suggesting early negative reinforcement. It seems that preweanlings are highly sensitive not only to the aversive motivational effects of ethanol but also to its positive and negative (anti-anxiety) reinforcement potential. The review underscores the advantages of using a developing rat to evaluate alcohol's motivational effects.
Power Transmission From The ITER Model Negative Ion Source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boilson, D.; Esch, H. P. L. de; Grand, C.
2007-08-10
In Cadarache, development on negative ion sources is being carried out on the KAMABOKO III ion source on the MANTIS test bed. This is a model of the ion source designed for the neutral beam injectors of ITER. This ion source has been developed in collaboration with JAERI, Japan, who also designed and supplied the ion source. Its target performance is to accelerate a D- beam, with a current density of 200 A/m2 and <1 electron extracted per accelerated D- ion, at a source filling pressure of 0.3 Pa. For ITER a continuous ion beam must be assured for pulse lengths of 1000 s, but beams of up to 3,600 s are also envisaged. The ion source is attached to a 3-grid 30 keV accelerator (also supplied by JAERI) and the accelerated negative ion current is determined from the energy deposited on a calorimeter. During long pulse operation (≤1000 s) it was found that the current density of both D- and H- beams, measured at the calorimeter, was lower than expected and that a large discrepancy existed between the accelerated currents measured electrically and those transmitted to the calorimeter. The possibility that this discrepancy arose because the accelerated current included electrons (which would not be able to reach the calorimeter) was investigated and subsequently eliminated. Further studies have shown that the fraction of the electrical current reaching the calorimeter varies with the pulse length, which led to the suggestion that one or more of the accelerator grids were distorting due to the incident power during operation, leading to a progressive deterioration in the beam quality. New extraction and acceleration grids have been designed and installed, which should have a better tolerance to thermal loads than those previously used. This paper describes the measurements of the power transmission and distribution using these grids.
Counihan, Timothy D.; Chapman, Colin G.
2018-01-01
The goals were to (i) determine if river discharge and water temperature during various early life history stages were predictors of age‐0 White Sturgeon, Acipenser transmontanus, recruitment, and (ii) provide an example of how over‐dispersed catch data, including data with many zero observations, can be used to better understand the effects of regulated rivers on the productivity of depressed sturgeon populations. An information theoretic approach was used to develop and select negative binomial and zero‐inflated negative binomial models that model the relation of age‐0 White Sturgeon survey data from three contiguous Columbia River reservoirs to river discharge and water temperature during spawning, egg incubation, larval, and post‐larval phases. Age‐0 White Sturgeon were collected with small mesh gill nets in The Dalles and John Day reservoirs from 1997 to 2014 and a bottom trawl in Bonneville Reservoir from 1989 to 2006. Results suggest that seasonal river discharge was positively correlated with age‐0 recruitment; notably that discharge, 16 June–31 July was positively correlated to age‐0 recruitment in all three reservoirs. The best approximating models for two of the three reservoirs also suggest that seasonal water temperature may be a determinant of age‐0 recruitment. Our research demonstrates how over‐dispersed catch data can be used to better understand the effects of environmental conditions on sturgeon populations caused by the construction and operation of dams.
Oral health of schoolchildren in Western Australia.
Arrow, P
2016-09-01
The West Australian School Dental Service (SDS) provides free, statewide, primary dental care to schoolchildren aged 5-17 years. This study reports on an evaluation of the oral health of children examined during the 2014 calendar year. Children were sampled, based on their date of birth, and SDS clinicians collected the clinical information. Weighted mean values of caries experience were presented. Negative binomial regression modelling was undertaken to test for factors of significance in the rate of caries occurrence. Data from children aged 5-15 years were used (girls = 4616, boys = 4900). Mean dmft (5-10-year-olds), 1.42 SE 0.03; mean DMFT (6-15-year-olds), 0.51 SE 0.01. Negative binomial regression model of permanent tooth caries found higher rates of caries in children who were from non-fluoridated areas (RR 2.1); Aboriginal (RR 2.4); had gingival inflammation (RR 1.5); lower ICSEA level (RR 1.4); and recalled at more than 24-month interval (RR 1.8). The study highlighted poor dental health associated with living in non-fluoridated areas, Aboriginal identity, poor oral hygiene, lower socioeconomic level and having extended intervals between dental checkups. Timely assessments and preventive measures targeted at groups, including extending community water fluoridation, may assist in further improving the oral health of children in Western Australia. © 2015 Australian Dental Association.
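Because negative binomial regression uses a log link, the rate ratios (RR) reported above are exponentiated coefficients and combine multiplicatively. A toy illustration using the quoted RRs, assuming no interaction term (the abstract does not report one):

```python
import math

# Hypothetical NB regression coefficients on the log scale; exponentiating
# recovers the rate ratios quoted in the abstract.
coefs = {"non_fluoridated": math.log(2.1), "aboriginal": math.log(2.4)}

rr_non_fluoridated = math.exp(coefs["non_fluoridated"])  # rate ratio 2.1

# Joint RR for a child with both risk factors: coefficients add on the
# log scale, so rate ratios multiply (2.1 * 2.4), absent interactions.
rr_joint = math.exp(coefs["non_fluoridated"] + coefs["aboriginal"])
```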
Random parameter models for accident prediction on two-lane undivided highways in India.
Dinu, R R; Veeraragavan, A
2011-02-01
Generalized linear modeling (GLM), with the assumption of Poisson or negative binomial error structure, has been widely employed in road accident modeling. A number of explanatory variables related to traffic, road geometry, and environment that contribute to accident occurrence have been identified and accident prediction models have been proposed. The accident prediction models reported in literature largely employ the fixed parameter modeling approach, where the magnitude of influence of an explanatory variable is considered to be fixed for any observation in the population. Similar models have been proposed for Indian highways too, which include additional variables representing traffic composition. The mixed traffic on Indian highways comes with a lot of variability within, ranging from difference in vehicle types to variability in driver behavior. This could result in variability in the effect of explanatory variables on accidents across locations. Random parameter models, which can capture some of such variability, are expected to be more appropriate for the Indian situation. The present study is an attempt to employ random parameter modeling for accident prediction on two-lane undivided rural highways in India. Three years of accident history, from nearly 200 km of highway segments, is used to calibrate and validate the models. The results of the analysis suggest that the model coefficients for traffic volume, proportion of cars, motorized two-wheelers and trucks in traffic, and driveway density and horizontal and vertical curvatures are randomly distributed across locations. The paper is concluded with a discussion on modeling results and the limitations of the present study. Copyright © 2010 Elsevier Ltd. All rights reserved.
A neuronal model of predictive coding accounting for the mismatch negativity.
Wacongne, Catherine; Changeux, Jean-Pierre; Dehaene, Stanislas
2012-03-14
The mismatch negativity (MMN) is thought to index the activation of specialized neural networks for active prediction and deviance detection. However, a detailed neuronal model of the neurobiological mechanisms underlying the MMN is still lacking, and its computational foundations remain debated. We propose here a detailed neuronal model of auditory cortex, based on predictive coding, that accounts for the critical features of MMN. The model is entirely composed of spiking excitatory and inhibitory neurons interconnected in a layered cortical architecture with distinct input, predictive, and prediction error units. A spike-timing dependent learning rule, relying upon NMDA receptor synaptic transmission, allows the network to adjust its internal predictions and use a memory of the recent past inputs to anticipate on future stimuli based on transition statistics. We demonstrate that this simple architecture can account for the major empirical properties of the MMN. These include a frequency-dependent response to rare deviants, a response to unexpected repeats in alternating sequences (ABABAA…), a lack of consideration of the global sequence context, a response to sound omission, and a sensitivity of the MMN to NMDA receptor antagonists. Novel predictions are presented, and a new magnetoencephalography experiment in healthy human subjects is presented that validates our key hypothesis: the MMN results from active cortical prediction rather than passive synaptic habituation.
Paula-Moraes, S; Burkness, E C; Hunt, T E; Wright, R J; Hein, G L; Hutchison, W D
2011-12-01
Striacosta albicosta (Smith) (Lepidoptera: Noctuidae), is a native pest of dry beans (Phaseolus vulgaris L.) and corn (Zea mays L.). As a result of larval feeding damage on corn ears, S. albicosta has a narrow treatment window; thus, early detection of the pest in the field is essential, and egg mass sampling has become a popular monitoring tool. Three action thresholds for field and sweet corn currently are used by crop consultants, including 4% of plants infested with egg masses on sweet corn in the silking-tasseling stage, 8% of plants infested with egg masses on field corn with approximately 95% tasseled, and 20% of plants infested with egg masses on field corn during mid-milk-stage corn. The current monitoring recommendation is to sample 20 plants at each of five locations per field (100 plants total). In an effort to develop a more cost-effective sampling plan for S. albicosta egg masses, several alternative binomial sampling plans were developed using Wald's sequential probability ratio test, and validated using Resampling for Validation of Sampling Plans (RVSP) software. The benefit-cost ratio also was calculated and used to determine the final selection of sampling plans. Based on final sampling plans selected for each action threshold, the average sample number required to reach a treat or no-treat decision ranged from 38 to 41 plants per field. This represents a significant savings in sampling cost over the current recommendation of 100 plants.
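Wald's sequential probability ratio test underlying such sampling plans reduces to comparing a cumulative binomial log-likelihood ratio against two fixed boundaries after each plant is inspected. A generic sketch with illustrative thresholds (p0 and p1 bracket an action threshold; these are not the published plan's parameters):

```python
import math

def sprt_decision(n, infested, p0, p1, alpha=0.1, beta=0.1):
    """Wald's SPRT for a binomial proportion of infested plants.

    n         -- plants inspected so far
    infested  -- plants found with egg masses
    p0, p1    -- proportions bracketing the action threshold (p0 < p1)
    Returns 'no-treat', 'treat', or 'continue' (keep sampling).
    """
    # Cumulative log-likelihood ratio of H1 (p1) versus H0 (p0).
    log_lr = (infested * math.log(p1 / p0)
              + (n - infested) * math.log((1 - p1) / (1 - p0)))
    lower = math.log(beta / (1 - alpha))   # accept H0: no treatment needed
    upper = math.log((1 - beta) / alpha)   # accept H1: treat the field
    if log_lr <= lower:
        return "no-treat"
    if log_lr >= upper:
        return "treat"
    return "continue"
```

Sampling stops as soon as the ratio crosses a boundary, which is why average sample numbers (38-41 plants here) fall well below a fixed-size plan's 100 plants.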
NASA Astrophysics Data System (ADS)
Dai, Yimian; Wu, Yiquan; Song, Yu; Guo, Jun
2017-03-01
To further enhance the small targets and suppress the heavy clutters simultaneously, a robust non-negative infrared patch-image model via partial sum minimization of singular values is proposed. First, the intrinsic reason behind the undesirable performance of the state-of-the-art infrared patch-image (IPI) model when facing extremely complex backgrounds is analyzed. We point out that it lies in the mismatching of the IPI model's implicit assumption of a large number of observations with the reality of deficient observations of strong edges. To fix this problem, instead of the nuclear norm, we adopt the partial sum of singular values to constrain the low-rank background patch-image, which could provide a more accurate background estimation and almost eliminate all the salient residuals in the decomposed target image. In addition, considering the fact that the infrared small target is always brighter than its adjacent background, we propose an additional non-negative constraint on the sparse target patch-image, which could not only further suppress undesirable components but also accelerate the convergence rate. Finally, an algorithm based on the inexact augmented Lagrange multiplier method is developed to solve the proposed model. A large number of experiments are conducted, demonstrating that the proposed model has a significant improvement over the other nine competitive methods in terms of both clutter suppressing performance and convergence rate.
NASA Astrophysics Data System (ADS)
Jian, Wang; Xiaohong, Meng; Hong, Liu; Wanqiu, Zheng; Yaning, Liu; Sheng, Gui; Zhiyang, Wang
2017-03-01
Full waveform inversion and reverse time migration are active research areas for seismic exploration. Forward modeling in the time domain determines the precision of the results, and numerical solutions of finite difference have been widely adopted as an important mathematical tool for forward modeling. In this article, an optimal combination of window functions was designed for the finite difference operator, using a truncated approximation of the spatial convolution series in pseudo-spectrum space, to normalize the outcomes of existing window functions for different orders. The proposed combined window functions not only inherit the characteristics of the various window functions, providing better truncation results, but also allow the truncation error of the finite difference operator to be controlled manually and visually by adjusting the combinations and analyzing the characteristics of the main and side lobes of the amplitude response. Error level and elastic forward modeling under the proposed combined system were compared with outcomes from conventional window functions and modified binomial windows. Numerical dispersion is significantly suppressed compared with both the modified binomial window and conventional finite-difference schemes. Numerical simulation verifies the reliability of the proposed method.
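The window-tapering idea can be illustrated on the first-derivative operator, whose truncated pseudo-spectral series has coefficients (-1)^(m+1)/m. The sketch below uses a generic rectangular/Hann combination, not the paper's specific combined windows, and compares the effective wavenumber of tapered and untapered stencils:

```python
import numpy as np

def fd_wavenumber(coeffs, k):
    """Effective wavenumber of an antisymmetric first-derivative stencil;
    an exact operator would return k itself."""
    m = np.arange(1, len(coeffs) + 1)
    return 2.0 * np.sum(coeffs * np.sin(m * k))

half_len = 8
m = np.arange(1, half_len + 1)
ideal = (-1.0) ** (m + 1) / m                    # truncated pseudo-spectral series
hann = 0.5 * (1.0 + np.cos(np.pi * m / (half_len + 1)))

rect_coeffs = ideal                              # plain (rectangular) truncation
hann_coeffs = ideal * hann                       # Hann-tapered
comb_coeffs = ideal * (0.5 + 0.5 * hann)         # 50/50 combined window

k = 0.5
err_rect = abs(fd_wavenumber(rect_coeffs, k) - k)
err_hann = abs(fd_wavenumber(hann_coeffs, k) - k)
err_comb = abs(fd_wavenumber(comb_coeffs, k) - k)
```

Tapering trades Gibbs ripple for a smoother amplitude response; the combined window interpolates between the two behaviours, which is the adjustable trade-off the abstract describes.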
Negative Correlations in Visual Cortical Networks
Chelaru, Mircea I.; Dragoi, Valentin
2016-01-01
The amount of information encoded by cortical circuits depends critically on the capacity of nearby neurons to exhibit trial-to-trial (noise) correlations in their responses. Depending on their sign and relationship to signal correlations, noise correlations can either increase or decrease the population code accuracy relative to uncorrelated neuronal firing. Whereas positive noise correlations have been extensively studied using experimental and theoretical tools, the functional role of negative correlations in cortical circuits has remained elusive. We addressed this issue by performing multiple-electrode recording in the superficial layers of the primary visual cortex (V1) of alert monkey. Despite the fact that positive noise correlations decayed exponentially with the difference in the orientation preference between cells, negative correlations were uniformly distributed across the population. Using a statistical model for Fisher Information estimation, we found that a mild increase in negative correlations causes a sharp increase in network accuracy even when mean correlations were held constant. To examine the variables controlling the strength of negative correlations, we implemented a recurrent spiking network model of V1. We found that increasing local inhibition and reducing excitation causes a decrease in the firing rates of neurons while increasing the negative noise correlations, which in turn increase the population signal-to-noise ratio and network accuracy. Altogether, these results contribute to our understanding of the neuronal mechanism involved in the generation of negative correlations and their beneficial impact on cortical circuit function. PMID:25217468
Ferroelectric negative capacitance domain dynamics
NASA Astrophysics Data System (ADS)
Hoffmann, Michael; Khan, Asif Islam; Serrao, Claudy; Lu, Zhongyuan; Salahuddin, Sayeef; Pešić, Milan; Slesazeck, Stefan; Schroeder, Uwe; Mikolajick, Thomas
2018-05-01
Transient negative capacitance effects in epitaxial ferroelectric Pb(Zr0.2Ti0.8)O3 capacitors are investigated with a focus on the dynamical switching behavior governed by domain nucleation and growth. Voltage pulses are applied to a series connection of the ferroelectric capacitor and a resistor to directly measure the ferroelectric negative capacitance during switching. A time-dependent Ginzburg-Landau approach is used to investigate the underlying domain dynamics. The transient negative capacitance is shown to originate from reverse domain nucleation and unrestricted domain growth. However, with the onset of domain coalescence, the capacitance becomes positive again. The persistence of the negative capacitance state is therefore limited by the speed of domain wall motion. By changing the applied electric field, capacitor area or external resistance, this domain wall velocity can be varied predictably over several orders of magnitude. Additionally, detailed insights into the intrinsic material properties of the ferroelectric are obtainable through these measurements. A new method for reliable extraction of the average negative capacitance of the ferroelectric is presented. Furthermore, a simple analytical model is developed, which accurately describes the negative capacitance transient time as a function of the material properties and the experimental boundary conditions.
Archer, A.W.; Maples, C.G.
1989-01-01
Numerous departures from ideal relationships are revealed by Monte Carlo simulations of widely accepted binomial coefficients. For example, simulations incorporating varying levels of matrix sparseness (presence of zeros indicating lack of data) and computation of expected values reveal that not only are all common coefficients influenced by zero data, but also that some coefficients do not discriminate between sparse or dense matrices (few zero data). Such coefficients computationally merge mutually shared and mutually absent information and do not exploit all the information incorporated within the standard 2 × 2 contingency table; therefore, the commonly used formulae for such coefficients are more complicated than the actual range of values produced. Other coefficients do differentiate between mutual presences and absences; however, a number of these coefficients do not demonstrate a linear relationship to matrix sparseness. Finally, simulations using nonrandom matrices with known degrees of row-by-row similarities signify that several coefficients either do not display a reasonable range of values or are nonlinear with respect to known relationships within the data. Analyses with nonrandom matrices yield clues as to the utility of certain coefficients for specific applications. For example, coefficients such as Jaccard, Dice, and Baroni-Urbani and Buser are useful if correction of sparseness is desired, whereas the Russell-Rao coefficient is useful when sparseness correction is not desired. © 1989 International Association for Mathematical Geology.
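The 2 × 2 contingency quantities behind these coefficients can be made concrete. A small sketch with illustrative presence/absence vectors (not the study's simulated matrices), showing that Jaccard and Dice ignore mutual absences while simple matching merges shared presences and absences, so only the latter shifts with matrix sparseness:

```python
def contingency(x, y):
    """2 x 2 contingency counts for two presence/absence vectors:
    a = both present, b/c = present in one only, d = both absent."""
    a = sum(1 for xi, yi in zip(x, y) if xi and yi)
    b = sum(1 for xi, yi in zip(x, y) if xi and not yi)
    c = sum(1 for xi, yi in zip(x, y) if not xi and yi)
    d = sum(1 for xi, yi in zip(x, y) if not xi and not yi)
    return a, b, c, d

def jaccard(a, b, c, d):
    return a / (a + b + c)            # ignores mutual absences d

def dice(a, b, c, d):
    return 2 * a / (2 * a + b + c)    # also ignores d

def simple_matching(a, b, c, d):
    return (a + d) / (a + b + c + d)  # merges presences and absences

x = [1, 1, 0, 0, 1, 0, 0, 0]
y = [1, 0, 0, 0, 1, 1, 0, 0]
a, b, c, d = contingency(x, y)
```

Padding both vectors with joint zeros (a sparser matrix) inflates d, moving simple matching toward 1 while Jaccard and Dice are unchanged; this is the sparseness sensitivity the simulations quantify.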
Ivanova, Iryna V; Tasca, Giorgio A; Hammond, Nicole; Balfour, Louise; Ritchie, Kerri; Koszycki, Diana; Bissada, Hany
2015-03-01
This study evaluated the validity of the interpersonal model of binge-eating disorder (BED) psychopathology in a clinical sample of women with BED. Data from a cross-sectional sample of 255 women with BED were examined for the direct effects of interpersonal problems on BED symptoms and psychopathology, and indirect effects mediated by negative affect. Structural equation modelling analyses demonstrated that higher levels of interpersonal problems were associated with greater negative affect, and greater negative affect was associated with higher frequency of BED symptoms and psychopathology. There was a significant indirect effect of interpersonal problems on BED symptoms and psychopathology mediated through negative affect. Interpersonal problems may lead to greater BED symptoms and psychopathology, and this relationship may be partially explained by elevated negative affect. The results of the study are the first to provide support for the interpersonal model of BED symptoms and psychopathology in a clinical sample of women. Copyright © 2015 John Wiley & Sons, Ltd and Eating Disorders Association.
Analyzing hospitalization data: potential limitations of Poisson regression.
Weaver, Colin G; Ravani, Pietro; Oliver, Matthew J; Austin, Peter C; Quinn, Robert R
2015-08-01
Poisson regression is commonly used to analyze hospitalization data when outcomes are expressed as counts (e.g. number of days in hospital). However, data often violate the assumptions on which Poisson regression is based. More appropriate extensions of this model, while available, are rarely used. We compared hospitalization data between 206 patients treated with hemodialysis (HD) and 107 treated with peritoneal dialysis (PD) using Poisson regression and compared results from standard Poisson regression with those obtained using three other approaches for modeling count data: negative binomial (NB) regression, zero-inflated Poisson (ZIP) regression and zero-inflated negative binomial (ZINB) regression. We examined the appropriateness of each model and compared the results obtained with each approach. During a mean 1.9 years of follow-up, 183 of 313 patients (58%) were never hospitalized (indicating an excess of 'zeros'). The data also displayed overdispersion (variance greater than mean), violating another assumption of the Poisson model. Using four criteria, we determined that the NB and ZINB models performed best. According to these two models, patients treated with HD experienced similar hospitalization rates as those receiving PD {NB rate ratio (RR): 1.04 [bootstrapped 95% confidence interval (CI): 0.49-2.20]; ZINB summary RR: 1.21 (bootstrapped 95% CI 0.60-2.46)}. Poisson and ZIP models fit the data poorly and had much larger point estimates than the NB and ZINB models [Poisson RR: 1.93 (bootstrapped 95% CI 0.88-4.23); ZIP summary RR: 1.84 (bootstrapped 95% CI 0.88-3.84)]. We found substantially different results when modeling hospitalization data, depending on the approach used. Our results argue strongly for a sound model selection process and improved reporting around statistical methods used for modeling count data. © The Author 2015. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
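The model choice above hinges on two diagnostics: overdispersion (variance greater than the mean) and an excess of zeros. A minimal numpy sketch, using simulated counts with hypothetical parameters rather than the study's dialysis data, of how both are checked before choosing among Poisson, NB, and zero-inflated models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate hospitalization-like counts from a zero-inflated negative
# binomial: with probability pi a patient is a structural zero (never
# hospitalized), otherwise counts follow NB(r, p). Parameters are
# illustrative, not estimated from the study.
pi, r, p = 0.5, 2.0, 0.4
n = 10_000
structural_zero = rng.random(n) < pi
counts = np.where(structural_zero, 0, rng.negative_binomial(r, p, n))

mean, var = counts.mean(), counts.var()

# Poisson assumes var == mean; var > mean signals overdispersion.
# Method-of-moments NB dispersion alpha (var = mu + alpha * mu^2);
# alpha > 0 is another indication that plain Poisson is inadequate.
alpha = (var - mean) / mean**2
zero_fraction = (counts == 0).mean()
```

In real analyses these checks motivate the formal comparison (e.g. via AIC or Vuong-type tests) of NB, ZIP, and ZINB fits that the abstract describes.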
Distribution of chewing lice upon the polygynous peacock Pavo cristatus.
Stewart, I R; Clark, F; Petrie, M
1996-04-01
An opportunistic survey of louse distribution upon the peacock Pavo cristatus was undertaken following a cull of 23 birds from an English zoo. After complete skin and feather dissolution, 2 species of lice were retrieved, Goniodes pavonis and Amyrsidea minuta. The distribution of both louse species could be described by a negative binomial model. The significance of this is discussed in relation to transmission dynamics of lice in the atypical avian mating system found in the peacock, which involves no male parental care.
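Negative binomial fits to parasite counts are conventionally summarized by the aggregation parameter k: the smaller k is, the more the lice are concentrated on a few hosts. A minimal method-of-moments sketch using hypothetical per-host counts (not the study's data):

```python
import numpy as np

# Hypothetical louse counts per host: most hosts carry few lice,
# a few hosts carry many (the aggregated pattern an NB describes).
counts = np.array([0, 0, 0, 1, 1, 2, 2, 3, 5, 8, 13, 21, 40])

mean = counts.mean()
var = counts.var(ddof=1)

# Negative binomial moments: var = mean + mean^2 / k, so
# k = mean^2 / (var - mean). Small k (well below the mean)
# indicates strong aggregation across hosts.
k = mean**2 / (var - mean)

aggregated = var > mean  # overdispersion is the hallmark of the NB fit
```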
Ahmad, Aftab; Khan, Vikram; Badola, Smita; Arya, Gaurav; Bansal, Nayanci; Saxena, A. K.
2010-01-01
The prevalence, intensities of infestation, range of infestation and population composition of two phthirapteran species, Ardeicola expallidus Blagoveshtchensky (Phthiraptera: Philopteridae) and Ciconiphilus decimfasciatus Boisduval and Lacordaire (Menoponidae) on seventy cattle egrets were recorded during August 2004 to March 2005, in India. The frequency distribution patterns of both the species were skewed but did not correspond to the negative binomial model. The oviposition sites, egg laying patterns and the nature of the eggs of the two species were markedly different. PMID:21067416
An integrated model for detecting significant chromatin interactions from high-resolution Hi-C data
Carty, Mark; Zamparo, Lee; Sahin, Merve; González, Alvaro; Pelossof, Raphael; Elemento, Olivier; Leslie, Christina S.
2017-01-01
Here we present HiC-DC, a principled method to estimate the statistical significance (P values) of chromatin interactions from Hi-C experiments. HiC-DC uses hurdle negative binomial regression to account for systematic sources of variation in Hi-C read counts—for example, distance-dependent random polymer ligation and GC content and mappability bias—and to model zero inflation and overdispersion. Applied to high-resolution Hi-C data in a lymphoblastoid cell line, HiC-DC detects significant interactions at the sub-topologically associating domain level, identifying potential structural and regulatory interactions supported by CTCF binding sites, DNase accessibility, and/or active histone marks. CTCF-associated interactions are most strongly enriched in the middle genomic distance range (∼700 kb–1.5 Mb), while interactions involving actively marked DNase accessible elements are enriched both at short (<500 kb) and longer (>1.5 Mb) genomic distances. There is a striking enrichment of longer-range interactions connecting replication-dependent histone genes on chromosome 6, potentially representing the chromatin architecture at the histone locus body. PMID:28513628
Examination of a perceived cost model of employees' negative feedback-seeking behavior.
Lu, Kuo-Ming; Pan, Su-Ying; Cheng, Jen-Wei
2011-01-01
The present study extends the feedback-seeking behavior literature by investigating how supervisor-related antecedents (i.e., supervisors' expert power, reflected appraisals of supervisors, and supervisors' emotional intelligence) influence subordinates' negative feedback-seeking behavior (NFSB) through different cost/value perceptions (i.e., expectancy value, self-presentation cost, and ego cost). Using data collected from 216 supervisor-subordinate dyads in various industries in Taiwan, we employed structural equation modeling to test our hypotheses. The results show that expectancy value mediates the relationship between supervisors' expert power and subordinates' NFSB. Moreover, self-presentation cost mediates the relationship between reflected appraisals of supervisors and subordinates' NFSB. Theoretical and practical implications of this study are also discussed.
Onie, Sandersan; Most, Steven B
2017-08-01
Attentional biases to threatening stimuli have been implicated in various emotional disorders. Theoretical approaches often carry the implicit assumption that various attentional bias measures tap into the same underlying construct, but attention itself is not a unitary mechanism. Most attentional bias tasks-such as the dot probe (DP)-index spatial attention, neglecting other potential attention mechanisms. We compared the DP with emotion-induced blindness (EIB), which appears to be mechanistically distinct, and examined the degree to which these tasks predicted (a) negative affect, (b) persistent negative thought (i.e., worry, rumination), and (c) each other. The 2 tasks did not predict each other, and they uniquely accounted for negative affect in a regression analysis. The relationship between EIB and negative affect was mediated by persistent negative thought, whereas that between the DP and negative affect was not, suggesting that EIB may be more intimately linked than spatial attention with persistent negative thought. Experiment 2 revealed EIB to have a favorable test-retest reliability. Together, these findings underscore the importance of distinguishing between attentional bias mechanisms when constructing theoretical models of, and interventions that target, particular emotional disorders. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Rasch Model Analysis with the BICAL Computer Program
1976-09-01
…and persons, which lead to measures that persist from trial to trial. The measurement model is essential in this process because it provides a framework… and his students. Section two derives the estimating equations for the Bernoulli (i.e., one trial per task) form and then generalizes to the… Binomial form (several trials per task). Finally, goodness-of-fit tests are presented for assessing the adequacy of the calibration.
ERIC Educational Resources Information Center
Mezulis, Amy H.; Hyde, Janet Shibley; Abramson, Lyn Y.
2006-01-01
Cognitive models of depression have been well supported with adults, but the developmental origins of cognitive vulnerability are not well understood. The authors hypothesized that temperament, parenting, and negative life events in childhood would contribute to the development of cognitive style, with withdrawal negativity and negative parental…
Predictive Model of Linear Antimicrobial Peptides Active against Gram-Negative Bacteria.
Vishnepolsky, Boris; Gabrielian, Andrei; Rosenthal, Alex; Hurt, Darrell E; Tartakovsky, Michael; Managadze, Grigol; Grigolava, Maya; Makhatadze, George I; Pirtskhalava, Malak
2018-05-29
Antimicrobial peptides (AMPs) have been identified as a potential new class of anti-infectives for drug development. There are a lot of computational methods that try to predict AMPs. Most of them can only predict if a peptide will show any antimicrobial potency, but to the best of our knowledge, there are no tools which can predict antimicrobial potency against particular strains. Here we present a predictive model of linear AMPs being active against particular Gram-negative strains relying on a semi-supervised machine-learning approach with a density-based clustering algorithm. The algorithm can well distinguish peptides active against particular strains from others which may also be active but not against the considered strain. The available AMP prediction tools cannot carry out this task. The prediction tool based on the algorithm suggested herein is available on https://dbaasp.org.
Gaussian Process Regression Model in Spatial Logistic Regression
NASA Astrophysics Data System (ADS)
Sofro, A.; Oktaviarina, A.
2018-01-01
Spatial analysis has developed very quickly in the last decade. One of the most popular approaches is based on the neighbourhood structure of the regions. Unfortunately, such approaches have limitations, notably difficulty in prediction. Therefore, we propose Gaussian process regression (GPR) to address this issue. In this paper, we focus on spatial modeling with GPR for binomial data with a logit link function and investigate the performance of the model. We discuss inference, namely how to estimate the parameters and hyper-parameters, and how to predict. Simulation studies are presented in the last section.
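A minimal sketch of the idea — a Gaussian-process model for spatially indexed binary outcomes that, unlike purely neighbourhood-based methods, can predict at new locations — using scikit-learn's GaussianProcessClassifier as a stand-in (it applies a Laplace approximation to a logistic-type likelihood; the paper's own estimation procedure may differ). Coordinates and labels below are simulated, not the paper's data.

```python
# Hedged illustration: GP classification over 2-D spatial coordinates.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)
coords = rng.uniform(0, 10, size=(200, 2))        # observed spatial locations
# True surface: event probability rises toward the north-east corner
p_true = 1 / (1 + np.exp(-(coords.sum(axis=1) - 10) / 2))
y = (rng.random(200) < p_true).astype(int)

gpc = GaussianProcessClassifier(kernel=RBF(length_scale=2.0)).fit(coords, y)

# Predict at brand-new locations -- the capability that motivates GPR here
grid = np.array([[1.0, 1.0], [9.0, 9.0]])
probs = gpc.predict_proba(grid)[:, 1]
print(probs)  # low probability near (1,1), high near (9,9)
```

The fitted length-scale hyper-parameter is optimized by marginal likelihood inside `fit`, which corresponds loosely to the hyper-parameter estimation the abstract mentions.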
Social vulnerability and the natural and built environment: a model of flood casualties in Texas.
Zahran, Sammy; Brody, Samuel D; Peacock, Walter Gillis; Vedlitz, Arnold; Grover, Himanshu
2008-12-01
Studies on the impacts of hurricanes, tropical storms, and tornados indicate that poor communities of colour suffer disproportionately in human death and injury. Few quantitative studies have been conducted on the degree to which flood events affect socially vulnerable populations. We address this research void by analysing 832 countywide flood events in Texas from 1997 to 2001. Specifically, we examine whether geographic localities characterised by high percentages of socially vulnerable populations experience significantly more casualties due to flood events, adjusting for characteristics of the natural and built environment. Zero-inflated negative binomial regression models indicate that the odds of a flood casualty increase with the level of precipitation on the day of a flood event, flood duration, property damage caused by the flood, population density, and the presence of socially vulnerable populations. Odds decrease with the number of dams, the level of precipitation on the day before a recorded flood event, and the extent to which localities have enacted flood mitigation strategies. The study concludes with comments on hazard-resilient communities and protection of casualty-prone populations.
Time series regression model for infectious disease and weather.
Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro
2015-10-01
Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Petit, J. P.; D'Agostini, G.
2014-10-01
An extension of a previously published model of a bimetric Universe is presented, in which the speeds of light associated with the positive and negative mass species are different. As shown earlier, the asymmetry of the model explains the acceleration of the positive species, while the negative one slows down. The asymmetry affects the scale factors linked to lengths, times and speeds of light, so that if a mass inversion of a craft could be achieved, interstellar travel would become possible at a velocity less than the speed of light of the negative sector, and possibly much higher than that of the positive sector.
Appraisals of negative events by preadolescent children of divorce.
Sheets, V; Sandler, I; West, S G
1996-10-01
This study investigated the appraisals of the significance of negative events made by 256 preadolescent children of divorce. Appraisals were assessed by a 24-item self-report scale. Confirmatory factor analysis of this scale found support for a 3-dimensional model: negative self-appraisal, negative other-appraisal, and material loss. Differentiation between the dimensions of appraisal increased with age in both cross-sectional and over-time data. Evidence for convergent and discriminant validity of the self-report measure of appraisals was found with scores derived from children's open-ended descriptions of their appraisals. Cross-sectional structural equation models found significant paths between negative appraisal and psychological symptoms, over and above the direct effects of the traditional life event measure of stress. Structural equation modeling of longitudinal (5.5 months) data found a significant path from Time 1 appraisal to Time 2 anxiety for the older children.
Indicators of Terrorism Vulnerability in Africa
2015-03-26
…the terror threat and vulnerabilities across Africa. Key words: Terrorism, Africa, Negative Binomial Regression, Classification Tree
Lee, Yi-Chih; Wu, Wei-Li
2015-08-01
Emotions play an important role in human behavior. Negative emotions resulting from medical disputes are problems for medical personnel to solve, but they also have a significant impact on a hospital's reputation and on people's trust in the hospital. One medical dispute case was chosen from an Internet news source to assess the correlation between people's negative emotions and negative online word-of-mouth. Convenience sampling was used; school faculty members and university students who had shared their medical treatment experiences online were the research participants. A total of 221 Taiwanese participants volunteered (158 women, 63 men; ages: 26.7% under 19, 22.6% 20-29, 30.8% 30-39, 19.9% over 40). Four negative emotions were measured using rating scales: uncertainty, anger, disappointment, and sadness. Four negative online word-of-mouth measures were: venting, advice search, helping the receiver, and revenge. The modeled relationships were assessed by the partial least squares (PLS) method. People's positive emotions were then further analyzed to assess changes after spreading negative word-of-mouth. The results showed that uncertainty had a positive effect on venting and advice search. People who felt anger or regret spread word-of-mouth in order to help the receiver. Disappointment may trigger the revenge form of negative word-of-mouth. Negative emotions could be relieved after engaging in the behavior of helping the receiver.
A novel patient-derived xenograft model for claudin-low triple-negative breast cancer.
Matossian, Margarite D; Burks, Hope E; Bowles, Annie C; Elliott, Steven; Hoang, Van T; Sabol, Rachel A; Pashos, Nicholas C; O'Donnell, Benjamen; Miller, Kristin S; Wahba, Bahia M; Bunnell, Bruce A; Moroz, Krzysztof; Zea, Arnold H; Jones, Steven D; Ochoa, Augusto C; Al-Khami, Amir A; Hossain, Fokhrul; Riker, Adam I; Rhodes, Lyndsay V; Martin, Elizabeth C; Miele, Lucio; Burow, Matthew E; Collins-Burow, Bridgette M
2018-06-01
Triple-negative breast cancer (TNBC) subtypes are clinically aggressive and cannot be treated with the targeted therapeutics commonly used in other breast cancer subtypes. The claudin-low (CL) molecular subtype of TNBC has high rates of metastasis, chemoresistance and recurrence. There exists an urgent need to identify novel therapeutic targets in TNBC; however, existing models utilized in target discovery research are limited. Patient-derived xenograft (PDX) models have emerged as superior models for target discovery experiments because they recapitulate features of patient tumors that cell-line-derived xenograft methods cannot. We utilize immunohistochemistry, qRT-PCR and western blot to visualize the tumor architecture, cellular composition, and genomic and protein expression of a new CL-TNBC PDX model (TU-BcX-2O0). We utilize tissue decellularization techniques to examine the extracellular matrix composition of TU-BcX-2O0. Our laboratory successfully established a TNBC PDX tumor, TU-BcX-2O0, which represents a CL-TNBC subtype and maintains this phenotype throughout subsequent passaging. We dissected TU-BcX-2O0 to examine aspects of this complex tumor that can be targeted by developing therapeutics, including the whole and intact breast tumor, specific cell populations within the tumor, and the extracellular matrix. Here, we characterize a claudin-low TNBC patient-derived xenograft model that can be utilized for therapeutic research studies.
Models of Eucalypt phenology predict bat population flux.
Giles, John R; Plowright, Raina K; Eby, Peggy; Peel, Alison J; McCallum, Hamish
2016-10-01
Fruit bats (Pteropodidae) have received increased attention after the recent emergence of notable viral pathogens of bat origin. Their vagility hinders data collection on abundance and distribution, which constrains modeling efforts and our understanding of bat ecology, viral dynamics, and spillover. We addressed this knowledge gap with models and data on the occurrence and abundance of nectarivorous fruit bat populations at 3 day roosts in southeast Queensland. We used environmental drivers of nectar production as predictors and explored relationships between bat abundance and virus spillover. Specifically, we developed several novel modeling tools motivated by complexities of fruit bat foraging ecology, including: (1) a dataset of spatial variables comprising Eucalypt-focused vegetation indices, cumulative precipitation, and temperature anomaly; (2) an algorithm that associated bat population response with spatial covariates in a spatially and temporally relevant way given our current understanding of bat foraging behavior; and (3) a thorough statistical learning approach to finding optimal covariate combinations. We identified covariates that classify fruit bat occupancy at each of our three study roosts with 86-93% accuracy. Negative binomial models explained 43-53% of the variation in observed abundance across roosts. Our models suggest that spatiotemporal heterogeneity in Eucalypt-based food resources could drive at least 50% of bat population behavior at the landscape scale. We found that 13 spillover events were observed within the foraging range of our study roosts, and they occurred during times when models predicted low population abundance. Our results suggest that, in southeast Queensland, spillover may not be driven by large aggregations of fruit bats attracted by nectar-based resources, but rather by behavior of smaller resident subpopulations. Our models and data integrated remote sensing and statistical learning to make inferences on bat ecology
Impact of early childhood caries on oral health-related quality of life of preschool children.
Li, M Y; Zhi, Q H; Zhou, Y; Qiu, R M; Lin, H C
2015-03-01
Child oral health-related quality of life (COHRQoL) has been assessed in developed areas; however, it remains unstudied in mainland China. Studies on COHRQoL would benefit the large number of children in China suffering from oral health problems such as dental caries. This study explored the relationship between COHRQoL and early childhood caries, adjusted for socioeconomic factors, in 3- to 4-year-old children in a region of southern China. In this study, 1062 children aged 3-4 years were recruited by cluster sampling and their oral health status was examined by a trained dentist. The Chinese version of the Early Childhood Oral Health Impact Scale (ECOHIS) and questions about the children's socioeconomic conditions were completed by the children's parents. A negative binomial regression analysis was used to assess the prevalence of early childhood caries among the children and its influence on COHRQoL. The total ECOHIS scores of the returned scale sets ranged from 0 to 31, and the average score was 3.1±5.1. The negative binomial analysis showed that the dmfs indices were significantly associated with the ECOHIS score and subscale scores (P<0.05). The multivariate adjusted model showed that a higher dmft index was associated with a greater negative impact on COHRQoL (RR = 1.10; 95% CI = 1.07, 1.13; P < 0.05). However, demographic and socioeconomic factors were not associated with COHRQoL (P>0.05). The severity of early childhood caries has a negative impact on the oral health-related quality of life of preschool children and their parents.
McCarthy, Gerard; Maughan, Barbara
2010-09-01
This study investigated links between internal working models of attachment and the quality of adult love relationships in a high risk sample of women (n = 34), all of whom reported negative parenting in childhood. Half of the sample was identified as having a history of satisfying adult love relationships, while the remainder had experienced ongoing adult relationship problems. Measures of internal working models of attachment were made using the Adult Attachment Interview (AAI). A strong association was found between attachment classifications and the quality of adult love relationships. In addition, women with satisfying love relationships demonstrated significantly higher coherence of mind ratings than those with poor relationship histories. Insecure working models of attachment were associated with problems in adult love relationships. Although secure/autonomous attachment status was linked to optimal adult relationship outcomes, some women with a history of satisfying love relationships had insecure working models of attachment. These results suggest that the ways that adults process early experiences may influence later psychosocial functioning.
Austin, Shamly; Qu, Haiyan; Shewchuk, Richard M
2012-10-01
To examine the association between adherence to physical activity guidelines and health-related quality of life (HRQOL) among individuals with arthritis. A cross-sectional sample of 33,071 US adults, 45 years or older with physician-diagnosed arthritis, was obtained from the 2007 Behavioral Risk Factor Surveillance System survey. We conducted negative binomial regression analysis to examine HRQOL as a function of adherence to physical activity guidelines, controlling for physicians' recommendations for physical activity, age, sex, race, education, marital status, employment, annual income, health insurance, personal physician, emotional support, body mass index, activity limitations, health status, and co-morbidities, based on the Behavioral Model of Health Services Utilization. Descriptive statistics showed that 60% of adults with arthritis did not adhere to physical activity guidelines; the mean numbers of physically and mentally unhealthy days were 7.7 and 4.4, respectively. Results from the negative binomial regression indicated that individuals who did not adhere to physical activity guidelines had 1.14 more physically unhealthy days and 1.12 more mentally unhealthy days than those who adhered, controlling for covariates. Adherence to physical activity is important for improving HRQOL in individuals with arthritis. However, adherence is low in this population. Interventions are required to engage individuals with arthritis in physical activity.
Simulations of negative hydrogen ion sources
NASA Astrophysics Data System (ADS)
Demerdjiev, A.; Goutev, N.; Tonev, D.
2018-05-01
The development and optimisation of negative hydrogen/deuterium ion sources go hand in hand with modelling. In this paper a brief introduction is given to the physics and types of different sources, and to the kinetic and fluid theories of plasma description. Examples of some recent models are considered, with the main emphasis on the model behind the concept and design of a matrix source of negative hydrogen ions. At the Institute for Nuclear Research and Nuclear Energy of the Bulgarian Academy of Sciences a new cyclotron center is under construction, which opens new opportunities for research. One of them is the development of plasma sources for additional proton beam acceleration. We have applied the modelling technique implemented in the aforementioned model of the matrix source to a microwave plasma source exemplifying a plasma-filled array of cavities made of a dielectric material with high permittivity. Preliminary results for the distribution of the plasma parameters and the φ component of the electric field in the plasma are obtained.
Ruminative self-focus, negative life events, and negative affect
Moberly, Nicholas J.; Watkins, Edward R.
2008-01-01
Ruminative thinking is believed to exacerbate the psychological distress that follows stressful life events. An experience-sampling study was conducted in which participants recorded negative life events, ruminative self-focus, and negative affect eight times daily over one week. Occasions when participants reported a negative event were marked by higher levels of negative affect. Additionally, negative events were prospectively associated with higher levels of negative affect at the next sampling occasion, and this relationship was partially mediated by momentary ruminative self-focus. Depressive symptoms were associated with more frequent negative events, but not with increased reactivity to negative events. Trait rumination was associated with reports of more severe negative events and increased reactivity to negative events. These results suggest that the extent to which a person engages in ruminative self-focus after everyday stressors is an important determinant of the degree of distress experienced after such events. Further, dispositional measures of rumination predict mood reactivity to everyday stressors in a non-clinical sample. PMID:18684437
Goodwin, Laura; Fairclough, Stephen H; Poole, Helen M
2013-06-01
Kolk et al.'s model of symptom perception underlines the effects of trait negative affect, selective attention and external stressors. The current study tested this model in 263 males and 498 females from an occupational sample. Trait negative affect was associated with symptom reporting in females only, and selective attention and psychological job demands were associated with symptom reporting in both genders. Health anxiety was associated with symptom reporting in males only. Future studies might consider the inclusion of selective attention, which was more strongly associated with symptom reporting than negative affect. Psychological job demands appear to influence symptom reporting in both males and females.
Determinants of The Grade A Embryos in Infertile Women; Zero-Inflated Regression Model.
Almasi-Hashiani, Amir; Ghaheri, Azadeh; Omani Samani, Reza
2017-10-01
In assisted reproductive technology, it is important to choose high-quality embryos for embryo transfer. The aim of the present study was to determine the grade A embryo count and the factors related to it in infertile women. This historical cohort study included 996 infertile women. The main outcome was the number of grade A embryos. Zero-inflated Poisson (ZIP) regression and zero-inflated negative binomial (ZINB) regression were used to model the count data, as it contained excessive zeros. Stata software, version 13 (Stata Corp, College Station, TX, USA) was used for all statistical analyses. After adjusting for potential confounders, results from the ZINB model show that each additional 2 pronuclear (2PN) zygote multiplied the expected grade A embryo count by an incidence rate ratio of 1.45 (95% confidence interval (CI): 1.23-1.69, P=0.001), and each one-day increase in cleavage day multiplied it by 0.35 (95% CI: 0.20-0.61, P=0.001). There is a significant association between both the number of 2PN zygotes and cleavage day and the number of grade A embryos in both the ZINB and ZIP regression models. The estimated coefficients are more plausible than values found in earlier studies using less relevant models. Copyright© by Royan Institute. All rights reserved.
2013-01-01
Background Malnutrition is one of the principal causes of child mortality in developing countries including Bangladesh. To our knowledge, most of the available studies that addressed the issue of malnutrition among under-five children considered categorical (dichotomous/polychotomous) outcome variables and applied logistic regression (binary/multinomial) to find their predictors. In this study the malnutrition variable (i.e. the outcome) is defined as the number of under-five malnourished children in a family, which is a non-negative count variable. The purposes of the study are (i) to demonstrate the applicability of the generalized Poisson regression (GPR) model as an alternative to other statistical methods and (ii) to find some predictors of this outcome variable. Methods The data are extracted from the Bangladesh Demographic and Health Survey (BDHS) 2007. Briefly, this survey employs a nationally representative sample based on a two-stage stratified sample of households. A total of 4,460 under-five children are analysed using various statistical techniques, namely the Chi-square test and the GPR model. Results The GPR model (as compared to standard Poisson regression and negative binomial regression) is found to be justified for the above-mentioned outcome variable because of its under-dispersion (variance < mean) property. Our study also identifies several significant predictors of the outcome variable, namely mother’s education, father’s education, wealth index, sanitation status, source of drinking water, and total number of children ever born to a woman. Conclusions The consistency of our findings in light of many other studies suggests that the GPR model is an ideal alternative to other statistical models for analysing the number of under-five malnourished children in a family. Strategies based on the significant predictors may improve the nutritional status of children in Bangladesh. PMID:23297699
Wall, Stephen P; Lee, David C; Frangos, Spiros G; Sethi, Monica; Heyer, Jessica H; Ayoung-Chee, Patricia; DiMaggio, Charles J
2016-01-01
We conducted individual and ecologic analyses of prospectively collected data from 839 injured bicyclists who collided with motorized vehicles and presented to Bellevue Hospital, an urban Level-1 trauma center in New York City, from December 2008 to August 2014. Variables included demographics, scene information, rider behaviors, bicycle route availability, and whether the collision occurred before the road segment was converted to a bicycle route. We used negative binomial modeling to assess the risk of injury occurrence following bicycle path or lane implementation. We dichotomized U.S. National Trauma Data Bank Injury Severity Scores (ISS) into none/mild (0-8) versus moderate, severe, or critical (>8) and used adjusted multivariable logistic regression to model the association of ISS with collision proximity to sharrows (i.e., bicycle lanes designated for sharing with cars), painted bicycle lanes, or physically protected paths. Negative binomial modeling of monthly counts, while adjusting for pedestrian activity, revealed that physically protected paths were associated with 23% fewer injuries. Painted bicycle lanes reduced injury risk by nearly 90% (IDR 0.09, 95% CI 0.02-0.33). Holding all else equal, compared to no bicycle route, a bicycle injury near sharrows was nearly twice as likely to be moderate, severe, or critical (adjusted odds ratio 1.94; 95% confidence interval (CI) 0.91-4.15). Painted bicycle lanes and physically protected paths were 1.52 (95% CI 0.85-2.71) and 1.66 (95% CI 0.85-3.22) times as likely to be associated with more than mild injury, respectively.
Milner, Allison; Butterworth, Peter; Bentley, Rebecca; Kavanagh, Anne M; LaMontagne, Anthony D
2015-05-15
Sickness absence is associated with adverse health, organizational, and societal outcomes. Using data from a longitudinal cohort study of working Australians (the Household, Income and Labour Dynamics in Australia (HILDA) Survey), we examined the relationship between changes in individuals' overall psychosocial job quality and variation in sickness absence. The outcome variables were paid sickness absence (yes/no) and number of days of paid sickness absence in the past year (2005-2012). The main exposure variable was psychosocial job quality, measured using a psychosocial job quality index (levels of job control, demands and complexity, insecurity, and perceptions of unfair pay). Analysis was conducted using longitudinal fixed-effects logistic regression models and negative binomial regression models. There was a dose-response relationship between the number of psychosocial job stressors reported by an individual and the odds of paid sickness absence (1 adversity: odds ratio (OR) = 1.26, 95% confidence interval (CI): 1.09, 1.45 (P = 0.002); 2 adversities: OR = 1.28, 95% CI: 1.09, 1.51 (P = 0.002); ≥3 adversities: OR = 1.58, 95% CI: 1.29, 1.94 (P < 0.001)). The negative binomial regression models also indicated that respondents reported a greater number of days of sickness absence in response to worsening psychosocial job quality. These results suggest that workplace interventions aiming to improve the quality of work could help reduce sickness absence. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Wang, Yang; Wang, Xiaohua; Liu, Fangnan; Jiang, Xiaoning; Xiao, Yun; Dong, Xuehan; Kong, Xianglei; Yang, Xuemei; Tian, Donghua; Qu, Zhiyong
2016-01-01
Few studies have examined the relationship between psychological factors and the mental health status of pregnant women in rural China. The current study aims to explore the potential mediating effect of negative automatic thoughts between negative life events and antenatal depression. Data were collected in June 2012 and October 2012; 495 rural pregnant women were interviewed. Depressive symptoms were measured by the Edinburgh Postnatal Depression Scale, stresses of pregnancy by the Pregnancy Pressure Scale, negative automatic thoughts by the Automatic Thoughts Questionnaire, and negative life events by the Life Events Scale for Pregnant Women. We used logistic regression and path analysis to test the mediating effect. The prevalence of antenatal depression was 13.7%. In the logistic regression, the only socio-demographic or health-behavior factor significantly related to antenatal depression was sleep quality. Negative life events were not associated with depression in the fully adjusted model. Path analysis showed that the direct and total effects of negative automatic thoughts were 0.39 and 0.51, respectively, larger than the effects of negative life events. This study suggests a potentially significant mediating effect of negative automatic thoughts: pregnant women with lower negative automatic thoughts scores were less vulnerable to the negative life events that can lead to antenatal depression.
Khan, Anzalee; Liharska, Lora; Harvey, Philip D; Atkins, Alexandra; Ulshen, Daniel; Keefe, Richard S E
2017-12-01
Objective: Recognizing the discrete dimensions that underlie negative symptoms in schizophrenia and how these dimensions are understood across localities might result in better understanding and treatment of these symptoms. To this end, the objectives of this study were to 1) identify the Positive and Negative Syndrome Scale negative symptom dimensions of expressive deficits and experiential deficits and 2) analyze performance on these dimensions over 15 geographical regions to determine whether the items defining them manifest similar reliability across these regions. Design: Data were obtained for the baseline Positive and Negative Syndrome Scale visits of 6,889 subjects across 15 geographical regions. Using confirmatory factor analysis, we examined whether a two-factor negative symptom structure that is found in schizophrenia (experiential deficits and expressive deficits) would be replicated in our sample, and using differential item functioning, we tested the degree to which specific items from each negative symptom subfactor performed across geographical regions in comparison with the United States. Results: The two-factor negative symptom solution was replicated in this sample. Most geographical regions showed moderate-to-large differential item functioning for Positive and Negative Syndrome Scale expressive deficit items, especially N3 Poor Rapport, as compared with Positive and Negative Syndrome Scale experiential deficit items, showing that these items might be interpreted or scored differently in different regions. Across countries, except for India, the differential item functioning values did not favor raters in the United States. Conclusion: These results suggest that the Positive and Negative Syndrome Scale negative symptom factor can be better represented by a two-factor model than by a single-factor model. Additionally, the results show significant differences in responses to items representing the Positive and Negative Syndrome Scale expressive deficit factor across geographical regions.
Christensen, G D; Simpson, W A; Younger, J J; Baddour, L M; Barrett, F F; Melton, D M; Beachey, E H
1985-01-01
The adherence of coagulase-negative staphylococci to smooth surfaces was assayed by measuring the optical densities of stained bacterial films adherent to the floors of plastic tissue culture plates. The optical densities correlated with the weight of the adherent bacterial film (r = 0.906; P < 0.01). The measurements also agreed with visual assessments of bacterial adherence to culture tubes, microtiter plates, and tissue culture plates. Selected clinical strains were passed through a mouse model for foreign body infections and a rat model for catheter-induced endocarditis. The adherence measurements of animal-passed strains remained the same as those of the laboratory-maintained parent strain. Spectrophotometric classification of coagulase-negative staphylococci into nonadherent and adherent categories according to these measurements had a sensitivity, specificity, and accuracy of 90.6, 80.8, and 88.4%, respectively. We examined a previously described collection of 127 strains of coagulase-negative staphylococci isolated from an outbreak of intravascular catheter-associated sepsis; strains associated with sepsis were more adherent than blood culture contaminants and cutaneous strains (P < 0.001). We also examined a collection of 84 strains isolated from pediatric patients with cerebrospinal fluid (CSF) shunts; once again, pathogenic strains were more adherent than were CSF contaminants (P < 0.01). Finally, we measured the adherence of seven endocarditis strains. As opposed to strains associated with intravascular catheters and CSF shunts, endocarditis strains were less adherent than were saprophytic strains of coagulase-negative staphylococci. The optical densities of bacterial films adherent to plastic tissue culture plates serve as a quantitative model for the study of the adherence of coagulase-negative staphylococci to medical devices, a process which may be important in the pathogenesis of foreign body infections. PMID:3905855
The relationships between air exposure, negative pressure, and hemolysis.
Pohlmann, Joshua R; Toomasian, John M; Hampton, Claire E; Cook, Keith E; Annich, Gail M; Bartlett, Robert H
2009-01-01
The purpose of this study was to describe the hemolytic effects of both negative pressure and an air-blood interface independently and in combination in an in vitro static blood model. Samples of fresh ovine or human blood (5 ml) were subjected to a bubbling air interface (0-100 ml/min) or negative pressure (0-600 mm Hg) separately, or in combination, for controlled periods of time and analyzed for hemolysis. Neither negative pressure nor an air interface alone increased hemolysis. However, when air and negative pressure were combined, hemolysis increased as a function of negative pressure, the air interface, and time. Moreover, when blood samples were exposed to air before initiating the test, hemolysis was four to five times greater than samples not preexposed to air. When these experiments were repeated using freshly drawn human blood, the same phenomena were observed, but the hemolysis was significantly higher than that observed in sheep blood. In this model, hemolysis is caused by combined air and negative pressure and is unrelated to either factor alone.
The Relationships between Air Exposure, Negative Pressure and Hemolysis
Pohlmann, Joshua R.; Toomasian, John M.; Hampton, Claire E.; Cook, Keith E.; Annich, Gail M.; Bartlett, Robert H.
2013-01-01
The purpose of this study was to describe the hemolytic effects of both negative pressure and an air-blood interface independently and in combination in an in-vitro static blood model. Samples of fresh ovine or human blood (5 mL) were subjected to a bubbling air interface (0–100 mL/min) or negative pressure (0–600 mmHg) separately, or in combination, for controlled periods of time, and analyzed for hemolysis. Neither negative pressure nor an air interface alone increased hemolysis. However, when air and negative pressure were combined, hemolysis increased as a function of negative pressure, the air interface, and time. Moreover, when blood samples were exposed to air prior to initiating the test, hemolysis was 4–5 times greater than samples not pre-exposed to air. When these experiments were repeated using freshly drawn human blood the same phenomena were observed, but the hemolysis was significantly higher than that observed in sheep blood. In this model, hemolysis is caused by combined air and negative pressure and is unrelated to either factor alone. PMID:19730004
On the colour variations of negative superhumps
NASA Astrophysics Data System (ADS)
Imada, Akira; Yanagisawa, Kenshi; Kawai, Nobuyuki
2018-06-01
We present simultaneous g′, Rc, and Ic photometry of the notable dwarf nova ER UMa during the 2011 season. Our photometry revealed that the brightness maxima of negative superhumps coincide with the bluest peaks in g′ - Ic colour variations. We also found that the amplitudes of negative superhumps are the largest in the g′ band. These observed properties are significantly different from those observed in early and positive superhumps. Our findings are consistent with a tilted disk model as the light source of negative superhumps.
Di, Yanming; Schafer, Daniel W.; Wilhelm, Larry J.; Fox, Samuel E.; Sullivan, Christopher M.; Curzon, Aron D.; Carrington, James C.; Mockler, Todd C.; Chang, Jeff H.
2011-01-01
GENE-counter is a complete Perl-based computational pipeline for analyzing RNA-Sequencing (RNA-Seq) data for differential gene expression. In addition to its use in studying transcriptomes of eukaryotic model organisms, GENE-counter is applicable for prokaryotes and non-model organisms without an available genome reference sequence. For alignments, GENE-counter is configured for CASHX, Bowtie, and BWA, but an end user can use any Sequence Alignment/Map (SAM)-compliant program of preference. To analyze data for differential gene expression, GENE-counter can be run with any one of three statistics packages that are based on variations of the negative binomial distribution. The default method is a new and simple statistical test we developed based on an over-parameterized version of the negative binomial distribution. GENE-counter also includes three different methods for assessing differentially expressed features for enriched gene ontology (GO) terms. Results are transparent and data are systematically stored in a MySQL relational database to facilitate additional analyses as well as quality assessment. We used next generation sequencing to generate a small-scale RNA-Seq dataset derived from the heavily studied defense response of Arabidopsis thaliana and used GENE-counter to process the data. Collectively, the support from analysis of microarrays as well as the observed and substantial overlap in results from each of the three statistics packages demonstrates that GENE-counter is well suited for handling the unique characteristics of small sample sizes and high variability in gene counts. PMID:21998647
Xie, Haiyi; Tao, Jill; McHugo, Gregory J; Drake, Robert E
2013-07-01
Count data with skewness and many zeros are common in substance abuse and addiction research. Zero-adjusting models, especially zero-inflated models, have become increasingly popular in analyzing this type of data. This paper reviews and compares five mixed-effects Poisson family models commonly used to analyze count data with a high proportion of zeros by analyzing a longitudinal outcome: number of smoking quit attempts from the New Hampshire Dual Disorders Study. The findings of our study indicated that count data with many zeros do not necessarily require zero-inflated or other zero-adjusting models. For rare event counts or count data with small means, a simpler model such as the negative binomial model may provide a better fit. Copyright © 2013 Elsevier Inc. All rights reserved.
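The paper's point that many zeros need not imply zero-inflation can be illustrated by simulation: a plain negative binomial with a small mean already yields a large zero fraction. A stdlib-only sketch via the gamma-Poisson mixture (the mean and dispersion values are arbitrary choices, not taken from the study):

```python
import math
import random

random.seed(7)

def poisson(lam):
    # Knuth's inversion method; adequate for small rates
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def neg_binomial(mean, k):
    # Gamma-Poisson mixture: Poisson rate drawn from Gamma(shape=k, scale=mean/k)
    return poisson(random.gammavariate(k, mean / k))

counts = [neg_binomial(mean=0.5, k=0.8) for _ in range(5000)]
zero_frac = sum(c == 0 for c in counts) / len(counts)
m = sum(counts) / len(counts)
v = sum((c - m) ** 2 for c in counts) / len(counts)
# Roughly two-thirds of the simulated counts are zero, and the variance
# exceeds the mean, even though no zero-inflation component was simulated.
print(round(zero_frac, 2), round(m, 2), round(v, 2))
```

This is why, for rare events with small means, a plain negative binomial fit can outperform a zero-inflated one.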
Singh, Balraj; Shamsnia, Anna; Raythatha, Milan R.; Milligan, Ryan D.; Cady, Amanda M.; Madan, Simran; Lucci, Anthony
2014-01-01
A major obstacle in developing effective therapies against solid tumors stems from an inability to adequately model the rare subpopulation of panresistant cancer cells that may often drive the disease. We describe a strategy for optimally modeling highly abnormal and highly adaptable human triple-negative breast cancer cells, and evaluating therapies for their ability to eradicate such cells. To overcome the shortcomings often associated with cell culture models, we incorporated several features in our model including a selection of highly adaptable cancer cells based on their ability to survive a metabolic challenge. We have previously shown that metabolically adaptable cancer cells efficiently metastasize to multiple organs in nude mice. Here we show that the cancer cells modeled in our system feature an embryo-like gene expression and amplification of the fat mass and obesity associated gene FTO. We also provide evidence of upregulation of ZEB1 and downregulation of GRHL2 indicating increased epithelial to mesenchymal transition in metabolically adaptable cancer cells. Our results obtained with a variety of anticancer agents support the validity of the model of realistic panresistance and suggest that it could be used for developing anticancer agents that would overcome panresistance. PMID:25279830
Singh, Balraj; Shamsnia, Anna; Raythatha, Milan R; Milligan, Ryan D; Cady, Amanda M; Madan, Simran; Lucci, Anthony
2014-01-01
A major obstacle in developing effective therapies against solid tumors stems from an inability to adequately model the rare subpopulation of panresistant cancer cells that may often drive the disease. We describe a strategy for optimally modeling highly abnormal and highly adaptable human triple-negative breast cancer cells, and evaluating therapies for their ability to eradicate such cells. To overcome the shortcomings often associated with cell culture models, we incorporated several features in our model including a selection of highly adaptable cancer cells based on their ability to survive a metabolic challenge. We have previously shown that metabolically adaptable cancer cells efficiently metastasize to multiple organs in nude mice. Here we show that the cancer cells modeled in our system feature an embryo-like gene expression and amplification of the fat mass and obesity associated gene FTO. We also provide evidence of upregulation of ZEB1 and downregulation of GRHL2 indicating increased epithelial to mesenchymal transition in metabolically adaptable cancer cells. Our results obtained with a variety of anticancer agents support the validity of the model of realistic panresistance and suggest that it could be used for developing anticancer agents that would overcome panresistance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biagi, C J; Uman, M A
2011-12-13
There are relatively few reports in the literature focusing on negative laboratory leaders. Most of the reports focus exclusively on the simpler positive laboratory leader that is more commonly encountered in high voltage engineering [Gorin et al., 1976; Les Renardieres Group, 1977; Gallimberti, 1979; Domens et al., 1994; Bazelyan and Raizer 1998]. The physics of the long, negative leader and its positive counterpart are similar; the two differ primarily in their extension mechanisms [Bazelyan and Raizer, 1998]. Long negative sparks extend primarily by an intermittent process termed a 'step' that requires the development of secondary leader channels separated in space from the primary leader channel. Long positive sparks typically extend continuously, although, under proper conditions, their extension can be temporarily halted and begun again, and this is sometimes viewed as a stepping process. However, it is emphasized that the nature of positive leader stepping is not like that of negative leader stepping. There are several key observational studies of the propagation of long, negative-polarity laboratory sparks in air that have aided in the understanding of the stepping mechanisms exhibited by such sparks [e.g., Gorin et al., 1976; Les Renardieres Group, 1981; Ortega et al., 1994; Reess et al., 1995; Bazelyan and Raizer, 1998; Gallimberti et al., 2002]. These reports are reviewed below in Section 2, with emphasis placed on the stepping mechanism (the space stem, pilot, and space leader). Then, in Section 3, reports pertaining to modeling of long negative leaders are summarized.
How absent negativity relates to affect and motivation: an integrative relief model
Deutsch, Roland; Smith, Kevin J. M.; Kordts-Freudinger, Robert; Reichardt, Regina
2015-01-01
The present paper concerns the motivational underpinnings and behavioral correlates of the prevention or stopping of negative stimulation – a situation referred to as relief. Relief is of great theoretical and applied interest. Theoretically, it is tied to theories linking affect, emotion, and motivational systems. Importantly, these theories make different predictions regarding the association between relief and motivational systems. Moreover, relief is a prototypical antecedent of counterfactual emotions, which involve specific cognitive processes compared to factual or mere anticipatory emotions. Practically, relief may be an important motivator of addictive and phobic behaviors, self-destructive behaviors, and social influence. In the present paper, we will first provide a review of conflicting conceptualizations of relief. We will then present an integrative relief model (IRMO) that aims at resolving existing theoretical conflicts. We then review evidence relevant to distinctive predictions regarding the moderating role of various procedural features of relief situations. We conclude that our integrated model results in a better understanding of existing evidence on the affective and motivational underpinnings of relief, but that further evidence is needed to come to a more comprehensive evaluation of the viability of IRMO. PMID:25806008
How absent negativity relates to affect and motivation: an integrative relief model.
Deutsch, Roland; Smith, Kevin J M; Kordts-Freudinger, Robert; Reichardt, Regina
2015-01-01
The present paper concerns the motivational underpinnings and behavioral correlates of the prevention or stopping of negative stimulation - a situation referred to as relief. Relief is of great theoretical and applied interest. Theoretically, it is tied to theories linking affect, emotion, and motivational systems. Importantly, these theories make different predictions regarding the association between relief and motivational systems. Moreover, relief is a prototypical antecedent of counterfactual emotions, which involve specific cognitive processes compared to factual or mere anticipatory emotions. Practically, relief may be an important motivator of addictive and phobic behaviors, self-destructive behaviors, and social influence. In the present paper, we will first provide a review of conflicting conceptualizations of relief. We will then present an integrative relief model (IRMO) that aims at resolving existing theoretical conflicts. We then review evidence relevant to distinctive predictions regarding the moderating role of various procedural features of relief situations. We conclude that our integrated model results in a better understanding of existing evidence on the affective and motivational underpinnings of relief, but that further evidence is needed to come to a more comprehensive evaluation of the viability of IRMO.
Golub, Sarit A; Thompson, Louisa I; Kowalczyk, William J
2016-01-01
We investigated the relationship between emotional distress and decision making in sexual risk and substance use behavior among 174 (ages 25 to 50 years, 53% black) men who have sex with men (MSM), a population at increased risk for HIV. The sample was stratified by HIV status. Measures of affective decision making (Iowa Gambling Task; IGT), depression, anxiety, sex acts, and substance use during the past 60 days were collected at our research center. Negative binomial regression models were used to examine the relationship between age, HIV status, anxiety, depression, and IGT performance in the prediction of number of risky sex acts and substance use days. Among those without anxiety or depression, both number of risky sex acts and drug use days decreased with better performance during risky trials (i.e., last two blocks) of the IGT. For those with higher rates of anxiety, but not depression, IGT risk trial performance and risky sex acts increased concomitantly. Anxiety also interacted with IGT performance across all trials to predict substance use, such that anxiety was associated with greater substance use among those with better IGT performance. The opposite was true for those with depression, but only during risk trials. HIV-positive participants reported fewer substance use days than HIV-negative participants, but there was no difference in association between behavior and IGT performance by HIV status. Our findings suggest that anxiety may exacerbate risk-taking behavior when affective decision-making ability is intact. The relationship between affective decision making and risk taking may be sensitive to different profiles of emotional distress, as well as behavioral context. Investigations of affective decision making in sexual risk taking and substance use should examine different distress profiles separately, with implications for HIV prevention efforts.
Golub, Sarit A.; Thompson, Louisa I.; Kowalczyk, William J.
2016-01-01
We investigated the relationship between emotional distress and decision-making in sexual risk and substance use behavior among 174 (ages 25 to 50, 53% black) men who have sex with men (MSM), a population at increased risk for HIV. The sample was stratified by HIV status. Measures of affective decision-making (Iowa Gambling Task, IGT, Bechara et al., 1994), depression, anxiety, sex acts, and substance use during the past 60 days were collected at our research center. Negative binomial regression models were used to examine the relationship between age, HIV status, anxiety, depression, and IGT performance in the prediction of number of risky sex acts and substance use days. Among those without anxiety or depression, both number of risky sex acts and drug use days decreased with better performance during risky trials (i.e., last two blocks) of the IGT. For those with higher rates of anxiety, but not depression, IGT risk trial performance and risky sex acts increased concomitantly. Anxiety also interacted with IGT performance across all trials to predict substance use, such that anxiety was associated with greater substance use among those with better IGT performance. The opposite was true for those with depression, but only during risk trials. HIV-positive participants reported fewer substance use days than HIV-negative participants, but there was no difference in association between behavior and IGT performance by HIV status. Our findings suggest that anxiety may exacerbate risk-taking behavior when affective decision-making ability is intact. The relationship between affective decision-making and risk taking may be sensitive to different profiles of emotional distress, as well as behavioral context. Investigations of affective decision-making in sexual risk taking and substance use should examine different distress profiles separately, with implications for HIV prevention efforts. PMID:26745769
Dental Caries and Enamel Defects in Very Low Birth Weight Adolescents
Nelson, S.; Albert, J.M.; Lombardi, G.; Wishnek, S.; Asaad, G.; Kirchner, H.L.; Singer, L.T.
2011-01-01
Objectives The purpose of this study was to examine developmental enamel defects and dental caries in very low birth weight adolescents with high risk (HR-VLBW) and low risk (LR-VLBW) compared to full-term (term) adolescents. Methods The sample consisted of 224 subjects (80 HR-VLBW, 59 LR-VLBW, 85 term adolescents) recruited from an ongoing longitudinal study. Sociodemographic and medical information was available from birth. Dental examination of the adolescent at the 14-year visit included: enamel defects (opacity and hypoplasia); decayed, missing, filled teeth of incisors and molars (DMFT-IM) and of overall permanent teeth (DMFT); Simplified Oral Hygiene Index for debris/calculus on teeth, and sealant presence. A caregiver questionnaire completed simultaneously assessed dental behavior, access, insurance status and prevention factors. Hierarchical analysis utilized the zero-inflated negative binomial model and zero-inflated Poisson model. Results The zero-inflated negative binomial model controlling for sociodemographic variables indicated that the LR-VLBW group had an estimated 75% increase (p < 0.05) in number of demarcated opacities in the incisors and first molar teeth compared to the term group. Hierarchical modeling indicated that demarcated opacities were a significant predictor of DMFT-IM after control for relevant covariates. The term adolescents had significantly increased DMFT-IM and DMFT scores compared to the LR-VLBW adolescents. Conclusion LR-VLBW was a significant risk factor for increased enamel defects in the permanent incisors and first molars. Term children had increased caries compared to the LR-VLBW group. The effect of birth group and enamel defects on caries has to be investigated longitudinally from birth. PMID:20975268
Heidar, Z; Bakhtiyari, M; Mirzamoradi, M; Zadehmodarres, S; Sarfjoo, F S; Mansournia, M A
2015-09-01
The purpose of this study was to predict the poor and excessive ovarian response using anti-Müllerian hormone (AMH) levels following a long agonist protocol in IVF candidates. Through a prospective cohort study, the type of relationship and appropriate scale for AMH were determined using fractional polynomial regression. To determine the effect of AMH on the outcomes of ovarian stimulation and different ovarian responses, multinomial and negative binomial regression models were fitted using a backward stepwise method. The ovarian response of study subjects who entered a standard long-term treatment cycle with GnRH agonist was evaluated using the prediction models, separately and in combination, with receiver operating characteristic (ROC) curves. The use of standard long-term treatments with GnRH agonist led to positive pregnancy test results in 30% of treated patients. With each unit increase in the log of AMH, the odds ratio of having a poor response compared to a normal response decreases by 64% (OR 0.36, 95% CI 0.19-0.68). Also, the results of the negative binomial regression model indicated that for a one-unit increase in the log of AMH blood levels, the odds of releasing an oocyte increased 24% (OR 1.24, 95% CI 1.14-1.35). The optimal cut-off points of AMH for predicting excessive and poor ovarian responses were 3.4 and 1.2 ng/ml, respectively, with areas under the curve of 0.69 (0.60-0.77) and 0.76 (0.66-0.86), respectively. By considering the age of the patient undergoing infertility treatment as a variable affecting ovulation, the AMH level proved a good test to discriminate between different ovarian responses.
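An optimal ROC cut-off of the kind reported above is often the threshold maximizing Youden's J (sensitivity + specificity - 1). A small sketch with hypothetical AMH values (not the study's data), constructed so the best threshold happens to land at 3.4 ng/ml:

```python
def roc_points(pos, neg, thresholds):
    """(threshold, true-positive rate, false-positive rate) for each candidate cut."""
    pts = []
    for t in thresholds:
        tpr = sum(x >= t for x in pos) / len(pos)
        fpr = sum(x >= t for x in neg) / len(neg)
        pts.append((t, tpr, fpr))
    return pts

def optimal_cutoff(pos, neg):
    """Threshold maximizing Youden's J = TPR - FPR."""
    thresholds = sorted(set(pos) | set(neg))
    return max(roc_points(pos, neg, thresholds), key=lambda p: p[1] - p[2])

# Hypothetical AMH levels (ng/ml): excessive responders vs. the rest.
excessive = [3.6, 4.1, 5.0, 3.9, 6.2, 3.4, 4.8]
others = [1.1, 2.0, 2.9, 1.8, 3.3, 0.9, 2.5, 3.1]
t, tpr, fpr = optimal_cutoff(excessive, others)
print(t, tpr, fpr)
```

Real data would overlap far more than this toy example, giving the intermediate AUC values the study reports.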
Clinical and MRI activity as determinants of sample size for pediatric multiple sclerosis trials
Verhey, Leonard H.; Signori, Alessio; Arnold, Douglas L.; Bar-Or, Amit; Sadovnick, A. Dessa; Marrie, Ruth Ann; Banwell, Brenda
2013-01-01
Objective: To estimate sample sizes for pediatric multiple sclerosis (MS) trials using new T2 lesion count, annualized relapse rate (ARR), and time to first relapse (TTFR) endpoints. Methods: Poisson and negative binomial models were fit to new T2 lesion and relapse count data, and negative binomial time-to-event and exponential models were fit to TTFR data of 42 children with MS enrolled in a national prospective cohort study. Simulations were performed by resampling from the best-fitting model of new T2 lesion count, number of relapses, or TTFR, under various assumptions of the effect size, trial duration, and model parameters. Results: Assuming a 50% reduction in new T2 lesions over 6 months, 90 patients/arm are required, whereas 165 patients/arm are required for a 40% treatment effect. Sample sizes for 2-year trials using relapse-related endpoints are lower than that for 1-year trials. For 2-year trials and a conservative assumption of overdispersion (ϑ), sample sizes range from 70 patients/arm (using ARR) to 105 patients/arm (TTFR) for a 50% reduction in relapses, and 230 patients/arm (ARR) to 365 patients/arm (TTFR) for a 30% relapse reduction. Assuming a less conservative ϑ, 2-year trials using ARR require 45 patients/arm (60 patients/arm for TTFR) for a 50% reduction in relapses and 145 patients/arm (200 patients/arm for TTFR) for a 30% reduction. Conclusion: Six-month phase II trials using new T2 lesion count as an endpoint are feasible in the pediatric MS population; however, trials powered on ARR or TTFR will need to be 2 years in duration and will require multicentered collaboration. PMID:23966255
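Simulation-based sample-size estimation of the kind described above can be sketched by resampling counts from a fitted negative binomial and counting rejections. Everything below is an illustrative assumption (control relapse rate, dispersion, number of simulated trials), and a simple Welch-type z-test on arm means stands in for the study's model-based analysis:

```python
import math
import random

random.seed(11)

def poisson(lam):
    # Knuth's inversion method; adequate for small rates
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def neg_binomial(mean, k):
    # Gamma-Poisson mixture with dispersion parameter k
    return poisson(random.gammavariate(k, mean / k))

def simulated_power(n_per_arm, mu_ctrl, reduction, k, trials=400, z_crit=1.96):
    """Fraction of simulated two-arm trials rejecting H0 with a Welch-type z-test."""
    hits = 0
    for _ in range(trials):
        ctrl = [neg_binomial(mu_ctrl, k) for _ in range(n_per_arm)]
        trt = [neg_binomial(mu_ctrl * (1 - reduction), k) for _ in range(n_per_arm)]
        m1, m2 = sum(ctrl) / n_per_arm, sum(trt) / n_per_arm
        v1 = sum((x - m1) ** 2 for x in ctrl) / (n_per_arm - 1)
        v2 = sum((x - m2) ** 2 for x in trt) / (n_per_arm - 1)
        z = (m1 - m2) / math.sqrt(v1 / n_per_arm + v2 / n_per_arm)
        hits += abs(z) > z_crit
    return hits / trials

# 70 patients/arm, control relapse rate 1.0/year, 50% reduction, dispersion 1.0
p = simulated_power(70, 1.0, 0.5, 1.0)
print(round(p, 2))
```

The estimated power depends strongly on the dispersion parameter, which is exactly why the abstract reports separate sample sizes for conservative and less conservative ϑ.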
Neuroticism, coping strategies, and negative well-being among caregivers.
Patrick, J H; Hayden, J M
1999-06-01
Neuroticism was incorporated into a model for predicting the well-being of family caregivers. Using data from 596 women with an adult child with a chronic disability, the model hypothesizes direct effects of neuroticism on a caregiver's perceptions of the stressor, on her wishful-escapism and problem-focused coping, and on psychological well-being. Results indicate that neuroticism exerts direct and indirect effects on negative well-being. Results also indicate that stressors have direct effects on both wishful-escapism coping and problem-focused coping. Burden had direct effects on negative psychological well-being. Diagnosis influences the model by having direct effects on stressors and wishful-escapism coping but not on problem-focused coping or burden. Inclusion of individual level variables, such as neuroticism, results in a substantial amount of explained variance in negative well-being.
Negative obstacle detection by thermal signature
NASA Technical Reports Server (NTRS)
Matthies, Larry; Rankin, A.
2003-01-01
Detecting negative obstacles (ditches, potholes, and other depressions) is one of the most difficult problems in perception for autonomous, off-road navigation. Past work has largely relied on range imagery, because it is based on the geometry of the obstacle and is largely insensitive to illumination variation, and because there have been no other reliable alternatives. However, the visible aspect of negative obstacles shrinks rapidly with range, making them impossible to detect in time to avoid them at high speed. To relieve this problem, we show that the interiors of negative obstacles generally remain warmer than the surrounding terrain throughout the night, making thermal signature a stable property for night-time negative obstacle detection. Experimental results to date have achieved detection distances 45% greater by using thermal signature than by using range data alone. Thermal signature is the first known observable with potential to reveal a deep negative obstacle without actually seeing far into it. Modeling solar illumination has potential to extend the usefulness of thermal signature through daylight hours.
Bayesian hierarchical modeling for detecting safety signals in clinical trials.
Xia, H Amy; Ma, Haijun; Carlin, Bradley P
2011-09-01
Detection of safety signals from clinical trial adverse event data is critical in drug development, but carries a challenging statistical multiplicity problem. Bayesian hierarchical mixture modeling is appealing for its ability to borrow strength across subgroups in the data, as well as moderate extreme findings most likely due merely to chance. We implement such a model for subject incidence (Berry and Berry, 2004) using a binomial likelihood, and extend it to subject-year adjusted incidence rate estimation under a Poisson likelihood. We use simulation to choose a signal detection threshold, and illustrate some effective graphics for displaying the flagged signals.
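The borrowing of strength described above can be illustrated with a much simpler conjugate stand-in (not the Berry-and-Berry hierarchical mixture model itself): a common Gamma prior on subgroup adverse-event rates with Poisson event counts pulls extreme crude rates toward the prior mean. All counts, exposures, and prior parameters below are hypothetical:

```python
def shrunken_rates(events, exposure, alpha, beta):
    """Posterior mean AE rates under a shared Gamma(alpha, beta) prior on each
    subgroup's true rate with Poisson event counts (conjugate update)."""
    return [(y + alpha) / (t + beta) for y, t in zip(events, exposure)]

# Hypothetical AE counts and subject-years across four subgroups, with a
# common Gamma(2, 100) prior (prior mean rate 0.02/subject-year).
events = [0, 3, 12, 1]
years = [50.0, 60.0, 55.0, 400.0]
post = shrunken_rates(events, years, alpha=2.0, beta=100.0)
crude = [y / t for y, t in zip(events, years)]
# Zero-event subgroups are pulled up from 0, and the high outlier is
# moderated downward toward the prior mean.
print([round(r, 3) for r in post])
```

A full hierarchical model would also estimate the prior from the data and mix over a point mass at "no elevated risk"; this sketch only shows the shrinkage mechanism.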
Mezulis, Amy H; Hyde, Janet Shibley; Abramson, Lyn Y
2006-11-01
Cognitive models of depression have been well supported with adults, but the developmental origins of cognitive vulnerability are not well understood. The authors hypothesized that temperament, parenting, and negative life events in childhood would contribute to the development of cognitive style, with withdrawal negativity and negative parental feedback moderating the effects of negative life events to predict more depressogenic cognitive styles. These constructs were assessed in 289 children and their parents followed longitudinally from infancy to 5th grade; a subsample (n = 120) also participated in a behavioral task in which maternal feedback to child failure was observed. Results indicated that greater withdrawal negativity in interaction with negative life events was associated with more negative cognitive styles. Self-reported maternal anger expression and observed negative maternal feedback to child's failure significantly interacted with child's negative events to predict greater cognitive vulnerability. There was little evidence of paternal parenting predicting child negative cognitive style.
Silver enhances antibiotic activity against gram-negative bacteria.
Morones-Ramirez, J Ruben; Winkler, Jonathan A; Spina, Catherine S; Collins, James J
2013-06-19
A declining pipeline of clinically useful antibiotics has made it imperative to develop more effective antimicrobial therapies, particularly against difficult-to-treat Gram-negative pathogens. Silver has been used as an antimicrobial since antiquity, yet its mechanism of action remains unclear. We show that silver disrupts multiple bacterial cellular processes, including disulfide bond formation, metabolism, and iron homeostasis. These changes lead to increased production of reactive oxygen species and increased membrane permeability of Gram-negative bacteria that can potentiate the activity of a broad range of antibiotics against Gram-negative bacteria in different metabolic states, as well as restore antibiotic susceptibility to a resistant bacterial strain. We show both in vitro and in a mouse model of urinary tract infection that the ability of silver to induce oxidative stress can be harnessed to potentiate antibiotic activity. Additionally, we demonstrate in vitro and in two different mouse models of peritonitis that silver sensitizes Gram-negative bacteria to the Gram-positive-specific antibiotic vancomycin, thereby expanding the antibacterial spectrum of this drug. Finally, we used silver and antibiotic combinations in vitro to eradicate bacterial persister cells, and show both in vitro and in a mouse biofilm infection model that silver can enhance antibacterial action against bacteria that produce biofilms. This work shows that silver can be used to enhance the action of existing antibiotics against Gram-negative bacteria, thus strengthening the antibiotic arsenal for fighting bacterial infections.
Nikčević, Ana V; Alma, Leyla; Marino, Claudia; Kolubinski, Daniel; Yılmaz-Samancı, Adviye Esin; Caselli, Gabriele; Spada, Marcantonio M
2017-11-01
Both positive smoking outcome expectancies and metacognitions about smoking have been found to be positively associated with cigarette use and nicotine dependence. The goal of this study was to test a model including nicotine dependence and number of daily cigarettes as dependent variables, anxiety and depression as independent variables, and smoking outcome expectancies and metacognitions about smoking as mediators between the independent and dependent variables. The sample consisted of 524 self-declared smokers who scored 3 or above on the Fagerstrom Test for Nicotine Dependence (FTND: Uysal et al., 2004). Anxiety was not associated with either cigarette use or nicotine dependence but was positively associated with all mediators except stimulation state enhancement and social facilitation. Depression, on the other hand, was found to be positively associated with nicotine dependence (and, very weakly, with cigarette use) but was not associated with either smoking outcome expectancies or metacognitions about smoking. Only one smoking outcome expectancy (negative affect reduction) was found to be positively associated with nicotine dependence but not with cigarette use. Furthermore, one smoking outcome expectancy (negative social impression) was found to be positively associated with cigarette use (but not with nicotine dependence). All metacognitions about smoking were found to be positively associated with nicotine dependence. Moreover, negative metacognitions about uncontrollability were found to be positively associated with cigarette use. Metacognitions about smoking appear to be a stronger mediator than smoking outcome expectancies in the relationship between negative affect and cigarette use/nicotine dependence. The implications of these findings are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ecological covariates based predictive model of malaria risk in the state of Chhattisgarh, India.
Kumar, Rajesh; Dash, Chinmaya; Rani, Khushbu
2017-09-01
Malaria is endemic in the state of Chhattisgarh and, as a mosquito-borne disease, is ecologically dependent; this study was intended to identify the ecological covariates of malaria risk in the districts of the state and to build a suitable predictive model based on those predictors, which could assist in developing a weather-based early warning system. This secondary data analysis used one-month-lagged district-level malaria-positive cases as the response variable and ecological covariates as independent variables, tested with fixed-effect panelled negative binomial regression models. Interactions among the covariates were explored using two-way factorial interactions in the model. Although malaria risk in the state is perennial, higher parasitic incidence was observed during the rainy and winter seasons. The univariate analysis indicated that malaria incidence risk was statistically significantly associated with rainfall, maximum humidity, minimum temperature, wind speed, and forest cover (p < 0.05). The most efficient predictive model included forest cover [IRR 1.033 (1.024-1.042)], maximum humidity [IRR 1.016 (1.013-1.018)], and a two-way factorial interaction between the district-specific averaged monthly minimum temperature and the monthly minimum temperature; the monthly minimum temperature term was statistically significant [IRR 1.44 (1.231-1.695)], whereas the interaction term had a protective effect [IRR 0.982 (0.974-0.990)] against malaria infections. Forest cover, maximum humidity, minimum temperature, and wind speed emerged as potential covariates for predictive models of malaria risk in the state, which could be used efficiently in early warning systems.
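The incidence-rate ratios (IRRs) quoted above are exponentiated negative binomial regression coefficients, with Wald intervals exp(beta ± 1.96·se). A minimal sketch of that mapping follows; the coefficient and standard error here are hypothetical values chosen only to reproduce numbers of the reported magnitude, not the study's fitted estimates.

```python
import math

# Hedged illustration: converting a negative binomial regression coefficient
# to an incidence-rate ratio with a Wald 95% interval. IRR = exp(beta);
# interval = exp(beta -/+ z*se). The beta and se below are hypothetical.

def irr_with_ci(beta, se, z=1.96):
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

irr, lo, hi = irr_with_ci(beta=0.0325, se=0.0045)
print(f"IRR {irr:.3f} ({lo:.3f}-{hi:.3f})")  # IRR 1.033 (1.024-1.042)
```

An IRR above 1 (e.g. forest cover) multiplies expected case counts per unit increase in the covariate; an IRR below 1 (the interaction term) is protective.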
Explaining negative refraction without negative refractive indices.
Talalai, Gregory A; Garner, Timothy J; Weiss, Steven J
2018-03-01
Using array theory, negative refraction through a triangular prism may be explained without assigning a negative refractive index to the prism. For a beam incident upon the wedge, array theory accurately predicts the beam transmission angle through the prism and provides an estimate of the frequency interval at which negative refraction occurs. The hypotenuse of the prism has a staircase shape because it is built of cubic unit cells. The large phase delay imparted by each unit cell, combined with the staircase shape of the hypotenuse, creates the necessary conditions for negative refraction. Full-wave simulations using the finite-difference time-domain method show that array theory accurately predicts the beam transmission angle.
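The array-theory mechanism described above can be sketched with the standard grating-lobe condition for a periodic array: for element spacing d and per-cell phase delay psi, array-factor peaks occur where k·d·sin(theta) + 2·pi·m = psi. This is a generic illustration under assumed parameters (cell size, frequency, and phase delay are hypothetical), not the paper's specific prism design.

```python
import math

# Generic array-theory sketch (assumed parameters, not the paper's design):
# peaks of the array factor satisfy k*d*sin(theta) = psi - 2*pi*m, so
# sin(theta) = (psi - 2*pi*m) / (k*d). A large per-cell phase delay psi can
# suppress the m = 0 beam and place the m = 1 grating lobe at a negative
# angle, which looks like negative refraction without a negative index.

def beam_angle_deg(freq_hz, d_m, psi_rad, m=0):
    k = 2 * math.pi * freq_hz / 3e8          # free-space wavenumber
    s = (psi_rad - 2 * math.pi * m) / (k * d_m)
    return math.degrees(math.asin(s)) if abs(s) <= 1 else None

# hypothetical cell: d = 10 mm, per-cell delay 1.5*pi, at 10 GHz
print(beam_angle_deg(10e9, 0.01, 1.5 * math.pi, m=0))  # None: no m=0 beam
print(beam_angle_deg(10e9, 0.01, 1.5 * math.pi, m=1))  # negative-angle lobe
```

In this toy configuration the m = 0 solution is evanescent (|sin theta| > 1) while the m = 1 lobe radiates at roughly -49 degrees, illustrating how a large unit-cell phase delay plus periodicity can yield a negatively deflected beam.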
Experimental model of atelectasis in newborn piglets.
Comaru, Talitha; Fiori, Humberto Holmer; Fiori, Renato Machado; Padoim, Priscila; Stivanin, Jaqueline Basso; da Silva, Vinicius Duval
2014-01-01
There are few studies using animal models in chest physical therapy, and no models exist to assess the effects of such therapy in newborns. This study aimed to develop a model of obstructive atelectasis induced by artificial mucus injection in the lungs of newborn piglets, for the study of neonatal physiotherapy. Thirteen newborn piglets received artificial mucus injection via the endotracheal tube. X-rays and blood gas analysis confirmed the atelectasis. The model showed consistent results between oxygenation parameters and radiological findings. Ten (76.9%) of the 13 piglets responded to the intervention. This did not significantly differ from the expected percentage of 50% by the binomial test (95% CI 46.2-95%, P = .09). Our model of atelectasis in newborn piglets is both feasible and appropriate to evaluate the impact of physical therapies on atelectasis in newborns.
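The binomial test reported above (10 responders of 13 against an expected proportion of 0.5) can be reproduced exactly. Because the null distribution is symmetric at p = 0.5, the two-sided p-value is simply twice the upper tail; this sketch uses only the standard library.

```python
from math import comb

# Exact two-sided binomial test for p = 0.5: with a symmetric null
# distribution, the two-sided p-value is twice the upper-tail probability
# P(X >= successes) for X ~ Binomial(n, 0.5).

def binom_two_sided_p_half(successes, n):
    upper_tail = sum(comb(n, k) for k in range(successes, n + 1)) / 2**n
    return 2 * upper_tail

p = binom_two_sided_p_half(10, 13)
print(round(p, 2))  # 0.09, matching the reported P = .09
```

The exact value is 756/8192 ≈ 0.092, consistent with the abstract's conclusion that the 76.9% response rate does not differ significantly from 50%.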