Random effects coefficient of determination for mixed and meta-analysis models
Demidenko, Eugene; Sargent, James; Onega, Tracy
2011-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, Rr2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If Rr2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of Rr2 away from 0 indicates evidence of variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects; the model can then be estimated using the dummy variable approach. We derive explicit formulas for Rr2 in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of a meta-analysis model combining 13 studies of a tuberculosis vaccine. PMID:23750070
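As a toy illustration of the variance decomposition behind such a coefficient (a method-of-moments sketch for a balanced random-intercept model, not Demidenko et al.'s exact Rr2 formula), one can estimate the share of outcome variance attributable to the random intercepts:

```python
import numpy as np

def variance_explained_by_random_intercept(y, groups):
    """Toy variance decomposition for y_ij = mu + b_i + e_ij
    (method of moments, balanced design assumed)."""
    labels = np.unique(groups)
    group_means = np.array([y[groups == g].mean() for g in labels])
    n_per = len(y) // len(labels)
    # within-group (residual) variance
    s2_e = np.mean([y[groups == g].var(ddof=1) for g in labels])
    # between-group variance, corrected for sampling noise of the group means
    s2_b = max(group_means.var(ddof=1) - s2_e / n_per, 0.0)
    return s2_b / (s2_b + s2_e)

rng = np.random.default_rng(0)
m, n = 200, 20                        # groups, observations per group
b = rng.normal(0, 2.0, size=m)        # random intercepts, sd = 2
groups = np.repeat(np.arange(m), n)
y = 1.0 + b[groups] + rng.normal(0, 1.0, size=m * n)   # residual sd = 1
r = variance_explained_by_random_intercept(y, groups)
print(round(r, 3))                    # true ratio is 4 / (4 + 1) = 0.8
```

A ratio near 0 would suggest the random intercepts can be dropped; a ratio near 1 would suggest treating the groups as free fixed effects, matching the interpretation in the abstract.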
Zero-inflated count models for longitudinal measurements with heterogeneous random effects.
Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M
2017-08-01
Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention- and covariate-specific heterogeneity can produce biased estimates of both covariate effects and random effects, and that these biases can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States, with 1383 individuals.
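A minimal simulation sketch of the data structure described here (hypothetical parameter values; the paper's model and estimation are much richer): a zero-inflated Poisson with a subject-level random intercept whose standard deviation differs by arm, i.e. the heterogeneous random-effect variance of interest.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_zip(n_subj, n_obs, sigma_b, pi_zero, beta0=1.0):
    """Zero-inflated Poisson counts with a subject random intercept whose
    sd (sigma_b) may differ by covariate group (heterogeneous variance)."""
    b = rng.normal(0, sigma_b, size=n_subj)          # random intercepts
    eta = beta0 + np.repeat(b, n_obs)                # linear predictor
    counts = rng.poisson(np.exp(eta))
    structural_zero = rng.random(n_subj * n_obs) < pi_zero
    return np.where(structural_zero, 0, counts)

# two arms differing only in random-effect variance
y_ctrl = simulate_zip(300, 5, sigma_b=0.3, pi_zero=0.4)
y_trt  = simulate_zip(300, 5, sigma_b=1.0, pi_zero=0.4)

# excess zeros relative to a plain Poisson with the same mean
for name, y in [("control", y_ctrl), ("treatment", y_trt)]:
    p_zero_obs = (y == 0).mean()
    p_zero_poisson = np.exp(-y.mean())
    print(name, round(p_zero_obs, 2), round(p_zero_poisson, 4))
```

The printout shows why a plain Poisson fit is inadequate (far more zeros than it predicts) and how the larger random-effect variance in the second arm inflates the count dispersion.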
Wang, Wei; Griswold, Michael E
2016-11-30
The random effect Tobit model is a regression model that accommodates left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference about overall exposure effects on the original outcome scale. The marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal-space and boundary components of the censored response to estimate overall exposure effects at the population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure statuses in a designated reference group by integrating over the random effects, and then use the calculated difference to assess the overall exposure effect. Maximum likelihood estimation is carried out with a quasi-Newton optimization algorithm, using Gauss-Hermite quadrature to approximate the integration over the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.
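A small numerical sketch of the marginalization step (toy values; not the authors' estimator): for a left-censored-at-zero outcome with a normal random intercept, the conditional censored mean has a closed form, and Gauss-Hermite quadrature over the random effect recovers the marginal mean, which here also has a closed form because the random effect and residual are both Gaussian.

```python
import numpy as np
from math import erf, sqrt, pi, exp

Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))     # standard normal CDF
phi = lambda z: exp(-z * z / 2) / sqrt(2 * pi)   # standard normal pdf

def censored_mean(mu, sigma):
    """E[max(mu + sigma*Z, 0)] for Z ~ N(0,1): mean of a left-censored normal."""
    z = mu / sigma
    return mu * Phi(z) + sigma * phi(z)

mu, sigma_b, sigma_e = 0.5, 1.0, 0.8             # assumed toy parameters

# Gauss-Hermite quadrature over the random effect b ~ N(0, sigma_b^2)
nodes, weights = np.polynomial.hermite.hermgauss(30)
b = sqrt(2) * sigma_b * nodes
marginal_gh = sum(w * censored_mean(mu + bi, sigma_e)
                  for bi, w in zip(b, weights)) / sqrt(pi)

# closed form: b + e is N(0, sigma_b^2 + sigma_e^2), so marginalization collapses
marginal_exact = censored_mean(mu, sqrt(sigma_b**2 + sigma_e**2))
print(round(marginal_gh, 6), round(marginal_exact, 6))
```

With 30 quadrature nodes the two agree to high precision, illustrating why Gauss-Hermite quadrature is the standard tool for integrating out Gaussian random effects.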
Local dependence in random graph models: characterization, properties and statistical inference
Schweinberger, Michael; Handcock, Mark S.
2015-01-01
Dependent phenomena, such as relational, spatial and temporal phenomena, tend to be characterized by local dependence in the sense that units which are close in a well-defined sense are dependent. In contrast with spatial and temporal phenomena, though, relational phenomena tend to lack a natural neighbourhood structure in the sense that it is unknown which units are close and thus dependent. Owing to the challenge of characterizing local dependence and constructing random graph models with local dependence, many conventional exponential family random graph models induce strong dependence and are not amenable to statistical inference. We take first steps to characterize local dependence in random graph models, inspired by the notion of finite neighbourhoods in spatial statistics and M-dependence in time series, and we show that local dependence endows random graph models with desirable properties which make them amenable to statistical inference. We show that random graph models with local dependence satisfy a natural domain consistency condition which every model should satisfy, but conventional exponential family random graph models do not satisfy. In addition, we establish a central limit theorem for random graph models with local dependence, which suggests that random graph models with local dependence are amenable to statistical inference. We discuss how random graph models with local dependence can be constructed by exploiting either observed or unobserved neighbourhood structure. In the absence of observed neighbourhood structure, we take a Bayesian view and express the uncertainty about the neighbourhood structure by specifying a prior on a set of suitable neighbourhood structures. We present simulation results and applications to two real world networks with ‘ground truth’. PMID:26560142
A spatial error model with continuous random effects and an application to growth convergence
NASA Astrophysics Data System (ADS)
Laurini, Márcio Poletti
2017-10-01
We propose a spatial error model with continuous random effects based on Matérn covariance functions and apply this model for the analysis of income convergence processes (β-convergence). The use of a model with continuous random effects permits a clearer visualization and interpretation of the spatial dependency patterns, avoids the problems of defining neighborhoods in spatial econometrics models, and allows projecting the spatial effects for every possible location in the continuous space, circumventing the existing aggregations in discrete lattice representations. We apply this model approach to analyze the economic growth of Brazilian municipalities between 1991 and 2010 using unconditional and conditional formulations and a spatiotemporal model of convergence. The results indicate that the estimated spatial random effects are consistent with the existence of income convergence clubs for Brazilian municipalities in this period.
Mixed models, linear dependency, and identification in age-period-cohort models.
O'Brien, Robert M
2017-07-20
This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts, or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just-identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified, without introducing an additional constraint. I label this statistical model identification, show how it comes about in mixed models, and show why which effects are treated as fixed and which as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
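The linear dependency at the heart of this identification problem is easy to verify numerically: because cohort = period − age, a design matrix containing an intercept plus all three linear terms is rank deficient, so ordinary least squares has no unique solution.

```python
import numpy as np

# age-period-cohort identity: cohort = period - age, so the three linear
# terms plus an intercept are collinear and the fixed-effects model is
# not identified without an extra constraint
age = np.array([20, 30, 40, 20, 30, 40, 20, 30, 40])
period = np.array([1990, 1990, 1990, 2000, 2000, 2000, 2010, 2010, 2010])
cohort = period - age

X = np.column_stack([np.ones_like(age), age, period, cohort])
print(X.shape[1], np.linalg.matrix_rank(X))   # 4 columns, rank 3
```

Treating one of the dimensions as a random effect changes the estimation problem (variance components rather than a fourth free column), which is the "statistical model identification" the paper analyzes.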
A dynamic spatio-temporal model for spatial data
Hefley, Trevor J.; Hooten, Mevin B.; Hanks, Ephraim M.; Russell, Robin; Walsh, Daniel P.
2017-01-01
Analyzing spatial data often requires modeling dependencies created by a dynamic spatio-temporal data generating process. In many applications, a generalized linear mixed model (GLMM) is used with a random effect to account for spatial dependence and to provide optimal spatial predictions. Location-specific covariates are often included as fixed effects in a GLMM and may be collinear with the spatial random effect, which can negatively affect inference. We propose a dynamic approach to account for spatial dependence that incorporates scientific knowledge of the spatio-temporal data generating process. Our approach relies on a dynamic spatio-temporal model that explicitly incorporates location-specific covariates. We illustrate our approach with a spatially varying ecological diffusion model implemented using a computationally efficient homogenization technique. We apply our model to understand individual-level and location-specific risk factors associated with chronic wasting disease in white-tailed deer from Wisconsin, USA, and estimate the location where the disease was first introduced. We compare our approach to several existing methods that are commonly used in spatial statistics. Our spatio-temporal approach resulted in a higher predictive accuracy when compared to methods based on optimal spatial prediction, obviated confounding among the spatially indexed covariates and the spatial random effect, and provided additional information that will be important for containing disease outbreaks.
ERIC Educational Resources Information Center
Angeli, Charoula; Valanides, Nicos; Polemitou, Eirini; Fraggoulidou, Elena
2014-01-01
The study examined the interaction between field dependence-independence (FD/I) and learning with modeling software and simulations, and their effect on children's performance. Participants were randomly assigned into two groups. Group A first learned with a modeling tool and then with simulations. Group B learned first with simulations and then…
Li, Baoyue; Lingsma, Hester F; Steyerberg, Ewout W; Lesaffre, Emmanuel
2011-05-23
Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized as well as ordinal, with center and/or trial as random effects, and as covariates age, motor score, pupil reactivity or trial. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS) and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), the R package MCMCglmm and the SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analysed using essentially two logistic random effects models, with either one random effect for the center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted. The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models for the main study, when based on a relatively large number of level-1 (patient level) data units compared to the number of level-2 (hospital level) data units. However, when based on a relatively sparse data set, i.e. when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient. On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no explicit preference (absent a preference on philosophical grounds) for either a frequentist or Bayesian approach (if based on vague priors). The choice of a particular implementation may largely depend on the desired flexibility and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches the MLE of this variance was often estimated as zero, with a standard error that was either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior for the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain.
Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie
2017-08-01
Semicontinuous data featured with an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example would be substance abuse/dependence symptoms data, for which a substantial proportion of subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions, including the skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
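A simulation sketch of the two-part structure described above (toy parameters; a lognormal stands in for the skew-t/skew-normal intensity distribution, and the Bayesian fitting is omitted): Part I governs whether the outcome is zero, Part II governs the positive intensity, and the two are linked by correlated subject-level random effects.

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_obs = 500, 4

# correlated subject-level random effects linking the two parts
cov = np.array([[1.0, 0.5],
                [0.5, 1.0]])
b = rng.multivariate_normal([0.0, 0.0], cov, size=n_subj)
b1 = np.repeat(b[:, 0], n_obs)   # occurrence part
b2 = np.repeat(b[:, 1], n_obs)   # intensity part

# Part I: logistic model for whether the semicontinuous outcome is positive
p_pos = 1 / (1 + np.exp(-(0.2 + b1)))
positive = rng.random(n_subj * n_obs) < p_pos

# Part II: right-skewed intensity when positive (lognormal stand-in here)
intensity = np.exp(1.0 + b2 + rng.normal(0, 0.6, size=n_subj * n_obs))
y = np.where(positive, intensity, 0.0)

print(round((y == 0).mean(), 2), round(np.median(y[y > 0]), 2))
```

The simulated outcome reproduces the two signatures in the abstract: a large point mass at zero and a right-skewed positive part (mean well above median).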
Ali, S. M.; Mehmood, C. A; Khan, B.; Jawad, M.; Farid, U; Jadoon, J. K.; Ali, M.; Tareen, N. K.; Usman, S.; Majid, M.; Anwar, S. M.
2016-01-01
In the smart grid paradigm, consumer demands are random and time-dependent, following stochastic probability distributions. The stochastically varying consumer demands have put policy makers and supplying agencies in a demanding position for optimal generation management. The utility revenue functions are highly dependent on the consumers' stochastic demand models. Sudden drifts in weather parameters affect the living standards of the consumers, which in turn influence the power demands. Considering the above, we analyzed stochastically and statistically the effect of random consumer demands on the fixed and variable revenues of the electrical utilities. Our work presents a Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of the utility revenues with time-dependent consumer random demands. Moreover, the Gaussian probability outcomes for the utility revenues are based on the varying consumer demand data patterns. Furthermore, Standard Monte Carlo (SMC) simulations are performed to validate the accuracy of the aforesaid probabilistic demand-revenue model. We critically analyzed the effect of weather data parameters on consumer demands using correlation and multi-linear regression schemes. The statistical analysis of consumer demands provided a relationship between dependent (demand) and independent (weather data) variables for utility load management, generation control, and network expansion. PMID:27314229
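A minimal sketch of the Monte Carlo validation step described here, under assumed toy numbers (the demand means, covariance, tariff, and fixed charge below are all hypothetical, not from the paper): sample correlated Gaussian demands, map them through a linear revenue function, and check the simulated mean revenue against the analytic value.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical 3-hour demand profile: correlated Gaussian demands (MWh)
mean_demand = np.array([100.0, 140.0, 120.0])
cov = np.array([[ 90.,  40.,  20.],
                [ 40., 120.,  50.],
                [ 20.,  50., 100.]])
price = np.array([30.0, 45.0, 35.0])   # $/MWh, assumed tariff
fixed_revenue = 1000.0                 # assumed fixed charge component

# standard Monte Carlo over the multivariate Gaussian demand model
demand = rng.multivariate_normal(mean_demand, cov, size=100_000)
revenue = fixed_revenue + demand @ price

analytic_mean = fixed_revenue + mean_demand @ price
print(round(revenue.mean(), 1), round(analytic_mean, 1))
```

Because revenue is linear in the Gaussian demands, the simulated revenue distribution is itself Gaussian, which is the structure the MVGDF model exploits.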
Effective Perron-Frobenius eigenvalue for a correlated random map
NASA Astrophysics Data System (ADS)
Pool, Roman R.; Cáceres, Manuel O.
2010-09-01
We investigate the evolution of random positive linear maps with various types of disorder by analytic perturbation and direct simulation. Our theoretical result indicates that the statistics of a random linear map can be successfully described for long times by the mean-value vector state. The growth rate can be characterized by an effective Perron-Frobenius eigenvalue that strongly depends on the type of correlation between the elements of the projection matrix. We apply this approach to an age-structured population dynamics model. We show that the asymptotic mean-value vector state characterizes the population growth rate when the age-structured model has random vital parameters. In this case our approach reveals the nontrivial dependence of the effective growth rate on cross correlations. The problem reduces to the calculation of the smallest positive root of a secular polynomial, which can be obtained by perturbation in terms of a Green's function diagrammatic technique built with noncommutative cumulants for arbitrary n-point correlations.
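A toy simulation of the uncorrelated baseline case (hypothetical 2-age-class Leslie-type model, not the paper's setup): with independent random vital rates, the mean-value vector state grows at the Perron-Frobenius eigenvalue of the mean matrix; the correlated case, where this effective eigenvalue shifts, is the paper's subject.

```python
import numpy as np

rng = np.random.default_rng(4)

# mean projection matrix of a toy 2-age-class model and its
# Perron-Frobenius eigenvalue
A_mean = np.array([[1.0, 1.0],
                   [0.7, 0.0]])
lam = max(np.linalg.eigvals(A_mean).real)

# evolve many independent realizations of the random map x -> A x,
# where the fecundities are i.i.d. with means matching A_mean
M, n_steps = 20_000, 15
x = np.ones((M, 2))
totals = []
for _ in range(n_steps):
    f = rng.uniform(0.5, 1.5, size=(M, 2))        # random vital rates, mean 1
    x = np.stack([f[:, 0] * x[:, 0] + f[:, 1] * x[:, 1],
                  0.7 * x[:, 0]], axis=1)
    totals.append(x.mean(axis=0).sum())           # mean-value vector state

growth = totals[-1] / totals[-2]                  # asymptotic growth factor
print(round(growth, 3), round(lam, 3))
```

With independent matrices, E[x_{n+1}] = E[A] E[x_n], so the one-step growth factor of the averaged population converges to the eigenvalue of the mean matrix; correlations between matrix elements break this factorization.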
Semiparametric Bayesian classification with longitudinal markers
De la Cruz-Mesía, Rolando; Quintana, Fernando A.; Müller, Peter
2013-01-01
We analyse data from a study involving 173 pregnant women. The data are observed values of the β human chorionic gonadotropin hormone measured during the first 80 days of gestational age, including from one up to six longitudinal responses for each woman. The main objective in this study is to predict normal versus abnormal pregnancy outcomes from data that are available at the early stages of pregnancy. We achieve the desired classification with a semiparametric hierarchical model. Specifically, we consider a Dirichlet process mixture prior for the distribution of the random effects in each group. The unknown random-effects distributions are allowed to vary across groups but are made dependent by using a design vector to select different features of a single underlying random probability measure. The resulting model is an extension of the dependent Dirichlet process model, with an additional probability model for group classification. The model is shown to perform better than an alternative model which is based on independent Dirichlet processes for the groups. Relevant posterior distributions are summarized by using Markov chain Monte Carlo methods. PMID:24368871
Molas, Marek; Lesaffre, Emmanuel
2008-12-30
Discrete bounded outcome scores (BOS), i.e. discrete measurements that are restricted on a finite interval, often occur in practice. Examples are compliance measures, quality of life measures, etc. In this paper we examine three related random effects approaches to analyze longitudinal studies with a BOS as response: (1) a linear mixed effects (LM) model applied to a logistic transformed modified BOS; (2) a model assuming that the discrete BOS is a coarsened version of a latent random variable, which after a logistic-normal transformation, satisfies an LM model; and (3) a random effects probit model. We consider also the extension whereby the variability of the BOS is allowed to depend on covariates. The methods are contrasted using a simulation study and on a longitudinal project, which documents stroke rehabilitation in four European countries using measures of motor and functional recovery. Copyright 2008 John Wiley & Sons, Ltd.
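A sketch of the first approach in this abstract, the logistic transform of a modified bounded outcome score (the shrinkage constant 0.5 below is one common choice for moving the score off the boundaries; the paper's exact modification may differ):

```python
import numpy as np

def logit_bos(y, k):
    """Logistic transform of a discrete bounded outcome score y in {0,...,k},
    shrunk away from the boundaries so the logit is finite."""
    p = (y + 0.5) / (k + 1.0)
    return np.log(p / (1 - p))

k = 20                                # e.g. a 0-20 recovery scale
y = np.arange(k + 1)
z = logit_bos(y, k)
print(np.isfinite(z).all(), round(z[0], 2), round(z[-1], 2))
```

The transform is finite and strictly increasing over the whole score range (and symmetric about the midpoint), so a linear mixed model can be applied to the transformed values.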
Shteingart, Hanan; Loewenstein, Yonatan
2016-01-01
There is a long history of experiments in which participants are instructed to generate a long sequence of binary random numbers. The scope of this line of research has shifted over the years from identifying the basic psychological principles and/or the heuristics that lead to deviations from randomness, to one of predicting future choices. In this paper, we used generalized linear regression and the framework of Reinforcement Learning in order to address both points. In particular, we used logistic regression analysis in order to characterize the temporal sequence of participants' choices. Surprisingly, a population analysis indicated that the contribution of the most recent trial has only a weak effect on behavior, compared to more preceding trials, a result that seems irreconcilable with standard sequential effects that decay monotonically with the delay. However, when considering each participant separately, we found that the magnitudes of the sequential effect are a monotonically decreasing function of the delay, yet these individual sequential effects are largely averaged out in a population analysis because of heterogeneity. The substantial behavioral heterogeneity in this task is further demonstrated quantitatively by considering the predictive power of the model. We show that a heterogeneous model of sequential dependencies captures the structure available in random sequence generation. Finally, we show that the results of the logistic regression analysis can be interpreted in the framework of reinforcement learning, allowing us to compare the sequential effects in the random sequence generation task to those in an operant learning task. We show that in contrast to the random sequence generation task, sequential effects in operant learning are far more homogeneous across the population.
These results suggest that in the random sequence generation task, different participants adopt different cognitive strategies to suppress sequential dependencies when generating the "random" sequences.
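The kind of sequential dependence studied here can be illustrated with a toy generator that over-alternates, a classic deviation from true randomness (the switch probability 0.6 is an assumed value for illustration, not an estimate from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

def generate_with_switch_bias(n, p_switch, rng):
    """Binary 'random' sequence from a generator that switches from the
    previous choice with probability p_switch (over-alternation if > 0.5)."""
    seq = [int(rng.integers(2))]
    for _ in range(n - 1):
        seq.append(1 - seq[-1] if rng.random() < p_switch else seq[-1])
    return np.array(seq)

seq = generate_with_switch_bias(10_000, p_switch=0.6, rng=rng)

# lag-1 sequential effect: probability of repeating the previous choice
p_repeat = (seq[1:] == seq[:-1]).mean()
print(round(p_repeat, 3))   # a truly random generator would give ~0.5
```

Regressing the current choice on lagged choices (as the paper does with logistic regression) would recover this negative lag-1 effect; heterogeneous switch biases across participants would then cancel in a pooled analysis, which is the paper's central observation.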
MIXREG: a computer program for mixed-effects regression analysis with autocorrelated errors.
Hedeker, D; Gibbons, R D
1996-05-01
MIXREG is a program that provides estimates for a mixed-effects regression model (MRM) for normally-distributed response data including autocorrelated errors. This model can be used for analysis of unbalanced longitudinal data, where individuals may be measured at a different number of timepoints, or even at different timepoints. Autocorrelated errors of a general form or following an AR(1), MA(1), or ARMA(1,1) form are allowable. This model can also be used for analysis of clustered data, where the mixed-effects model assumes data within clusters are dependent. The degree of dependency is estimated jointly with estimates of the usual model parameters, thus adjusting for clustering. MIXREG uses maximum marginal likelihood estimation, utilizing both the EM algorithm and a Fisher-scoring solution. For the scoring solution, the covariance matrix of the random effects is expressed in its Gaussian decomposition, and the diagonal matrix reparameterized using the exponential transformation. Estimation of the individual random effects is accomplished using an empirical Bayes approach. Examples illustrating usage and features of MIXREG are provided.
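The AR(1) error structure that MIXREG supports can be sketched with a short simulation (toy parameters; this illustrates the error process only, not MIXREG's joint estimation of fixed effects, random effects, and autocorrelation):

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_ar1(n, phi, sigma, rng):
    """AR(1) errors e_t = phi * e_{t-1} + u_t with a stationary start,
    the kind of autocorrelated residual MIXREG allows within subjects."""
    e = np.zeros(n)
    e[0] = rng.normal(0, sigma / np.sqrt(1 - phi**2))
    for t in range(1, n):
        e[t] = phi * e[t - 1] + rng.normal(0, sigma)
    return e

phi_true = 0.5
e = simulate_ar1(50_000, phi_true, 1.0, rng)

# for a stationary AR(1), the lag-1 autocorrelation estimates phi
phi_hat = np.corrcoef(e[:-1], e[1:])[0, 1]
print(round(phi_hat, 3))
```

Ignoring such autocorrelation in a longitudinal model understates the dependence among a subject's residuals, which is why MIXREG estimates it jointly with the other parameters.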
Deterministic diffusion in flower-shaped billiards.
Harayama, Takahisa; Klages, Rainer; Gaspard, Pierre
2002-08-01
We propose a flower-shaped billiard in order to study the irregular parameter dependence of chaotic normal diffusion. Our model is an open system consisting of periodically distributed obstacles in the shape of a flower, and it is strongly chaotic for almost all parameter values. We compute the parameter-dependent diffusion coefficient of this model from computer simulations and analyze its functional form using different schemes, all generalizing the simple random walk approximation of Machta and Zwanzig. The improved methods we use are based either on heuristic higher-order corrections to the simple random walk model, on lattice gas simulation methods, or they start from a suitable Green-Kubo formula for diffusion. We show that dynamical correlations, or memory effects, are of crucial importance in reproducing the precise parameter dependence of the diffusion coefficient.
Modeling for Ultrasonic Health Monitoring of Foams with Embedded Sensors
NASA Technical Reports Server (NTRS)
Wang, L.; Rokhlin, S. I.
2005-01-01
In this report analytical and numerical methods are proposed to estimate the effective elastic properties of regular and random open-cell foams. The methods are based on the principle of minimum energy and on structural beam models. The analytical solutions are obtained using symbolic processing software. The microstructure of the random foam is simulated using Voronoi tessellation together with a rate-dependent random close-packing algorithm. The statistics of the geometrical properties of random foams corresponding to different packing fractions have been studied. The effects of the packing fraction on elastic properties of the foams have been investigated by decomposing the compliance into bending and axial compliance components. It is shown that the bending compliance increases and the axial compliance decreases when the packing fraction increases. Keywords: Foam; Elastic properties; Finite element; Randomness
MIXOR: a computer program for mixed-effects ordinal regression analysis.
Hedeker, D; Gibbons, R D
1996-03-01
MIXOR provides maximum marginal likelihood estimates for mixed-effects ordinal probit, logistic, and complementary log-log regression models. These models can be used for analysis of dichotomous and ordinal outcomes from either a clustered or longitudinal design. For clustered data, the mixed-effects model assumes that data within clusters are dependent. The degree of dependency is jointly estimated with the usual model parameters, thus adjusting for dependence resulting from clustering of the data. Similarly, for longitudinal data, the mixed-effects approach can allow for individual-varying intercepts and slopes across time, and can estimate the degree to which these time-related effects vary in the population of individuals. MIXOR uses marginal maximum likelihood estimation, utilizing a Fisher-scoring solution. For the scoring solution, the Cholesky factor of the random-effects variance-covariance matrix is estimated, along with the effects of model covariates. Examples illustrating usage and features of MIXOR are provided.
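MIXOR maximizes a marginal likelihood in which the random effects are integrated out. A minimal sketch of that integral for one cluster, using Gauss-Hermite quadrature for a random-intercept ordinal probit model (the function and all parameter values are illustrative, not MIXOR's actual Fisher-scoring implementation):

```python
import numpy as np
from scipy.stats import norm

def cluster_marginal_loglik(y, eta, cuts, sigma_b, n_quad=20):
    """Marginal log-likelihood of one cluster's ordinal responses under a
    random-intercept probit model, integrating the random intercept out
    by Gauss-Hermite quadrature.

    y       : integer categories 0..K-1 for the cluster's observations
    eta     : fixed-effect linear predictor for each observation
    cuts    : increasing thresholds, length K-1
    sigma_b : standard deviation of the random intercept
    """
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    b = np.sqrt(2.0) * sigma_b * nodes  # change of variables for N(0, sigma_b^2)
    a = np.concatenate(([-np.inf], cuts, [np.inf]))
    lik = 0.0
    for bk, wk in zip(b, weights):
        z = eta + bk
        p = norm.cdf(a[y + 1] - z) - norm.cdf(a[y] - z)  # category probabilities
        lik += wk * np.prod(p)
    return np.log(lik / np.sqrt(np.pi))

# Hypothetical cluster: three observations with categories 0/1/2.
y = np.array([0, 1, 2])
ll = cluster_marginal_loglik(y, eta=np.array([0.0, 0.2, 0.5]),
                             cuts=np.array([-0.5, 0.8]), sigma_b=0.7)
```

Summing such terms over clusters gives the marginal log-likelihood that the scoring algorithm maximizes over the thresholds, covariate effects, and the Cholesky factor of the random-effects covariance.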
Savaşan, Ayşegül; Çam, Olcay
2017-06-01
People with alcohol dependency have lower self-esteem than controls, and as their alcohol use increases, their self-esteem decreases. Coping skills in alcohol-related issues are predicted to reduce vulnerability to relapse. It is important to adapt care to individual needs so as to prevent a return to the cycle of alcohol use. The Tidal Model focuses on providing support and services to people who need to live a constructive life. The aim of this randomized study was to determine the effect of a psychiatric nursing approach based on the Tidal Model on coping and self-esteem in people with alcohol dependency. The study was quasi-experimental in design with a control group, and was conducted on 36 individuals (18 experimental, 18 control). The experimental and control groups were formed by assigning participants using the stratified randomization technique in the order in which they were admitted to hospital. The Coping Inventory (COPE) and the Coopersmith Self-Esteem Inventory (CSEI) were used as measurement instruments, administered before the intervention and three months afterwards. In addition to routine treatment and follow-up, the experimental group received the Tidal Model-based psychiatric nursing approach in one-to-one sessions. The approach was effective in increasing scores for positive reinterpretation and growth, active coping, restraint, emotional social support and planning, and in reducing scores for behavioral disengagement. Self-esteem rose, but the difference from the control group did not reach significance. The psychiatric nursing approach based on the Tidal Model helps people with alcohol dependency maintain abstinence.
The results of the study may provide a theoretical basis for practices that improve coping behaviors and self-esteem and facilitate the recovery of people with alcohol dependency, with implications for mental health nursing. Copyright © 2017 Elsevier Inc. All rights reserved.
Hierarchical Bayesian spatial models for alcohol availability, drug "hot spots" and violent crime.
Zhu, Li; Gorman, Dennis M; Horel, Scott
2006-12-07
Ecologic studies have shown a relationship between alcohol outlet densities, illicit drug use and violence. The present study examined this relationship in the City of Houston, Texas, using a sample of 439 census tracts. Neighbourhood sociostructural covariates, alcohol outlet density, drug crime density and violent crime data were collected for the year 2000, and analyzed using hierarchical Bayesian models. Model selection was accomplished by applying the Deviance Information Criterion. The counts of violent crime in each census tract were modelled as having a conditional Poisson distribution. Four neighbourhood explanatory variables were identified using principal component analysis. The best-fitting model was the one including both unstructured and spatially dependent random effects. The results showed that drug-law violation explained a greater amount of variance in violent crime rates than alcohol outlet density. The relative risk for drug-law violation was 2.49 and that for alcohol outlet density was 1.16. Of the neighbourhood sociostructural covariates, males aged 15 to 24 showed an effect on violence, with a 16% decrease in relative risk for each one-standard-deviation increase. Both the unstructured heterogeneity random effect and spatial dependence need to be included in the model. The analysis presented suggests that activity around illicit drug markets is more strongly associated with violent crime than is alcohol outlet density. Unique among the ecological studies in this field, the present study not only shows the direction and magnitude of the impact of neighbourhood sociostructural covariates as well as alcohol and illicit drug activities in a neighbourhood, it also reveals the importance of applying hierarchical Bayesian models in this research field, as both spatial dependence and heterogeneity random effects need to be considered simultaneously.
FAST TRACK COMMUNICATION: Polarization diffusion from spacetime uncertainty
NASA Astrophysics Data System (ADS)
Contaldi, Carlo R.; Dowker, Fay; Philpott, Lydia
2010-09-01
A model of Lorentz invariant random fluctuations in photon polarization is presented. The effects are frequency dependent and affect the polarization of photons as they propagate through space. We test for this effect by confronting the model with the latest measurements of polarization of cosmic microwave background photons.
Multivariate Longitudinal Analysis with Bivariate Correlation Test.
Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory
2016-01-01
In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions of the model's parameters estimators. These estimators can be used in the framework of the multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. By using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets which are of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated.
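The test of whether the cross-outcome random-effect correlations are zero follows the generic nested-model likelihood ratio recipe; a minimal sketch with hypothetical fitted log-likelihoods (the values and degrees of freedom are made up for illustration):

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_null, loglik_full, df):
    """Generic likelihood ratio test for nested models. Here the null
    model sets the cross-outcome random-effect correlations to zero, and
    df is the number of correlations freed in the full (joint) model."""
    stat = 2.0 * (loglik_full - loglik_null)
    p_value = chi2.sf(stat, df)
    return stat, p_value

# Hypothetical maximized log-likelihoods: separate vs. joint fit.
stat, p = likelihood_ratio_test(loglik_null=-1050.2, loglik_full=-1045.7, df=4)
```

A small p-value supports modeling the dependent variables jointly; note that when some tested parameters sit on a boundary (variances rather than correlations), the chi-square reference distribution would need adjustment.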
Using Design-Based Latent Growth Curve Modeling with Cluster-Level Predictor to Address Dependency
ERIC Educational Resources Information Center
Wu, Jiun-Yu; Kwok, Oi-Man; Willson, Victor L.
2014-01-01
The authors compared the effects of using the true Multilevel Latent Growth Curve Model (MLGCM) with single-level regular and design-based Latent Growth Curve Models (LGCM) with or without the higher-level predictor on various criterion variables for multilevel longitudinal data. They found that random effect estimates were biased when the…
Harrison, Xavier A
2015-01-01
Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modeling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low (<5 levels), I investigate the performance of both model types under a range of random effect sample sizes when overdispersion is present. Simulation results revealed that the efficacy of OLRE depends on the process that generated the overdispersion; OLRE failed to cope with overdispersion generated from a Beta-Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non-Beta-Binomial data. 
Finally, both OLRE and Beta-Binomial models performed poorly when models contained <5 levels of the random intercept term, especially for estimating variance components, and this effect appeared independent of total sample size. These results suggest that OLRE are a useful tool for modelling overdispersion in Binomial data, but that they do not perform well in all circumstances and researchers should take care to verify the robustness of parameter estimates of OLRE models.
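The Beta-Binomial mixture that generates overdispersion here is easy to simulate; the following sketch (with arbitrary parameter values, not the paper's simulation design) shows the variance inflation relative to a plain binomial:

```python
import numpy as np
from scipy.stats import betabinom

rng = np.random.default_rng(1)

# Beta-Binomial overdispersion: p ~ Beta(a, b) per observation, y ~ Bin(n, p).
n, a, b, size = 20, 2.0, 2.0, 20000
y = betabinom.rvs(n, a, b, size=size, random_state=rng)

p_mean = a / (a + b)                   # marginal success probability (0.5 here)
binom_var = n * p_mean * (1 - p_mean)  # variance if the data were plain binomial
obs_var = y.var()

dispersion = obs_var / binom_var       # ratio > 1 signals overdispersion
```

For these parameters the theoretical ratio is (a + b + n)/(a + b + 1) = 4.8, i.e. nearly fivefold variance inflation that a plain binomial GLMM would misattribute, which is the situation where the OLRE-versus-Beta-Binomial comparison above matters.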
Uncertainty analysis in fault tree models with dependent basic events.
Pedroni, Nicola; Zio, Enrico
2013-06-01
In general, two types of dependence need to be considered when estimating the probability of the top event (TE) of a fault tree (FT): "objective" dependence between the (random) occurrences of different basic events (BEs) in the FT, and "state-of-knowledge" (epistemic) dependence between estimates of the epistemically uncertain probabilities of some BEs of the FT model. In this article, we study the effects of objective and epistemic dependences on the TE probability. The well-known Fréchet bounds and the distribution envelope determination (DEnv) method are used to model all kinds of (possibly unknown) objective and epistemic dependences, respectively. For exemplification, the analyses are carried out on a FT with six BEs. Results show that both types of dependence significantly affect the TE probability; however, the effects of epistemic dependence are likely to be overwhelmed by those of objective dependence (if present). © 2012 Society for Risk Analysis.
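The Fréchet bounds referenced above bracket the probability of a conjunction of basic events when only the marginals are known and their dependence is not; a minimal implementation:

```python
def frechet_bounds(p_a, p_b):
    """Fréchet-Hoeffding bounds on P(A and B) given only the marginal
    probabilities P(A) and P(B), with the dependence left unspecified."""
    lower = max(0.0, p_a + p_b - 1.0)
    upper = min(p_a, p_b)
    return lower, upper

# Two basic events with unknown dependence:
lo, hi = frechet_bounds(0.3, 0.6)   # → (0.0, 0.3)
# Under independence the conjunction probability would be 0.18,
# which indeed falls inside the bounds.
```

Propagating such interval bounds through the AND/OR gates of a fault tree, rather than assuming independence, is what makes the TE probability an interval reflecting unknown objective dependence.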
Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.
2015-01-01
Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.
Chan, Jennifer S K
2016-05-01
Dropouts are common in longitudinal studies. If the dropout probability depends on the missing observations at or after dropout, this type of dropout is called informative (or nonignorable) dropout (ID). Failure to accommodate such a dropout mechanism in the model will bias the parameter estimates. We propose a conditional autoregressive model for longitudinal binary data with an ID model such that the probabilities of positive outcomes as well as the dropout indicator on each occasion are logit-linear in some covariates and outcomes. This model, adopting a marginal model for outcomes and a conditional model for dropouts, is called a selection model. To allow for heterogeneity and clustering effects, the outcome model is extended to incorporate mixture and random effects. Lastly, the model is further extended to a novel model that models the outcome and dropout jointly such that their dependency is formulated through an odds ratio function. Parameters are estimated by a Bayesian approach implemented using the user-friendly Bayesian software WinBUGS. A methadone clinic dataset is analyzed to illustrate the proposed models. Results show that the treatment time effect is still significant but weaker after allowing for an ID process in the data. Finally, the effect of dropout on parameter estimates is evaluated through simulation studies. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Modeling and Predicting the Stress Relaxation of Composites with Short and Randomly Oriented Fibers
Obaid, Numaira; Sain, Mohini
2017-01-01
The addition of short fibers has been experimentally observed to slow the stress relaxation of viscoelastic polymers, producing a change in the relaxation time constant. Our recent study attributed this effect of fibers on stress relaxation behavior to the interfacial shear stress transfer at the fiber-matrix interface. This model explained the effect of fiber addition on stress relaxation without the need to postulate structural changes at the interface. In our previous study, we developed an analytical model for the effect of fully aligned short fibers, and the model predictions were successfully compared to finite element simulations. However, in most industrial applications of short-fiber composites, fibers are not aligned, and hence it is necessary to examine the time dependence of viscoelastic polymers containing randomly oriented short fibers. In this study, we propose an analytical model to predict the stress relaxation behavior of short-fiber composites where the fibers are randomly oriented. The model predictions were compared to results obtained from Monte Carlo finite element simulations, and good agreement between the two was observed. The analytical model provides an excellent tool to accurately predict the stress relaxation behavior of randomly oriented short-fiber composites. PMID:29053601
Tsui, Judith I.; Herman, Debra S.; Kettavong, Malyna; Anderson, Bradley J.; Stein, Michael D.
2011-01-01
Pain is common among opioid-dependent patients, yet pharmacologic strategies are limited. The aim of this study was to explore whether escitalopram, a selective serotonin reuptake inhibitor, was associated with reductions in pain. The study used longitudinal data from a randomized, controlled trial that evaluated the effects of escitalopram on treatment retention in patients with depressive symptoms who were initiating buprenorphine/naloxone for treatment of opioid dependence. Participants were randomized to take escitalopram 10 mg or placebo daily. Changes in pain severity, pain interference and depression were assessed at the 1-, 2- and 3-month visits using the Visual Analog Scale, the Brief Pain Inventory and the Beck Depression Inventory II, respectively. Fixed-effects estimators for panel regression models were used to assess the effects of the intervention on changes in outcomes over time. Additional models were estimated to explore whether the intervention effect was mediated by within-person changes in depression. In this sample of 147 adults, we found that participants randomized to escitalopram had significantly larger reductions in both pain severity (b = −14.34, t = −2.66, p < .01) and pain interference (b = −1.20, t = −2.23, p < .05) between baseline and follow-up. After adjusting for within-subject changes in depression, the estimated effects of escitalopram on pain severity and pain interference were virtually identical to the unadjusted effects. In summary, this study of opioid-dependent patients with depressive symptoms found that treatment with escitalopram was associated with clinically meaningful reductions in pain severity and pain interference during the first three months of therapy. PMID:21924552
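The fixed-effects (within) panel estimator used in such analyses removes time-invariant subject-level confounding by demeaning within subjects; a minimal numpy sketch on a toy panel (not the study's data):

```python
import numpy as np

def within_estimator(y, x, subject):
    """Fixed-effects (within) estimator for panel data: demean the outcome
    and the regressor within each subject, then run OLS on the deviations.
    Subject-specific intercepts drop out of the demeaned equation."""
    y = np.asarray(y, float)
    x = np.asarray(x, float)
    yd = np.empty_like(y)
    xd = np.empty_like(x)
    for s in np.unique(subject):
        m = subject == s
        yd[m] = y[m] - y[m].mean()
        xd[m] = x[m] - x[m].mean()
    return (xd @ yd) / (xd @ xd)

# Toy panel: two subjects with different baselines but a common slope of -2.
subject = np.array([1, 1, 1, 2, 2, 2])
x = np.array([0.0, 1.0, 2.0, 0.0, 1.0, 2.0])
y = 10 - 2 * x + np.where(subject == 1, 5.0, 0.0)
beta = within_estimator(y, x, subject)   # recovers -2.0 despite the baselines
```

Because identification comes only from within-person change over the visits, the estimate is immune to stable between-person differences such as baseline pain or depression severity.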
A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses
ERIC Educational Resources Information Center
Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini
2012-01-01
The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…
NASA Astrophysics Data System (ADS)
Radgolchin, Moeen; Moeenfard, Hamid
2018-02-01
The construction of self-powered micro-electro-mechanical units by converting the mechanical energy of the systems into electrical power has attracted much attention in recent years. While power harvesting from deterministic external excitations is state of the art, it has been much more difficult to derive mathematical models for scavenging electrical energy from ambient random vibrations, due to the stochastic nature of the excitations. The current research concerns analytical modeling of micro-bridge energy harvesters based on random vibration theory. Since classical elasticity fails to accurately predict the mechanical behavior of micro-structures, strain gradient theory is employed as a powerful tool to increase the accuracy of the random vibration modeling of the micro-harvester. Equations of motion of the system in the time domain are derived using the Lagrange approach. These are then utilized to determine the frequency and impulse responses of the structure. Assuming the energy harvester to be subjected to a combination of broadband and limited-band random support motion and transverse loading, closed-form expressions for mean, mean square, correlation and spectral density of the output power are derived. The suggested formulation is further exploited to investigate the effect of the different design parameters, including the geometric properties of the structure as well as the properties of the electrical circuit on the resulting power. Furthermore, the effect of length scale parameters on the harvested energy is investigated in detail. It is observed that the predictions of classical and even simple size-dependent theories (such as couple stress) appreciably differ from the findings of strain gradient theory on the basis of random vibration. 
This study presents a first-time modeling of micro-scale harvesters under stochastic excitations using a size-dependent approach and can be considered as a reliable foundation for future research in the field of micro/nano harvesters subjected to non-deterministic loads.
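Random vibration theory underpins the closed-form response statistics derived here; as a generic sketch (a classical single-degree-of-freedom oscillator under white-noise forcing with hypothetical parameter values, not the strain-gradient micro-bridge model), the mean-square response follows from integrating the excitation spectral density against the squared frequency response:

```python
import numpy as np

# SDOF oscillator m*x'' + c*x' + k*x = f(t), with f a white-noise force of
# constant two-sided PSD S0 (all values hypothetical).
m, c, k, S0 = 1.0, 0.4, 100.0, 2.0

w = np.linspace(-400.0, 400.0, 800_001)            # frequency grid (rad/s)
dw = w[1] - w[0]
H2 = 1.0 / ((k - m * w**2) ** 2 + (c * w) ** 2)    # |H(w)|^2 of the oscillator

mean_square = np.sum(S0 * H2) * dw                 # E[x^2] = ∫ S(w)|H(w)|^2 dw
closed_form = np.pi * S0 / (c * k)                 # classical white-noise result
```

In the paper the same spectral machinery is applied to the size-dependent micro-bridge equations, with limited-band rather than purely broadband excitation, to obtain the statistics of the harvested power.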
NASA Technical Reports Server (NTRS)
Rahmat-Samii, Y.
1983-01-01
Based on the works of Ruze (1966) and Vu (1969), a novel mathematical model has been developed to determine efficiently the average power pattern degradations caused by random surface errors. In this model, both nonuniform root mean square (rms) surface errors and nonuniform illumination functions are employed. In addition, the model incorporates the dependence on F/D in the construction of the solution. The mathematical foundation of the model rests on the assumption that in each prescribed annular region of the antenna, the geometrical rms surface value is known. It is shown that closed-form expressions can then be derived, which result in a very efficient computational method for the average power pattern. Detailed parametric studies are performed with these expressions to determine the effects of different random errors and illumination tapers on parameters such as gain loss and sidelobe levels. The results clearly demonstrate that as sidelobe levels decrease, their dependence on the surface rms/wavelength becomes much stronger and, for a specified tolerance level, a considerably smaller rms/wavelength is required to maintain the low sidelobes within the required bounds.
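The generalized model builds on Ruze's classical uniform-error result for gain degradation; a minimal sketch of that baseline (the argument values are illustrative):

```python
import numpy as np

def ruze_gain_loss_db(rms_error, wavelength):
    """Ruze's classical formula for antenna gain degradation from random
    surface errors: G/G0 = exp(-(4*pi*eps/lambda)^2), returned in dB.
    The model described above generalizes this to nonuniform rms errors
    and illumination; this is only the uniform baseline."""
    ratio = np.exp(-(4.0 * np.pi * rms_error / wavelength) ** 2)
    return 10.0 * np.log10(ratio)

# A surface rms of lambda/50 (hypothetical):
loss = ruze_gain_loss_db(rms_error=1.0, wavelength=50.0)  # about -0.27 dB
```

The strong dependence on rms/wavelength visible in the exponent is what drives the paper's conclusion that low-sidelobe designs require markedly tighter surface tolerances.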
Application of random effects to the study of resource selection by animals.
Gillies, Cameron S; Hebblewhite, Mark; Nielsen, Scott E; Krawchuk, Meg A; Aldridge, Cameron L; Frair, Jacqueline L; Saher, D Joanne; Stevens, Cameron E; Jerde, Christopher L
2006-07-01
1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. 
Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.
2012-01-01
Background Time-course gene expression data such as yeast cell cycle data may be periodically expressed. To cluster such data, currently used Fourier series approximations of periodic gene expressions have been found not to be sufficiently adequate to model the complexity of the time-course data, partly due to their ignoring the dependence between the expression measurements over time and the correlation among gene expression profiles. We further investigate the advantages and limitations of available models in the literature and propose a new mixture model with autoregressive random effects of the first order for the clustering of time-course gene-expression profiles. Some simulations and real examples are given to demonstrate the usefulness of the proposed models. Results We illustrate the applicability of our new model using synthetic and real time-course datasets. We show that our model outperforms existing models to provide more reliable and robust clustering of time-course data. Our model provides superior results when genetic profiles are correlated. It also gives comparable results when the correlation between the gene profiles is weak. In the applications to real time-course data, relevant clusters of coregulated genes are obtained, which are supported by gene-function annotation databases. Conclusions Our new model under our extension of the EMMIX-WIRE procedure is more reliable and robust for clustering time-course data because it adopts a random effects model that allows for the correlation among observations at different time points. It postulates gene-specific random effects with an autocorrelation variance structure that models coregulation within the clusters. The developed R package is flexible in its specification of the random effects through user-input parameters that enables improved modelling and consequent clustering of time-course data. PMID:23151154
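The AR(1) random-effects structure adopted here implies an exponentially decaying covariance between expression measurements over time; a minimal construction (parameter values are illustrative):

```python
import numpy as np

def ar1_covariance(n_times, sigma2, rho):
    """Covariance matrix of a first-order autoregressive random effect
    observed at n_times equally spaced time points:
    Cov(b_i, b_j) = sigma2 * rho**|i - j|."""
    idx = np.arange(n_times)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

cov = ar1_covariance(n_times=4, sigma2=1.0, rho=0.6)
```

Plugging such a matrix into the gene-specific random-effects term is what lets the mixture model respect the serial dependence between time points that plain Fourier approximations ignore.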
Welton, Nicky J; Soares, Marta O; Palmer, Stephen; Ades, Anthony E; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M
2015-07-01
Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. © The Author(s) 2015.
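One standard way to compute the "random effects mean" summary discussed above is DerSimonian-Laird estimation; a minimal sketch with hypothetical study effects (log scale) and within-study variances:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Classical DerSimonian-Laird random-effects meta-analysis: estimate
    the between-study variance tau^2 by the method of moments, then pool
    the study effects with inverse-variance weights."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)         # between-study variance
    w_star = 1.0 / (v + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return mu, se, tau2

# Hypothetical log-odds ratios and their within-study variances:
mu, se, tau2 = dersimonian_laird([-0.3, -0.9, -0.1, -0.5],
                                 [0.04, 0.09, 0.05, 0.06])
```

The abstract's point is that this pooled mean is only one of several defensible summaries under heterogeneity; the predictive distribution, for instance, would widen the interval by the estimated tau^2 and can change CEA conclusions.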
Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B
2003-11-01
The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
NASA Astrophysics Data System (ADS)
Posnansky, Oleg P.
2018-05-01
Measuring dynamic magnetic susceptibility by nuclear magnetic resonance reveals information about the internal structure of various magnetoactive composites. The response of such a material to applied external static and time-varying magnetic fields encodes intrinsic dynamic correlations and depends on links between the macroscopic effective susceptibility and the structure on the microscopic scale. In the current work we carried out a computational analysis of the frequency-dependent dynamic magnetic susceptibility and demonstrated its dependence on microscopic architectural elements while also considering Euclidean dimensionality. The proposed numerical method is efficient for simulating nuclear magnetic resonance experiments in two- and three-dimensional random magnetic media, allowing the influence of component concentration and of internal hierarchical characteristics of physical parameters to be chosen and modeled.
NASA Astrophysics Data System (ADS)
Han, Tongcheng
2018-07-01
Understanding the electrical properties of rocks under varying pressure is important for a variety of geophysical applications. This study proposes an approach to modelling the pressure-dependent electrical properties of porous rocks based on an effective medium model. The so-named Textural model uses the aspect ratios and pressure-dependent volume fractions of the pores and the aspect ratio and electrical conductivity of the matrix grains. The pores were represented by randomly oriented stiff and compliant spheroidal shapes with constant aspect ratios, and their pressure-dependent volume fractions were inverted from the measured variation of total porosity with differential pressure using a dual porosity model. The unknown constant stiff and compliant pore aspect ratios and the aspect ratio and electrical conductivity of the matrix grains were inverted by best fitting the modelled electrical formation factor to the measured data. Application of the approach to three sandstone samples covering a broad porosity range showed that the pressure-dependent electrical properties can be satisfactorily modelled by the proposed approach. The results demonstrate that the dual porosity concept is sufficient to explain the electrical properties of porous rocks under pressure through the effective medium model scheme.
Hormone-Mediated Pattern Formation in Seedling of Plants: a Competitive Growth Dynamics Model
NASA Astrophysics Data System (ADS)
Kawaguchi, Satoshi; Mimura, Masayasu; Ohya, Tomoyuki; Oikawa, Noriko; Okabe, Hirotaka; Kai, Shoichi
2001-10-01
An ecologically relevant pattern formation process mediated by hormonal interactions among growing seedlings is modeled based on the experimental observations on the effects of indole acetic acid, which can act as an inhibitor and activator of root growth depending on its concentration. In the absence of any lateral root with constant hormone-sensitivity, the edge effect phenomenon is obtained depending on the secretion rate of hormone from the main root. Introduction of growth-stage-dependent hormone-sensitivity drastically amplifies the initial randomness, resulting in spatially irregular macroscopic patterns. When the lateral root growth is introduced, periodic patterns are obtained whose periodicity depends on the length of lateral roots. The growth-stage-dependent hormone-sensitivity and the lateral root growth are crucial for macroscopic periodic-pattern formation.
NASA Astrophysics Data System (ADS)
Rana, Dipankar; Gangopadhyay, Gautam
2003-01-01
We have analyzed the energy transfer process in a dendrimer supermolecule using a classical random walk model and an Eyring model of membrane permeation. Here the energy transfer is considered as a multiple barrier crossing process by thermal hopping on the backbone of a Cayley tree. It is shown that the mean residence time and mean first passage time, which involve explicit local escape rates, depend upon the temperature, the size of the molecule, the core branching, and the nature of the potential energy landscape along the Cayley tree architecture. The effect of branching tends to create a uniform distribution of mean residence time over the generations, and the distribution depends upon the interplay of funneling and local rates of transitions. The calculation of flux at the steady state from the Eyring model also gives a useful idea about the rate when the dendrimeric system is considered as an open system in which the core absorbs the transported energy, like a photosynthetic reaction center, and a continuous supply of external energy is maintained at the peripheral nodes. The steady-state flux is shown to depend on the above system parameters and bears a qualitative resemblance to the result of the mean first passage time approach.
NASA Astrophysics Data System (ADS)
Smith, Lyndon N.; Smith, Melvyn L.
2000-10-01
Particulate materials undergo processing in many industries, and therefore there are significant commercial motivators for attaining improvements in the flow and packing behavior of powders. This can be achieved by modeling the effects of particle size, friction, and most importantly, particle shape or morphology. The method presented here for simulating powders employs a random number generator to construct a model of a random particle by combining a sphere with a number of smaller spheres. The resulting 3D model particle has a nodular type of morphology, which is similar to that exhibited by the atomized powders that are used in the bulk of powder metallurgy (PM) manufacture. The irregularity of the model particles is dependent upon vision system data gathered from microscopic analysis of real powder particles. A methodology is proposed whereby randomly generated model particles of various sizes and irregularities can be combined in a random packing simulation. The proposed Monte Carlo technique would allow incorporation of the effects of gravity, wall friction, and inter-particle friction. The improvements in simulation realism that this method is expected to provide would prove useful for controlling powder production, and for predicting die fill behavior during the production of PM parts.
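The core construction described (a sphere decorated with smaller spheres to give a nodular morphology) can be sketched as below. The nodule radius range and the uniform placement on the core surface are assumptions made for illustration; they stand in for the paper's calibrated, vision-system-derived irregularity data.

```python
import math
import random

def make_particle(r_core, n_nodules, nodule_scale, seed=5):
    """Build a nodular model particle: a core sphere of radius r_core plus
    n_nodules smaller spheres centred on random points of the core surface.
    Returns a list of (centre, radius) pairs."""
    rng = random.Random(seed)
    spheres = [((0.0, 0.0, 0.0), r_core)]
    for _ in range(n_nodules):
        # Random direction: a normalized Gaussian vector is uniform on the sphere.
        x, y, z = (rng.gauss(0.0, 1.0) for _ in range(3))
        norm = math.sqrt(x * x + y * y + z * z)
        centre = (x / norm * r_core, y / norm * r_core, z / norm * r_core)
        # Nodule radii drawn from an assumed range relative to the core.
        r_nod = r_core * rng.uniform(0.2, nodule_scale)
        spheres.append((centre, r_nod))
    return spheres
```

A packing simulation of the kind the abstract proposes would then drop many such particles into a container and apply the Monte Carlo moves (gravity, wall and inter-particle friction) to the sphere clusters.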
Brügemann, K; Gernand, E; von Borstel, U U; König, S
2011-08-01
Data used in the present study included 1,095,980 first-lactation test-day records for protein yield of 154,880 Holstein cows housed on 196 large-scale dairy farms in Germany. Data were recorded between 2002 and 2009 and merged with meteorological data from public weather stations. The maximum distance between each farm and its corresponding weather station was 50 km. Hourly temperature-humidity indexes (THI) were calculated using the mean of hourly measurements of dry bulb temperature and relative humidity. On the phenotypic scale, an increase in THI was generally associated with a decrease in daily protein yield. For genetic analyses, a random regression model was applied using time-dependent (d in milk, DIM) and THI-dependent covariates. Additive genetic and permanent environmental effects were fitted with this random regression model and Legendre polynomials of order 3 for DIM and THI. In addition, the fixed curve was modeled with Legendre polynomials of order 3. Heterogeneous residuals were fitted by dividing DIM into 5 classes, and by dividing THI into 4 classes, resulting in 20 different classes. Additive genetic variances for daily protein yield decreased with increasing degrees of heat stress and were lowest at the beginning of lactation and at extreme THI. Due to higher additive genetic variances, slightly higher permanent environment variances, and similar residual variances, heritabilities were highest for low THI in combination with DIM at the end of lactation. Genetic correlations among individual values for THI were generally >0.90. These trends from the complex random regression model were verified by applying relatively simple bivariate animal models for protein yield measured in 2 THI environments; that is, defining a THI value of 60 as a threshold. These high correlations indicate the absence of any substantial genotype × environment interaction for protein yield. 
However, heritabilities and additive genetic variances from the random regression model tended to be slightly higher in the THI range corresponding to cows' comfort zone. Selecting such superior environments for progeny testing can contribute to an accurate genetic differentiation among selection candidates. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
An INAR(1) Negative Multinomial Regression Model for Longitudinal Count Data.
ERIC Educational Resources Information Center
Bockenholt, Ulf
1999-01-01
Discusses a regression model for the analysis of longitudinal count data in a panel study by adapting an integer-valued first-order autoregressive (INAR(1)) Poisson process to represent time-dependent correlation between counts. Derives a new negative multinomial distribution by combining INAR(1) representation with a random effects approach.…
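The INAR(1) construction referenced here, an integer-valued autoregression built from binomial thinning plus Poisson innovations, can be sketched as follows; the parameter values are illustrative.

```python
import math
import random

def simulate_inar1(alpha, lam, n, seed=1):
    """Simulate an INAR(1) Poisson count series:
    X_t = alpha o X_{t-1} + e_t, where 'o' is binomial thinning (each of
    the X_{t-1} counts survives with probability alpha) and
    e_t ~ Poisson(lam).  The stationary marginal is Poisson(lam/(1-alpha))."""
    rng = random.Random(seed)

    def poisson(mu):
        # Knuth's multiplication method for Poisson draws.
        limit, k, p = math.exp(-mu), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    xs = [poisson(lam / (1.0 - alpha))]   # start in the stationary distribution
    for _ in range(n - 1):
        survivors = sum(1 for _ in range(xs[-1]) if rng.random() < alpha)
        xs.append(survivors + poisson(lam))
    return xs
```

The lag-1 autocorrelation of the series equals alpha, which is exactly the time-dependent correlation between counts that the INAR(1) representation is adapted to capture.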
Hurdle models for multilevel zero-inflated data via h-likelihood.
Molas, Marek; Lesaffre, Emmanuel
2010-12-30
Count data often exhibit overdispersion. One type of overdispersion arises when there is an excess of zeros in comparison with the standard Poisson distribution. Zero-inflated Poisson and hurdle models have been proposed to perform a valid likelihood-based analysis to account for the surplus of zeros. Further, data often arise in clustered, longitudinal or multiple-membership settings. The proper analysis needs to reflect the design of a study. Typically random effects are used to account for dependencies in the data. We examine the h-likelihood estimation and inference framework for hurdle models with random effects for complex designs. We extend the h-likelihood procedures to fit hurdle models, thereby extending h-likelihood to truncated distributions. Two applications of the methodology are presented. Copyright © 2010 John Wiley & Sons, Ltd.
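The data-generating side of a hurdle model with a cluster-level random effect can be sketched as below (h-likelihood fitting itself is beyond a short example). The parameter values, and the single random intercept shared by both model parts, are illustrative assumptions rather than the authors' specification.

```python
import math
import random

def simulate_hurdle(n_clusters, n_per, beta0, gamma0, sd_u, seed=7):
    """Simulate clustered hurdle-Poisson counts: a logistic hurdle decides
    zero vs. positive, and positive counts come from a zero-truncated
    Poisson.  A normal random intercept u_j (shared by both parts here for
    simplicity) induces the within-cluster dependence."""
    rng = random.Random(seed)

    def poisson(mu):
        limit, k, p = math.exp(-mu), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    data = []
    for j in range(n_clusters):
        u = rng.gauss(0.0, sd_u)                        # cluster random effect
        p_zero = 1.0 / (1.0 + math.exp(-(beta0 + u)))   # P(Y = 0), hurdle part
        lam = math.exp(gamma0 + u)                      # rate of the positive part
        for _ in range(n_per):
            if rng.random() < p_zero:
                data.append((j, 0))
            else:
                y = 0
                while y == 0:        # zero-truncated Poisson draw by rejection
                    y = poisson(lam)
                data.append((j, y))
    return data
```

Unlike a zero-inflated model, the hurdle separates the zeros completely from the positive counts, which is why the truncated distribution appears in the likelihood the paper extends.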
El-Diasty, Mohammed; Pagiatakis, Spiros
2009-01-01
In this paper, we examine the effect of changing the temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at -40 °C, -20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain optimal navigation solution for MEMS-based INS/GPS integration.
Modeling spatial effects of PM2.5 on term low birth weight in Los Angeles County
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coker, Eric, E-mail: cokerer@onid.orst.edu; Ghosh, Jokay; Jerrett, Michael
Air pollution epidemiological studies suggest that elevated exposure to fine particulate matter (PM2.5) is associated with higher prevalence of term low birth weight (TLBW). Previous studies have generally assumed the exposure–response of PM2.5 on TLBW to be the same throughout a large geographical area. Health effects related to PM2.5 exposures, however, may not be uniformly distributed spatially, creating a need for studies that explicitly investigate the spatial distribution of the exposure–response relationship between individual-level exposure to PM2.5 and TLBW. Here, we examine the overall and spatially varying exposure–response relationship between PM2.5 and TLBW throughout urban Los Angeles (LA) County, California. We estimated PM2.5 from a combination of land use regression (LUR), aerosol optical depth from remote sensing, and atmospheric modeling techniques. Exposures were assigned to LA County individual pregnancies identified from electronic birth certificates between the years 1995-2006 (N=1,359,284) provided by the California Department of Public Health. We used a single pollutant multivariate logistic regression model, with multilevel spatially structured and unstructured random effects set in a Bayesian framework to estimate global and spatially varying pollutant effects on TLBW at the census tract level. Overall, increased PM2.5 level was associated with higher prevalence of TLBW county-wide. The spatial random effects model, however, demonstrated that the exposure–response for PM2.5 and TLBW was not uniform across urban LA County. Rather, the magnitude and certainty of the exposure–response estimates for PM2.5 on log odds of TLBW were greatest in the urban core of Central and Southern LA County census tracts. These results suggest that the effects may be spatially patterned, and that simply estimating global pollutant effects obscures disparities suggested by spatial patterns of effects. 
Studies that incorporate spatial multilevel modeling with random coefficients allow us to identify areas where air pollutant effects on adverse birth outcomes may be most severe and policies to further reduce air pollution might be most effective. - Highlights: • We model the spatial dependency of PM2.5 effects on term low birth weight (TLBW). • PM2.5 effects on TLBW are shown to vary spatially across urban LA County. • Modeling spatial dependency of PM2.5 health effects may identify effect 'hotspots'. • Birth outcomes studies should consider the spatial dependency of PM2.5 effects.
Bias and inference from misspecified mixed-effect models in stepped wedge trial analysis.
Thompson, Jennifer A; Fielding, Katherine L; Davey, Calum; Aiken, Alexander M; Hargreaves, James R; Hayes, Richard J
2017-10-15
Many stepped wedge trials (SWTs) are analysed by using a mixed-effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common-to-all or varied-between clusters. Data were analysed with the standard model or with additional random effects for period effect or intervention effect. In a second simulation study, we explored the weight given to within-cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within-cluster comparisons in the standard model. In the SWTs simulated here, mixed-effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within-cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Ma, L. X.; Tan, J. Y.; Zhao, J. M.; Wang, F. Q.; Wang, C. A.; Wang, Y. Y.
2017-07-01
Due to the dependent scattering and absorption effects, the radiative transfer equation (RTE) may not be suitable for dealing with radiative transfer in dense discrete random media. This paper continues previous research on multiple and dependent scattering in densely packed discrete particle systems, and puts emphasis on the effects of the particle complex refractive index. The Mueller matrix elements of the scattering system with different complex refractive indexes are obtained by both the electromagnetic method and the radiative transfer method. The Maxwell equations are directly solved based on the superposition T-matrix method, while the RTE is solved by the Monte Carlo method combined with the hard sphere model in the Percus-Yevick approximation (HSPYA) to consider the dependent scattering effects. The results show that for densely packed discrete random media composed of medium size parameter particles (equal to 6.964 in this study), the demarcation line between independent and dependent scattering has remarkable connections with the particle complex refractive index. As the particle volume fraction increases to a certain value, densely packed discrete particles with higher refractive index contrasts between the particles and host medium and higher particle absorption indexes are more likely to show stronger dependent characteristics. Due to the failure of the extended Rayleigh-Debye scattering condition, the HSPYA has a weak effect on the dependent scattering correction at large phase shift parameters.
A BASIC Program for Use in Teaching Population Dynamics.
ERIC Educational Resources Information Center
Kidd, N. A. C.
1984-01-01
Describes an interactive simulation model which can be used to demonstrate population growth with discrete or overlapping populations and the effects of random, constant, or density-dependent mortality. The program listing (for Commodore PET 4032 microcomputer) is included. (Author/DH)
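The three mortality regimes mentioned (random, constant, or density-dependent) can be re-sketched in Python rather than the original Commodore BASIC; the specific rates, the carrying-capacity form of density dependence, and the equilibrium noted below are illustrative assumptions, not the program's actual listing.

```python
import random

def simulate_population(n0, birth_rate, mortality, steps, K=1000.0, seed=3):
    """Discrete-generation population growth under three mortality regimes:
      'constant' - fixed per-capita death probability,
      'random'   - death probability drawn anew each generation,
      'density'  - death probability rises with crowding, N/K."""
    rng = random.Random(seed)
    n = float(n0)
    history = [n]
    for _ in range(steps):
        if mortality == "constant":
            d = 0.3
        elif mortality == "random":
            d = rng.uniform(0.0, 0.6)
        else:                        # density-dependent mortality
            d = min(1.0, n / K)
        n = n * (1.0 + birth_rate) * (1.0 - d)
        history.append(n)
    return history
```

With density dependence the population settles where births balance deaths, (1 + b)(1 − N*/K) = 1, i.e. N* = K·b/(1 + b); constant mortality below the birth rate instead gives unchecked exponential growth.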
ERIC Educational Resources Information Center
Goldberg, Wendy A.; Prause, JoAnn; Lucas-Thompson, Rachel; Himsel, Amy
2008-01-01
This meta-analysis of 68 studies (770 effect sizes) used random effects models to examine whether children's achievement differed depending on whether their mothers were employed. Four achievement outcomes were emphasized: formal tests of achievement and intellectual functioning, grades, and teacher ratings of cognitive competence. When all…
Resistance distribution in the hopping percolation model.
Strelniker, Yakov M; Havlin, Shlomo; Berkovits, Richard; Frydman, Aviad
2005-07-01
We study the distribution function P(ρ) of the effective resistance ρ in two- and three-dimensional random resistor networks of linear size L in the hopping percolation model. In this model each bond has a conductivity taken from an exponential form σ ∝ exp(−κr), where κ is a measure of disorder and r is a random number, 0 ≤ r ≤ 1. We find that in both the usual strong-disorder regime L/κ^ν > 1 (not sensitive to removal of any single bond) and the extreme-disorder regime L/κ^ν < 1 (very sensitive to such a removal) the distribution depends only on L/κ^ν and can be well approximated by a log-normal function with dispersion bκ^ν/L, where b is a coefficient which depends on the type of lattice, and ν is the correlation critical exponent.
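The model as stated (bond conductance exp(−κr) with r uniform on [0,1]) is straightforward to realize numerically. The bus-bar geometry, the Gauss–Seidel relaxation, and the small lattice below are illustrative choices for a sketch, not the paper's method.

```python
import math
import random

def effective_resistance(L, kappa, seed=0, sweeps=4000):
    """Effective resistance of an L x L random resistor network in the
    hopping percolation model: each bond has conductance exp(-kappa * r),
    r ~ U(0,1).  The left column is held at V = 1 and the right at V = 0;
    interior node potentials are relaxed (Gauss-Seidel) until Kirchhoff's
    current law is satisfied."""
    rng = random.Random(seed)
    g_h = [[math.exp(-kappa * rng.random()) for _ in range(L - 1)]
           for _ in range(L)]         # g_h[i][j]: bond (i,j)-(i,j+1)
    g_v = [[math.exp(-kappa * rng.random()) for _ in range(L)]
           for _ in range(L - 1)]     # g_v[i][j]: bond (i,j)-(i+1,j)
    V = [[1.0 - j / (L - 1) for j in range(L)] for _ in range(L)]
    for _ in range(sweeps):
        for i in range(L):
            for j in range(1, L - 1):   # boundary columns stay fixed
                num = g_h[i][j - 1] * V[i][j - 1] + g_h[i][j] * V[i][j + 1]
                den = g_h[i][j - 1] + g_h[i][j]
                if i > 0:
                    num += g_v[i - 1][j] * V[i - 1][j]; den += g_v[i - 1][j]
                if i < L - 1:
                    num += g_v[i][j] * V[i + 1][j]; den += g_v[i][j]
                V[i][j] = num / den
    current = sum(g_h[i][0] * (V[i][0] - V[i][1]) for i in range(L))
    return 1.0 / current
```

Repeating this over many disorder realizations (seeds) yields samples of ρ from which P(ρ) and its dispersion can be estimated; at κ = 0 the network reduces to unit resistors with exact resistance (L − 1)/L between the bus bars.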
Additive mixed effect model for recurrent gap time data.
Ding, Jieli; Sun, Liuquan
2017-04-01
Gap times between recurrent events are often of primary interest in medical and observational studies. The additive hazards model, focusing on risk differences rather than risk ratios, has been widely used in practice. However, the marginal additive hazards model does not take the dependence among gap times into account. In this paper, we propose an additive mixed effect model to analyze gap time data, and the proposed model includes a subject-specific random effect to account for the dependence among the gap times. Estimating equation approaches are developed for parameter estimation, and the asymptotic properties of the resulting estimators are established. In addition, some graphical and numerical procedures are presented for model checking. The finite sample behavior of the proposed methods is evaluated through simulation studies, and an application to a data set from a clinic study on chronic granulomatous disease is provided.
Effective pore-scale dispersion upscaling with a correlated continuous time random walk approach
NASA Astrophysics Data System (ADS)
Le Borgne, T.; Bolster, D.; Dentz, M.; de Anna, P.; Tartakovsky, A.
2011-12-01
We investigate the upscaling of dispersion from a pore-scale analysis of Lagrangian velocities. A key challenge in the upscaling procedure is to relate the temporal evolution of spreading to the pore-scale velocity field properties. We test the hypothesis that one can represent Lagrangian velocities at the pore scale as a Markov process in space. The resulting effective transport model is a continuous time random walk (CTRW) characterized by a correlated random time increment, here denoted as correlated CTRW. We consider a simplified sinusoidal wavy channel model as well as a more complex heterogeneous pore space. For both systems, the predictions of the correlated CTRW model, with parameters defined from the velocity field properties (both distribution and correlation), are found to be in good agreement with results from direct pore-scale simulations over preasymptotic and asymptotic times. In this framework, the nontrivial dependence of dispersion on the pore boundary fluctuations is shown to be related to the competition between distribution and correlation effects. In particular, explicit inclusion of spatial velocity correlation in the effective CTRW model is found to be important to represent incomplete mixing in the pore throats.
A random spatial network model based on elementary postulates
Karlinger, Michael R.; Troutman, Brent M.
1989-01-01
A model for generating random spatial networks that is based on elementary postulates comparable to those of the random topology model is proposed. In contrast to the random topology model, this model ascribes a unique spatial specification to generated drainage networks, a distinguishing property of some network growth models. The simplicity of the postulates creates an opportunity for potential analytic investigations of the probabilistic structure of the drainage networks, while the spatial specification enables analyses of spatially dependent network properties. In the random topology model all drainage networks, conditioned on magnitude (number of first-order streams), are equally likely, whereas in this model all spanning trees of a grid, conditioned on area and drainage density, are equally likely. As a result, link lengths in the generated networks are not independent, as usually assumed in the random topology model. For a preliminary model evaluation, scale-dependent network characteristics, such as geometric diameter and link length properties, and topologic characteristics, such as bifurcation ratio, are computed for sets of drainage networks generated on square and rectangular grids. Statistics of the bifurcation and length ratios fall within the range of values reported for natural drainage networks, but geometric diameters tend to be relatively longer than those for natural networks.
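The property that all spanning trees of a grid are equally likely can be realized with Wilson's loop-erased random-walk algorithm. This is a standard construction chosen here for illustration; it is not necessarily the generation procedure the paper's postulates prescribe.

```python
import random

def uniform_spanning_tree(rows, cols, seed=0):
    """Sample a spanning tree of the rows x cols grid graph uniformly at
    random using Wilson's algorithm: repeated random walks from unvisited
    nodes, with loops erased implicitly by overwriting each node's outgoing
    choice, until every node has joined the tree."""
    rng = random.Random(seed)

    def neighbors(v):
        i, j = v
        return [(i + di, j + dj)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < rows and 0 <= j + dj < cols]

    nodes = [(i, j) for i in range(rows) for j in range(cols)]
    in_tree = {nodes[0]}            # the first node seeds the tree
    parent = {}
    for start in nodes:
        v = start
        while v not in in_tree:     # walk until the existing tree is hit
            parent[v] = rng.choice(neighbors(v))
            v = parent[v]
        v = start
        while v not in in_tree:     # retrace the loop-erased path into the tree
            in_tree.add(v)
            v = parent[v]
    return [(v, parent[v]) for v in nodes if v != nodes[0]]
```

Each sampled tree gives one equally likely network realization on which spatially dependent properties such as geometric diameter and link lengths can be measured.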
Wave-induced fluid flow in random porous media: Attenuation and dispersion of elastic waves
NASA Astrophysics Data System (ADS)
Müller, Tobias M.; Gurevich, Boris
2005-05-01
A detailed analysis of the relationship between elastic waves in inhomogeneous, porous media and the effect of wave-induced fluid flow is presented. Based on the results of the poroelastic first-order statistical smoothing approximation applied to Biot's equations of poroelasticity, a model for elastic wave attenuation and dispersion due to wave-induced fluid flow in 3-D randomly inhomogeneous poroelastic media is developed. Attenuation and dispersion depend on linear combinations of the spatial correlations of the fluctuating poroelastic parameters. The observed frequency dependence is typical for a relaxation phenomenon. Further, the analytic properties of attenuation and dispersion are analyzed. It is shown that the low-frequency asymptote of the attenuation coefficient of a plane compressional wave is proportional to the square of frequency. At high frequencies the attenuation coefficient becomes proportional to the square root of frequency. A comparison with the 1-D theory shows that attenuation is of the same order but slightly larger in 3-D random media. Several modeling choices of the approach including the effect of cross correlations between fluid and solid phase properties are demonstrated. The potential application of the results to real porous materials is discussed.
Geometrical effects on the electron residence time in semiconductor nano-particles.
Koochi, Hakimeh; Ebrahimi, Fatemeh
2014-09-07
We have used random walk (RW) numerical simulations to investigate the influence of geometry on the statistics of the electron residence time τ_r in a trap-limited diffusion process through semiconductor nano-particles. This is an important parameter in coarse-grained modeling of charge carrier transport in nano-structured semiconductor films. The traps have been distributed randomly on the surface (r² model) or through the whole particle (r³ model) with a specified density. The trap energies have been taken from an exponential distribution, and the trap release time is assumed to be a stochastic variable. We have carried out RW simulations to study the effect of the coordination number, the spatial arrangement of the neighbors, and the size of nano-particles on the statistics of τ_r. It has been observed that by increasing the coordination number n, the average value of the electron residence time, τ̄_r, rapidly decreases to an asymptotic value. For a fixed coordination number n, the electron's mean residence time does not depend on the neighbors' spatial arrangement. In other words, τ̄_r is a porosity-dependent, local parameter which generally varies remarkably from site to site, unless we are dealing with highly ordered structures. We have also examined the effect of nano-particle size d on the statistical behavior of τ̄_r. Our simulations indicate that for a volume distribution of traps, τ̄_r scales as d². For a surface distribution of traps, τ̄_r increases almost linearly with d. This leads to the prediction of a linear dependence of the diffusion coefficient D on the particle size d in ordered structures, or in random structures above the critical concentration, which is in accordance with experimental observations.
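The trap-limited random walk described can be sketched in miniature. The cubic lattice, the exponential trap-depth parameter, and the thermal release time exp(E) (activation with kT = 1) are illustrative assumptions; redrawing trap occupancy at each visit is an annealed simplification of the paper's fixed random trap layouts.

```python
import math
import random

def mean_residence_time(d, trap_density, n_walks, seed=9):
    """Random-walk estimate of the electron residence time in a cubic
    nano-particle of d^3 lattice sites with volume-distributed traps
    (an 'r^3'-type setting).  Each visited site is a trap with probability
    trap_density; a trap of depth E (exponentially distributed) holds the
    walker for a release time exp(E), i.e. thermal activation with kT = 1.
    Trap occupancy is redrawn at each visit - an annealed simplification."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        pos = (d // 2, d // 2, d // 2)      # start at the particle centre
        t = 0.0
        while all(0 <= c < d for c in pos):
            if rng.random() < trap_density:
                E = rng.expovariate(3.0)    # exponential trap-depth distribution
                t += math.exp(E)            # time spent before thermal release
            else:
                t += 1.0                    # free-site hop time
            axis = rng.randrange(3)
            step = rng.choice((-1, 1))
            pos = tuple(c + (step if k == axis else 0)
                        for k, c in enumerate(pos))
        total += t                          # walker has left the particle
    return total / n_walks
```

Comparing the estimate across particle sizes probes the d-scaling of τ̄_r that the abstract reports; the residence time grows strongly with d because the escape path length grows diffusively.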
Random crystal field effects on the integer and half-integer mixed-spin system
NASA Astrophysics Data System (ADS)
Yigit, Ali; Albayrak, Erhan
2018-05-01
In this work, we have focused on the random crystal field effects on the phase diagrams of the mixed spin-1 and spin-5/2 Ising system obtained by utilizing the exact recursion relations (ERR) on the Bethe lattice (BL). The distribution function P(D_i) = pδ[D_i − D(1 + α)] + (1 − p)δ[D_i − D(1 − α)] is used to randomize the crystal field. The phase diagrams are found to exhibit second- and first-order phase transitions depending on the values of α, D and p. It is also observed that the model displays a tricritical point, an isolated point, a critical end point and three compensation temperatures for suitable values of the system parameters.
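The bimodal crystal-field distribution quoted in the abstract can be sampled directly; the consistency check of the sample mean against E[D_i] = D(1 + α(2p − 1)) is an illustrative sanity test, not part of the ERR calculation itself.

```python
import random

def sample_crystal_fields(n, D, alpha, p, seed=42):
    """Draw site crystal fields D_i from the two-point distribution
    P(D_i) = p*delta[D_i - D(1+alpha)] + (1-p)*delta[D_i - D(1-alpha)],
    i.e. D_i = D(1+alpha) with probability p, else D(1-alpha).
    The mean is E[D_i] = D(1 + alpha*(2p - 1))."""
    rng = random.Random(seed)
    return [D * (1 + alpha) if rng.random() < p else D * (1 - alpha)
            for _ in range(n)]
```

In a Bethe-lattice recursion these D_i values would enter the local Boltzmann weights site by site, with p interpolating between the two pure crystal-field limits.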
MIMICKING COUNTERFACTUAL OUTCOMES TO ESTIMATE CAUSAL EFFECTS.
Lok, Judith J
2017-04-01
In observational studies, treatment may be adapted to covariates at several times without a fixed protocol, in continuous time. Treatment influences covariates, which influence treatment, which influences covariates, and so on. Then even time-dependent Cox-models cannot be used to estimate the net treatment effect. Structural nested models have been applied in this setting. Structural nested models are based on counterfactuals: the outcome a person would have had had treatment been withheld after a certain time. Previous work on continuous-time structural nested models assumes that counterfactuals depend deterministically on observed data, while conjecturing that this assumption can be relaxed. This article proves that one can mimic counterfactuals by constructing random variables, solutions to a differential equation, that have the same distribution as the counterfactuals, even given past observed data. These "mimicking" variables can be used to estimate the parameters of structural nested models without assuming the treatment effect to be deterministic.
Physically based reflectance model utilizing polarization measurement.
Nakano, Takayuki; Tamagawa, Yasuhisa
2005-05-20
A surface bidirectional reflectance distribution function (BRDF) depends on both the optical properties of the material and the microstructure of the surface, and appears as a combination of these factors. We propose a method for modeling the BRDF based on a separate optical-property (refractive-index) estimation by polarization measurement. Because the BRDF and the refractive index for precisely the same place can be determined, errors caused by individual differences or spatial dependence can be eliminated. Our BRDF model treats the surface as an aggregation of microfacets, and the diffractive effect is negligible because of randomness. An example model of a painted aluminum plate is presented.
Network meta-analysis of disconnected networks: How dangerous are random baseline treatment effects?
Béliveau, Audrey; Goring, Sarah; Platt, Robert W; Gustafson, Paul
2017-12-01
In network meta-analysis, the use of fixed baseline treatment effects (a priori independent) in a contrast-based approach is regularly preferred to the use of random baseline treatment effects (a priori dependent). That is because there is often no need to model baseline treatment effects, which carry the risk of model misspecification. However, in disconnected networks, fixed baseline treatment effects do not work (unless extra assumptions are made), because there is not enough information in the data to update the prior distribution on the contrasts between disconnected treatments. In this paper, we investigate to what extent the use of random baseline treatment effects is dangerous in disconnected networks. We take 2 publicly available datasets of connected networks and disconnect them in multiple ways. We then compare the results of treatment comparisons obtained from a Bayesian contrast-based analysis of each disconnected network using random normally distributed and exchangeable baseline treatment effects to those obtained from a Bayesian contrast-based analysis of their initial connected network using fixed baseline treatment effects. For the 2 datasets considered, we found that the use of random baseline treatment effects in disconnected networks was appropriate. Because those datasets were not cherry-picked, there should be other disconnected networks that would benefit from being analyzed using random baseline treatment effects. However, there is also a risk that the normality and exchangeability assumptions will be inappropriate in other datasets, even though we did not observe this situation in our case study. We provide code so that other datasets can be investigated. Copyright © 2017 John Wiley & Sons, Ltd.
Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel
2016-07-20
A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not alter significantly the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
A Preliminary Investigation of a Randomized Dependent Group Contingency for Hallway Transitions
ERIC Educational Resources Information Center
Deshais, Meghan A.; Fisher, Alyssa B.; Kahng, SungWoo
2018-01-01
We conducted a preliminary investigation of a randomized dependent group contingency to decrease disruptive behavior during hallway transitions. Two first-graders, identified by their classroom teacher, participated in this study. A multiple baseline across transitions was used to evaluate the effects of the randomized dependent group contingency…
Finley, Andrew O.; Banerjee, Sudipto; Cook, Bruce D.; Bradford, John B.
2013-01-01
In this paper we detail a multivariate spatial regression model that couples LiDAR, hyperspectral and forest inventory data to predict forest outcome variables at a high spatial resolution. The proposed model is used to analyze forest inventory data collected on the US Forest Service Penobscot Experimental Forest (PEF), ME, USA. In addition to helping meet the regression model's assumptions, results from the PEF analysis suggest that the addition of multivariate spatial random effects improves model fit and predictive ability, compared with two commonly applied modeling approaches. This improvement results from explicitly modeling the covariation among forest outcome variables and spatial dependence among observations through the random effects. Direct application of such multivariate models to even moderately large datasets is often computationally infeasible because of cubic order matrix algorithms involved in estimation. We apply a spatial dimension reduction technique to help overcome this computational hurdle without sacrificing richness in modeling.
Packing Fraction of a Two-dimensional Eden Model with Random-Sized Particles
NASA Astrophysics Data System (ADS)
Kobayashi, Naoki; Yamazaki, Hiroshi
2018-01-01
We have performed a numerical simulation of a two-dimensional Eden model with random-sized particles. In the present model, the particle radii are generated from a Gaussian distribution with mean μ and standard deviation σ. First, we have examined the bulk packing fraction of the Eden cluster and investigated the effects of the standard deviation and the total number of particles NT. We show that the bulk packing fraction depends on the number of particles and the standard deviation. In particular, for the dependence on the standard deviation, we have determined the asymptotic value of the bulk packing fraction in the limit of the dimensionless standard deviation. This value is larger than the packing fraction obtained in a previous study of the Eden model with uniform-sized particles. Secondly, we have investigated the packing fraction of the entire Eden cluster, including the effect of the interface fluctuation. We find that the entire packing fraction depends on the number of particles while it is independent of the standard deviation, in contrast to the bulk packing fraction. In a similar way to the bulk packing fraction, we have obtained the asymptotic value of the entire packing fraction in the limit NT → ∞. The obtained value of the entire packing fraction is smaller than the bulk value. This fact suggests that the interface fluctuation of the Eden cluster influences the packing fraction.
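The setup can be caricatured in a few lines of Python. The sketch below uses a simplified tangent-attachment rule and illustrative parameters (it is not the authors' exact growth algorithm): it grows an Eden-like cluster of disks with Gaussian radii and estimates a packing fraction by Monte Carlo sampling of the bounding circle.

```python
import math
import random

def eden_cluster(n_particles=120, mu=1.0, sigma=0.2, seed=2):
    """Grow a crude 2-D Eden-like cluster: each new disk, with radius drawn
    from a Gaussian (mean mu, s.d. sigma), is attached tangent to a randomly
    chosen existing disk at a random angle and rejected if it overlaps any
    other disk. Returns a list of (x, y, r) tuples."""
    rng = random.Random(seed)
    def radius():
        r = rng.gauss(mu, sigma)
        return r if r > 0 else mu              # guard against negative draws
    disks = [(0.0, 0.0, radius())]
    while len(disks) < n_particles:
        x0, y0, r0 = rng.choice(disks)
        r = radius()
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x = x0 + (r0 + r) * math.cos(theta)
        y = y0 + (r0 + r) * math.sin(theta)
        if all((x - xi) ** 2 + (y - yi) ** 2 >= (r + ri) ** 2 - 1e-9
               for xi, yi, ri in disks):
            disks.append((x, y, r))
    return disks

def packing_fraction(disks, n_points=15000, seed=3):
    """Monte Carlo estimate of the area fraction of the bounding circle
    covered by the disks (a rough stand-in for the packing fraction)."""
    rng = random.Random(seed)
    R = max(math.hypot(x, y) + r for x, y, r in disks)
    hits = inside = 0
    for _ in range(n_points):
        px, py = rng.uniform(-R, R), rng.uniform(-R, R)
        if px * px + py * py > R * R:
            continue
        inside += 1
        if any((px - x) ** 2 + (py - y) ** 2 <= r * r for x, y, r in disks):
            hits += 1
    return hits / inside

disks = eden_cluster()
phi = packing_fraction(disks)
```

Because this estimate includes the ragged interface of the cluster, it corresponds loosely to the "entire" packing fraction discussed in the abstract rather than the bulk value.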
Model Checking with Multi-Threaded IC3 Portfolios
2015-01-15
different runs varies randomly depending on the thread interleaving. The use of a portfolio of solvers to maximize the likelihood of a quick solution is...empirically show (cf. Sec. 5.2) that the predictions based on this formula have high accuracy. Note that each solver in the portfolio potentially searches...speedup of over 300. We also show that widening the proof search of ic3 by randomizing its SAT solver is not as effective as parallelization
Persistent random walk of cells involving anomalous effects and random death
NASA Astrophysics Data System (ADS)
Fedotov, Sergei; Tan, Abby; Zubarev, Andrey
2015-04-01
The purpose of this paper is to implement a random death process into a persistent random walk model which produces sub-ballistic superdiffusion (Lévy walk). We develop a stochastic two-velocity jump model of cell motility for which the switching rate depends upon the time which the cell has spent moving in one direction. It is assumed that the switching rate is a decreasing function of residence (running) time. This assumption leads to the power law for the velocity switching time distribution. This describes the anomalous persistence of cell motility: the longer the cell moves in one direction, the smaller the switching probability to another direction becomes. We derive master equations for the cell densities with the generalized switching terms involving the tempered fractional material derivatives. We show that the random death of cells has an important implication for the transport process through tempering of the superdiffusive process. In the long-time limit we write stationary master equations in terms of exponentially truncated fractional derivatives in which the rate of death plays the role of tempering of a Lévy jump distribution. We find the upper and lower bounds for the stationary profiles corresponding to the ballistic transport and diffusion with the death-rate-dependent diffusion coefficient. Monte Carlo simulations confirm these bounds.
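A minimal simulation of such a two-velocity walk can be sketched as follows (the Pareto run-time law stands in for the decreasing switching rate, and all names and parameter values are illustrative, not the paper's):

```python
import random

def two_velocity_walk(v=1.0, alpha=1.5, t_min=1.0, death_rate=0.01, seed=4):
    """Two-velocity jump sketch: run durations follow a Pareto(alpha) law
    (a switching rate that decreases with running time yields power-law
    runs), the velocity flips sign after each run, and the cell dies after
    an exponential lifetime with the given death rate. Returns the final
    position and the lifetime."""
    rng = random.Random(seed)
    lifetime = rng.expovariate(death_rate)
    x, t, direction = 0.0, 0.0, 1
    while t < lifetime:
        # inverse-CDF sample of a Pareto(alpha) run time with minimum t_min
        run = t_min * (1.0 - rng.random()) ** (-1.0 / alpha)
        run = min(run, lifetime - t)           # truncate the run at death
        x += direction * v * run
        t += run
        direction = -direction                 # reverse velocity
    return x, t

pos, t_death = two_velocity_walk()
```

With 1 < alpha < 2 the untruncated walk is superdiffusive (a Lévy walk); the random death truncates the longest runs, which is the tempering effect the paper formalizes.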
Noise-induced extinction for a ratio-dependent predator-prey model with strong Allee effect in prey
NASA Astrophysics Data System (ADS)
Mandal, Partha Sarathi
2018-04-01
In this paper, we study a stochastically forced ratio-dependent predator-prey model with a strong Allee effect in the prey population. In the deterministic case, we show that the model exhibits a stable interior equilibrium point or a limit cycle corresponding to the co-existence of both species. We investigate a probabilistic mechanism of noise-induced extinction in a zone of the stable interior equilibrium point. Computational methods based on the stochastic sensitivity function technique are applied for the analysis of the dispersion of random states near the stable interior equilibrium point. This method allows us to construct a confidence domain and estimate the threshold value of the noise intensity for a transition from coexistence to extinction.
Bakbergenuly, Ilyas; Morgenthaler, Stephan
2016-01-01
We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group‐level studies or in meta‐analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log‐odds and arcsine transformations of the estimated probability p^, both for single‐group studies and in combining results from several groups or studies in meta‐analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta‐analysis and result in abysmal coverage of the combined effect for large K. We also propose bias‐correction for the arcsine transformation. Our simulations demonstrate that this bias‐correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta‐analyses of prevalence. PMID:27192062
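The transformation bias described above is easy to reproduce by simulation. The sketch below assumes a beta-binomial model for the overdispersion (a standard choice, though the abstract does not specify one) with illustrative parameter values:

```python
import math
import random

def arcsine_bias(p0=0.2, rho=0.1, n=50, n_groups=20000, seed=5):
    """Monte Carlo sketch of the arcsine-transformation bias for
    overdispersed binomial data: group probabilities follow a Beta
    distribution with mean p0 and intracluster correlation
    rho = 1/(a + b + 1); the bias is the mean of arcsin(sqrt(p_hat))
    across groups minus arcsin(sqrt(p0))."""
    rng = random.Random(seed)
    s = (1.0 - rho) / rho                      # a + b for the Beta law
    a, b = p0 * s, (1.0 - p0) * s
    total = 0.0
    for _ in range(n_groups):
        p = rng.betavariate(a, b)
        k = sum(rng.random() < p for _ in range(n))   # Binomial(n, p) draw
        total += math.asin(math.sqrt(k / n))
    return total / n_groups - math.asin(math.sqrt(p0))

bias = arcsine_bias()
```

For p0 below one half the arcsine transform is concave, so the extra between-group variance pulls the transformed mean below arcsin(sqrt(p0)), and rerunning with larger rho shows the bias growing roughly linearly in rho, consistent with the abstract.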
Forster, Jeri E.; MaWhinney, Samantha; Ball, Erika L.; Fairclough, Diane
2011-01-01
Dropout is common in longitudinal clinical trials and when the probability of dropout depends on unobserved outcomes even after conditioning on available data, it is considered missing not at random and therefore nonignorable. To address this problem, mixture models can be used to account for the relationship between a longitudinal outcome and dropout. We propose a Natural Spline Varying-coefficient mixture model (NSV), which is a straightforward extension of the parametric Conditional Linear Model (CLM). We assume that the outcome follows a varying-coefficient model conditional on a continuous dropout distribution. Natural cubic B-splines are used to allow the regression coefficients to semiparametrically depend on dropout and inference is therefore more robust. Additionally, this method is computationally stable and relatively simple to implement. We conduct simulation studies to evaluate performance and compare methodologies in settings where the longitudinal trajectories are linear and dropout time is observed for all individuals. Performance is assessed under conditions where model assumptions are both met and violated. In addition, we compare the NSV to the CLM and a standard random-effects model using an HIV/AIDS clinical trial with probable nonignorable dropout. The simulation studies suggest that the NSV is an improvement over the CLM when dropout has a nonlinear dependence on the outcome. PMID:22101223
Robustness and Vulnerability of Networks with Dynamical Dependency Groups.
Bai, Ya-Nan; Huang, Ning; Wang, Lei; Wu, Zhi-Xi
2016-11-28
The dependency property and self-recovery of failure nodes both have great effects on the robustness of networks during the cascading process. Existing investigations focused mainly on the failure mechanism of static dependency groups without considering the time-dependency of interdependent nodes and the recovery mechanism in reality. In this study, we present an evolving network model consisting of failure mechanisms and a recovery mechanism to explore network robustness, where the dependency relations among nodes vary over time. Based on generating function techniques, we provide an analytical framework for random networks with arbitrary degree distribution. In particular, we theoretically find that an abrupt percolation transition exists corresponding to the dynamical dependency groups for a wide range of topologies after initial random removal. Moreover, when the abrupt transition point is above the failure threshold of dependency groups, the evolving network with the larger dependency groups is more vulnerable; when below it, the larger dependency groups make the network more robust. Numerical simulations employing the Erdős-Rényi network and Barabási-Albert scale free network are performed to validate our theoretical results.
Reliable gain-scheduled control of discrete-time systems and its application to CSTR model
NASA Astrophysics Data System (ADS)
Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.
2016-10-01
This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator faults. Further, the nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with measurable time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller by implementing a probability-dependent Lyapunov function and a linear matrix inequality (LMI) approach such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.
Crowther, Michael J; Look, Maxime P; Riley, Richard D
2014-09-28
Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
On the repeated measures designs and sample sizes for randomized controlled trials.
Tango, Toshiro
2016-04-01
For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis-of-covariance-type analysis using a pre-defined pair of "pre-post" data, in which the pre-(baseline) data are used as a covariate for adjustment together with other covariates. The major design issue is then to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations, combined with generalized linear mixed-effects models, that depend not only on the number of subjects but also on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with generalized linear mixed-effects models are that (1) it can easily handle missing data by applying likelihood-based ignorable analyses under the missing-at-random assumption and (2) it may lead to a reduction in sample size compared with the simple pre-post design. The proposed designs and sample size calculations are illustrated with real data arising from randomized controlled trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong
2015-05-01
A study of medical expenditure and its influencing factors among students enrolled in the Urban Resident Basic Medical Insurance (URBMI) scheme in Taiyuan indicated that nonresponse bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies that focused on only one missing-data mechanism, this study proposes a two-stage method that handles both mechanisms simultaneously by combining multiple imputation with a sample selection model. A total of 1 190 questionnaires were returned by the students (or their parents) selected in child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. In the returned questionnaires, 2.52% of the dependent variable was not missing at random (NMAR) and 7.14% was missing at random (MAR). First, multiple imputation was conducted for the MAR values using the completed data; then a sample selection model was used to correct for NMAR in the multiple imputation, and a multi-factor analysis model was established. Based on 1 000 resamples, the best scheme for filling the randomly missing values was the predictive mean matching (PMM) method at the observed missing proportion. With this optimal scheme, a two-stage survey was conducted. Finally, the factors influencing annual medical expenditure among the students enrolled in URBMI in Taiyuan were found to include population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in a hospital, seeking medical care in a community health center or private clinic, hospitalization, hospitalization canceled for certain reasons, self-medication and the acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can effectively address nonresponse bias and selection bias in the dependent variable of survey data.
Two-dimensional Ising model on random lattices with constant coordination number
NASA Astrophysics Data System (ADS)
Schrauth, Manuel; Richter, Julian A. J.; Portela, Jefferson S. E.
2018-02-01
We study the two-dimensional Ising model on networks with quenched topological (connectivity) disorder. In particular, we construct random lattices of constant coordination number and perform large-scale Monte Carlo simulations in order to obtain critical exponents using finite-size scaling relations. We find disorder-dependent effective critical exponents, similar to diluted models, showing thus no clear universal behavior. Considering the very recent results for the two-dimensional Ising model on proximity graphs and the coordination number correlation analysis suggested by Barghathi and Vojta [Phys. Rev. Lett. 113, 120602 (2014), 10.1103/PhysRevLett.113.120602], our results indicate that the planarity and connectedness of the lattice play an important role in deciding whether the phase transition is stable against quenched topological disorder.
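A sketch of the setup: the configuration model below is a stand-in for the authors' constant-coordination construction (their lattices additionally control the spatial geometry, which this ignores), followed by standard Metropolis dynamics; all parameter values are illustrative.

```python
import math
import random

def random_regular_graph(n=100, k=4, seed=6):
    """Configuration-model sketch of a random lattice with constant
    coordination number k: pair up k stubs per site and retry until the
    matching has no self-loops or duplicate edges."""
    rng = random.Random(seed)
    while True:
        stubs = [i for i in range(n) for _ in range(k)]
        rng.shuffle(stubs)
        edges, ok = set(), True
        for a, b in zip(stubs[::2], stubs[1::2]):
            if a == b or (a, b) in edges or (b, a) in edges:
                ok = False
                break
            edges.add((a, b))
        if ok:
            break
    nbrs = [[] for _ in range(n)]
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    return nbrs

def mean_abs_magnetization(nbrs, T, sweeps=400, seed=7):
    """Metropolis Monte Carlo for the (J = 1) Ising model on the graph;
    averages |m| over the second half of the run."""
    rng = random.Random(seed)
    n = len(nbrs)
    spin = [1] * n
    m_sum, m_cnt = 0.0, 0
    for sweep in range(sweeps):
        for _ in range(n):
            i = rng.randrange(n)
            dE = 2 * spin[i] * sum(spin[j] for j in nbrs[i])
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spin[i] = -spin[i]
        if sweep >= sweeps // 2:
            m_sum += abs(sum(spin)) / n
            m_cnt += 1
    return m_sum / m_cnt

nbrs = random_regular_graph()
m_ordered = mean_abs_magnetization(nbrs, T=1.0)   # well below T_c
```

A finite-size scaling study of the kind reported in the abstract would repeat such runs over many disorder realizations, system sizes, and temperatures near the transition.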
Douglass, Michael; Bezak, Eva; Penfold, Scott
2013-07-01
To investigate increased radiation dose deposition due to gold nanoparticles (GNPs) during x-ray radiotherapy, using a 3D computational cell model. Two GNP simulation scenarios were set up in Geant4: a single 400 nm diameter gold cluster randomly positioned in the cytoplasm, and a 300 nm gold layer around the nucleus of the cell. Using an 80 kVp photon beam, the effect of GNP on the dose deposition in five modeled regions of the cell, including the cytoplasm, membrane, and nucleus, was simulated. Two Geant4 physics lists were tested: the default Livermore list and a custom-built Livermore/DNA hybrid physics list. 10^6 particles were simulated for the 840 cells in the simulation. Each cell was randomly placed with random orientation and a diameter varying between 9 and 13 μm. A mathematical algorithm was used to ensure that none of the 840 cells overlapped. The energy dependence of the GNP physical dose enhancement effect was calculated by simulating the dose deposition in the cells with two energy spectra, 80 kVp and 6 MV. The contribution from Auger electrons was investigated by comparing the two GNP simulation scenarios while activating and deactivating atomic de-excitation processes in Geant4. The physical dose enhancement ratio (DER) of the GNPs was calculated using the Monte Carlo model. The model has demonstrated that the DER depends on the amount of gold and the position of the gold cluster within the cell. Individual cell regions experienced a statistically significant (p < 0.05) change in absorbed dose (DER between 1 and 10) depending on the type of gold geometry used. The DER resulting from gold clusters attached to the cell nucleus had the more significant effect of the two cases (DER ≈ 55). The DER value calculated at 6 MV was shown to be at least an order of magnitude smaller than the DER values calculated for the 80 kVp spectrum.
Based on simulations, when 80 kVp photons are used, Auger electrons have a statistically insignificant (p > 0.05) effect on the overall dose increase in the cell. The low energy of the Auger electrons produced prevents them from propagating more than 250-500 nm from the gold cluster and, therefore, has a negligible effect on the overall dose increase due to GNPs. The results presented in the current work show that the primary dose enhancement is due to the production of additional photoelectrons.
Risk perception in epidemic modeling
NASA Astrophysics Data System (ADS)
Bagnoli, Franco; Liò, Pietro; Sguanci, Luca
2007-12-01
We investigate the effects of risk perception in a simple model of epidemic spreading. We assume that the perception of the risk of being infected depends on the fraction of neighbors that are ill. The effect of this factor is to decrease the infectivity, that therefore becomes a dynamical component of the model. We study the problem in the mean-field approximation and by numerical simulations for regular, random, and scale-free networks. We show that for homogeneous and random networks, there is always a value of perception that stops the epidemics. In the “worst-case” scenario of a scale-free network with diverging input connectivity, a linear perception cannot stop the epidemics; however, we show that a nonlinear increase of the perception risk may lead to the extinction of the disease. This transition is discontinuous, and is not predicted by the mean-field analysis.
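A minimal network simulation in this spirit (on an Erdős-Rényi graph rather than the scale-free case, with an assumed exponential perception factor and illustrative parameter values) lowers the per-contact infectivity according to the fraction of ill neighbors:

```python
import math
import random

def sis_with_perception(n=400, k_mean=6.0, tau=0.3, J=2.0, recover=0.5,
                        steps=150, seed=8):
    """SIS sketch on an Erdos-Renyi network in which risk perception lowers
    the per-contact infection probability to tau * exp(-J * s), where s is
    the fraction of a node's neighbours currently infected. Returns the
    time series of the infected fraction."""
    rng = random.Random(seed)
    p_edge = k_mean / (n - 1)
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_edge:
                nbrs[i].append(j)
                nbrs[j].append(i)
    infected = set(rng.sample(range(n), n // 10))
    history = []
    for _ in range(steps):
        nxt = set()
        for i in range(n):
            if i in infected:
                if rng.random() >= recover:            # fails to recover
                    nxt.add(i)
            elif nbrs[i]:
                ill = sum(j in infected for j in nbrs[i])
                if ill:
                    s = ill / len(nbrs[i])
                    tau_eff = tau * math.exp(-J * s)   # perception-reduced
                    if rng.random() < 1.0 - (1.0 - tau_eff) ** ill:
                        nxt.add(i)
        infected = nxt
        history.append(len(infected) / n)
    return history

prevalence = sis_with_perception()
```

Raising J suppresses the endemic level and, for large enough J, can drive the infection extinct on homogeneous networks, mirroring the perception-induced stopping of the epidemic discussed in the abstract.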
ERIC Educational Resources Information Center
Ates, Bünyamin
2016-01-01
In this research, the effect of solution focused group counseling upon high school students struggling with school burnout was analyzed. The research was an experimental study in which a pre-test post-test control group random design was used, depending upon the real experimental model. The study group included 30 students that volunteered from…
Dynamic Quantum Allocation and Swap-Time Variability in Time-Sharing Operating Systems.
ERIC Educational Resources Information Center
Bhat, U. Narayan; Nance, Richard E.
The effects of dynamic quantum allocation and swap-time variability on central processing unit (CPU) behavior are investigated using a model that allows both quantum length and swap-time to be state-dependent random variables. Effective CPU utilization is defined to be the proportion of a CPU busy period that is devoted to program processing, i.e.…
The Dynamics of Study-Work Choice and Its Effect on Intended and Actual University Attainment
ERIC Educational Resources Information Center
Gong, Xiaodong
2017-01-01
We study the dynamics of study-work choices of Australian high school students and how these choices affect intended and actual enrolment in universities when they finish their school education. A dynamic random effect multi-equation model is constructed and estimated. We find that study-work choices are state dependent, driven by student…
Role of small-norm components in extended random-phase approximation
NASA Astrophysics Data System (ADS)
Tohyama, Mitsuru
2017-09-01
The role of small-norm amplitudes in extended random-phase approximation (RPA) theories, such as the particle-particle and hole-hole components of the one-body amplitudes and the two-body amplitudes other than the two-particle/two-hole components, is investigated for the one-dimensional Hubbard model using an extended RPA derived from the time-dependent density-matrix theory. It is found that these amplitudes cannot be neglected in strongly interacting regions where the effects of ground-state correlations are significant.
SHER: a colored petri net based random mobility model for wireless communications.
Khan, Naeem Akhtar; Ahmad, Farooq; Khan, Sher Afzal
2015-01-01
In wireless network research, simulation is the most important technique for investigating a network's behavior and validating it. Wireless networks typically consist of mobile hosts; therefore, the degree of validation is influenced by the underlying mobility model, and synthetic models are implemented in simulators because real-life traces are not widely available. In wireless communications, mobility is an integral part, and the key role of a mobility model is to mimic real-life traveling patterns. The performance of routing protocols and mobility management strategies, e.g. paging, registration and handoff, is highly dependent on the selected mobility model. In this paper, we devise and evaluate Show Home and Exclusive Regions (SHER), a novel two-dimensional (2-D) Colored Petri net (CPN) based formal random mobility model, which exhibits the sociological behavior of a user. The model captures hotspots where a user frequently visits and spends time. Our solution eliminates six key issues of random mobility models, i.e., sudden stops, memoryless movements, the border effect, temporal dependency of velocity, pause-time dependency, and speed decay, in a single model. The proposed model is able to predict the future location of a mobile user and ultimately improves the performance of wireless communication networks. The model follows a uniform nodal distribution and is a mini simulator which exhibits interesting mobility patterns. The model is also helpful to those who are not familiar with formal modeling, and users can extract meaningful information with a single mouse-click. It is noteworthy that capturing dynamic mobility patterns through CPN is the most challenging activity of the presented research. Statistical and reachability analysis techniques are presented to elucidate and validate the performance of our proposed mobility model. The state space methods allow us to algorithmically derive the system behavior and rectify the errors of our proposed model.
SHER: A Colored Petri Net Based Random Mobility Model for Wireless Communications
Khan, Naeem Akhtar; Ahmad, Farooq; Khan, Sher Afzal
2015-01-01
In wireless network research, simulation is the most important technique for investigating and validating network behavior. Wireless networks typically consist of mobile hosts; therefore, the degree of validation is influenced by the underlying mobility model, and synthetic models are implemented in simulators because real-life traces are not widely available. Mobility is an integral part of wireless communications, and the key role of a mobility model is to mimic real-life travel patterns. The performance of routing protocols and mobility management strategies, e.g., paging, registration and handoff, is highly dependent on the selected mobility model. In this paper, we devise and evaluate Show Home and Exclusive Regions (SHER), a novel two-dimensional (2-D) Colored Petri net (CPN) based formal random mobility model that exhibits the sociological behavior of a user. The model captures hotspots where a user frequently visits and spends time. Our solution eliminates six key issues of random mobility models, i.e., sudden stops, memoryless movements, the border effect, temporal dependency of velocity, pause time dependency, and speed decay, in a single model. The proposed model is able to predict the future location of a mobile user and ultimately improves the performance of wireless communication networks. The model follows a uniform nodal distribution and is a mini simulator that exhibits interesting mobility patterns. The model is also helpful to those who are not familiar with formal modeling, as users can extract meaningful information with a single mouse click. It is noteworthy that capturing dynamic mobility patterns through CPN is the most challenging and demanding activity of the presented research. Statistical and reachability analysis techniques are presented to elucidate and validate the performance of our proposed mobility model. State space methods allow us to algorithmically derive the system's behavior and rectify the errors of our proposed model. PMID:26267860
A Random Walk in the Park: An Individual-Based Null Model for Behavioral Thermoregulation.
Vickers, Mathew; Schwarzkopf, Lin
2016-04-01
Behavioral thermoregulators leverage environmental temperature to control their body temperature. Habitat thermal quality therefore dictates the difficulty and necessity of precise thermoregulation, and the quality of behavioral thermoregulation in turn impacts organism fitness via the thermal dependence of performance. Comparing the body temperature of a thermoregulator with a null (non-thermoregulating) model allows us to estimate habitat thermal quality and the effect of behavioral thermoregulation on body temperature. We define a null model for behavioral thermoregulation that is a random walk in a temporally and spatially explicit thermal landscape. Predicted body temperature is also integrated through time, so recent body temperature history, environmental temperature, and movement influence current body temperature; there is no particular reliance on an organism's equilibrium temperature. We develop a metric called thermal benefit that equates body temperature to thermally dependent performance as a proxy for fitness. We measure thermal quality of two distinct tropical habitats as a temporally dynamic distribution that is an ergodic property of many random walks, and we compare it with the thermal benefit of real lizards in both habitats. Our simple model focuses on transient body temperature; as such, using it we observe such subtleties as shifts in the thermoregulatory effort and investment of lizards throughout the day, from thermoregulators to thermoconformers.
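The null model described in this abstract lends itself to a compact sketch. The following is a hypothetical illustration, not the authors' code: a walker steps randomly over a discretized thermal landscape while its body temperature relaxes toward the local environmental temperature, so recent thermal history and movement both shape current body temperature. The landscape values and the relaxation rate `k` are invented for illustration.

```python
import random

def null_model_walk(env_temps, steps, k=0.2, seed=1):
    """Random-walk null model: the walker moves without thermoregulatory
    intent, and its body temperature relaxes toward the local environmental
    temperature at rate k (thermal inertia)."""
    rng = random.Random(seed)
    x = rng.randrange(len(env_temps))
    t_body = env_temps[x]
    history = [t_body]
    for _ in range(steps):
        x = (x + rng.choice([-1, 1])) % len(env_temps)  # unbiased step, wrapped
        t_body += k * (env_temps[x] - t_body)           # integrate through time
        history.append(t_body)
    return history

# Illustrative landscape: cool shaded patches (25 C) and hot sun patches (40 C).
landscape = [25.0] * 50 + [40.0] * 50
trajectory = null_model_walk(landscape, steps=1000)
```

Running many such walks yields the null distribution of body temperatures against which the thermal benefit of real lizards can be compared.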
Lampa, Erik G; Nilsson, Leif; Liljelind, Ingrid E; Bergdahl, Ingvar A
2006-06-01
When assessing occupational exposures, repeated measurements are in most cases required. Repeated measurements are more resource intensive than a single measurement, so careful planning of the measurement strategy is necessary to assure that resources are spent wisely. The optimal strategy depends on the objectives of the measurements. Here, two different models of random effects analysis of variance (ANOVA) are proposed for the optimization of measurement strategies by the minimization of the variance of the estimated log-transformed arithmetic mean value of a worker group, i.e. the strategies are optimized for precise estimation of that value. The first model is a one-way random effects ANOVA model. For that model it is shown that the best precision in the estimated mean value is always obtained by including as many workers as possible in the sample while restricting the number of replicates to two or at most three regardless of the size of the variance components. The second model introduces the 'shared temporal variation' which accounts for those random temporal fluctuations of the exposure that the workers have in common. It is shown for that model that the optimal sample allocation depends on the relative sizes of the between-worker component and the shared temporal component, so that if the between-worker component is larger than the shared temporal component more workers should be included in the sample and vice versa. The results are illustrated graphically with an example from the reinforced plastics industry. If there exists a shared temporal variation at a workplace, that variability needs to be accounted for in the sampling design and the more complex model is recommended.
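The allocation result for the one-way model can be checked with a one-line variance formula. In this sketch (a standard balanced-design expression; the variance components are illustrative), the variance of the estimated group mean with n workers and r replicates each is s2_between/n + s2_within/(n*r), which shows why adding workers beats adding replicates:

```python
def var_group_mean(s2_between, s2_within, n_workers, n_reps):
    """Variance of the estimated group mean under a balanced one-way
    random effects ANOVA: the between-worker component is averaged over
    workers only, the within-worker component over all measurements."""
    return s2_between / n_workers + s2_within / (n_workers * n_reps)

# Fixed budget of 24 measurements, equal variance components (illustrative):
allocations = [(12, 2), (8, 3), (4, 6)]
variances = [var_group_mean(1.0, 1.0, n, r) for n, r in allocations]
```

With any positive between-worker component, the (12 workers, 2 replicates) allocation gives the smallest variance, matching the abstract's recommendation of many workers and only two or three replicates each.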
The Dependent Poisson Race Model and Modeling Dependence in Conjoint Choice Experiments
ERIC Educational Resources Information Center
Ruan, Shiling; MacEachern, Steven N.; Otter, Thomas; Dean, Angela M.
2008-01-01
Conjoint choice experiments are used widely in marketing to study consumer preferences amongst alternative products. We develop a class of choice models, belonging to the class of Poisson race models, that describe a "random utility" which lends itself to a process-based description of choice. The models incorporate a dependence structure which…
Elephant random walks and their connection to Pólya-type urns
NASA Astrophysics Data System (ADS)
Baur, Erich; Bertoin, Jean
2016-11-01
In this paper, we explain the connection between the elephant random walk (ERW) and an urn model à la Pólya and derive functional limit theorems for the former. The ERW model was introduced in [Phys. Rev. E 70, 045101 (2004), 10.1103/PhysRevE.70.045101] to study memory effects in a highly non-Markovian setting. More specifically, the ERW is a one-dimensional discrete-time random walk with a complete memory of its past. The influence of the memory is measured in terms of a memory parameter p between zero and one. In the past years, a considerable effort has been undertaken to understand the large-scale behavior of the ERW, depending on the choice of p. Here, we use known results on urns to explicitly solve the ERW in all memory regimes. The method works as well for ERWs in higher dimensions and is widely applicable to related models.
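The ERW dynamics are simple enough to simulate directly. A minimal sketch (parameters illustrative): the first step is ±1 at random; every later step recalls a uniformly chosen past step and repeats it with probability p or reverses it with probability 1 - p.

```python
import random

def elephant_walk(n_steps, p, seed=0):
    """Elephant random walk: complete memory of the past, with memory
    parameter p controlling how often a recalled step is repeated."""
    rng = random.Random(seed)
    steps = [rng.choice([-1, 1])]              # first step is unbiased
    for _ in range(n_steps - 1):
        recalled = rng.choice(steps)           # uniform draw from entire past
        steps.append(recalled if rng.random() < p else -recalled)
    return sum(steps)                          # final position

# p > 3/4 is the superdiffusive regime discussed in the ERW literature.
finals = [elephant_walk(500, p=0.8, seed=s) for s in range(100)]
```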
Yap, Melvin J; Balota, David A; Cortese, Michael J; Watson, Jason M
2006-12-01
This article evaluates 2 competing models that address the decision-making processes mediating word recognition and lexical decision performance: a hybrid 2-stage model of lexical decision performance and a random-walk model. In 2 experiments, nonword type and word frequency were manipulated across 2 contrasts (pseudohomophone-legal nonword and legal-illegal nonword). When nonwords became more wordlike (i.e., BRNTA vs. BRANT vs. BRANE), response latencies to nonwords were slowed and the word frequency effect increased. More important, distributional analyses revealed that the Nonword Type × Word Frequency interaction was modulated by different components of the response time distribution, depending on the specific nonword contrast. A single-process random-walk model was able to account for this particular set of findings more successfully than the hybrid 2-stage model. (c) 2006 APA, all rights reserved.
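A single-process random walk of the kind favored here can be sketched as a biased walk between two response boundaries. In this hypothetical illustration, the drift parameter (mapped to a step-up probability, an assumed simplification) would be large for easy stimuli and closer to zero for wordlike nonwords, producing slower, more variable responses:

```python
import random

def walk_decision(drift, bound=20, seed=0):
    """Evidence accumulates in unit steps toward the 'word' boundary
    (+bound) or the 'nonword' boundary (-bound); drift in [-1, 1]
    biases the step direction. Returns (response, number of steps)."""
    rng = random.Random(seed)
    p_up = 0.5 + drift / 2.0
    x = steps = 0
    while abs(x) < bound:
        x += 1 if rng.random() < p_up else -1
        steps += 1
    return ("word" if x >= bound else "nonword"), steps

response, rt = walk_decision(drift=0.3, seed=1)
```

Varying only drift (and boundary separation) lets a single process produce the kinds of distributional shifts the article analyzes.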
Shi, Meng; An, Qian; Ainslie, Kylie E C; Haber, Michael; Orenstein, Walter A
2017-12-08
As annual influenza vaccination is recommended for all U.S. persons aged 6 months or older, it is unethical to conduct randomized clinical trials to estimate influenza vaccine effectiveness (VE). Observational studies are increasingly used to estimate VE. We developed a probability model for comparing the bias and the precision of VE estimates from two case-control designs: the traditional case-control (TCC) design and the test-negative (TN) design. In both study designs, acute respiratory illness (ARI) patients seeking medical care who test positive for influenza infection are considered cases. In the TN design, ARI patients seeking medical care who test negative serve as controls, while in the TCC design, controls are randomly selected individuals from the community who did not contract an ARI. Our model assigns each study participant a covariate corresponding to the person's health status. The probabilities of vaccination and of contracting influenza and non-influenza ARI depend on health status. Hence, our model allows non-random vaccination and confounding. In addition, the probability of seeking care for ARI may depend on vaccination and health status. We consider two outcomes of interest: symptomatic influenza (SI) and medically-attended influenza (MAI). If vaccination does not affect the probability of non-influenza ARI, then VE estimates from TN studies usually have smaller bias than estimates from TCC studies. We also found that if vaccinated influenza ARI patients are less likely to seek medical care than unvaccinated patients because the vaccine reduces symptom severity, then estimates of VE from both types of studies may be severely biased when the outcome of interest is SI. The bias is not present when the outcome of interest is MAI. 
The TN design produces valid estimates of VE if (a) vaccination does not affect the probabilities of non-influenza ARI and of seeking care for influenza ARI, and (b) the confounding effects resulting from non-random vaccination are similar for influenza and non-influenza ARI. Since the bias of VE estimates depends on the outcome against which the vaccine is supposed to protect, it is important to specify the outcome of interest when evaluating the bias.
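The core of the TN estimator can be illustrated with expected cell counts. In this deterministic sketch (all rates are invented for illustration), vaccination leaves non-influenza ARI risk untouched, which corresponds to condition (a) above; the odds ratio comparing test-positives and test-negatives then recovers the true VE:

```python
def ve_test_negative(p_vacc, risk_flu_unvacc, ve_true, risk_nonflu):
    """Expected counts for a test-negative study under the assumption
    that vaccination does not affect non-influenza ARI risk."""
    flu_vacc = p_vacc * risk_flu_unvacc * (1 - ve_true)
    flu_unvacc = (1 - p_vacc) * risk_flu_unvacc
    nonflu_vacc = p_vacc * risk_nonflu          # unchanged by vaccination
    nonflu_unvacc = (1 - p_vacc) * risk_nonflu
    odds_ratio = (flu_vacc / nonflu_vacc) / (flu_unvacc / nonflu_unvacc)
    return 1.0 - odds_ratio

estimate = ve_test_negative(p_vacc=0.5, risk_flu_unvacc=0.1,
                            ve_true=0.6, risk_nonflu=0.2)
```

If the non-influenza risk instead depended on vaccination status, the same odds ratio would be biased, which is the scenario the paper's probability model quantifies.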
NASA Astrophysics Data System (ADS)
Koran, John J., Jr.; Koran, Mary Lou
In a study designed to explore the effects of teacher anxiety and modeling on acquisition of a science teaching skill and concomitant student performance, 69 preservice secondary teachers and 295 eighth grade students were randomly assigned to microteaching sessions. Prior to microteaching, teachers were given an anxiety test, then randomly assigned to one of three treatments; a transcript model, a protocol model, or a control condition. Subsequently both teacher and student performance was assessed using written and behavioral measures. Analysis of variance indicated that subjects in the two modeling treatments significantly exceeded performance of control group subjects on all measures of the dependent variable, with the protocol model being generally superior to the transcript model. The differential effects of the modeling treatments were further reflected in student performance. Regression analysis of aptitude-treatment interactions indicated that teacher anxiety scores interacted significantly with instructional treatments, with high anxiety teachers performing best in the protocol modeling treatment. Again, this interaction was reflected in student performance, where students taught by highly anxious teachers performed significantly better when their teachers had received the protocol model. These results were discussed in terms of teacher concerns and a memory model of the effects of anxiety on performance.
NASA Astrophysics Data System (ADS)
Most, S.; Jia, N.; Bijeljic, B.; Nowak, W.
2016-12-01
Pre-asymptotic characteristics are almost ubiquitous when analyzing solute transport processes in porous media. These pre-asymptotic aspects are caused by spatial coherence in the velocity field and by its heterogeneity. For the Lagrangian perspective of particle displacements, the causes of pre-asymptotic, non-Fickian transport are skewed velocity distribution, statistical dependencies between subsequent increments of particle positions (memory) and dependence between the x, y and z-components of particle increments. Valid simulation frameworks should account for these factors. We propose a particle tracking random walk (PTRW) simulation technique that can use empirical pore-space velocity distributions as input, enforces memory between subsequent random walk steps, and considers cross dependence. Thus, it is able to simulate pre-asymptotic non-Fickian transport phenomena. Our PTRW framework contains an advection/dispersion term plus a diffusion term. The advection/dispersion term produces time-series of particle increments from the velocity CDFs. These time series are equipped with memory by enforcing that the CDF values of subsequent velocities change only slightly. The latter is achieved through a random walk on the axis of CDF values between 0 and 1. The virtual diffusion coefficient for that random walk is our only fitting parameter. Cross-dependence can be enforced by constraining the random walk to certain combinations of CDF values between the three velocity components in x, y and z. We will show that this modelling framework is capable of simulating non-Fickian transport by comparison with a pore-scale transport simulation and we analyze the approach to asymptotic behavior.
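The memory mechanism described here (a random walk on the CDF axis) can be sketched compactly. Everything below is an assumed toy version: a skewed velocity sample stands in for the empirical pore-scale distribution, and d_u plays the role of the single fitting parameter.

```python
import random

def ptrw(velocity_sample, n_steps, d_u=0.02, dt=1.0, seed=0):
    """Particle-tracking random walk with velocity memory: the particle's
    CDF rank u performs a small random walk on [0, 1], so subsequent
    velocities stay close in rank; d_u is the 'virtual diffusion
    coefficient' of that rank walk."""
    rng = random.Random(seed)
    v_sorted = sorted(velocity_sample)        # empirical velocity CDF
    u, x = rng.random(), 0.0
    for _ in range(n_steps):
        u = min(max(u + rng.gauss(0.0, d_u), 0.0), 1.0)  # rank changes slightly
        v = v_sorted[min(int(u * len(v_sorted)), len(v_sorted) - 1)]
        x += v * dt                            # advective displacement
    return x

# Skewed sample: many slow pore velocities, few fast channels (illustrative).
sample = [0.01] * 80 + [0.5] * 15 + [2.0] * 5
x_final = ptrw(sample, n_steps=500)
```

Setting d_u = 0 freezes the rank, i.e., perfect velocity memory; large d_u decorrelates successive velocities and recovers Fickian-like behavior sooner.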
Keshavarz, Yousef; Ghaedi, Sina; Rahimi-Kashani, Mansure
2012-01-01
Background: The twelve-step program is one of the programs administered to help people overcome drug abuse. In this study, the effectiveness of a chemical dependency counseling course was investigated using a hybrid model. Methods: In a survey with a sample size of 243, participants were selected using a stratified random sampling method. A questionnaire was used for collecting data, and a one-sample t-test was employed for data analysis. Findings: The chemical dependency counseling courses were effective from the point of view of graduates, chiefs of rehabilitation centers, rescuers and their families, and ultimately the managers of the rebirth society, but not from the point of view of professors and lecturers. The last group rated the courses as effective only at the performance level. Conclusion: It seems that the chemical dependency counseling courses were appropriately effective and led to changed attitudes, increased awareness, a combination of knowledge and experience, and ultimately increased counseling efficiency. PMID:24494132
ERIC Educational Resources Information Center
Stamovlasis, Dimitrios; Tsitsipis, Georgios; Papageorgiou, George
2010-01-01
This work uses the concepts and tools of complexity theory to examine the effect of logical thinking and two cognitive styles, such as, the degree of field dependence/independence and the convergent/divergent thinking on students' understanding of the structure of matter. Students were categorized according to the model they adopted for the…
Stochastic climate dynamics: Stochastic parametrizations and their global effects
NASA Astrophysics Data System (ADS)
Ghil, Michael
2010-05-01
A well-known difficulty in modeling the atmosphere and oceans' general circulation is the limited, albeit increasing, resolution possible in the numerical solution of the governing partial differential equations. While the mass, energy and momentum of an individual cloud, in the atmosphere, or convection chimney, in the oceans, are negligible, their combined effects over long times are not. Until recently, small, subgrid-scale processes were represented in general circulation models (GCMs) by deterministic "parametrizations." While A. Arakawa and associates had realized over three decades ago the conceptual need for ensembles of clouds in such parametrizations, it is only very recently that truly stochastic parametrizations have been introduced into GCMs and weather prediction models. These parametrizations essentially transform a deterministic autonomous system into a non-autonomous one, subject to random forcing. A systematic study of the long-term effects of such forcing has to rely on the theory of random dynamical systems (RDS). This theory allows one to consider the detailed geometric structure of the random attractors associated with nonlinear, stochastically perturbed systems. These attractors extend the concept of strange attractors from autonomous dynamical systems to non-autonomous systems with random forcing. To illustrate the essence of the theory, its concepts and methods, we carry out a high-resolution numerical study of two "toy" models in their respective phase spaces. This study allows one to obtain a good approximation of their global random attractors, as well as of the time-dependent invariant measures supported by these attractors. The first of the two models studied herein is the Arnol'd family of circle maps in the presence of noise. 
The maps' fine-grained, resonant landscape --- associated with Arnol'd tongues --- is smoothed by the noise, thus permitting a comparison with the observable aspects of the "Devil's staircase" that arises in modeling the El Nino-Southern Oscillation (ENSO). These results are confirmed by studying a "French garden" that is obtained by smoothing a "Devil's quarry." Such a quarry results from coupling two circle maps, and random forcing leads to a smoothed version thereof. We thus suspect that stochastic parametrizations will stabilize the sensitive dependence on parameters that has been noticed in the development of GCMs. This talk represents joint work with Mickael D. Chekroun, D. Kondrashov, Eric Simonnet and I. Zaliapin. Several other talks and posters complement the results presented here and provide further insights into RDS theory and its application to the geosciences.
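The first toy model is easy to reproduce. This sketch (the standard Arnol'd circle map with additive Gaussian noise; all parameter values are illustrative) estimates the rotation number from the lift, the quantity whose parameter dependence traces out the Devil's staircase:

```python
import math
import random

def rotation_number(omega, k, sigma=0.0, n=20000, seed=0):
    """Estimate the rotation number of the noisy Arnol'd circle map
    theta -> theta + omega - (k / (2*pi)) * sin(2*pi*theta) + sigma*xi
    by averaging the unwrapped (lifted) increments."""
    rng = random.Random(seed)
    theta, lift = 0.0, 0.0
    for _ in range(n):
        step = omega - k / (2 * math.pi) * math.sin(2 * math.pi * theta)
        step += sigma * rng.gauss(0.0, 1.0)
        lift += step
        theta = (theta + step) % 1.0
    return lift / n

rho_free = rotation_number(omega=0.5, k=0.0)     # no coupling: rho = omega
rho_locked = rotation_number(omega=0.1, k=1.0)   # locked inside a tongue
```

With sigma > 0 the locked plateaus smooth out, which is the "smoothed Devil's staircase" effect described in the talk.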
Massah, Omid; Sohrabi, Faramarz; A'azami, Yousef; Doostian, Younes; Farhoudian, Ali; Daneshmand, Reza
2016-03-01
Emotion plays an important role in adapting to life changes and stressful events. Difficulty regulating emotions is one of the problems drug abusers often face, and teaching these individuals to express and manage their emotions can be effective in improving their difficult circumstances. The present study aimed to determine the effectiveness of Gross model-based emotion regulation strategies training on anger reduction in drug-dependent individuals. The study had a quasi-experimental design wherein pretest-posttest evaluations were applied using a control group. The population under study included addicts attending Marivan's methadone maintenance therapy centers in 2012 - 2013. Convenience sampling was used to select 30 substance-dependent individuals undergoing maintenance treatment, who were then randomly assigned to the experimental and control groups. The experimental group received its training in eight two-hour sessions. Data were analyzed using analysis of covariance and paired t-tests. There was a significant reduction in anger symptoms of drug-dependent individuals after Gross model-based emotion regulation training (ERT) (P < 0.001). Moreover, the effectiveness of the training on anger persisted through the follow-up period. Symptoms of anger in the drug-dependent individuals of this study were reduced by Gross model-based emotion regulation strategies training. Based on the results of this study, we may conclude that Gross model-based emotion regulation strategies training can be applied alongside other therapies to treat drug abusers undergoing rehabilitation.
Qi, Helena W; Nakka, Priyanka; Chen, Connie; Radhakrishnan, Mala L
2014-01-01
Macromolecular crowding within the cell can impact both protein folding and binding. Earlier models of cellular crowding focused on the excluded volume, entropic effect of crowding agents, which generally favors compact protein states. Recently, other effects of crowding have been explored, including enthalpically-related crowder-protein interactions and changes in solvation properties. In this work, we explore the effects of macromolecular crowding on the electrostatic desolvation and solvent-screened interaction components of protein-protein binding. Our simple model enables us to focus exclusively on the electrostatic effects of water depletion on protein binding due to crowding, providing us with the ability to systematically analyze and quantify these potentially intuitive effects. We use the barnase-barstar complex as a model system and randomly placed, uncharged spheres within implicit solvent to model crowding in an aqueous environment. On average, we find that the desolvation free energy penalties incurred by partners upon binding are lowered in a crowded environment and solvent-screened interactions are amplified. At a constant crowder density (fraction of total available volume occupied by crowders), this effect generally increases as the radius of model crowders decreases, but the strength and nature of this trend can depend on the water probe radius used to generate the molecular surface in the continuum model. In general, there is huge variation in desolvation penalties as a function of the random crowder positions. Results with explicit model crowders can be qualitatively similar to those using a lowered "effective" solvent dielectric to account for crowding, although the "best" effective dielectric constant will likely depend on multiple system properties. 
Taken together, this work systematically demonstrates, quantifies, and analyzes qualitative intuition-based insights into the effects of water depletion due to crowding on the electrostatic component of protein binding, and it provides an initial framework for future analyses.
Effective size of density-dependent two-sex populations: the effect of mating systems.
Myhre, A M; Engen, S; SAEther, B-E
2017-08-01
Density dependence in vital rates is a key feature affecting temporal fluctuations of natural populations. This has important implications for the rate of random genetic drift. Mating systems also greatly affect effective population sizes, but knowledge of how mating system and density regulation interact to affect random genetic drift is poor. Using theoretical models and simulations, we compare Ne in short-lived, density-dependent animal populations with different mating systems. We study the impact of a fluctuating, density-dependent sex ratio and consider both a stable and a fluctuating environment. We find a negative relationship between annual Ne/N and adult population size N due to density dependence, suggesting that loss of genetic variation is reduced at small densities. The magnitude of this decrease was affected by mating system and life history. A male-biased, density-dependent sex ratio reduces the rate of genetic drift compared to an equal, density-independent sex ratio, but a stochastic change towards male bias reduces the Ne/N ratio. Environmental stochasticity amplifies temporal fluctuations in population size and is thus vital to consider in estimation of effective population sizes over longer time periods. Our results on the reduced loss of genetic variation at small densities, particularly in polygamous populations, indicate that density regulation may facilitate adaptive evolution at small population sizes. © 2017 European Society For Evolutionary Biology.
Chu, Haitao; Nie, Lei; Cole, Stephen R; Poole, Charles
2009-08-15
In a meta-analysis of diagnostic accuracy studies, the sensitivities and specificities of a diagnostic test may depend on the disease prevalence since the severity and definition of disease may differ from study to study due to the design and the population considered. In this paper, we extend the bivariate nonlinear random effects model on sensitivities and specificities to jointly model the disease prevalence, sensitivities and specificities using trivariate nonlinear random-effects models. Furthermore, as an alternative parameterization, we also propose jointly modeling the test prevalence and the predictive values, which reflect the clinical utility of a diagnostic test. These models allow investigators to study the complex relationship among the disease prevalence, sensitivities and specificities; or among test prevalence and the predictive values, which can reveal hidden information about test performance. We illustrate the proposed two approaches by reanalyzing the data from a meta-analysis of radiological evaluation of lymph node metastases in patients with cervical cancer and a simulation study. The latter illustrates the importance of carefully choosing an appropriate normality assumption for the disease prevalence, sensitivities and specificities, or the test prevalence and the predictive values. In practice, it is recommended to use model selection techniques to identify a best-fitting model for making statistical inference. In summary, the proposed trivariate random effects models are novel and can be very useful in practice for meta-analysis of diagnostic accuracy studies. Copyright 2009 John Wiley & Sons, Ltd.
Xu, Min; Wu, Tao T; Qu, Jianan Y
2008-01-01
A unified Mie and fractal model for light scattering by biological cells is presented. This model is shown to provide an excellent global agreement with the angular dependent elastic light scattering spectroscopy of cells over the whole visible range (400 to 700 nm) and at all scattering angles (1.1 to 165 deg) investigated. Mie scattering from the bare cell and the nucleus is found to dominate light scattering in the forward directions, whereas the random fluctuation of the background refractive index within the cell, behaving as a fractal random continuous medium, is found to dominate light scattering at other angles. Angularly dependent elastic light scattering spectroscopy aided by the unified Mie and fractal model is demonstrated to be an effective noninvasive approach to characterize biological cells and their internal structures. The acetowhitening effect induced by applying acetic acid on epithelial cells is investigated as an example. The changes in morphology and refractive index of epithelial cells, nuclei, and subcellular structures after the application of acetic acid are successfully probed and quantified using the proposed approach. The unified Mie and fractal model may serve as the foundation for optical detection of precancerous and cancerous changes in biological cells and tissues based on light scattering techniques.
Statistical mapping of count survey data
Royle, J. Andrew; Link, W.A.; Sauer, J.R.; Scott, J. Michael; Heglund, Patricia J.; Morrison, Michael L.; Haufler, Jonathan B.; Wall, William A.
2002-01-01
We apply a Poisson mixed model to the problem of mapping (or predicting) bird relative abundance from counts collected from the North American Breeding Bird Survey (BBS). The model expresses the logarithm of the Poisson mean as a sum of a fixed term (which may depend on habitat variables) and a random effect which accounts for remaining unexplained variation. The random effect is assumed to be spatially correlated, thus providing a more general model than the traditional Poisson regression approach. Consequently, the model is capable of improved prediction when data are autocorrelated. Moreover, formulation of the mapping problem in terms of a statistical model facilitates a wide variety of inference problems which are cumbersome or even impossible using standard methods of mapping. For example, assessment of prediction uncertainty, including the formal comparison of predictions at different locations, or through time, using the model-based prediction variance is straightforward under the Poisson model (not so with many nominally model-free methods). Also, ecologists may generally be interested in quantifying the response of a species to particular habitat covariates or other landscape attributes. Proper accounting for the uncertainty in these estimated effects is crucially dependent on specification of a meaningful statistical model. Finally, the model may be used to aid in sampling design, by modifying the existing sampling plan in a manner which minimizes some variance-based criterion. Model fitting under this model is carried out using a simulation technique known as Markov Chain Monte Carlo. Application of the model is illustrated using Mourning Dove (Zenaida macroura) counts from Pennsylvania BBS routes. We produce both a model-based map depicting relative abundance, and the corresponding map of prediction uncertainty. We briefly address the issue of spatial sampling design under this model. 
Finally, we close with some discussion of mapping in relation to habitat structure. Although our models were fit in the absence of habitat information, the resulting predictions show a strong inverse relation with a map of forest cover in the state, as expected. Consequently, the results suggest that the correlated random effect in the model is broadly representing ecological variation, and that BBS data may be generally useful for studying bird-habitat relationships, even in the presence of observer errors and other widely recognized deficiencies of the BBS.
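The generative form of the model above is straightforward to sketch. In this hypothetical simulation (an AR(1) correlation along a transect stands in for the paper's general spatial correlation, and all coefficients are invented), counts have log-mean equal to a fixed habitat effect plus a spatially correlated random effect:

```python
import math
import random

def simulate_route_counts(habitat, beta0, beta1, rho, sigma, seed=0):
    """Generate counts from a Poisson mixed model: log(mean) is a fixed
    habitat effect plus a spatially correlated random effect, sketched
    here as an AR(1) process along a transect of survey routes."""
    rng = random.Random(seed)
    b, counts = 0.0, []
    for h in habitat:
        # AR(1) random effect: correlation rho with the previous route,
        # stationary standard deviation sigma.
        b = rho * b + math.sqrt(1 - rho ** 2) * rng.gauss(0.0, sigma)
        lam = math.exp(beta0 + beta1 * h + b)
        # Poisson draw via Knuth's multiplication method (dependency-free).
        limit, k, prod = math.exp(-lam), 0, 1.0
        while True:
            prod *= rng.random()
            if prod <= limit:
                break
            k += 1
        counts.append(k)
    return counts

counts = simulate_route_counts([0, 1, 1, 0, 1] * 20, beta0=1.0, beta1=0.5,
                               rho=0.7, sigma=0.3)
```

Fitting such a model (the paper uses Markov Chain Monte Carlo) then recovers the fixed effects and the correlated surface used for mapping.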
Robustness of networks with assortative dependence groups
NASA Astrophysics Data System (ADS)
Wang, Hui; Li, Ming; Deng, Lin; Wang, Bing-Hong
2018-07-01
Assortativity is one of the important characteristics of real networks. To study the effects of this characteristic on the robustness of networks, we propose a percolation model on networks with assortative dependence groups. Assortativity in this model means that nodes with the same or similar degrees form dependence groups, such that if one node fails, the other nodes in its group are very likely to fail. We find that assortativity makes nodes with large degrees more likely to survive the cascading failure. In this way, such networks are more robust than those with random dependence groups, which also demonstrates, from another perspective, that assortative networks are robust. Furthermore, we present exact solutions for the size of the giant component and the critical point, which agree well with the simulation results.
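The dependence-group failure rule itself is easy to state in code. This sketch isolates only that rule (it omits the connectivity cascade and the giant-component calculation, so it illustrates the grouping, not the full percolation model); the degree sequence and the attack are randomly generated and purely illustrative:

```python
import random

def attack_with_groups(n, group_size, p_survive, assortative, seed=0):
    """Partition nodes into dependence groups and apply a random attack;
    a whole group fails if any one of its members is attacked. With
    assortative=True, nodes of similar degree share a group."""
    rng = random.Random(seed)
    degrees = [rng.randint(1, 10) for _ in range(n)]
    order = sorted(range(n), key=degrees.__getitem__) if assortative \
        else rng.sample(range(n), n)
    groups = [order[i:i + group_size] for i in range(0, n, group_size)]
    attacked = {i for i in range(n) if rng.random() > p_survive}
    survivors = {i for g in groups if not attacked.intersection(g) for i in g}
    return survivors, groups, attacked

survivors, groups, attacked = attack_with_groups(
    2000, group_size=4, p_survive=0.9, assortative=True)
```

In the full model, surviving nodes must additionally remain in the giant connected component, and failures there feed back into further group failures.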
Véronneau, Marie-Hélène; Dishion, Thomas J; Connell, Arin M; Kavanagh, Kathryn
2016-06-01
Substance use in adulthood compromises work, relationships, and health. Prevention strategies in early adolescence are designed to reduce substance use and progressions to problematic use by adulthood. This report examines the long-term effects of offering the Family Check-Up (FCU) at multiple time points in secondary education on the progression of substance use from age 11 to 23 years. Participants (N = 998; 472 females) were randomly assigned to intervention or control conditions in Grade 6, with intervention families offered a multilevel intervention that included a classroom-based intervention (universal), the FCU (selected), and tailored family management treatment (indicated). Among intervention families, 23% engaged in the selected and indicated levels during middle school. Intention-to-treat analyses revealed that randomization to the FCU was associated with reduced growth in marijuana use (p < .05), but not alcohol and tobacco use. We also examined whether engagement in the voluntary FCU services moderated the effect of the intervention on substance use progressions using complier average causal effect (CACE) modeling, and found that engagement in the FCU services predicted reductions in alcohol, tobacco, and marijuana use by age 23. Comparing engagers with nonengagers: 70% versus 95% showed signs of alcohol abuse or dependence, 28% versus 61% showed signs of tobacco dependence, and 59% versus 84% showed signs of marijuana abuse or dependence. Family interventions that are embedded within public school systems can reach high-risk students and families and prevent progressions from exploration to problematic substance use through early adulthood. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Bakbergenuly, Ilyas; Kulinskaya, Elena; Morgenthaler, Stephan
2016-07-01
We study the bias arising from nonlinear transformations of random variables in random- or mixed-effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and when combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in the intracluster correlation coefficient ρ for small values of ρ. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis, and they result in abysmal coverage of the combined effect for large K. We also propose a bias correction for the arcsine transformation. Our simulations demonstrate that this bias correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. © 2016 The Authors. Biometrical Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.
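The transformation bias under overdispersion is easy to reproduce in a small Monte Carlo sketch (an illustration under assumed parameter values, not the authors' simulation design; the 0.375/0.75 shrinkage constants are a common Anscombe-style convention used here only to keep the arcsine argument away from 0 and 1):

```python
import math
import random

def arcsine_bias(p, rho, n, reps=4000, seed=1):
    """Monte Carlo bias of the arcsine transform of p-hat under a
    beta-binomial model with intracluster correlation rho."""
    rng = random.Random(seed)
    a = p * (1 - rho) / rho          # beta parameters giving ICC = rho
    b = (1 - p) * (1 - rho) / rho
    target = math.asin(math.sqrt(p))
    total = 0.0
    for _ in range(reps):
        pi = rng.betavariate(a, b)   # cluster-level success probability
        x = sum(rng.random() < pi for _ in range(n))
        total += math.asin(math.sqrt((x + 0.375) / (n + 0.75)))
    return total / reps - target

bias_small_rho = arcsine_bias(0.1, 0.01, 20)
bias_large_rho = arcsine_bias(0.1, 0.20, 20)
```

As in the paper, the bias grows with the intracluster correlation while the target value stays fixed.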
Krieger, Janice L; Neil, Jordan M; Strekalova, Yulia A; Sarge, Melanie A
2017-03-01
Improving informed consent to participate in randomized clinical trials (RCTs) is a key challenge in cancer communication. The current study examines strategies for enhancing randomization comprehension among patients with diverse levels of health literacy and identifies cognitive and affective predictors of intentions to participate in cancer RCTs. Using a post-test-only experimental design, cancer patients (n = 500) were randomly assigned to receive one of three message conditions for explaining randomization (ie, plain language condition, gambling metaphor, benign metaphor) or a control message. All statistical tests were two-sided. Health literacy was a statistically significant moderator of randomization comprehension (P = .03). Among participants with the lowest levels of health literacy, the benign metaphor resulted in greater comprehension of randomization as compared with plain language (P = .04) and control (P = .004) messages. Among participants with the highest levels of health literacy, the gambling metaphor resulted in greater randomization comprehension as compared with the benign metaphor (P = .04). A serial mediation model showed a statistically significant negative indirect effect of comprehension on behavioral intention through personal relevance of RCTs and anxiety associated with participation in RCTs (P < .001). The effectiveness of metaphors for explaining randomization depends on health literacy, with a benign metaphor being particularly effective for patients at the lower end of the health literacy spectrum. The theoretical model demonstrates the cognitive and affective predictors of behavioral intention to participate in cancer RCTs and offers guidance on how future research should employ communication strategies to improve the informed consent processes. © The Author 2016. Published by Oxford University Press.
Qi, Helena W.; Nakka, Priyanka; Chen, Connie; Radhakrishnan, Mala L.
2014-01-01
Macromolecular crowding within the cell can impact both protein folding and binding. Earlier models of cellular crowding focused on the excluded volume, entropic effect of crowding agents, which generally favors compact protein states. Recently, other effects of crowding have been explored, including enthalpically-related crowder–protein interactions and changes in solvation properties. In this work, we explore the effects of macromolecular crowding on the electrostatic desolvation and solvent-screened interaction components of protein–protein binding. Our simple model enables us to focus exclusively on the electrostatic effects of water depletion on protein binding due to crowding, providing us with the ability to systematically analyze and quantify these potentially intuitive effects. We use the barnase–barstar complex as a model system and randomly placed, uncharged spheres within implicit solvent to model crowding in an aqueous environment. On average, we find that the desolvation free energy penalties incurred by partners upon binding are lowered in a crowded environment and solvent-screened interactions are amplified. At a constant crowder density (fraction of total available volume occupied by crowders), this effect generally increases as the radius of model crowders decreases, but the strength and nature of this trend can depend on the water probe radius used to generate the molecular surface in the continuum model. In general, there is huge variation in desolvation penalties as a function of the random crowder positions. Results with explicit model crowders can be qualitatively similar to those using a lowered “effective” solvent dielectric to account for crowding, although the “best” effective dielectric constant will likely depend on multiple system properties. 
Taken together, this work systematically demonstrates, quantifies, and analyzes qualitative intuition-based insights into the effects of water depletion due to crowding on the electrostatic component of protein binding, and it provides an initial framework for future analyses. PMID:24915485
Li, Baoyue; Bruyneel, Luk; Lesaffre, Emmanuel
2014-05-20
A traditional Gaussian hierarchical model assumes a nested multilevel structure for the mean and a constant variance at each level. We propose a Bayesian multivariate multilevel factor model that assumes a multilevel structure for both the mean and the covariance matrix. That is, in addition to a multilevel structure for the mean, we also assume that the covariance matrix depends on covariates and random effects. This allows us to explore whether the covariance structure depends on the values of the higher levels, and as such models heterogeneity in the variances and correlation structure of the multivariate outcome across the higher-level values. The approach is applied to the three-dimensional vector of burnout measurements collected on nurses in a large European study to answer the research question of whether the covariance matrix of the outcomes depends on recorded system-level features in the organization of nursing care, but also on unrecorded factors that vary with countries, hospitals, and nursing units. Simulations illustrate the performance of our modeling approach. Copyright © 2013 John Wiley & Sons, Ltd.
Kablinger, Anita S; Lindner, Marie A; Casso, Stephanie; Hefti, Franz; DeMuth, George; Fox, Barbara S; McNair, Lindsay A; McCarthy, Bruce G; Goeders, Nicholas E
2012-07-01
Although cocaine dependence affects an estimated 1.6 million people in the USA, there are currently no medications approved for the treatment of this disorder. Experiments performed in animal models have demonstrated that inhibitors of the stress response effectively reduce intravenous cocaine self-administration. This exploratory, double-blind, placebo-controlled study was designed to assess the safety and efficacy of combinations of the cortisol synthesis inhibitor metyrapone, and the benzodiazepine oxazepam, in 45 cocaine-dependent individuals. The subjects were randomized to a total daily dose of 500 mg metyrapone/20 mg oxazepam (low dose), a total daily dose of 1500 mg metyrapone/20 mg oxazepam (high dose), or placebo for 6 weeks of treatment. The outcome measures were a reduction in cocaine craving and associated cocaine use as determined by quantitative measurements of the cocaine metabolite benzoylecgonine (BE) in urine at all visits. Of the randomized subjects, 49% completed the study. The combination of metyrapone and oxazepam was well tolerated and tended to reduce cocaine craving and cocaine use, with significant reductions at several time points when controlling for baseline scores. These data suggest that further assessments of the ability of the metyrapone and oxazepam combination to support cocaine abstinence in cocaine-dependent subjects are warranted.
Effects of ignoring baseline on modeling transitions from intact cognition to dementia.
Yu, Lei; Tyas, Suzanne L; Snowdon, David A; Kryscio, Richard J
2009-07-01
This paper evaluates the effect of ignoring baseline when modeling transitions from intact cognition to dementia with mild cognitive impairment (MCI) and global impairment (GI) as intervening cognitive states. Transitions among states are modeled by a discrete-time Markov chain having three transient (intact cognition, MCI, and GI) and two competing absorbing states (death and dementia). Transition probabilities depend on two covariates, age and the presence/absence of an apolipoprotein E-epsilon4 allele, through a multinomial logistic model with shared random effects. Results are illustrated with an application to the Nun Study, a cohort of 678 participants 75+ years of age at baseline and followed longitudinally with up to ten cognitive assessments per nun.
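A multistate chain of this kind can be sketched with a small discrete-time simulation (the transition matrix below is made up for illustration; in the actual model each row depends on age and APOE-ε4 status through a multinomial logistic link with shared random effects):

```python
import random

# States: 0 = intact, 1 = MCI, 2 = GI; absorbing: 3 = dementia, 4 = death.
# Illustrative one-step transition probabilities (each row sums to 1).
P = [
    [0.80, 0.12, 0.05, 0.01, 0.02],   # from intact cognition
    [0.05, 0.70, 0.15, 0.05, 0.05],   # from MCI
    [0.00, 0.05, 0.70, 0.15, 0.10],   # from GI
]

def simulate_path(rng, max_steps=10):
    """One subject's trajectory, starting intact, up to max_steps visits."""
    state, path = 0, [0]
    for _ in range(max_steps):
        if state >= 3:                 # absorbed (dementia or death)
            break
        r, cum = rng.random(), 0.0
        for s, pr in enumerate(P[state]):
            cum += pr
            if r < cum:
                state = s
                break
        path.append(state)
    return path

rng = random.Random(42)
paths = [simulate_path(rng) for _ in range(5000)]
frac_dementia = sum(path[-1] == 3 for path in paths) / len(paths)
```

The cap of ten steps mirrors the up-to-ten cognitive assessments per participant in the Nun Study design.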
Geometrical effects on the electron residence time in semiconductor nano-particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koochi, Hakimeh; Ebrahimi, Fatemeh, E-mail: f-ebrahimi@birjand.ac.ir; Solar Energy Research Group, University of Birjand, Birjand
2014-09-07
We have used random walk (RW) numerical simulations to investigate the influence of geometry on the statistics of the electron residence time τ_r in a trap-limited diffusion process through semiconductor nano-particles. This is an important parameter in coarse-grained modeling of charge carrier transport in nano-structured semiconductor films. The traps have been distributed randomly on the surface (r² model) or throughout the whole particle (r³ model) with a specified density. The trap energies have been taken from an exponential distribution, and the trap release time is assumed to be a stochastic variable. We have carried out RW simulations to study the effect of the coordination number, the spatial arrangement of the neighbors, and the size of the nano-particles on the statistics of τ_r. It has been observed that by increasing the coordination number n, the average electron residence time, τ̄_r, rapidly decreases to an asymptotic value. For a fixed coordination number n, the electron's mean residence time does not depend on the neighbors' spatial arrangement. In other words, τ̄_r is a porosity-dependent, local parameter which generally varies remarkably from site to site, unless we are dealing with highly ordered structures. We have also examined the effect of the nano-particle size d on the statistical behavior of τ̄_r. Our simulations indicate that for a volume distribution of traps, τ̄_r scales as d²; for a surface distribution of traps, τ̄_r increases almost linearly with d. This leads to the prediction of a linear dependence of the diffusion coefficient D on the particle size d in ordered structures, or in random structures above the critical concentration, which is in accordance with experimental observations.
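The geometric part of this result, the diffusive exploration time growing with particle size, can be reproduced with a bare-bones lattice walk (a sketch only: trap energies and stochastic release times are omitted, so this captures the d² scaling of exit times, not the full trap statistics):

```python
import random

def mean_exit_time(L, walkers=400, seed=3):
    """Mean number of steps for a 3D lattice walker started at the centre
    of an L x L x L particle to first reach the particle surface."""
    rng = random.Random(seed)
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    total = 0
    for _ in range(walkers):
        x = y = z = L // 2
        steps = 0
        while 0 < x < L - 1 and 0 < y < L - 1 and 0 < z < L - 1:
            dx, dy, dz = rng.choice(moves)
            x += dx; y += dy; z += dz
            steps += 1
        total += steps
    return total / walkers

# doubling the particle size should roughly quadruple the exit time
t8, t16 = mean_exit_time(8), mean_exit_time(16)
```

The ratio `t16 / t8` hovers around 4, the diffusive d² scaling the abstract reports for volume-distributed traps.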
Statistical analysis of effective singular values in matrix rank determination
NASA Technical Reports Server (NTRS)
Konstantinides, Konstantinos; Yao, Kung
1988-01-01
A major problem in using SVD (singular-value decomposition) as a tool for determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theory of perturbations of singular values and statistical significance tests. Threshold bounds for perturbations due to finite-precision and i.i.d. random models are evaluated. In random models, the threshold bounds depend on the dimension of the matrix, the noise variance, and a predefined statistical level of significance. The results are applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix. Various numerical examples illustrating the usefulness of these bounds and comparisons with other previously known approaches are given.
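The thresholding idea can be illustrated on a tiny example. For an m×2 matrix the singular values follow in closed form from the 2×2 Gram matrix; the 3σ√m cut-off below is a heuristic stand-in for the paper's derived threshold bounds, used here only for illustration:

```python
import math
import random

def singular_values_2col(A):
    """Singular values of an m x 2 matrix from eigenvalues of A^T A."""
    g11 = sum(r[0] * r[0] for r in A)
    g12 = sum(r[0] * r[1] for r in A)
    g22 = sum(r[1] * r[1] for r in A)
    tr, det = g11 + g22, g11 * g22 - g12 * g12
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    return [math.sqrt(max((tr + disc) / 2.0, 0.0)),
            math.sqrt(max((tr - disc) / 2.0, 0.0))]

rng = random.Random(7)
m, sigma = 50, 0.01
# rank-1 signal (second column = 2 x first) plus i.i.d. Gaussian noise
A = []
for _ in range(m):
    u = rng.gauss(0.0, 1.0)
    A.append([u + rng.gauss(0.0, sigma), 2.0 * u + rng.gauss(0.0, sigma)])
sv = singular_values_2col(A)
threshold = 3.0 * sigma * math.sqrt(m)   # heuristic noise-level cut-off
effective_rank = sum(s > threshold for s in sv)
```

Only the signal singular value clears the noise threshold, so the effective rank of the perturbed matrix is recovered as 1.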
Space charge effects on the dielectric response of polymer nanocomposites
NASA Astrophysics Data System (ADS)
Shen, Zhong-Hui; Wang, Jian-Jun; Zhang, Xin; Lin, Yuanhua; Nan, Ce-Wen; Chen, Long-Qing; Shen, Yang
2017-08-01
Adding high-κ ceramic nanoparticles into polymers is a general strategy to improve the performances in energy storage. Classic effective medium theories may fail to predict the effective permittivity in polymer nanocomposites wherein the space charge effects are important. In this work, a computational model is developed to understand the space charge effects on the frequency-dependent dielectric properties including the real permittivity and the loss for polymer nanocomposites with both randomly distributed and aggregated nanoparticle fillers. It is found that the real permittivity of the SrTiO3/polyethylene (12% SrTiO3 in volume fraction) nanocomposite can be increased to as high as 60 when there is nanoparticle aggregation and the ion concentration in the bulk polymer is around 1016 cm-3. This model can be employed to quantitatively predict the frequency-dependent dielectric properties for polymer nanocomposites with arbitrary microstructures.
Relevance of anisotropy and spatial variability of gas diffusivity for soil-gas transport
NASA Astrophysics Data System (ADS)
Schack-Kirchner, Helmer; Kühne, Anke; Lang, Friederike
2017-04-01
Models of soil gas transport generally consider neither the direction dependence of gas diffusivity nor its small-scale variability. However, in a recent study we could provide evidence for anisotropy favouring vertical gas diffusion in natural soils. We hypothesize that gas transport models based on gas diffusion data measured with soil rings are strongly influenced by both anisotropy and spatial variability, and that the use of averaged diffusivities could be misleading. To test this, we used a two-dimensional model of soil gas transport under compacted wheel tracks to model the soil-air oxygen distribution in the soil. The model was parametrized with data obtained from soil-ring measurements, using both their central tendency and their variability. The model includes vertical parameter variability as well as variation perpendicular to the elongated wheel track. Three parametrization types have been tested: (i) averaged values for wheel track and undisturbed soil; (ii) random distribution of soil cells with normally distributed variability within the strata; (iii) random distribution of soil cells with uniformly distributed variability within the strata. All three types of small-scale variability have been tested both for (j) isotropic gas diffusivity and (jj) horizontal gas diffusivity reduced by a constant factor, yielding six models in total. As expected, the different parametrizations had an important influence on the aeration state under wheel tracks, with the strongest oxygen depletion in the case of uniformly distributed variability and anisotropy towards higher vertical diffusivity. This simple simulation approach clearly showed the relevance of anisotropy and spatial variability given identical central-tendency measures of gas diffusivity. However, it did not yet consider spatial dependency of the variability, which could aggravate these effects even further.
To account for anisotropy and spatial variability in gas transport models, we recommend (a) measuring soil-gas transport parameters spatially explicitly, including different directions, and (b) using random-field stochastic models to assess the possible effects on gas-exchange models.
Resonance, criticality, and emergence in city traffic investigated in cellular automaton models.
Varas, A; Cornejo, M D; Toledo, B A; Muñoz, V; Rogan, J; Zarama, R; Valdivia, J A
2009-11-01
The complex behavior that occurs when traffic lights are synchronized is studied for a row of interacting cars. The system is modeled through a cellular automaton. Two strategies are considered: all lights in phase, and a "green wave" with a propagating green signal. It is found that the mean velocity near the resonant condition follows a critical scaling law. For the green wave, it is shown that the mean velocity scaling law holds even for random separation between traffic lights and does not depend on the car density. This independence of car density is broken when random perturbations are considered in the car velocity. Random velocity perturbations also have the effect of leading the system to an emergent state, where cars move in clusters, but with an average velocity which is independent of traffic light switching for large injection rates.
Plasmonic modes and extinction properties of a random nanocomposite cylinder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moradi, Afshin, E-mail: a.moradi@kut.ac.ir
We study the properties of surface plasmon-polariton waves of a random metal-dielectric nanocomposite cylinder, consisting of bulk metal embedded with dielectric nanoparticles. We use the Maxwell-Garnett formulation to model the effective dielectric function of the composite medium and show that there exist two surface mode bands. We investigate the extinction properties of the system, and obtain the dependence of the extinction spectrum on the nanoparticles’ shape and concentration as well as the cylinder radius and the incidence angle for both TE and TM polarization.
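The Maxwell-Garnett mixing rule used here has a compact closed form for spherical inclusions; a sketch (the permittivity values are illustrative, and in the paper's geometry the host is the bulk metal and the inclusions are the dielectric nanoparticles):

```python
def maxwell_garnett(eps_host, eps_incl, f):
    """Maxwell-Garnett effective permittivity for a volume fraction f of
    spherical inclusions (eps_incl) embedded in a host medium (eps_host).
    Works for complex permittivities as well."""
    num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
    den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
    return eps_host * num / den

# illustrative values: dielectric nanoparticles (eps = 5) at 30% volume
# fraction in a lossy metal-like host with complex permittivity
eps_eff = maxwell_garnett(complex(-10.0, 1.0), 5.0, 0.3)
```

The formula interpolates correctly between the two limits: it returns the host permittivity at f = 0 and the inclusion permittivity at f = 1.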
Power generation in random diode arrays
NASA Astrophysics Data System (ADS)
Shvydka, Diana; Karpov, V. G.
2005-03-01
We discuss nonlinear disordered systems, random diode arrays (RDAs), which can represent such objects as large-area photovoltaics and ion channels of biological membranes. Our numerical modeling has revealed several interesting properties of RDAs. In particular, the geometrical distribution of nonuniformities across a RDA has only a minor effect on its integral characteristics determined by RDA parameter statistics. In the meantime, the dispersion of integral characteristics vs system size exhibits a nontrivial scaling dependence. Our theoretical interpretation here remains limited and is based on the picture of eddy currents flowing through weak diodes in the RDA.
Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model
ERIC Educational Resources Information Center
Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin
2006-01-01
This study presents the random-effects rating scale model (RE-RSM), which takes into account randomness in the thresholds over persons by treating them as random effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…
Coupé, Christophe
2018-01-01
As statistical approaches are increasingly used in linguistics, attention must be paid to the choice of methods and algorithms. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is, however, being made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimate of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships.
Relying on GAMLSS, we assess a range of candidate distributions, including the Sichel, Delaporte, Box-Cox Green and Cole, and Box-Cox t distributions. We find that the Box-Cox t distribution, with appropriate modeling of its parameters, best fits the conditional distribution of phonemic inventory size. We finally discuss the specificities of phoneme counts, weak effects, and how GAMLSS should be considered for other linguistic variables. PMID:29713298
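The kind of strongly overdispersed count data that motivates going beyond the Poisson distribution is easy to generate with a gamma-Poisson (negative binomial) mixture; a stdlib-only sketch with illustrative parameters:

```python
import math
import random

def gamma_poisson(mu, k, rng):
    """Negative-binomial count via a gamma-Poisson mixture:
    lam ~ Gamma(shape=k, mean=mu), then count ~ Poisson(lam)."""
    lam = rng.gammavariate(k, mu / k)
    L, n, p = math.exp(-lam), 0, rng.random()
    while p > L:                       # Knuth's Poisson sampler
        n += 1
        p *= rng.random()
    return n

rng = random.Random(13)
xs = [gamma_poisson(5.0, 2.0, rng) for _ in range(4000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)
# theoretical variance is mu + mu**2 / k = 17.5, far above the mean of 5
```

A Poisson model would force the variance to equal the mean; the sample variance here is several times larger, which is exactly the situation where GAMLSS-style distribution families pay off.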
Rondeau, Virginie; Berhane, Kiros; Thomas, Duncan C
2005-04-15
A three-level model is proposed to simultaneously examine the effects of daily exposure to air pollution and individual risk factors on health outcomes without aggregating over subjects or time. We used a logistic transition model with random effects to take into account heterogeneity and overdispersion of the observations. A distributed lag structure for pollution has been included, assuming that the event on day t for a subject depends on the levels of air pollution for several preceding days. We illustrate this proposed model via detailed analysis of the effect of air pollution on school absenteeism based on data from the Southern California Children's Health Study.
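The distributed-lag logistic structure can be written down directly (hypothetical coefficients, not values estimated from the Children's Health Study, and the random effects are omitted):

```python
import math

def transition_probability(pollution, day, beta0, betas):
    """P(event on day t) under a logistic model with a distributed lag:
    the linear predictor sums pollution on day t and the preceding days."""
    eta = beta0 + sum(b * pollution[day - l] for l, b in enumerate(betas))
    return 1.0 / (1.0 + math.exp(-eta))

# hypothetical lag-0..2 coefficients; an exposure spike raises the risk
lags = [0.3, 0.2, 0.1]
p_low = transition_probability([0, 0, 0, 0, 0], 4, -2.0, lags)
p_high = transition_probability([0, 0, 5, 5, 5], 4, -2.0, lags)
```

With positive lag coefficients, several preceding days of high pollution jointly raise the event probability relative to a clean-air baseline.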
A model for simulating random atmospheres as a function of latitude, season, and time
NASA Technical Reports Server (NTRS)
Campbell, J. W.
1977-01-01
An empirical stochastic computer model was developed with the capability of generating random thermodynamic profiles of the atmosphere below an altitude of 99 km which are characteristic of any given season, latitude, and time of day. Samples of temperature, density, and pressure profiles generated by the model are statistically similar to measured profiles in a data base of over 6000 rocket and high-altitude atmospheric soundings; that is, means and standard deviations of modeled profiles and their vertical gradients are in close agreement with data. Model-generated samples can be used for Monte Carlo simulations of aircraft or spacecraft trajectories to predict or account for the effects on a vehicle's performance of atmospheric variability. Other potential uses for the model are in simulating pollutant dispersion patterns, variations in sound propagation, and other phenomena which are dependent on atmospheric properties, and in developing data-reduction software for satellite monitoring systems.
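The core requirement, random profiles whose per-layer means, standard deviations, and vertical correlation match prescribed statistics, can be sketched with an AR(1) recursion (all numbers below are illustrative, not taken from the sounding data base):

```python
import random

def random_profile(means, sds, rho, rng):
    """One random profile whose layer values have the prescribed means and
    standard deviations, with AR(1) correlation rho between adjacent layers."""
    z = rng.gauss(0.0, 1.0)
    profile = []
    for mu, sd in zip(means, sds):
        profile.append(mu + sd * z)
        z = rho * z + (1.0 - rho ** 2) ** 0.5 * rng.gauss(0.0, 1.0)
    return profile

rng = random.Random(11)
means = [288, 275, 250, 223, 217]   # illustrative layer temperatures (K)
sds = [5, 6, 7, 8, 9]               # illustrative layer std deviations (K)
profiles = [random_profile(means, sds, 0.9, rng) for _ in range(2000)]
# sample means of the generated ensemble track the prescribed means
est = [sum(p[k] for p in profiles) / len(profiles) for k in range(5)]
```

An ensemble generated this way is what a Monte Carlo trajectory simulation would draw from, one random but statistically representative atmosphere per run.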
Meta-Analysis of Effect Sizes Reported at Multiple Time Points Using General Linear Mixed Model.
Musekiwa, Alfred; Manda, Samuel O M; Mwambi, Henry G; Chen, Ding-Geng
2016-01-01
Meta-analysis of longitudinal studies combines effect sizes measured at pre-determined time points. The most common approach involves performing separate univariate meta-analyses at individual time points. This simplistic approach ignores dependence between longitudinal effect sizes, which might result in less precise parameter estimates. In this paper, we show how to conduct a meta-analysis of longitudinal effect sizes in which we contrast different covariance structures for the dependence between effect sizes, both within and between studies. We propose new combinations of covariance structures for the dependence between effect sizes and present a practical example involving a meta-analysis of 17 trials comparing postoperative treatments for a type of cancer, where survival is measured at 6, 12, 18, and 24 months post randomization. Although the results from this particular data set show the benefit of accounting for within-study serial correlation between effect sizes, simulations are required to confirm these results.
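Accounting for dependence between longitudinal effect sizes amounts to generalized least squares pooling with a non-diagonal within-study covariance matrix; a two-time-point sketch (illustrative inputs; the paper contrasts richer covariance structures, including between-study dependence):

```python
def inv2(m):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def pooled_effects(studies, rho):
    """GLS pooling of per-study effect sizes at two time points.
    studies: list of (y1, y2, se1, se2); rho: within-study correlation."""
    SW = [[0.0, 0.0], [0.0, 0.0]]
    Swy = [0.0, 0.0]
    for y1, y2, se1, se2 in studies:
        cov = [[se1 ** 2, rho * se1 * se2],
               [rho * se1 * se2, se2 ** 2]]
        W = inv2(cov)                 # study weight matrix
        wy = matvec(W, [y1, y2])
        SW = [[SW[i][j] + W[i][j] for j in range(2)] for i in range(2)]
        Swy = [Swy[0] + wy[0], Swy[1] + wy[1]]
    return matvec(inv2(SW), Swy)

pooled = pooled_effects([(0.2, 0.4, 0.1, 0.1), (0.4, 0.6, 0.1, 0.1)], 0.5)
```

With rho = 0 this reduces to two separate inverse-variance meta-analyses, which is the simplistic univariate approach the paper improves on.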
Stochastic Control of Multi-Scale Networks: Modeling, Analysis and Algorithms
2014-10-20
…Theory (02 2012). B. T. Swapna, Atilla Eryilmaz, Ness B. Shroff. Throughput-Delay Analysis of Random Linear Network Coding for Wireless… John S. Baras, Shanshan Zheng. Sequential Anomaly Detection in Wireless Sensor Networks and Effects of Long-Range Dependent Data. Sequential Analysis (10 2012). doi: 10.1080/07474946.2012.719435
Random walks based multi-image segmentation: Quasiconvexity results and GPU-based solutions
Collins, Maxwell D.; Xu, Jia; Grady, Leo; Singh, Vikas
2012-01-01
We recast the cosegmentation problem using Random Walker (RW) segmentation as the core segmentation algorithm, rather than the traditional MRF approach adopted in the literature so far. Our formulation is similar to previous approaches in the sense that it also permits cosegmentation constraints (which impose consistency between the extracted objects from ≥ 2 images) using a nonparametric model. However, several previous nonparametric cosegmentation methods have the serious limitation that they require adding one auxiliary node (or variable) for every pair of pixels that are similar (which effectively limits such methods to describing only those objects that have high-entropy appearance models). In contrast, our proposed model completely eliminates this restrictive dependence; the resulting improvements are quite significant. Our model further allows an optimization scheme exploiting quasiconvexity for model-based segmentation with no dependence on the scale of the segmented foreground. Finally, we show that the optimization can be expressed in terms of linear algebra operations on sparse matrices which are easily mapped to GPU architecture. We provide a highly specialized CUDA library for cosegmentation exploiting this special structure, and report experimental results showing these advantages. PMID:25278742
Scalar and vector Keldysh models in the time domain
NASA Astrophysics Data System (ADS)
Kiselev, M. N.; Kikoin, K. A.
2009-04-01
The exactly solvable Keldysh model of a disordered electron system in a random scattering field with extremely long correlation length is converted to a time-dependent model with extremely long relaxation. The dynamical problem is solved for an ensemble of two-level systems (TLS) with fluctuating well depths having a discrete Z_2 symmetry. It is also shown that symmetric TLS with fluctuating barrier transparency may be described in terms of a vector Keldysh model with time-dependent random planar rotations in the xy plane having continuous SO(2) symmetry. Application of this model to the description of dynamic fluctuations in quantum dots and optical lattices is discussed.
MODELING POROUS DUST GRAINS WITH BALLISTIC AGGREGATES. II. LIGHT SCATTERING PROPERTIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen Yue; Draine, B. T.; Johnson, Eric T.
2009-05-10
We study the light scattering properties of random ballistic aggregates constructed in Shen et al. Using the discrete-dipole approximation, we compute the scattering phase function and linear polarization for random aggregates with various sizes and porosities, and with two different compositions: 100% silicate, and 50% silicate + 50% graphite. We investigate the dependence of light scattering properties on wavelength, cluster size, and porosity using these aggregate models. We find that while the shape of the phase function depends mainly on the size parameter of the aggregates, the linear polarization depends on both the size parameter and the porosity of the aggregates, with the degree of polarization increasing as the porosity increases. Contrary to previous studies, we argue that the monomer size has negligible effects on the light scattering properties of ballistic aggregates, as long as the constituent monomer is smaller than the incident wavelength, up to 2πa₀/λ ≈ 1.6, where a₀ is the monomer radius. Previous claims for such monomer size effects are in fact the combined effects of size parameter and porosity. Finally, we present aggregate models that can reproduce the phase function and polarization of scattered light from the AU Mic debris disk and from cometary dust, including the negative polarization observed for comets at scattering angles 160° ≲ θ < 180°. These aggregates have moderate porosities, P ≈ 0.6, and are of sub-μm size for the debris disk case, or μm size for the comet case.
Aguero-Valverde, Jonathan
2013-01-01
In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero-inflated models. This research compares random effects, zero-inflated, and zero-inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of precision of posterior crash frequency estimates, since the precision of these estimates is vital for ranking sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset being analyzed, it was found that once the random effects are included in the zero-inflated models, the probability of being in the zero state is drastically reduced, and the zero-inflated models degenerate to their non-zero-inflated counterparts. Also, by fixing the random effects over time, the fit of the models and the precision of the crash frequency estimates are significantly increased. It was found that the rankings of the fixed-over-time random effects models are highly consistent with one another. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments at the top of the ranking. Copyright © 2012 Elsevier Ltd. All rights reserved.
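The zero-inflated structure discussed above mixes a point mass at zero with a count distribution. As a minimal illustration, here is a plain zero-inflated Poisson pmf, not the full Bayes hierarchical models compared in the study; the parameter values are hypothetical:

```python
import math

def zip_pmf(y, lam, pi):
    """P(Y = y) under a zero-inflated Poisson: with probability pi the count
    comes from the structural-zero state, otherwise from Poisson(lam)."""
    poisson = math.exp(-lam) * lam**y / math.factorial(y)
    if y == 0:
        return pi + (1 - pi) * poisson
    return (1 - pi) * poisson

# The mixture is still a proper distribution: it sums to 1 over the support,
# and the zero probability exceeds that of the plain Poisson component.
total = sum(zip_pmf(y, lam=2.0, pi=0.3) for y in range(100))
```

Including random effects, as in the compared models, would let lam (and possibly pi) vary by site through site-level error terms.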
ERIC Educational Resources Information Center
Wing, Coady; Cook, Thomas D.
2013-01-01
The sharp regression discontinuity design (RDD) has three key weaknesses compared to the randomized clinical trial (RCT). It has lower statistical power, it is more dependent on statistical modeling assumptions, and its treatment effect estimates are limited to the narrow subpopulation of cases immediately around the cutoff, which is rarely of…
Põder, Endel
2011-02-16
Dot lattices are very simple multi-stable images where the dots can be perceived as being grouped in different ways. The probabilities of grouping along different orientations as dependent on inter-dot distances along these orientations can be predicted by a simple quantitative model. L. Bleumers, P. De Graef, K. Verfaillie, and J. Wagemans (2008) found that for peripheral presentation, this model should be combined with random guesses on a proportion of trials. The present study shows that the probability of random responses decreases with decreasing ambiguity of lattices and is different for bi-stable and tri-stable lattices. With central presentation, similar effects can be produced by adding positional noise to the dots. The results suggest that different levels of internal positional noise might explain the differences between peripheral and central proximity grouping.
Robust inference in discrete hazard models for randomized clinical trials.
Nguyen, Vinh Q; Gillen, Daniel L
2012-10-01
Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include discrete hazard models (e.g., the discrete-time proportional hazards model and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models where the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator allows for statistical inference that is scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.
Bayesian Sensitivity Analysis of Statistical Models with Missing Data
ZHU, HONGTU; IBRAHIM, JOSEPH G.; TANG, NIANSHENG
2013-01-01
Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the not missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures. PMID:24753718
NASA Astrophysics Data System (ADS)
Korelin, Ivan A.; Porshnev, Sergey V.
2018-05-01
A model of a non-stationary queuing system (NQS) is described. The input of this model receives a flow of requests with input rate λ = λdet(t) + λrnd(t), where λdet(t) is a deterministic function of time and λrnd(t) is a random function. The parameters of λdet(t) and λrnd(t) were identified on the basis of statistical information on visitor flows collected from various Russian football stadiums. Statistical modeling of the NQS is carried out and the average dependences are obtained: the length of the queue of requests waiting for service, the average waiting time for service, and the number of visitors who have entered the stadium, each as a function of time. It is shown that these dependences can be characterized by the following parameters: the number of visitors who have entered by the start of the match; the time required to serve all incoming visitors; the maximum value of the dependence; and the time at which this maximum is reached. The dependence of these parameters on the energy ratio of the deterministic and random components of the input rate is investigated.
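Arrivals from a time-varying input rate such as λ(t) = λdet(t) + λrnd(t) can be simulated with Lewis-Shedler thinning. The sketch below uses only a hypothetical deterministic ramp for λ(t); the stadium-calibrated rate functions of the study are not reproduced here:

```python
import random

def simulate_arrivals(lam, lam_max, t_end, rng):
    """Lewis-Shedler thinning for a non-stationary Poisson input flow
    whose rate lam(t) is bounded above by lam_max."""
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(lam_max)        # candidate from a rate-lam_max process
        if t > t_end:
            return arrivals
        if rng.random() < lam(t) / lam_max:  # accept with probability lam(t)/lam_max
            arrivals.append(t)

rng = random.Random(1)
# Hypothetical deterministic ramp that saturates at rate 10 (no random component).
lam_det = lambda t: 10.0 * min(t, 1.0)
arrivals = simulate_arrivals(lam_det, 10.0, 2.0, rng)
```

Feeding these arrival times into a queue simulator would yield the queue-length and waiting-time curves described in the abstract.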
Micromechanics-based magneto-elastic constitutive modeling of particulate composites
NASA Astrophysics Data System (ADS)
Yin, Huiming
Modified Green's functions are derived for three situations: a magnetic field caused by a local magnetization, a displacement field caused by a local body force and a displacement field caused by a local prescribed eigenstrain. Based on these functions, an explicit solution is derived for two magnetic particles embedded in the infinite medium under external magnetic and mechanical loading. A general solution for numerable magnetic particles embedded in an infinite domain is then provided in integral form. Two-phase composites containing spherical magnetic particles of the same size are considered for three kinds of microstructures. With chain-structured composites, particle interactions in the same chain are considered and a transversely isotropic effective elasticity is obtained. For periodic composites, an eight-particle interaction model is developed and provides a cubic symmetric effective elasticity. In the random composite, pair-wise particle interactions are integrated from all possible positions and an isotropic effective property is reached. This method is further extended to functionally graded composites. Magneto-mechanical behavior is studied for the chain-structured composite and the random composite. Effective magnetic permeability, effective magnetostriction and field-dependent effective elasticity are investigated. It is seen that the chain-structured composite is more sensitive to the magnetic field than the random composite; a composite consisting of only 5% of chain-structured particles can provide a larger magnetostriction and a larger change of effective elasticity than an equivalent composite consisting of 30% of random dispersed particles. Moreover, the effective shear modulus of the chain-structured composite rapidly increases with the magnetic field, while that for the random composite decreases. 
An effective hyperelastic constitutive model is further developed for a magnetostrictive particle-filled elastomer, which is sampled by using a network of body-centered cubic lattices of particles connected by macromolecular chains. The proposed hyperelastic model is able to characterize overall nonlinear elastic stress-stretch relations of the composites under general three-dimensional loading. It is seen that the effective strain energy density is proportional to the length of stretched chains in unit volume and volume fraction of particles.
Localized attacks on spatially embedded networks with dependencies.
Berezin, Yehiel; Bashan, Amir; Danziger, Michael M; Li, Daqing; Havlin, Shlomo
2015-03-11
Many real world complex systems such as critical infrastructure networks are embedded in space and their components may depend on one another to function. They are also susceptible to geographically localized damage caused by malicious attacks or natural disasters. Here, we study a general model of spatially embedded networks with dependencies under localized attacks. We develop a theoretical and numerical approach to describe and predict the effects of localized attacks on spatially embedded systems with dependencies. Surprisingly, we find that a localized attack can cause substantially more damage than an equivalent random attack. Furthermore, we find that for a broad range of parameters, systems which appear stable are in fact metastable. Though robust to random failures, even of a finite fraction, such systems undergo a cascading failure leading to complete collapse when subjected to a localized attack larger than a critical size that is independent of the system size (i.e., a zero fraction of it). Our results demonstrate the potential high risk of localized attacks on spatially embedded network systems with dependencies and may be useful for designing more resilient systems.
Meta-analysis of diagnostic test data: a bivariate Bayesian modeling approach.
Verde, Pablo E
2010-12-30
In recent decades, the amount of published results on clinical diagnostic tests has expanded very rapidly. The counterpart to this development has been the formal evaluation and synthesis of diagnostic results. However, published results present substantial heterogeneity, and they can be regarded as so far removed from the classical domain of meta-analysis that they provide a rather severe test of classical statistical methods. Recently, bivariate random effects meta-analytic methods, which model the pairs of sensitivities and specificities, have been presented from the classical point of view. In this work a bivariate Bayesian modeling approach is presented. This approach substantially extends the scope of classical bivariate methods by allowing the structural distribution of the random effects to depend on multiple sources of variability. Meta-analysis is summarized by the predictive posterior distributions for sensitivity and specificity. The new approach also allows substantial model checking, model diagnostics, and model selection. Statistical computations are implemented in public domain statistical software (WinBUGS and R) and illustrated with real data examples. Copyright © 2010 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Rychlik, Igor; Mao, Wengang
2018-02-01
The wind speed variability in the North Atlantic has been successfully modelled using a spatio-temporal transformed Gaussian field. However, this type of model does not correctly describe the extreme wind speeds attributed to tropical storms and hurricanes. In this study, the transformed Gaussian model is further developed to include the occurrence of severe storms. In this new model, random components are added to the transformed Gaussian field to model rare events with extreme wind speeds. The resulting random field is locally stationary and homogeneous. The localized dependence structure is described by time- and space-dependent parameters. The parameters have a natural physical interpretation. To exemplify its application, the model is fitted to the ECMWF ERA-Interim reanalysis data set. The model is applied to compute long-term wind speed distributions and return values, e.g., 100- or 1000-year extreme wind speeds, and to simulate random wind speed time series at a fixed location or spatio-temporal wind fields around that location.
Massah, Omid; Sohrabi, Faramarz; A’azami, Yousef; Doostian, Younes; Farhoudian, Ali; Daneshmand, Reza
2016-01-01
Background Emotion plays an important role in adapting to life changes and stressful events. Difficulty regulating emotions is one of the problems drug abusers often face, and teaching these individuals to express and manage their emotions can be effective in improving their difficult circumstances. Objectives The present study aimed to determine the effectiveness of Gross model-based emotion regulation strategies training on anger reduction in drug-dependent individuals. Patients and Methods The present study had a quasi-experimental design wherein pretest-posttest evaluations were applied using a control group. The population under study included addicts attending Marivan’s methadone maintenance therapy centers in 2012 - 2013. Convenience sampling was used to select 30 substance-dependent individuals undergoing maintenance treatment, who were then randomly assigned to the experimental and control groups. The experimental group received its training in eight two-hour sessions. Data were analyzed using analysis of covariance and paired t-tests. Results There was a significant reduction in anger symptoms of drug-dependent individuals after Gross model-based emotion regulation training (ERT) (P < 0.001). Moreover, the effectiveness of the training on anger persisted in the follow-up period. Conclusions Symptoms of anger in drug-dependent individuals in this study were reduced by Gross model-based emotion regulation strategies training. Based on the results of this study, we may conclude that Gross model-based emotion regulation strategies training can be applied alongside other therapies to treat drug abusers undergoing rehabilitation. PMID:27162759
Time-evolution of grain size distributions in random nucleation and growth crystallization processes
NASA Astrophysics Data System (ADS)
Teran, Anthony V.; Bill, Andreas; Bergmann, Ralf B.
2010-02-01
We study the time dependence of the grain size distribution N(r,t) during crystallization of a d-dimensional solid. A partial differential equation, including a source term for nuclei and a growth law for grains, is solved analytically for any dimension d. We discuss solutions obtained for processes described by the Kolmogorov-Avrami-Mehl-Johnson model for random nucleation and growth (RNG). Nucleation and growth are set on the same footing, which leads to a time-dependent decay of both effective rates. We analyze in detail how model parameters, the dimensionality of the crystallization process, and time influence the shape of the distribution. The calculations show that the dynamics of the effective nucleation and effective growth rates play an essential role in determining the final form of the distribution obtained at full crystallization. We demonstrate that for one class of nucleation and growth rates, the distribution evolves in time into the logarithmic-normal (lognormal) form discussed earlier by Bergmann and Bill [J. Cryst. Growth 310, 3135 (2008)]. We also obtain an analytical expression for the finite maximal grain size at all times. The theory allows for the description of a variety of RNG crystallization processes in thin films and bulk materials. Expressions useful for experimental data analysis are presented for the grain size distribution and the moments in terms of fundamental and measurable parameters of the model.
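For constant nucleation and growth rates, the transformed volume fraction in the Kolmogorov-Avrami-Mehl-Johnson picture follows the Avrami equation X(t) = 1 - exp(-K t^n), with exponent n = d + 1 in d dimensions. A minimal sketch (parameter values are illustrative, not those of the paper):

```python
import math

def crystallized_fraction(t, K, n):
    """KAMJ/Avrami transformed fraction for random nucleation and growth;
    for constant rates in d dimensions the exponent is n = d + 1."""
    return 1.0 - math.exp(-K * t**n)

# d = 3 growth with a constant nucleation rate -> Avrami exponent n = 4.
X = [crystallized_fraction(t, K=0.1, n=4) for t in (0.0, 1.0, 2.0, 5.0)]
```

The time-dependent effective rates discussed in the abstract generalize this constant-rate baseline and reshape the resulting grain size distribution.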
The First Order Correction to the Exit Distribution for Some Random Walks
NASA Astrophysics Data System (ADS)
Kennedy, Tom
2016-07-01
We study three different random walk models on several two-dimensional lattices by Monte Carlo simulations. One is the usual nearest neighbor random walk. Another is the nearest neighbor random walk which is not allowed to backtrack. The final model is the smart kinetic walk. For all three of these models the distribution of the point where the walk exits a simply connected domain D in the plane converges weakly to harmonic measure on ∂D as the lattice spacing δ → 0. Let ω(0,·;D) be harmonic measure for D, and let ω_δ(0,·;D) be the discrete harmonic measure for one of the random walk models. Our definition of the random walk models is unusual in that we average over the orientation of the lattice with respect to the domain. We are interested in the limit of (ω_δ(0,·;D) - ω(0,·;D))/δ. Our Monte Carlo simulations of the three models lead to the conjecture that this limit equals c_{M,L} ρ_D(z) times Lebesgue measure with respect to arc length along the boundary, where the function ρ_D(z) depends on the domain, but not on the model or lattice, and the constant c_{M,L} depends on the model and on the lattice, but not on the domain. So there is a form of universality for this first order correction. We also give an explicit formula for the conjectured density ρ_D.
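The weak convergence to harmonic measure can be checked directly by simulation: for a walk started at the center of a disk, harmonic measure on the circle is uniform, so exit angles should be approximately uniformly distributed. A sketch for the plain nearest neighbor walk only (the lattice-orientation averaging and the order-δ correction studied in the paper are beyond this illustration):

```python
import math, random

def exit_angle(radius, rng):
    """Nearest neighbor walk on Z^2 started at the origin; returns the
    angle of the first lattice point outside the disk of given radius."""
    x = y = 0
    while x * x + y * y <= radius * radius:
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
    return math.atan2(y, x)

rng = random.Random(0)
angles = [exit_angle(20, rng) for _ in range(500)]
# Harmonic measure from the center is uniform on the circle, so roughly
# half of the exits should land in the upper half-plane.
upper = sum(a > 0 for a in angles) / len(angles)
```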
Effects of ignoring baseline on modeling transitions from intact cognition to dementia
Yu, Lei; Tyas, Suzanne L.; Snowdon, David A.; Kryscio, Richard J.
2009-01-01
This paper evaluates the effect of ignoring baseline when modeling transitions from intact cognition to dementia with mild cognitive impairment (MCI) and global impairment (GI) as intervening cognitive states. Transitions among states are modeled by a discrete-time Markov chain having three transient (intact cognition, MCI, and GI) and two competing absorbing states (death and dementia). Transition probabilities depend on two covariates, age and the presence/absence of an apolipoprotein E-ε4 allele, through a multinomial logistic model with shared random effects. Results are illustrated with an application to the Nun Study, a cohort of 678 participants 75+ years of age at baseline and followed longitudinally with up to ten cognitive assessments per nun. PMID:20161282
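The multistate structure above can be sketched as a discrete-time Markov chain with three transient and two absorbing states. The transition matrix below is hypothetical (the study models these probabilities through a multinomial logistic regression with covariates and shared random effects, not fixed values):

```python
import random

# Hypothetical transition matrix over the five states in the abstract:
# 0 intact, 1 MCI, 2 GI (transient); 3 dementia, 4 death (absorbing).
P = [
    [0.80, 0.12, 0.04, 0.02, 0.02],
    [0.05, 0.70, 0.15, 0.06, 0.04],
    [0.01, 0.05, 0.70, 0.16, 0.08],
    [0.00, 0.00, 0.00, 1.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 1.00],
]

def run_until_absorbed(P, rng, start=0, max_steps=1000):
    """Simulate one subject's path until an absorbing state is reached."""
    state = start
    for _ in range(max_steps):
        if P[state][state] == 1.0:  # absorbing state
            return state
        state = rng.choices(range(5), weights=P[state])[0]
    return state

rng = random.Random(42)
final = [run_until_absorbed(P, rng) for _ in range(300)]
```

In the paper, each row of such a matrix would depend on age and APOE-ε4 status through the logistic model rather than being constant.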
NASA Astrophysics Data System (ADS)
Henri, Christopher; Fernàndez-Garcia, Daniel
2015-04-01
Modeling multi-species reactive transport in natural systems with strong heterogeneities and complex biochemical reactions is a major challenge for assessing groundwater polluted sites with organic and inorganic contaminants. A large variety of these contaminants react according to serial-parallel reaction networks commonly simplified by a combination of first-order kinetic reactions. In this context, a random-walk particle tracking method is presented. This method is capable of efficiently simulating the motion of particles affected by first-order network reactions in three-dimensional systems, which are represented by spatially variable physical and biochemical coefficients described at high resolution. The approach is based on the development of transition probabilities that describe the likelihood that particles belonging to a given species and location at a given time will be transformed into and moved to another species and location afterwards. These probabilities are derived from the solution matrix of the spatial moments governing equations. The method is fully coupled with reactions, free of numerical dispersion and overcomes the inherent numerical problems stemming from the incorporation of heterogeneities to reactive transport codes. In doing this, we demonstrate that the motion of particles follows a standard random walk with time-dependent effective retardation and dispersion parameters that depend on the initial and final chemical state of the particle. The behavior of effective parameters develops as a result of differential retardation effects among species. Moreover, explicit analytic solutions of the transition probability matrix and related particle motions are provided for serial reactions. 
An example of the effect of heterogeneity on the dechlorination of organic solvents in a three-dimensional random porous medium shows that the power-law behavior typically observed in conservative tracer breakthrough curves can be largely compromised by the effect of biochemical reactions.
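For a serial first-order network A → B → C with rates k1 and k2, the species-transition probabilities that drive such particle transfers have a closed form (the classical Bateman solution; this is a sketch of the serial-reaction case mentioned above, with illustrative rates):

```python
import math

def serial_probabilities(t, k1, k2):
    """Probability that a particle starting as species A is A, B, or C at
    time t under the serial network A -k1-> B -k2-> C (assumes k1 != k2).
    These are the first-row entries of the species transition matrix."""
    pA = math.exp(-k1 * t)
    pB = k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    return pA, pB, 1.0 - pA - pB

pA, pB, pC = serial_probabilities(t=2.0, k1=0.5, k2=0.2)
```

In the particle tracking method, a random draw against these probabilities decides each particle's species before its (species-dependent) retarded and dispersed displacement is applied.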
Random walk in degree space and the time-dependent Watts-Strogatz model
NASA Astrophysics Data System (ADS)
Casa Grande, H. L.; Cotacallapa, M.; Hase, M. O.
2017-01-01
In this work, we propose a scheme that provides an analytical estimate for the time-dependent degree distribution of some networks. This scheme maps the problem into a random walk in degree space, and then we choose the paths that are responsible for the dominant contributions. The method is illustrated on the dynamical versions of the Erdős-Rényi and Watts-Strogatz graphs, which were introduced as static models in the original formulation. We have succeeded in obtaining an analytical form for the dynamical Watts-Strogatz model, which is asymptotically exact in some regimes.
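In the static Erdős-Rényi graph G(n, p), each possible edge is present independently, so a vertex degree follows a Binomial(n - 1, p) law; in a dynamical version where edges appear over time, the same form holds with a time-dependent p(t). A minimal check of that baseline (parameter values are illustrative, and this is not the degree-space random-walk scheme of the paper):

```python
import math, random

def er_degree_pmf(k, n, p):
    """Degree distribution of G(n, p): Binomial(n - 1, p). With p = p(t),
    this gives a time-dependent degree distribution of the same form."""
    m = n - 1
    return math.comb(m, k) * p**k * (1 - p)**(m - k)

# Degree of one vertex: n - 1 independent edge coin-flips.
rng = random.Random(3)
n, p = 200, 0.05
deg = sum(rng.random() < p for _ in range(n - 1))
mean = (n - 1) * p  # analytical mean degree
```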
Copula Models for Sociology: Measures of Dependence and Probabilities for Joint Distributions
ERIC Educational Resources Information Center
Vuolo, Mike
2017-01-01
Often in sociology, researchers are confronted with nonnormal variables whose joint distribution they wish to explore. Yet, assumptions of common measures of dependence can fail or estimating such dependence is computationally intensive. This article presents the copula method for modeling the joint distribution of two random variables, including…
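A concrete instance of such a dependence measure: for a Gaussian copula with correlation ρ, Kendall's τ equals (2/π) arcsin ρ, which can be verified against the empirical τ of simulated pairs. A sketch with illustrative parameters (not taken from the article):

```python
import math, random

def gaussian_copula_sample(rho, n, rng):
    """Draw n pairs from a Gaussian copula with correlation rho:
    correlated normals mapped to uniform marginals via the normal CDF."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        x, y = z1, rho * z1 + math.sqrt(1 - rho**2) * z2
        out.append((phi(x), phi(y)))
    return out

def kendall_tau(pairs):
    """Empirical Kendall's tau: concordant minus discordant pair fraction."""
    n, s = len(pairs), 0
    for i in range(n):
        for j in range(i + 1, n):
            s += (pairs[i][0] - pairs[j][0]) * (pairs[i][1] - pairs[j][1]) > 0
    return 4.0 * s / (n * (n - 1)) - 1.0

rng = random.Random(7)
u = gaussian_copula_sample(0.7, 800, rng)
# For the Gaussian copula, tau = (2/pi) * arcsin(rho), about 0.49 at rho = 0.7.
tau_hat = kendall_tau(u)
```

The practical appeal of the copula method is exactly this separation: the uniform marginals can then be mapped through any (nonnormal) marginal distributions without changing the dependence structure.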
Liu, Danping; Yeung, Edwina H; McLain, Alexander C; Xie, Yunlong; Buck Louis, Germaine M; Sundaram, Rajeshwari
2017-09-01
Imperfect follow-up in longitudinal studies commonly leads to missing outcome data that can potentially bias the inference when the missingness is nonignorable; that is, the propensity of missingness depends on missing values in the data. In the Upstate KIDS Study, we seek to determine if the missingness of child development outcomes is nonignorable, and how a simple model assuming ignorable missingness would compare with more complicated models for a nonignorable mechanism. To correct for nonignorable missingness, the shared random effects model (SREM) jointly models the outcome and the missing mechanism. However, the computational complexity and lack of software packages have limited its practical applications. This paper proposes a novel two-step approach to handle nonignorable missing outcomes in generalized linear mixed models. We first analyse the missing mechanism with a generalized linear mixed model and predict values of the random effects; then, the outcome model is fitted adjusting for the predicted random effects to account for heterogeneity in the missingness propensity. Extensive simulation studies suggest that the proposed method is a reliable approximation to SREM, with a much faster computation. The nonignorability of missing data in the Upstate KIDS Study is estimated to be mild to moderate, and the analyses using the two-step approach or SREM are similar to the model assuming ignorable missingness. The two-step approach is a computationally straightforward method that can be conducted as sensitivity analyses in longitudinal studies to examine violations of the ignorable missingness assumption and the implications relative to health outcomes. © 2017 John Wiley & Sons Ltd.
Nutrition education intervention for dependent patients: protocol of a randomized controlled trial.
Arija, Victoria; Martín, Núria; Canela, Teresa; Anguera, Carme; Castelao, Ana I; García-Barco, Montserrat; García-Campo, Antoni; González-Bravo, Ana I; Lucena, Carme; Martínez, Teresa; Fernández-Barrés, Silvia; Pedret, Roser; Badia, Waleska; Basora, Josep
2012-05-24
Malnutrition in dependent patients has a high prevalence and can influence the prognosis associated with diverse pathologic processes, decrease quality of life, and increase morbidity-mortality and hospital admissions. The aim of the study is to assess the effect of an educational intervention for caregivers on the nutritional status of dependent patients at risk of malnutrition. Intervention study with control group, randomly allocated, of 200 patients of the Home Care Program carried out in 8 Primary Care Centers (Spain). These patients are dependent and at risk of malnutrition, older than 65, and have caregivers. The socioeconomic and educational characteristics of the patient and the caregiver are recorded. On a schedule of 0-6-12 months, patients are evaluated as follows: Mini Nutritional Assessment (MNA), food intake, dentures, degree of dependency (Barthel test), cognitive state (Pfeiffer test), mood status (Yesavage test), and anthropometric and serum parameters of nutritional status: albumin, prealbumin, transferrin, haemoglobin, lymphocyte count, iron, and ferritin. Prior to the intervention, the educational procedure and the design of educational material are standardized among nurses. The nurses conduct an initial session for caregivers and then monitor the education impact at home every month (4 visits) up to 6 months. The North American Nursing Diagnosis Association (NANDA) methodology will be used. The investigators will study the effect of the intervention with caregivers on the patient's nutritional status using the MNA test, diet, anthropometry, and biochemical parameters. Bivariate normal test statistics and multivariate models will be created to adjust the effect of the intervention. The SPSS/PC program will be used for statistical analysis. The nutritional status of dependent patients has been little studied.
This study allows us to know nutritional risk from different points of view: diet, anthropometry and biochemistry in dependent patients at nutritional risk and to assess the effect of a nutritional education intervention. The design with random allocation, inclusion of all patients, validated methods, caregivers' education and standardization between nurses allows us to obtain valuable information about nutritional status and prevention. Clinical Trial Registration-URL: http://www.clinicaltrials.gov. Unique identifier: NCT01360775.
Extensions to Multivariate Space Time Mixture Modeling of Small Area Cancer Data.
Carroll, Rachel; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Aregay, Mehreteab; Watjou, Kevin
2017-05-09
Oral cavity and pharynx cancer, even when the two sites are considered together, is a fairly rare disease. Implementation of multivariate modeling with lung and bronchus cancer, as well as melanoma of the skin, could lead to better inference for oral cavity and pharynx cancer. The multivariate structure of these models is accomplished via the use of shared random effects, as well as other multivariate prior distributions. The results in this paper indicate that care should be taken when executing these types of models, and that multivariate mixture models may not always be the ideal option, depending on the data of interest.
Generalized Effective Medium Theory for Particulate Nanocomposite Materials
Siddiqui, Muhammad Usama; Arif, Abul Fazal M.
2016-01-01
The thermal conductivity of particulate nanocomposites is strongly dependent on the size, shape, orientation and dispersion uniformity of the inclusions. To correctly estimate the effective thermal conductivity of the nanocomposite, all these factors should be included in the prediction model. In this paper, the formulation of a generalized effective medium theory for the determination of the effective thermal conductivity of particulate nanocomposites with multiple inclusions is presented. The formulated methodology takes into account all the factors mentioned above and can be used to model nanocomposites with multiple inclusions that are randomly oriented or aligned in a particular direction. The effect of inclusion dispersion non-uniformity is modeled using a two-scale approach. The applications of the formulated effective medium theory are demonstrated using previously published experimental and numerical results for several particulate nanocomposites. PMID:28773817
Leak, Tashara M; Swenson, Alison; Vickers, Zata; Mann, Traci; Mykerezi, Elton; Redden, Joseph P; Rendahl, Aaron; Reicks, Marla
2015-01-01
To test the effectiveness of behavioral economics strategies for increasing vegetable intake, variety, and liking among children residing in homes receiving food assistance. A randomized controlled trial with data collected at baseline, once weekly for 6 weeks, and at study conclusion. Family homes. Families with a child (9-12 years) will be recruited through community organizations and randomly assigned to an intervention (n = 36) or control (n = 10) group. The intervention group will incorporate a new behavioral economics strategy during home dinner meal occasions each week for 6 weeks. Strategies are simple and low-cost. The primary dependent variable will be the child's dinner meal vegetable consumption based on weekly reports by caregivers. Fixed independent variables will include the strategy and week of strategy implementation. Secondary dependent variables will include vegetable liking and variety of vegetables consumed based on data collected at baseline and study conclusion. Mean vegetable intake for each strategy across families will be compared using a mixed-model analysis of variance with a random effect for child. Additionally, overall mean changes in vegetable consumption, variety, and liking will be compared between intervention and control groups. Copyright © 2015 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
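As a rough sketch of the planned analysis, a linear mixed model with a random intercept per child can be fit to simulated weekly intake data; the column names (`child`, `week`, `intake`) and all effect sizes are illustrative assumptions, not the study's data:

```python
# Sketch of a mixed model for weekly vegetable intake with a fixed effect
# for week and a random intercept per child. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_children, n_weeks = 40, 6
child = np.repeat(np.arange(n_children), n_weeks)
week = np.tile(np.arange(n_weeks), n_children)
child_effect = rng.normal(0.0, 0.5, n_children)[child]  # random intercepts
intake = 1.0 + 0.1 * week + child_effect + rng.normal(0.0, 0.3, child.size)

df = pd.DataFrame({"child": child, "week": week, "intake": intake})
model = smf.mixedlm("intake ~ week", df, groups=df["child"]).fit()
print(model.params["week"])  # fixed-effect slope for week
```

The random intercept absorbs the between-child variation, so the week effect is estimated against within-child variability only.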
Time distributions of solar energetic particle events: Are SEPEs really random?
NASA Astrophysics Data System (ADS)
Jiggens, P. T. A.; Gabriel, S. B.
2009-10-01
Solar energetic particle events (SEPEs) can exhibit flux increases of several orders of magnitude over background levels and have always been considered to be random in nature in statistical models, with no dependence of any one event on the occurrence of previous events. We examine whether this assumption of randomness in time is correct. Engineering modeling of SEPEs is important to enable reliable and efficient design of both Earth-orbiting and interplanetary spacecraft and future manned missions to Mars and the Moon. All existing engineering models assume that the frequency of SEPEs follows a Poisson process. We present analysis of the event waiting times using alternative distributions described by Lévy and time-dependent Poisson processes and compare these with the usual Poisson distribution. The results show significant deviation from a Poisson process and indicate that the underlying physical processes might be more closely related to a Lévy-type process, suggesting that there is some inherent “memory” in the system. The inherent Poisson assumptions of stationarity and event independence are investigated; it appears that they do not hold, and that this can depend upon the event definition used. SEPEs appear to have some memory, indicating that events are not completely random: activity levels vary even during solar active periods and are characterized by clusters of events. This could have significant ramifications for engineering models of the SEP environment, and it is recommended that current statistical engineering models of the SEP environment be modified to incorporate long-term event dependency and short-term system memory.
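The Poisson assumption can be illustrated with a toy simulation: a homogeneous Poisson process yields exponential waiting times with coefficient of variation (CV) near 1, while a hypothetical two-state rate process, standing in here for clustered activity, yields heavier-tailed waits with CV above 1:

```python
# Toy comparison of waiting-time dispersion under a homogeneous Poisson
# process versus a clustered ("memory") process. The two-state rate switch
# is an illustrative assumption, not the paper's Levy/time-dependent model.
import numpy as np

rng = np.random.default_rng(42)

# Homogeneous Poisson process: exponential waits, CV = 1
poisson_waits = rng.exponential(scale=10.0, size=5000)

# Toy clustered process: the rate switches between "active" and "quiet"
# states, mimicking clusters of events during active periods
rates = rng.choice([1.0, 0.05], size=5000)
clustered_waits = rng.exponential(scale=1.0 / rates)

def cv(x):
    return x.std() / x.mean()

print(cv(poisson_waits), cv(clustered_waits))
```

A CV well above 1 in observed waiting times is one simple diagnostic that the Poisson assumption is violated.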
Structure of random discrete spacetime
NASA Technical Reports Server (NTRS)
Brightwell, Graham; Gregory, Ruth
1991-01-01
The usual picture of spacetime consists of a continuous manifold, together with a metric of Lorentzian signature which imposes a causal structure on the spacetime. A model, first suggested by Bombelli et al., is considered in which spacetime consists of a discrete set of points taken at random from a manifold, with only the causal structure on this set remaining. This structure constitutes a partially ordered set (or poset). Working from the poset alone, it is shown how to construct a metric on the space which closely approximates the metric on the original spacetime manifold, how to define the effective dimension of the spacetime, and how such quantities may depend on the scale of measurement. Possible desirable features of the model are discussed.
The structure of random discrete spacetime
NASA Technical Reports Server (NTRS)
Brightwell, Graham; Gregory, Ruth
1990-01-01
The usual picture of spacetime consists of a continuous manifold, together with a metric of Lorentzian signature which imposes a causal structure on the spacetime. A model, first suggested by Bombelli et al., is considered in which spacetime consists of a discrete set of points taken at random from a manifold, with only the causal structure on this set remaining. This structure constitutes a partially ordered set (or poset). Working from the poset alone, it is shown how to construct a metric on the space which closely approximates the metric on the original spacetime manifold, how to define the effective dimension of the spacetime, and how such quantities may depend on the scale of measurement. Possible desirable features of the model are discussed.
Neelon, Brian; Chang, Howard H; Ling, Qiang; Hastings, Nicole S
2016-12-01
Motivated by a study exploring spatiotemporal trends in emergency department use, we develop a class of two-part hurdle models for the analysis of zero-inflated areal count data. The models consist of two components: one for the probability of any emergency department use and one for the number of emergency department visits given use. Through a hierarchical structure, the models incorporate both patient- and region-level predictors, as well as spatially and temporally correlated random effects for each model component. The random effects are assigned multivariate conditionally autoregressive priors, which induce dependence between the components and provide spatial and temporal smoothing across adjacent spatial units and time periods, resulting in improved inferences. To accommodate potential overdispersion, we consider a range of parametric specifications for the positive counts, including truncated negative binomial and generalized Poisson distributions. We adopt a Bayesian inferential approach, and posterior computation is handled conveniently within standard Bayesian software. Our results indicate that the negative binomial and generalized Poisson hurdle models vastly outperform the Poisson hurdle model, demonstrating that overdispersed hurdle models provide a useful approach to analyzing zero-inflated spatiotemporal data. © The Author(s) 2014.
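A minimal, non-spatial sketch of the two-part hurdle idea, with the two components fit separately on simulated data (the hierarchical CAR random effects and Bayesian machinery of the paper are omitted):

```python
# Two-part hurdle sketch: a probability of any use, and a zero-truncated
# Poisson for the number of visits given use. All data are simulated.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n = 2000
use = rng.random(n) < 0.3                 # part 1: any ED use (prob 0.3)
lam_true = 2.0                            # part 2: mean visits given use

counts = np.zeros(n, dtype=int)
k = int(use.sum())
draws = rng.poisson(lam_true, size=5 * k)
counts[use] = draws[draws > 0][:k]        # zero-truncated Poisson via rejection

# Part 1: intercept-only logistic model reduces to the sample proportion
p_hat = use.mean()

# Part 2: zero-truncated Poisson MLE for the positive counts
y = counts[counts > 0]
def nll(lam):
    # log pmf (dropping log y!): y*log(lam) - lam - log(1 - exp(-lam))
    return -(y * np.log(lam) - lam - np.log1p(-np.exp(-lam))).sum()

lam_hat = minimize_scalar(nll, bounds=(0.01, 10.0), method="bounded").x
print(p_hat, lam_hat)
```

The truncation term `log(1 - exp(-lam))` is what distinguishes the positive-count likelihood from an ordinary Poisson fit.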
Catalytic micromotor generating self-propelled regular motion through random fluctuation.
Yamamoto, Daigo; Mukai, Atsushi; Okita, Naoaki; Yoshikawa, Kenichi; Shioi, Akihisa
2013-07-21
Most current studies on nano/microscale motors that generate regular motion have adopted the strategy of fabricating a composite from different materials. In this paper, we report that a simple object made solely of platinum generates regular motion driven by a catalytic chemical reaction with hydrogen peroxide. Depending on the morphological symmetry of the catalytic particles, a rich variety of random and regular motions are observed. The experimental trend is well reproduced by a simple theoretical model that takes into account the anisotropic viscous effect on the self-propelled active Brownian fluctuation.
Catalytic micromotor generating self-propelled regular motion through random fluctuation
NASA Astrophysics Data System (ADS)
Yamamoto, Daigo; Mukai, Atsushi; Okita, Naoaki; Yoshikawa, Kenichi; Shioi, Akihisa
2013-07-01
Most current studies on nano/microscale motors that generate regular motion have adopted the strategy of fabricating a composite from different materials. In this paper, we report that a simple object made solely of platinum generates regular motion driven by a catalytic chemical reaction with hydrogen peroxide. Depending on the morphological symmetry of the catalytic particles, a rich variety of random and regular motions are observed. The experimental trend is well reproduced by a simple theoretical model that takes into account the anisotropic viscous effect on the self-propelled active Brownian fluctuation.
Analysis of dependent scattering mechanism in hard-sphere Yukawa random media
NASA Astrophysics Data System (ADS)
Wang, B. X.; Zhao, C. Y.
2018-06-01
The structural correlations in the microscopic structures of random media can induce the dependent scattering mechanism and thus influence the optical scattering properties. Based on our recent theory on the dependent scattering mechanism in random media composed of discrete dipolar scatterers [B. X. Wang and C. Y. Zhao, Phys. Rev. A 97, 023836 (2018)], in this paper, we study the hard-sphere Yukawa random media, in order to further elucidate the role of structural correlations in the dependent scattering mechanism and hence optical scattering properties. Here, we consider charged colloidal suspensions, whose effective pair interaction between colloids is described by a screened Coulomb (Yukawa) potential. By means of adding salt ions, the pair interaction between the charged particles can be flexibly tailored and therefore the structural correlations are modified. It is shown that this strategy can affect the optical properties significantly. For colloidal TiO2 suspensions, the modification of electric and magnetic dipole excitations induced by the structural correlations can substantially influence the optical scattering properties, in addition to the far-field interference effect described by the structure factor. However, this modification is only slightly altered by different salt concentrations and is mainly because of the packing-density-dependent screening effect. On the other hand, for low refractive index colloidal polystyrene suspensions, the dependent scattering mechanism mainly involves the far-field interference effect, and the effective exciting field amplitude for the electric dipole almost remains unchanged under different structural correlations. The present study has profound implications for understanding the role of structural correlations in the dependent scattering mechanism.
A random effects meta-analysis model with Box-Cox transformation.
Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D
2017-07-19
In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption, and misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies, and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption on the random effects distribution, and propose a novel random effects meta-analysis model in which a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model.
The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining robustness of traditional meta-analysis results against skewness on the observed treatment effect estimates. Further critical evaluation of the method is needed.
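A simplified frequentist sketch of the idea, assuming simulated skewed effect estimates, DerSimonian-Laird pooling in place of the authors' Bayesian approach, and within-study SEs left unadjusted by the transformation (a delta-method correction would be more careful):

```python
# Box-Cox-transform skewed study effect estimates, pool on the transformed
# scale, and back-transform the overall summary. Illustrative only.
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

rng = np.random.default_rng(7)
k = 30
theta = rng.lognormal(mean=0.5, sigma=0.4, size=k)  # skewed true study effects
se = np.full(k, 0.05)                                # within-study SEs
y = theta + rng.normal(0.0, se)                      # observed estimates

z, lam = boxcox(y)                                   # transform toward normality

# DerSimonian-Laird random-effects pooling on the transformed scale
w = 1.0 / se**2
mu_fe = (w * z).sum() / w.sum()
q = (w * (z - mu_fe) ** 2).sum()
tau2 = max(0.0, (q - (k - 1)) / (w.sum() - (w**2).sum() / w.sum()))
w_re = 1.0 / (se**2 + tau2)
mu_re = (w_re * z).sum() / w_re.sum()

overall_median = inv_boxcox(mu_re, lam)  # summary back on the original scale
print(overall_median)
```

Back-transforming the pooled mean of the (approximately normal) transformed estimates yields a median-type summary on the original scale, matching the abstract's recommendation to report an overall median rather than a mean.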
Dong, Chunjiao; Xie, Kun; Zeng, Jin; Li, Xia
2018-04-01
Highway safety laws aim to influence driver behaviors so as to reduce the frequency and severity of crashes, and their outcomes. A given highway safety law, however, may have different effects on crashes across severities. Understanding such effects can help policy makers upgrade current laws and hence improve traffic safety. To investigate the effects of highway safety laws on crashes across severities, multivariate models are needed to account for the interdependency in crash counts across severities. Based on the characteristics of the dependent variables, multivariate dynamic Tobit (MVDT) models are proposed to analyze crash counts that are aggregated at the state level. Lagged observed dependent variables are incorporated into the MVDT models to account for potential temporal correlation in crash data. The state highway safety law related factors are used as the explanatory variables, and socio-demographic and traffic factors are used as the control variables. Three models, a MVDT model with lagged observed dependent variables, a MVDT model with unobserved random variables, and a multivariate static Tobit (MVST) model, are developed and compared. The results show that among the investigated models, the MVDT models with lagged observed dependent variables have the best goodness-of-fit. The findings indicate that, compared to the MVST, the MVDT models have better explanatory power and prediction accuracy. The MVDT model with lagged observed variables can better handle the stochasticity and dependency in the temporal evolution of the crash counts, and its estimated values are closer to the observed values. The results show that more lives could be saved if law enforcement agencies make a sustained effort to educate the public about the importance of motorcyclists wearing helmets.
Motor vehicle crash-related deaths, injuries, and property damages could be reduced if states enact laws for stricter text messaging rules, higher speeding fines, older licensing age, and stronger graduated licensing provisions. Injury and PDO crashes would be significantly reduced with stricter laws prohibiting the use of hand-held communication devices and higher fines for drunk driving. Copyright © 2018 Elsevier Ltd. All rights reserved.
Angle-resolved Wigner time delay in atomic photoionization: The 4d subshell of free and confined Xe
NASA Astrophysics Data System (ADS)
Mandal, A.; Deshmukh, P. C.; Kheifets, A. S.; Dolmatov, V. K.; Manson, S. T.
2017-11-01
The angular dependence of photoemission time delay for the inner nd3/2 and nd5/2 subshells of free and confined Xe is studied in the dipole relativistic random phase approximation. A finite spherical annular well potential is used to model the confinement due to the C60 fullerene cage. Near cancellations in a variety of the dipole amplitudes, Cooper-like minima, are found. The effects of confinement on the angular dependence, primarily confinement resonances, are demonstrated and detailed.
ERIC Educational Resources Information Center
Rodriguez-Sanchez, Emiliano; Patino-Alonso, Maria C.; Mora-Simon, Sara; Gomez-Marcos, Manuel A.; Perez-Penaranda, Anibal; Losada-Baltar, Andres; Garcia-Ortiz, Luis
2013-01-01
Purpose: To assess, in the context of Primary Health Care (PHC), the effect of a psychological intervention in mental health among caregivers (CGs) of dependent relatives. Design and Methods: Randomized multicenter, controlled clinical trial. The 125 CGs included in the trial were receiving health care in PHC. Inclusion criteria: Identifying…
ERIC Educational Resources Information Center
Williamson, Brenda D.; Campbell-Whatley, Gloria D.; Lo, Ya-yu
2009-01-01
Group contingencies have the advantages of encouraging individual students to collectively feel responsible for appropriate and inappropriate classroom behaviors and have shown effectiveness in improving students' behavior. The purpose of this study was to investigate the effects of a random dependent group contingency on the on-task behaviors of…
Simulation of Cooling and Pressure Effects on Inflated Pahoehoe Lava Flows
NASA Technical Reports Server (NTRS)
Glaze, Lori S.; Baloga, Stephen M.
2016-01-01
Pahoehoe lobes are often emplaced by the advance of discrete toes accompanied by inflation of the lobe surface. Many random effects complicate modeling lobe emplacement, such as the location and orientation of toe breakouts, their dimensions, the mechanical strength of the crust, micro-topography, and a host of other factors. Models that treat the movement of lava parcels as a random walk have explained some of the overall features of emplacement. However, cooling of the surface and internal pressurization of the fluid interior have not been modeled. This work reports lobe simulations that explicitly incorporate 1) cooling of surface lava parcels, 2) the propensity of breakouts to occur at warmer margins that are mechanically weaker than cooler ones, and 3) the influence of internal pressurization associated with inflation. The surface temperature is interpreted as a surrogate for the mechanical strength of the crust at each location and is used to determine the probability of a lava parcel transfer from that location. When only surface temperature is considered, the morphology and dimensions of simulated lobes are indistinguishable from equiprobable simulations. However, inflation within a lobe transmits pressure to all connected fluid locations, with the warmer margins being most susceptible to breakouts and expansion. Simulations accounting for internal pressurization feature morphologies and dimensions that are dramatically different from the equiprobable and temperature-dependent models. Even on flat subsurfaces the pressure-dependent model produces elongate lobes with distinct directionality. Observables such as topographic profiles, aspect ratios, and maximum extents should be readily distinguishable in the field.
Evaluation of some random effects methodology applicable to bird ringing data
Burnham, K.P.; White, Gary C.
2002-01-01
Existing models for ring recovery and recapture data analysis treat temporal variations in annual survival probability (S) as fixed effects. Often there is no explainable structure to the temporal variation in S1, ..., Sk; random effects can then be a useful model: Si = E(S) + εi. Here, the temporal variation in survival probability is treated as random with average value E(εi²) = σ². This random effects model can now be fit in program MARK. Resultant inferences include point and interval estimation for process variation, σ², and estimation of E(S) and var(Ê(S)), where the latter includes a component for σ² as well as the traditional component for var(Ŝ|S). Furthermore, the random effects model leads to shrinkage estimates, S̃i, as improved (in mean square error) estimators of Si compared to the MLE, Ŝi, from the unrestricted time-effects model. Appropriate confidence intervals based on the S̃i are also provided. In addition, AIC has been generalized to random effects models. This paper presents results of a Monte Carlo evaluation of inference performance under the simple random effects model. Examined by simulation, under the simple one-group Cormack-Jolly-Seber (CJS) model, are issues such as bias of σ̂², confidence interval coverage on σ², coverage and mean square error comparisons for inference about Si based on shrinkage versus maximum likelihood estimators, and performance of AIC model selection over three models: Si ≡ S (no effects), Si = E(S) + εi (random effects), and S1, ..., Sk (fixed effects). For the cases simulated, the random effects methods performed well and were uniformly better than fixed effects MLE for the Si.
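A moment-based version of this shrinkage idea can be sketched as follows; it is an illustrative stand-in for the likelihood-based machinery in program MARK, with the sampling variance of each MLE treated as known:

```python
# Random effects shrinkage sketch: estimate the process variance sigma^2 by
# method of moments and shrink each annual survival MLE toward the mean.
import numpy as np

rng = np.random.default_rng(3)
k = 15
ES, sigma2 = 0.6, 0.01     # mean survival E(S) and process variance sigma^2
var_s = 0.004              # sampling variance of each MLE (taken as known)

S = ES + rng.normal(0.0, np.sqrt(sigma2), k)    # true annual survival S_i
S_mle = S + rng.normal(0.0, np.sqrt(var_s), k)  # MLEs

ES_hat = S_mle.mean()
sigma2_hat = max(0.0, S_mle.var(ddof=1) - var_s)  # moment estimate of sigma^2
shrink = sigma2_hat / (sigma2_hat + var_s)
S_shrunk = ES_hat + shrink * (S_mle - ES_hat)     # shrinkage estimates

mse_mle = ((S_mle - S) ** 2).mean()
mse_shrunk = ((S_shrunk - S) ** 2).mean()  # typically, not always, smaller
print(shrink, mse_mle, mse_shrunk)
```

The shrinkage factor is near 1 when process variation dominates sampling error (estimates left nearly alone) and near 0 when sampling error dominates (estimates pulled strongly toward the mean).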
Riaz, Muhammad; Lewis, Sarah; Coleman, Tim; Aveyard, Paul; West, Robert; Naughton, Felix; Ussher, Michael
2016-09-01
To examine the ability of different common measures of cigarette dependence to predict smoking cessation during pregnancy. Secondary analysis of data from a parallel-group randomized controlled trial of physical activity for smoking cessation. The outcomes were biochemically validated smoking abstinence at 4 weeks post-quit and at end-of-pregnancy. Women identified as smokers in antenatal clinics in 13 hospital trusts, predominantly in southern England, who were recruited to a smoking cessation trial. Of 789 pregnant smokers recruited, 784 were included in the analysis. Using random-effects logistic regression models, we analysed the effects on smoking cessation of baseline measures of cigarette dependence, including number of cigarettes smoked daily, Fagerström Test of Cigarette Dependence (FTCD) score, the two FTCD subscales of Heaviness of Smoking Index (HSI) and non-Heaviness of Smoking Index (non-HSI), expired carbon monoxide (CO) level, and urges to smoke (strength and frequency). Associations were adjusted for significant socio-demographic/health behaviour predictors and trial variables, and the area under the receiver operating characteristic (ROC) curve was used to determine the predictive ability of the model for each measure of dependence. All the dependence variables predicted abstinence at 4 weeks and at end-of-pregnancy. At 4 weeks, the adjusted odds ratio (OR) (95% confidence interval) for a unit standard deviation increase in FTCD was 0.59 (0.47-0.74), expired CO = 0.54 (0.41-0.71), number of cigarettes smoked per day = 0.65 (0.51-0.84) and frequency of urges to smoke = 0.79 (0.63-0.98); at end-of-pregnancy they were 0.60 (0.45-0.81), 0.55 (0.37-0.80), 0.70 (0.49-0.98) and 0.69 (0.51-0.94), respectively. HSI and non-HSI exhibited similar results to the full FTCD.
Four common measures of dependence, including number of cigarettes smoked per day, scores for Fagerström Test of Cigarette Dependence and frequency of urges and level of expired CO, all predicted smoking abstinence in the short term during pregnancy and at end-of-pregnancy with very similar predictive validity. © 2016 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
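The predictive-validity comparison can be sketched by computing the ROC area for one simulated dependence measure; the effect size is taken from the reported OR of 0.59 per SD, but all data here are simulated and the adjustment covariates are omitted:

```python
# ROC AUC for a standardised dependence score predicting abstinence,
# computed via the Mann-Whitney rank statistic. Illustrative simulation.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(5)
n = 784
ftcd = rng.normal(0.0, 1.0, n)          # standardised dependence score
# log-odds slope matching an OR of 0.59 per SD increase (from the abstract)
logit = -1.0 + np.log(0.59) * ftcd
abstinent = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# AUC = P(predictive score of an abstainer exceeds that of a continuing
# smoker); lower dependence predicts success, so the score is -FTCD.
score = -ftcd
r = rankdata(score)
n1 = int(abstinent.sum())
n0 = n - n1
auc = (r[abstinent].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)
print(round(auc, 3))
```

An OR of this size per SD typically translates into only a moderate AUC, which is consistent with the abstract's finding that the measures have similar, modest predictive validity.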
SAR Image Change Detection Based on Fuzzy Markov Random Field Model
NASA Astrophysics Data System (ADS)
Zhao, J.; Huang, G.; Zhao, Z.
2018-04-01
Most existing SAR image change detection algorithms consider only the single-pixel information of different images and ignore the spatial dependencies among image pixels, so the change detection results are susceptible to image noise and the detection effect is not ideal. A Markov Random Field (MRF) can make full use of the spatial dependence of image pixels and improve detection accuracy. When segmenting the difference image, different categories of regions have a high degree of similarity at their junctions, making it difficult to clearly distinguish the labels of pixels near the boundaries of the decision regions. In the traditional MRF method, each pixel is given a hard label during iteration; this hard decision causes a loss of information. This paper applies the combination of fuzzy theory and MRF to the change detection of SAR images. The experimental results show that the proposed method has a better detection effect than the traditional MRF method.
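The hard-decision MRF baseline that the paper improves on can be illustrated with a toy iterated-conditional-modes (ICM) segmentation of a synthetic difference image; the fuzzy extension itself is not implemented here, and all parameters are illustrative:

```python
# Toy MRF change detection via ICM: each pixel's label trades data fit
# against agreement with its 4-neighbourhood. Synthetic data only.
import numpy as np

rng = np.random.default_rng(2)
truth = np.zeros((32, 32), dtype=int)
truth[8:24, 8:24] = 1                            # "changed" region
img = truth + rng.normal(0.0, 0.8, truth.shape)  # noisy difference image

init = (img > 0.5).astype(int)                   # pixel-wise hard thresholding
labels = init.copy()
beta = 1.5                                       # MRF smoothness weight
means, var = np.array([0.0, 1.0]), 0.8**2

for _ in range(5):                               # ICM sweeps
    for i in range(32):
        for j in range(32):
            nb = [labels[x, y] for x, y in
                  ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                  if 0 <= x < 32 and 0 <= y < 32]
            cost = [(img[i, j] - means[c]) ** 2 / (2 * var)
                    - beta * sum(n == c for n in nb) for c in (0, 1)]
            labels[i, j] = int(np.argmin(cost))

acc_init = (init == truth).mean()
acc_mrf = (labels == truth).mean()
print(acc_init, acc_mrf)
```

The neighbourhood term suppresses isolated noise flips that pixel-wise thresholding leaves behind, which is exactly the spatial-dependence benefit the abstract describes; the fuzzy variant would replace the hard `argmin` with a membership update.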
Calibration of Predictor Models Using Multiple Validation Experiments
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This paper presents a framework for calibrating computational models using data from several, possibly dissimilar, validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value; instead it characterizes the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain.
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, correct type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid the serious variance under-estimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
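The cluster-bootstrap idea can be sketched with a simple cluster-level mean standing in for the Cox coefficient; resampling whole clusters, rather than individuals, preserves the within-cluster correlation in each resampled dataset:

```python
# Cluster bootstrap sketch: resample clusters with replacement and recompute
# the estimate, so the SE reflects within-cluster correlation. A mean stands
# in for the Cox regression coefficient of the paper.
import numpy as np

rng = np.random.default_rng(11)
n_clusters, m = 30, 8
u = rng.normal(0.0, 1.0, n_clusters)                 # cluster random effects
y = u[:, None] + rng.normal(0.0, 1.0, (n_clusters, m))

boot = []
for _ in range(2000):
    idx = rng.integers(0, n_clusters, n_clusters)    # resample clusters only
    boot.append(y[idx].mean())
se_cluster = float(np.std(boot, ddof=1))

# The naive iid SE ignores the within-cluster correlation
se_naive = y.std(ddof=1) / np.sqrt(y.size)
print(se_cluster, se_naive)
```

The gap between the two SEs illustrates the variance under-estimation the abstract warns about when clustering is ignored.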
Effects of noise on a computational model for disease states of mood disorders
NASA Astrophysics Data System (ADS)
Tobias Huber, Martin; Krieg, Jürgen-Christian; Braun, Hans Albert; Moss, Frank
2000-03-01
Nonlinear dynamics are currently proposed to explain the progressive course of recurrent mood disorders, which starts with isolated episodes and ends with accelerated, irregular ("chaotic") mood fluctuations. Such a low-dimensional disease model is attractive because of its principal accordance with biological disease models, i.e., the kindling and biological rhythms models. However, most natural systems are nonlinear and noisy, and several studies in the neuro- and physical sciences have demonstrated interesting cooperative behaviors arising from interacting random and deterministic dynamics. Here, we consider the effects of noise on a recent neurodynamical model for the time course of affective disorders (Huber et al.: Biological Psychiatry 1999;46:256-262). We describe noise effects on temporal patterns and mean episode frequencies of various in computo disease states. Our simulations demonstrate that noise can cause unstructured randomness or can maximize periodic order. The frequency of episode occurrence can increase with noise, but it can also remain unaffected or even decrease. We show further that noise can make bifurcations visible before they would normally occur under deterministic conditions, and we quantify this behavior with a recently developed statistical method. All these effects depend critically on both the dynamic state and the noise intensity. Implications for the neurobiology and course of mood disorders are discussed.
Photoinduced random molecular reorientation by nonradiative energy relaxation: An experimental test
NASA Astrophysics Data System (ADS)
Manzo, C.; Paparo, D.; Marrucci, L.
2004-11-01
By measuring the time-resolved fluorescence depolarization as a function of the light excitation wavelength, we address the question of a possible photoinduced orientational randomization of amino-anthraquinone dyes in liquid solutions. Within the experimental uncertainties, we find no significant dependence of either the initial molecular anisotropy or the subsequent rotational diffusion dynamics on the photon energy. This indicates that this effect, if present, must be very small. A simple model of photoinduced local heating and correspondingly enhanced rotational diffusion is in accordance with this result. This null result rules out some recent proposals that photoinduced local heating may contribute significantly to molecular reorientation effects in different materials. A small but statistically significant effect of photon energy is instead found in the excited-state lifetime of the dye.
A mixture model for bovine abortion and foetal survival.
Hanson, Timothy; Bedrick, Edward J; Johnson, Wesley O; Thurmond, Mark C
2003-05-30
The effect of spontaneous abortion on the dairy industry is substantial, costing the industry on the order of US$200 million per year in California alone. We analyse data from a cohort study of nine dairy herds in Central California. A key feature of the analysis is the observation that only a relatively small proportion of cows will abort (around 10-15 per cent), so that it is inappropriate to analyse the time-to-abortion (TTA) data as if it were standard censored survival data, with cows that fail to abort by the end of the study treated as censored observations. We thus broaden the scope to consider the analysis of foetal lifetime distribution (FLD) data for the cows, with the dual goals of characterizing the effects of various risk factors on (i) the likelihood of abortion and, conditional on abortion status, on (ii) the risk of early versus late abortion. A single model is developed to accomplish both goals, with two sets of specific herd effects modelled as random effects. Because multimodal foetal hazard functions are expected for the TTA data, both a parametric mixture model and a non-parametric model are developed. Furthermore, the two sets of analyses are linked because of anticipated dependence between the random herd effects. All modelling and inferences are accomplished using modern Bayesian methods. Copyright 2003 John Wiley & Sons, Ltd.
Effective electromagnetic properties of microheterogeneous materials with surface phenomena
NASA Astrophysics Data System (ADS)
Levin, Valery; Markov, Mikhail; Mousatov, Aleksandr; Kazatchenko, Elena; Pervago, Evgeny
2017-10-01
In this paper, we present an approach to calculate the complex dielectric permittivity of a micro-heterogeneous medium composed of non-conductive solid inclusions embedded in a conductive liquid continuous host. To take the surface effects into account, we approximate the inclusion by a layered ellipsoid consisting of a dielectric core and an infinitesimally thin outer shell corresponding to an electrical double layer (EDL). To predict the effective complex dielectric permittivity of materials with a high concentration of inclusions, we have modified the Effective Field Method (EFM) for layered ellipsoidal particles with complex electrical properties. We present the results of complex permittivity calculations for composites with randomly and parallel oriented ellipsoidal inclusions. To analyze the influence of surface polarization, we have performed modeling in a wide frequency range for different existing physico-chemical models of the electrical double layer. The results obtained show that the tensor of effective complex permittivity of a micro-heterogeneous medium with surface effects has complicated dependences on the component electrical properties, the spatial material texture, and the inclusion shape (ellipsoid aspect ratio) and size. The dispersion of the dielectric permittivity corresponds to the frequency dependence for an individual inclusion of a given size, and does not depend on the inclusion concentration.
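For context, the simplest effective-medium estimate of this kind is the classical Maxwell Garnett mixing rule for spherical inclusions; the sketch below only illustrates how complex permittivities combine at a given volume fraction, and is far simpler than the paper's Effective Field Method for layered ellipsoids with surface shells:

```python
def maxwell_garnett(eps_host, eps_incl, f):
    """Classical Maxwell Garnett effective permittivity for spherical
    inclusions of permittivity eps_incl at volume fraction f in a host
    of permittivity eps_host. Accepts complex permittivities.
    A generic baseline, not the paper's EFM for layered ellipsoids."""
    # Polarizability factor of a sphere relative to the host
    beta = (eps_incl - eps_host) / (eps_incl + 2 * eps_host)
    return eps_host * (1 + 3 * f * beta / (1 - f * beta))
```

At f = 0 the formula returns the host permittivity, and for identical phases it is exact; between these limits it interpolates monotonically for lossless media.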
Modeling Achievement Trajectories when Attrition Is Informative
ERIC Educational Resources Information Center
Feldman, Betsy J.; Rabe-Hesketh, Sophia
2012-01-01
In longitudinal education studies, assuming that dropout and missing data occur completely at random is often unrealistic. When the probability of dropout depends on covariates and observed responses (called "missing at random" [MAR]), or on values of responses that are missing (called "informative" or "not missing at random" [NMAR]),…
Random walkers with extreme value memory: modelling the peak-end rule
NASA Astrophysics Data System (ADS)
Harris, Rosemary J.
2015-05-01
Motivated by the psychological literature on the ‘peak-end rule’ for remembered experience, we perform an analysis within a random walk framework of a discrete choice model where agents’ future choices depend on the peak memory of their past experiences. In particular, we use this approach to investigate whether increased noise/disruption always leads to more switching between decisions. Here extreme value theory illuminates different classes of dynamics indicating that the long-time behaviour is dependent on the scale used for reflection; this could have implications, for example, in questionnaire design.
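A minimal caricature of the peak-memory idea (the switching rule, noise model, and parameters here are illustrative inventions, not the paper's discrete choice model): a walker keeps its current choice until a noisy experience falls below the best experience it remembers, then switches.

```python
import random

def peak_memory_walk(steps, noise=0.5, seed=0):
    """Toy agent holding choice +1 or -1. It switches whenever the noisy
    current experience falls below the remembered peak experience.
    Returns the number of switches over the run."""
    rng = random.Random(seed)
    choice, peak = 1, 0.0
    switches = 0
    for _ in range(steps):
        experience = choice * rng.gauss(0.0, noise)
        if experience < peak:   # worse than the remembered peak -> switch
            choice = -choice
            switches += 1
        peak = max(peak, experience)  # extreme value memory only
    return switches
```

Because the remembered peak can only grow, switching becomes more frequent over time, a crude analogue of how extreme-value memory shapes the long-time dynamics.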
NASA Astrophysics Data System (ADS)
Nagar, Lokesh; Dutta, Pankaj; Jain, Karuna
2014-05-01
In the present-day business scenario, instant changes in market demand, different sources of materials and manufacturing technologies force many companies to change their supply chain planning in order to tackle real-world uncertainty. The purpose of this paper is to develop a multi-objective two-stage stochastic programming supply chain model that incorporates imprecise production rates and supplier capacity under scenario-dependent fuzzy random demand associated with new product supply chains. The objectives are to maximise the supply chain profit, achieve the desired service level and minimise financial risk. The proposed model allows simultaneous determination of the optimum supply chain design, procurement and production quantities across the different plants, and trade-offs between inventory and transportation modes for both inbound and outbound logistics. Analogous to chance constraints, we have used the possibility measure to quantify the demand uncertainties, and the model is solved using a fuzzy linear programming approach. An illustration is presented to demonstrate the effectiveness of the proposed model. Sensitivity analysis is performed for maximisation of the supply chain profit with respect to different confidence levels of service, risk and possibility measure. It is found that when one considers the service level and risk as robustness measures, the variability in profit reduces.
NASA Astrophysics Data System (ADS)
Wang, Lan; De Lucia, Gabriella; Weinmann, Simone M.
2013-05-01
The empirical traditional halo occupation distribution (HOD) model of Wang et al. fits, by construction, both the stellar mass function and the correlation function of galaxies in the local Universe. In contrast, the semi-analytical models of De Lucia & Blaizot (hereafter DLB07) and Guo et al. (hereafter Guo11), built on the same dark matter halo merger trees as the empirical model, still have difficulties in reproducing these observational data simultaneously. We compare the relations between the stellar mass of galaxies and their host halo mass in the three models, and find that they differ. When the relations are rescaled to have the same median values and the same scatter as in Wang et al., the rescaled DLB07 model can fit both the measured galaxy stellar mass function and the correlation function measured in different galaxy stellar mass bins. In contrast, the rescaled Guo11 model still overpredicts the clustering of low-mass galaxies. This indicates that the details of how galaxies populate the scatter in the stellar mass-halo mass relation play an important role in determining the correlation functions of galaxies. While the stellar mass of galaxies in the Wang et al. model depends only on halo mass and is randomly distributed within the scatter, galaxy stellar mass also depends on the halo formation time in the semi-analytical models. At fixed infall mass, galaxies that lie above the median stellar mass-halo mass relation reside in haloes that formed earlier, while galaxies that lie below the median relation reside in haloes that formed later. This effect is much stronger in Guo11 than in DLB07, which explains the overclustering of low-mass galaxies in Guo11. The assembly bias in the Guo11 model might thus be overly strong.
Nevertheless, should a significant assembly bias indeed exist in the real Universe, caution is needed when applying current HOD and abundance matching models that assume random scatter in the relation between stellar and halo mass.
Lewis, Daniel F.; Winhusen, Theresa
2016-01-01
Introduction: Smoking is highly prevalent in substance dependence, but smoking-cessation treatment (SCT) is more challenging in this population. To increase the success of smoking-cessation services, it is important to understand potential therapeutic targets like nicotine craving, which has a meaningful but highly variable relationship with smoking outcomes. This study characterized the presence, magnitude, and specificity of nicotine craving as a mediator of the relationship between SCT and smoking abstinence in the context of stimulant-dependence treatment. Methods: This study was a secondary analysis of a randomized, 10-week trial conducted at 12 outpatient substance use disorder (SUD) treatment programs. Adults with cocaine and/or methamphetamine dependence (N = 538) were randomized to SUD treatment as usual (TAU) or TAU+SCT. Participants reported nicotine craving, nicotine withdrawal symptoms, and substance use in the week following a uniform quit attempt in the TAU+SCT group, and self-reported smoking 7-day point-prevalence abstinence (verified by carbon monoxide) at end of treatment. Results: Bootstrapped regression models indicated that, as expected, nicotine craving following a quit attempt mediated the relationship between SCT and end-of-treatment smoking point-prevalence abstinence (mediation effect = 0.09, 95% CI = 0.04 to 0.14, P < .05, 14% of total effect). Nicotine withdrawal symptoms and substance use were not significant mediators (Ps > .05, <1% of total effect). This pattern held for separate examinations of cocaine and methamphetamine dependence. Conclusions: Nicotine craving accounts for a small but meaningful portion of the relationship between smoking-cessation treatment and smoking abstinence during SUD treatment. Nicotine craving following a quit attempt may be a useful therapeutic target for increasing the effectiveness of smoking-cessation treatment in substance dependence. PMID:26048168
Relationships between nonlinear normal modes and response to random inputs
NASA Astrophysics Data System (ADS)
Schoneman, Joseph D.; Allen, Matthew S.; Kuether, Robert J.
2017-02-01
The ability to model nonlinear structures subject to random excitation is of key importance in designing hypersonic aircraft and other advanced aerospace vehicles. When a structure is linear, superposition can be used to construct its response to a known spectrum in terms of its linear modes. Superposition does not hold for a nonlinear system, but several works have shown that a system's dynamics can still be understood qualitatively in terms of its nonlinear normal modes (NNMs). This work investigates the connection between a structure's undamped nonlinear normal modes and the spectrum of its response to high amplitude random forcing. Two examples are investigated: a spring-mass system and a clamped-clamped beam modeled within a geometrically nonlinear finite element package. In both cases, an intimate connection is observed between the smeared peaks in the response spectrum and the frequency-energy dependence of the nonlinear normal modes. In order to understand the role of coupling between the underlying linear modes, reduced order models with and without modal coupling terms are used to separate the effect of each NNM's backbone from the nonlinear couplings that give rise to internal resonances. In the cases shown here, uncoupled, single-degree-of-freedom nonlinear models are found to predict major features in the response with reasonable accuracy; a highly inexpensive approximation such as this could be useful in design and optimization studies. More importantly, the results show that a reduced order model can be expected to give accurate results only if it is also capable of accurately predicting the frequency-energy dependence of the nonlinear modes that are excited.
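The frequency-energy dependence of a single NNM backbone can be illustrated with the textbook Duffing oscillator x'' + ω0²x + γx³ = 0, whose first-order harmonic-balance approximation gives ω(a) = sqrt(ω0² + (3/4)γa²); this is a generic single-mode example, not the paper's finite element models:

```python
import math

def duffing_backbone(omega0, gamma, amplitudes):
    """First-order backbone curve of a Duffing oscillator: the effective
    natural frequency at oscillation amplitude a is
    omega(a) = sqrt(omega0^2 + 0.75 * gamma * a^2),
    so a hardening spring (gamma > 0) stiffens with energy."""
    return [math.sqrt(omega0 ** 2 + 0.75 * gamma * a * a) for a in amplitudes]
```

For γ = 0 the backbone is flat at the linear natural frequency; for γ > 0 it bends upward with amplitude, which is the frequency-energy dependence that smears the response peaks under high-amplitude random forcing.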
Modeling of Internet Influence on Group Emotion
NASA Astrophysics Data System (ADS)
Czaplicka, Agnieszka; Hołyst, Janusz A.
Long-range interactions are introduced to a two-dimensional model of agents with time-dependent internal variables e_i = 0, ±1 corresponding to the valencies of agent emotions. Effects of spontaneous emotion emergence and emotional relaxation processes are taken into account. The valence of agent i depends on the valencies of its four nearest neighbors, but it is also influenced by long-range interactions corresponding to social relations developed, for example, by Internet contacts with a randomly chosen community. Two types of such interactions are considered. In the first model the community's emotional influence depends only on the sign of its instantaneous emotion. When the coupling parameter approaches a critical value a phase transition takes place and, as a result, for larger coupling constants the mean group emotion of all agents is nonzero over long time periods. In the second model the community influence is proportional to the magnitude of the community's average emotion. The ordered emotional phase is observed here only for a narrow set of system parameters.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 2
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation will be within the predicted ranges, is bounded rigorously.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 1
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation will be within the predicted ranges, can be bounded tightly and rigorously.
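The "tightest prediction" notion can be caricatured in a few lines: fit a plain least-squares line, then take the smallest symmetric band around it that covers every observation. This is only a sketch of the idea; the papers' RPMs instead prescribe parameter moments and solve convex optimization problems with rigorous reliability bounds.

```python
def tightest_linear_band(xs, ys):
    """Least-squares line y = a + b*x plus the smallest half-width w such
    that every observation lies within [a + b*x - w, a + b*x + w].
    Requires at least two distinct x values."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    # Tightest symmetric band: the largest absolute residual
    w = max(abs(y - (a + b * x)) for x, y in zip(xs, ys))
    return a, b, w
```

By construction the band covers all data, mirroring the "all observations within the predicted range" optimality criterion in its simplest form.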
Analyzing repeated measures semi-continuous data, with application to an alcohol dependence study.
Liu, Lei; Strawderman, Robert L; Johnson, Bankole A; O'Quigley, John M
2016-02-01
Two-part random effects models (Olsen and Schafer,(1) Tooze et al.(2)) have been applied to repeated measures of semi-continuous data, characterized by a mixture of a substantial proportion of zero values and a skewed distribution of positive values. In the original formulation of this model, the natural logarithm of the positive values is assumed to follow a normal distribution with a constant variance parameter. In this article, we review and consider three extensions of this model, allowing the positive values to follow (a) a generalized gamma distribution, (b) a log-skew-normal distribution, and (c) a normal distribution after the Box-Cox transformation. We allow for the possibility of heteroscedasticity. Maximum likelihood estimation is shown to be conveniently implemented in SAS Proc NLMIXED. The performance of the methods is compared through applications to daily drinking records in a secondary data analysis from a randomized controlled trial of topiramate for alcohol dependence treatment. We find that all three models provide a significantly better fit than the log-normal model, and there exists strong evidence for heteroscedasticity. We also compare the three models by the likelihood ratio tests for non-nested hypotheses (Vuong(3)). The results suggest that the generalized gamma distribution provides the best fit, though no statistically significant differences are found in pairwise model comparisons. © The Author(s) 2012.
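The core of any two-part model is a point mass at zero combined with a density for the positive values; a minimal homoscedastic log-normal version (not the generalized gamma, log-skew-normal, or Box-Cox extensions reviewed in the paper, and without random effects) looks like:

```python
import math

def two_part_loglik(y, p_zero, mu, sigma):
    """Log-likelihood of one observation under a simple two-part model:
    Y = 0 with probability p_zero; otherwise log(Y) ~ Normal(mu, sigma^2).
    Parameters are fixed scalars here; in the paper they carry covariate
    effects and subject-level random effects."""
    if y == 0:
        return math.log(p_zero)
    z = (math.log(y) - mu) / sigma
    # log of the log-normal density: phi(z) / (y * sigma)
    log_density = -0.5 * z * z - math.log(sigma * y * math.sqrt(2 * math.pi))
    return math.log(1 - p_zero) + log_density
```

Summing this over repeated measures, with mu and sigma made functions of covariates, gives the kind of likelihood that NLMIXED-style software maximizes.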
Direct Simulation of Extinction in a Slab of Spherical Particles
NASA Technical Reports Server (NTRS)
Mackowski, D.W.; Mishchenko, Michael I.
2013-01-01
The exact multiple sphere superposition method is used to calculate the coherent and incoherent contributions to the ensemble-averaged electric field amplitude and Poynting vector in systems of randomly positioned nonabsorbing spherical particles. The target systems consist of cylindrical volumes, with radius several times larger than length, containing spheres with positional configurations generated by a Monte Carlo sampling method. Spatially dependent values for coherent electric field amplitude, coherent energy flux, and diffuse energy flux, are calculated by averaging of exact local field and flux values over multiple configurations and over spatially independent directions for fixed target geometry, sphere properties, and sphere volume fraction. Our results reveal exponential attenuation of the coherent field and the coherent energy flux inside the particulate layer and thereby further corroborate the general methodology of the microphysical radiative transfer theory. An effective medium model based on plane wave transmission and reflection by a plane layer is used to model the dependence of the coherent electric field on particle packing density. The effective attenuation coefficient of the random medium, computed from the direct simulations, is found to agree closely with effective medium theories and with measurements. In addition, the simulation results reveal the presence of a counter-propagating component to the coherent field, which arises due to the internal reflection of the main coherent field component by the target boundary. The characteristics of the diffuse flux are compared to, and found to be consistent with, a model based on the diffusion approximation of the radiative transfer theory.
Roughness Effects on Fretting Fatigue
NASA Astrophysics Data System (ADS)
Yue, Tongyan; Abdel Wahab, Magd
2017-05-01
Fretting is a small oscillatory relative motion between two normally loaded contact surfaces. It may cause fretting fatigue, fretting wear and/or fretting corrosion damage, depending on the fretting couple and working conditions. Fretting fatigue usually occurs under partial slip conditions and results in catastrophic failure at stress levels below the fatigue limit of the material. Many parameters may affect fretting behaviour, including the applied normal load and displacement, material properties, roughness of the contact surfaces, frequency, etc. Because fretting damage originates at the contact, the effect of rough contact surfaces on fretting damage has been studied by many researchers. Experimental work on this topic usually focuses on the effects of surface finishing treatments and of random surface roughness, with the aim of increasing fretting fatigue life, whereas most numerical models of roughness are based on random surfaces. This paper reviews both experimental and numerical methodologies for studying rough surface effects on fretting fatigue.
Dynamic simulation of crime perpetration and reporting to examine community intervention strategies.
Yonas, Michael A; Burke, Jessica G; Brown, Shawn T; Borrebach, Jeffrey D; Garland, Richard; Burke, Donald S; Grefenstette, John J
2013-10-01
To develop a conceptual computational agent-based model (ABM) to explore community-wide versus spatially focused crime reporting interventions to reduce community crime perpetrated by youth. Agents within the model represent individual residents and interact on a two-dimensional grid representing an abstract nonempirically grounded community setting. Juvenile agents are assigned initial random probabilities of perpetrating a crime and adults are assigned random probabilities of witnessing and reporting crimes. The agents' behavioral probabilities modify depending on the individual's experience with criminal behavior and punishment, and exposure to community crime interventions. Cost-effectiveness analyses assessed the impact of activating different percentages of adults to increase reporting and reduce community crime activity. Community-wide interventions were compared with spatially focused interventions, in which activated adults were focused in areas of highest crime prevalence. The ABM suggests that both community-wide and spatially focused interventions can be effective in reducing overall offenses, but their relative effectiveness may depend on the intensity and cost of the interventions. Although spatially focused intervention yielded localized reductions in crimes, such interventions were shown to move crime to nearby communities. Community-wide interventions can achieve larger reductions in overall community crime offenses than spatially focused interventions, as long as sufficient resources are available. The ABM demonstrates that community-wide and spatially focused crime strategies produce unique intervention dynamics influencing juvenile crime behaviors through the decisions and actions of community adults. 
It shows how such models might be used to investigate community-supported crime intervention programs by integrating community input and expertise, and it provides a simulated setting for assessing cost comparisons and the sustainability of intervention effects.
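A non-spatial caricature of the reporting-deterrence loop described above (all rates and the deterrence rule are invented for illustration; the paper's ABM uses a two-dimensional grid and behavioral rules shaped by community input):

```python
import random

def crime_abm(n_juveniles=100, n_adults=100, activated=0.2,
              steps=100, seed=3):
    """Minimal ABM sketch: each juvenile carries a crime probability
    that halves when a crime is reported; a fraction `activated` of
    adults report witnessed crimes with high probability (0.8) versus
    a low baseline (0.2). Returns total offenses over the run."""
    rng = random.Random(seed)
    p_crime = [rng.uniform(0.05, 0.3) for _ in range(n_juveniles)]
    p_report = [0.8 if rng.random() < activated else 0.2
                for _ in range(n_adults)]
    offenses = 0
    for _ in range(steps):
        for j in range(n_juveniles):
            if rng.random() < p_crime[j]:
                offenses += 1
                witness = rng.randrange(n_adults)  # random adult witnesses
                if rng.random() < p_report[witness]:
                    p_crime[j] *= 0.5  # deterrence after being reported
    return offenses
```

Even this stripped-down version reproduces the qualitative finding that activating more adults to report reduces total offenses through the juveniles' adaptive behavior.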
A Spatial Poisson Hurdle Model for Exploring Geographic Variation in Emergency Department Visits
Neelon, Brian; Ghosh, Pulak; Loebs, Patrick F.
2012-01-01
We develop a spatial Poisson hurdle model to explore geographic variation in emergency department (ED) visits while accounting for zero inflation. The model consists of two components: a Bernoulli component that models the probability of any ED use (i.e., at least one ED visit per year), and a truncated Poisson component that models the number of ED visits given use. Together, these components address both the abundance of zeros and the right-skewed nature of the nonzero counts. The model has a hierarchical structure that incorporates patient- and area-level covariates, as well as spatially correlated random effects for each areal unit. Because regions with high rates of ED use are likely to have high expected counts among users, we model the spatial random effects via a bivariate conditionally autoregressive (CAR) prior, which introduces dependence between the components and provides spatial smoothing and sharing of information across neighboring regions. Using a simulation study, we show that modeling the between-component correlation reduces bias in parameter estimates. We adopt a Bayesian estimation approach, and the model can be fit using standard Bayesian software. We apply the model to a study of patient and neighborhood factors influencing emergency department use in Durham County, North Carolina. PMID:23543242
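A sketch of the hurdle probability mass function with fixed scalar parameters (in the paper, the use probability and the Poisson rate would be regression functions of patient- and area-level covariates with spatially correlated CAR random effects):

```python
import math

def hurdle_pmf(k, p_use, lam):
    """P(Y = k) under a Poisson hurdle model: Bernoulli(p_use) for any
    use, zero-truncated Poisson(lam) for the count given use."""
    if k == 0:
        return 1.0 - p_use
    poisson_k = math.exp(-lam) * lam ** k / math.factorial(k)
    truncation = 1.0 - math.exp(-lam)  # P(Poisson(lam) > 0)
    return p_use * poisson_k / truncation
```

The two components are variation-independent: p_use controls the zero mass while lam controls the shape of the positive counts, which is what lets the hurdle model absorb both excess zeros and right skew.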
Continuous-Time Random Walk with multi-step memory: an application to market dynamics
NASA Astrophysics Data System (ADS)
Gubiec, Tomasz; Kutner, Ryszard
2017-11-01
An extended version of the Continuous-Time Random Walk (CTRW) model with memory is developed herein. The memory involves dependence between an arbitrary number of successive jumps of the process, while waiting times between jumps are treated as i.i.d. random variables. This dependence was established by analyzing empirical histograms for the stochastic process of a single share price on a market at the high-frequency time scale. It was then justified theoretically by considering the bid-ask bounce mechanism, which contains a delay characteristic of any double-auction market. Our model turns out to be exactly analytically solvable, which enables a direct comparison of its predictions with their empirical counterparts, for instance with the empirical velocity autocorrelation function. The present research thus significantly extends the capabilities of the CTRW formalism. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
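A toy simulation of one-step jump memory of the bid-ask-bounce type (the anti-correlation parameter eps and the reversal rule are illustrative; the paper's model covers dependence across an arbitrary number of jumps and is solved analytically rather than simulated):

```python
import random

def ctrw_with_memory(n_jumps, eps=0.9, seed=1):
    """CTRW sketch: each jump is +1 or -1 and reverses the previous jump
    with probability (1 + eps) / 2, mimicking bid-ask bounce, so the
    lag-1 jump autocorrelation is approximately -eps. Waiting times
    between jumps are i.i.d. exponential with unit rate."""
    rng = random.Random(seed)
    jumps, times = [], []
    prev = rng.choice([-1, 1])
    for _ in range(n_jumps):
        reverse = rng.random() < (1 + eps) / 2
        step = -prev if reverse else prev
        jumps.append(step)
        times.append(rng.expovariate(1.0))
        prev = step
    return jumps, times
```

The strongly negative lag-1 autocorrelation of the simulated jumps is the simplest empirical signature the model family is built to capture.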
Do diabetes group visits lead to lower medical care charges?
Clancy, Dawn E; Dismuke, Clara E; Magruder, Kathryn Marley; Simpson, Kit N; Bradford, David
2008-01-01
To evaluate whether attending diabetes group visits (GVs) leads to lower medical care charges for inadequately insured patients with type 2 diabetes mellitus (DM). Randomized controlled clinical trial. Data were abstracted from financial records for 186 patients with uncontrolled type 2 DM randomized to receive care in GVs or usual care for 12 months. Mann-Whitney tests for differences of means for outpatient visits (primary and specialty care), emergency department (ED) visits, and inpatient stays were performed. Separate charge models were developed for primary and specialty outpatient visits. Because GV adherence is potentially dependent on unobserved patient characteristics, treatment effect models of outpatient charges and specialty care visits were estimated using maximum likelihood methods. Mann-Whitney test results indicated that GV patients had reduced ED and total charges but more outpatient charges than usual care patients. Ordinary least squares estimations confirmed that GVs increased outpatient visit charges; however, controlling for endogeneity by estimating a treatment effect model of outpatient visit charges showed that GVs statistically significantly reduced outpatient charges (P <.001). Estimation of a separate treatment effect model of specialty care visits confirmed that GV effects on outpatient visit charges occurred via a reduction in specialty care visits. After controlling for endogeneity via estimation of a treatment effect model, GVs statistically significantly reduced outpatient visit charges. Estimation of a separate treatment effect model of specialty care visits indicated that GVs likely substitute for more expensive specialty care visits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhan, Yiduo; Zheng, Qipeng P.; Wang, Jianhui
Power generation expansion planning needs to deal with future uncertainties carefully, given that the invested generation assets will be in operation for a long time. Many stochastic programming models have been proposed to tackle this challenge. However, most previous works assume predetermined future uncertainties (i.e., fixed random outcomes with given probabilities). In several recent studies of generation assets' planning (e.g., thermal versus renewable), new findings show that the investment decisions could affect the future uncertainties as well. To this end, this paper proposes a multistage decision-dependent stochastic optimization model for long-term large-scale generation expansion planning, where large amounts of wind power are involved. In the decision-dependent model, the future uncertainties are not only affecting but also affected by the current decisions. In particular, the probability distribution function is determined by not only input parameters but also decision variables. To deal with the nonlinear constraints in our model, a quasi-exact solution approach is then introduced to reformulate the multistage stochastic investment model to a mixed-integer linear programming model. The wind penetration, investment decisions, and the optimality of the decision-dependent model are evaluated in a series of multistage case studies. The results show that the proposed decision-dependent model provides effective optimization solutions for long-term generation expansion planning.
Failure and recovery in dynamical networks.
Böttcher, L; Luković, M; Nagler, J; Havlin, S; Herrmann, H J
2017-02-03
Failure, damage spread and recovery crucially underlie many spatially embedded networked systems ranging from transportation structures to the human body. Here we study the interplay between spontaneous damage, induced failure and recovery in both embedded and non-embedded networks. In our model the network's components follow three realistic processes that capture these features: (i) spontaneous failure of a component independent of the neighborhood (internal failure), (ii) failure induced by failed neighboring nodes (external failure) and (iii) spontaneous recovery of a component. We identify a metastable domain in the global network phase diagram spanned by the model's control parameters where dramatic hysteresis effects and random switching between two coexisting states are observed. This dynamics depends on the characteristic link length of the embedded system. For the Euclidean lattice in particular, hysteresis and switching only occur in an extremely narrow region of the parameter space compared to random networks. We develop a unifying theory which links the dynamics of our model to contact processes. Our unifying framework may help to better understand controllability in spatially embedded and random networks where spontaneous recovery of components can mitigate spontaneous failure and damage spread in dynamical networks.
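A minimal sketch of the three processes on a random graph with directed neighborhoods (the graph construction, threshold, and rates are illustrative choices, not the paper's calibrated model):

```python
import random

def simulate_recovery_network(n=200, k=4, p_int=0.01, p_ext=0.5,
                              p_rec=0.1, threshold=2, steps=200, seed=2):
    """Synchronous dynamics with the three processes from the abstract:
    (i) internal failure with probability p_int, (ii) external failure
    with probability p_ext when at least `threshold` neighbors have
    failed, (iii) recovery of a failed node with probability p_rec.
    Each node watches k randomly chosen neighbors. Returns the final
    fraction of failed nodes."""
    rng = random.Random(seed)
    neighbors = [rng.sample([j for j in range(n) if j != i], k)
                 for i in range(n)]
    failed = [False] * n
    for _ in range(steps):
        nxt = failed[:]
        for i in range(n):
            if failed[i]:
                if rng.random() < p_rec:
                    nxt[i] = False          # spontaneous recovery
            else:
                bad = sum(failed[j] for j in neighbors[i])
                if rng.random() < p_int or (
                        bad >= threshold and rng.random() < p_ext):
                    nxt[i] = True           # internal or induced failure
        failed = nxt
    return sum(failed) / n
```

Sweeping p_int and p_ext in such a sketch is the simplest way to see the two coexisting (mostly-failed versus mostly-active) states described in the abstract.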
Cummins, Sharon; Zhu, Shu-Hong; Gamst, Anthony; Kirby, Carrie; Brandstein, Kendra; Klonoff-Cohen, Hillary; Chaplin, Edward; Morris, Timothy; Seymann, Gregory; Lee, Joshua
2012-08-01
Hospitalized smokers often quit smoking, voluntarily or involuntarily; most relapse soon after discharge. Extended follow-up counseling can help prevent relapse. However, it is difficult for hospitals to provide follow-up and smokers rarely leave the hospital with quitting aids (for example, nicotine patches). This study aims to test a practical model in which hospitals work with a state cessation quitline. Hospital staff briefly intervene with smokers at bedside and refer them to the quitline. Depending on assigned condition, smokers may receive nicotine patches at discharge or extended quitline telephone counseling post-discharge. This project establishes a practical model that lends itself to broader dissemination, while testing the effectiveness of the interventions in a rigorous randomized trial. This randomized clinical trial (N = 1,640) tests the effect of two interventions on long-term quit rates of hospitalized smokers in a 2 x 2 factorial design. The interventions are (1) nicotine patches (eight-week, step down program) dispensed at discharge and (2) proactive telephone counseling provided by the state quitline after discharge. Subjects are randomly assigned into: usual care, nicotine patches, telephone counseling, or both patches and counseling. It is hypothesized that patches and counseling have independent effects and their combined effect is greater than either alone. The primary outcome measure is thirty-day abstinence at six months; a secondary outcome is biochemically validated smoking status. Cost-effectiveness analysis is conducted to compare each intervention condition (patch alone, counseling alone, and combined interventions) against the usual care condition. Further, this study examines whether smokers' medical diagnosis is a moderator of treatment effect. Generalized linear (binomial) mixed models will be used to study the effect of treatment on abstinence rates. Clustering is accounted for with hospital-specific random effects. 
If this model is effective, quitlines across the U.S. could work with interested hospitals to set up similar systems. Hospital accreditation standards related to tobacco cessation performance measures require follow-up after discharge and provide additional incentive for hospitals to work with quitlines. The ubiquity of quitlines, combined with the consistency of quitline counseling delivery as centralized state operations, make this partnership attractive. Smoking cessation in hospitalized smokers NCT01289275. Date of registration February 1, 2011; date of first patient August 3, 2011.
Entropy of level-cut random Gaussian structures at different volume fractions
NASA Astrophysics Data System (ADS)
Marčelja, Stjepan
2017-10-01
Cutting random Gaussian fields at a given level can create a variety of morphologically different two- or several-phase structures that have often been used to describe physical systems. The entropy of such structures depends on the covariance function of the generating Gaussian random field, which in turn depends on its spectral density. But the entropy of level-cut structures also depends on the volume fractions of the different phases, which are determined by the choice of the cutting level. This dependence has been neglected in earlier work. We evaluate the entropy of several lattice models to show that, even in the case of strongly coupled systems, the dependence of the entropy of level-cut structures on the molar fractions of the constituents scales with the simple ideal noninteracting-system formula. In the last section, we discuss the application of the results to binary or ternary fluids and microemulsions.
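The "simple ideal noninteracting-system formula" referred to above is, per site and in units of k_B, the binary mixing entropy. A minimal sketch:

```python
import math

def ideal_mixing_entropy(phi):
    """Ideal (noninteracting) two-phase mixing entropy per site, in units
    of k_B, as a function of the volume fraction phi of one phase."""
    if phi <= 0.0 or phi >= 1.0:
        return 0.0
    return -(phi * math.log(phi) + (1.0 - phi) * math.log(1.0 - phi))
```

The function is symmetric about phi = 1/2, where it peaks at ln 2; the abstract's claim is that the volume-fraction dependence of the level-cut entropy follows this shape even for strongly coupled systems.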
Document page structure learning for fixed-layout e-books using conditional random fields
NASA Astrophysics Data System (ADS)
Tao, Xin; Tang, Zhi; Xu, Canhui
2013-12-01
In this paper, a model is proposed to learn the logical structure of fixed-layout document pages by combining a support vector machine (SVM) and conditional random fields (CRF). Features related to each logical label, and their dependencies, are extracted from various original Portable Document Format (PDF) attributes. Both local evidence and contextual dependencies are integrated in the proposed model so as to achieve better logical labeling performance. With the merits of the SVM as a local discriminative classifier and the CRF modeling contextual correlations of adjacent fragments, the model is capable of resolving the ambiguities of semantic labels. The experimental results show that CRF-based models with both tree and chain graph structures outperform the SVM model, with an increase in macro-averaged F1 of about 10%.
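Macro-averaged F1, the metric quoted above, is the unweighted mean of per-label F1 scores, so rare labels count as much as common ones. A minimal self-contained implementation (not the paper's code):

```python
from collections import Counter

def macro_f1(y_true, y_pred):
    """Macro-averaged F1: unweighted mean of per-label F1 scores."""
    labels = set(y_true) | set(y_pred)
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    f1s = []
    for lab in labels:
        prec_den = tp[lab] + fp[lab]
        rec_den = tp[lab] + fn[lab]
        prec = tp[lab] / prec_den if prec_den else 0.0
        rec = tp[lab] / rec_den if rec_den else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

For example, with true labels ["a", "a", "b", "b"] and predictions ["a", "b", "b", "b"], the per-label F1 scores are 2/3 and 4/5, giving a macro-F1 of 11/15.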
Spectral statistics of random geometric graphs
NASA Astrophysics Data System (ADS)
Dettmann, C. P.; Georgiou, O.; Knight, G.
2017-04-01
We use random matrix theory to study the spectrum of random geometric graphs, a fundamental model of spatial networks. Considering ensembles of random geometric graphs, we look at short-range correlations in the level spacings of the spectrum via the nearest-neighbour and next-nearest-neighbour spacing distributions, and at long-range correlations via the spectral rigidity Δ3 statistic. These correlations in the level spacings give information about the localisation of eigenvectors, the level of community structure, and the level of randomness within the networks. We find a parameter-dependent transition between Poisson and Gaussian orthogonal ensemble statistics. That is, the spectral statistics of spatial random geometric graphs fit the universality of random matrix theory found in other models such as Erdős-Rényi, Barabási-Albert and Watts-Strogatz random graphs.
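The two limiting statistics mentioned above have standard closed forms: the Wigner surmise for the GOE and an exponential density for uncorrelated (Poisson) levels. A short sketch, checking that both spacing densities are normalized:

```python
import math

def wigner_surmise(s):
    """GOE nearest-neighbour spacing density (level repulsion: P(0) = 0)."""
    return (math.pi / 2.0) * s * math.exp(-math.pi * s * s / 4.0)

def poisson_spacing(s):
    """Spacing density of uncorrelated (Poisson) levels: P(0) = 1."""
    return math.exp(-s)

# Both densities are normalized with unit mean spacing; they differ most
# visibly near s = 0, which is what the nearest-neighbour spacing
# distribution probes (repulsion for GOE, clustering for Poisson).
ds = 0.001
norm_goe = sum(wigner_surmise(i * ds) * ds for i in range(20000))
norm_poi = sum(poisson_spacing(i * ds) * ds for i in range(20000))
```

Empirical spacing histograms from graph spectra are compared against these two curves to locate a network on the Poisson-to-GOE transition.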
The effect of the neural activity on topological properties of growing neural networks.
Gafarov, F M; Gafarova, V R
2016-09-01
The connectivity structure in cortical networks defines how information is transmitted and processed, and it is a source of the complex spatiotemporal patterns seen in network development; the creation and deletion of connections continues throughout the life of the organism. In this paper, we study how neural activity influences the growth process in neural networks. Using a two-dimensional activity-dependent growth model, we demonstrate the growth of a neural network from disconnected neurons to a fully connected network. To quantify the influence of activity on the network's topological properties, we compare the model with a random growth network that does not depend on network activity. Analyzing the connection structure with methods from random graph theory, we show that growth in neural networks results in the formation of a well-known "small-world" network.
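The "small-world" claim rests on comparing clustering and path lengths against random-graph baselines. As a pure-Python sketch of the clustering half of that comparison (illustrative, not the authors' code), a ring lattice with k = 4 neighbours has average clustering coefficient exactly 0.5, far above that of a sparse random graph of the same density:

```python
def avg_clustering(adj):
    """Average local clustering coefficient of an undirected graph,
    given as a dict: node -> set of neighbours."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        # Count edges among the neighbours of v.
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def ring_lattice(n, k):
    """Ring of n nodes, each joined to its k nearest neighbours (k even)."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for d in range(1, k // 2 + 1):
            adj[v].add((v + d) % n)
            adj[v].add((v - d) % n)
    return adj

c_ring = avg_clustering(ring_lattice(20, 4))   # exactly 0.5 for k = 4
```

A small-world network combines such high clustering with short average path lengths, which is the signature the authors report for activity-dependent growth.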
Synergistic effects in threshold models on networks.
Juul, Jonas S; Porter, Mason A
2018-01-01
Network structure can have a significant impact on the propagation of diseases, memes, and information on social networks. Different types of spreading processes (and other dynamical processes) are affected by network architecture in different ways, and it is important to develop tractable models of spreading processes on networks to explore such issues. In this paper, we incorporate the idea of synergy into a two-state ("active" or "passive") threshold model of social influence on networks. Our model's update rule is deterministic, and the influence of each meme-carrying (i.e., active) neighbor can, depending on a parameter, either be enhanced or inhibited by an amount that depends on the number of active neighbors of a node. Such a synergistic system models social behavior in which the willingness to adopt either accelerates or saturates in a way that depends on the number of neighbors who have adopted that behavior. We illustrate that our model's synergy parameter has a crucial effect on system dynamics, as it determines whether degree-k nodes are possible or impossible to activate. We simulate synergistic meme spreading on both random-graph models and networks constructed from empirical data. Using a heterogeneous mean-field approximation, which we derive under the assumption that a network is locally tree-like, we are able to determine which synergy-parameter values allow degree-k nodes to be activated for many networks and for a broad family of synergistic models.
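A toy version of a synergistic threshold update makes the role of the synergy parameter concrete. The functional form below, in which each of m active neighbours contributes 1 + alpha*(m - 1), is an illustrative assumption for the sketch, not necessarily the paper's exact rule:

```python
def synergistic_threshold_step(adj, active, alpha, theta):
    """One deterministic, synchronous update of a two-state threshold model
    with synergy. Each of a node's m active neighbours contributes
    1 + alpha*(m - 1) influence (an illustrative rule): alpha > 0 enhances,
    alpha < 0 inhibits. A passive node activates when the per-neighbour
    influence reaches the threshold theta; active nodes stay active."""
    new_active = set(active)
    for v, nbrs in adj.items():
        if v in active:
            continue
        m = sum(1 for u in nbrs if u in active)
        influence = m * (1.0 + alpha * (m - 1))
        if nbrs and influence / len(nbrs) >= theta:
            new_active.add(v)
    return new_active

# Small test graph: node 0 linked to 1..4; node 5 hangs off node 1.
adj = {0: {1, 2, 3, 4}, 1: {0, 5}, 2: {0}, 3: {0}, 4: {0}, 5: {1}}
active = {1, 2}
after = synergistic_threshold_step(adj, active, alpha=0.5, theta=0.6)
```

With alpha = 0.5, node 0 (two active neighbours out of four) crosses the threshold; with alpha = 0 it does not, illustrating how the synergy parameter decides whether higher-degree nodes are possible to activate.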
Dispersive dielectric and conductive effects in 2D resistor-capacitor networks.
Hamou, R F; Macdonald, J R; Tuncer, E
2009-01-14
How to predict and better understand the effective properties of disordered material mixtures has been a long-standing problem in different research fields, especially in condensed matter physics. In order to address this subject and achieve a better understanding of the frequency-dependent properties of these systems, a large 2D L × L square structure of resistors and capacitors was used to calculate the immittance response of a network formed by random filling of binary conductor/insulator phases with 1000 Ω resistors and 10 nF capacitors. The effects of percolating clusters on the immittance response were studied statistically through the generation of 10 000 different random network samples at the percolation threshold. The scattering of the imaginary part of the immittance near the dc limit shows a clear separation between the responses of percolating and non-percolating samples, with the gap between their distributions dependent on both network size and applied frequency. These results could be used to monitor connectivity in composite materials. The effects of the content and structure of the percolating path on the nature of the observed dispersion were investigated, with special attention paid to the geometrical fractal concept of the backbone and its influence on the behavior of relaxation-time distributions. For three different resistor-capacitor proportions, the appropriateness of many fitting models was investigated for modeling and analyzing individual resistor-capacitor network dispersed frequency responses using complex-nonlinear-least-squares fitting. Several remarkable new features were identified, including a useful duality relationship and the need for composite fitting models rather than either a simple power law or a single Davidson-Cole one. Good fits of data for fully percolating random networks required two dispersive fitting models in parallel or series, with a cutoff at short times of the distribution of relaxation times of one of them. 
In addition, such fits surprisingly led to cutoff parameters, including a primitive relaxation or crossover time, with estimated values comparable to those found for real dispersive materials.
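The building block of such networks is the parallel RC element quoted above (1000 Ω, 10 nF), whose immittance follows directly from Z = R/(1 + jωRC). A minimal sketch of that single-element response:

```python
import math

def parallel_rc_impedance(r, c, freq):
    """Complex impedance of a resistor in parallel with a capacitor:
    Z = R / (1 + j*omega*R*C), with omega = 2*pi*f."""
    omega = 2.0 * math.pi * freq
    return r / complex(1.0, omega * r * c)

# Element values quoted in the abstract: 1000 ohm resistors, 10 nF capacitors.
z_low = parallel_rc_impedance(1000.0, 10e-9, 1.0)      # ~resistive at 1 Hz
z_high = parallel_rc_impedance(1000.0, 10e-9, 10e6)    # ~capacitive at 10 MHz
```

The dispersive behavior the paper analyzes emerges only when many such elements are wired into a large random network; the single element just fixes the characteristic frequency 1/(2*pi*R*C) around which the crossover happens.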
NASA Astrophysics Data System (ADS)
El-Diasty, M.; El-Rabbany, A.; Pagiatakis, S.
2007-11-01
We examine the effect of varying the temperature points on MEMS inertial sensors' noise models using Allan variance and least-squares spectral analysis (LSSA). Allan variance is a method of representing root-mean-square random drift error as a function of averaging time. LSSA is an alternative to classical Fourier methods and has been applied successfully by a number of researchers to study the noise characteristics of experimental series. Static data sets are collected at different temperature points using two MEMS-based IMUs, namely the MotionPakII and the Crossbow AHRS300CC. The performance of the two MEMS inertial sensors is predicted from the Allan variance estimates at different temperature points, and the LSSA is used to study the noise characteristics and define the sensors' stochastic model parameters. It is shown that the stochastic characteristics of MEMS-based inertial sensors can be identified using Allan variance estimation and LSSA, and that the sensors' stochastic model parameters are temperature dependent. A Kaiser-window FIR low-pass filter is also used to investigate the effect of a de-noising stage on the stochastic model; the stochastic model is shown to depend on the chosen cut-off frequency as well.
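The Allan variance used above can be sketched in a few lines. For white noise it falls as 1/tau (a slope of -1/2 on the usual log-log Allan deviation plot), which is the signature used to read off angle/velocity random walk coefficients:

```python
import random

random.seed(42)

def allan_variance(data, m):
    """Non-overlapping Allan variance at cluster size m:
    AVAR = 0.5 * mean of squared differences of consecutive cluster means."""
    n_clusters = len(data) // m
    means = [sum(data[k * m:(k + 1) * m]) / m for k in range(n_clusters)]
    diffs = [(means[k + 1] - means[k]) ** 2 for k in range(n_clusters - 1)]
    return 0.5 * sum(diffs) / len(diffs)

# For white (uncorrelated) noise of unit variance, AVAR(m) is close to 1/m.
noise = [random.gauss(0.0, 1.0) for _ in range(20000)]
avar_small = allan_variance(noise, 10)
avar_large = allan_variance(noise, 100)
```

Real sensor records also contain bias instability and rate random walk, which show up as flat and rising segments of the Allan deviation curve; the sketch covers only the white-noise regime.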
Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S
2017-08-01
Joint modeling and within-cluster resampling are two approaches used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performance and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to misspecification of the cluster size model in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate its validity in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
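The within-cluster resampling idea can be sketched with toy data: drawing one observation per cluster at random removes the influence of informative cluster sizes on a simple statistic. The numbers below are illustrative, not from the toxicity study:

```python
import random

random.seed(3)

def within_cluster_resample(clusters, n_resamples=2000):
    """Within-cluster resampling: repeatedly draw one observation per
    cluster, compute the statistic on each reduced data set, and average,
    so that (possibly informative) cluster sizes carry no extra weight."""
    estimates = []
    for _ in range(n_resamples):
        one_per_cluster = [random.choice(obs) for obs in clusters.values()]
        estimates.append(sum(one_per_cluster) / len(one_per_cluster))
    return sum(estimates) / len(estimates)

# Illustrative data: the large cluster has systematically higher responses,
# so the naive pooled mean is pulled toward it.
clusters = {"a": [1.0], "b": [1.0], "c": [3.0] * 8}
pooled_mean = sum(x for obs in clusters.values() for x in obs) / 10.0
wcr_mean = within_cluster_resample(clusters)
```

Here the pooled mean is 2.6 while the resampled mean is 5/3, the cluster-level average; which is appropriate depends on whether cluster size is informative, which is exactly the question the abstract addresses.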
An improved non-Markovian degradation model with long-term dependency and item-to-item uncertainty
NASA Astrophysics Data System (ADS)
Xi, Xiaopeng; Chen, Maoyin; Zhang, Hanwen; Zhou, Donghua
2018-05-01
It is widely noted in the literature that the degradation should be simplified into a memoryless Markovian process for the purpose of predicting the remaining useful life (RUL). However, there actually exists the long-term dependency in the degradation processes of some industrial systems, including electromechanical equipments, oil tankers, and large blast furnaces. This implies the new degradation state depends not only on the current state, but also on the historical states. Such dynamic systems cannot be accurately described by traditional Markovian models. Here we present an improved non-Markovian degradation model with both the long-term dependency and the item-to-item uncertainty. As a typical non-stationary process with dependent increments, fractional Brownian motion (FBM) is utilized to simulate the fractal diffusion of practical degradations. The uncertainty among multiple items can be represented by a random variable of the drift. Based on this model, the unknown parameters are estimated through the maximum likelihood (ML) algorithm, while a closed-form solution to the RUL distribution is further derived using a weak convergence theorem. The practicability of the proposed model is fully verified by two real-world examples. The results demonstrate that the proposed method can effectively reduce the prediction error.
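The FBM ingredient of the model is fixed entirely by its covariance function. A short sketch verifies the stationary-increment property Var(B_t - B_s) = |t - s|^{2H} and the positive increment correlation (the long-term dependency) for Hurst exponent H > 1/2:

```python
def fbm_cov(t, s, hurst):
    """Covariance of fractional Brownian motion B_H:
    Cov(B_t, B_s) = 0.5 * (t^{2H} + s^{2H} - |t - s|^{2H})."""
    h2 = 2.0 * hurst
    return 0.5 * (t ** h2 + s ** h2 - abs(t - s) ** h2)

def increment_var(t, s, hurst):
    """Var(B_t - B_s) implied by the covariance; equals |t - s|^{2H}."""
    return fbm_cov(t, t, hurst) - 2.0 * fbm_cov(t, s, hurst) + fbm_cov(s, s, hurst)

# H = 0.5 recovers ordinary Brownian motion, Cov = min(t, s); H > 0.5 gives
# positively correlated increments, the memory that Markovian degradation
# models cannot represent.
incr_corr = fbm_cov(1.0, 2.0, 0.7) - fbm_cov(1.0, 1.0, 0.7)  # Cov(B1-B0, B2-B1)
```

The paper's model adds a random drift on top of this process to capture item-to-item uncertainty; the covariance check above concerns only the FBM diffusion term.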
NASA Astrophysics Data System (ADS)
Chen, Guangzhi; Pageot, Damien; Legland, Jean-Baptiste; Abraham, Odile; Chekroun, Mathieu; Tournat, Vincent
2018-04-01
The spectral element method is used to perform a parametric sensitivity study of the nonlinear coda wave interferometry (NCWI) method in a homogeneous sample with localized damage [1]. The influence of a strong pump wave on a localized nonlinear damage zone is modeled as modifications to the elastic properties of an effective damage zone (EDZ), depending on the pump wave amplitude. The local changes of the elastic modulus and the attenuation coefficient have been shown to vary linearly with the excitation amplitude of the pump wave, as in the previous experimental studies of Zhang et al. [2]. In this study, the boundary conditions of the cracks, i.e., clapping effects, are taken into account in the modeling of the damaged zone. The EDZ is then modeled with random cracks of random orientations, and new parametric studies are established to model the pump wave influence with two new parameters: the change in crack length and the crack density. The numerical results reported constitute another step towards quantification and forecasting of the nonlinear acoustic response of a cracked material, which proves to be necessary for quantitative non-destructive evaluation.
History dependent quantum random walks as quantum lattice gas automata
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shakeel, Asif; Love, Peter J.; Meyer, David A.
Quantum Random Walks (QRW) were first defined as one-particle sectors of Quantum Lattice Gas Automata (QLGA). Recently, they have been generalized to include history dependence, either on previous coin (internal, i.e., spin or velocity) states or on previous position states. These models have the goal of studying the transition to classicality, or more generally, changes in the performance of quantum walks in algorithmic applications. We show that several history dependent QRW can be identified as one-particle sectors of QLGA. This provides a unifying conceptual framework for these models in which the extra degrees of freedom required to store the history information arise naturally as geometrical degrees of freedom on the lattice.
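The history-independent baseline these generalizations start from is the standard coined (Hadamard) walk on the line, i.e. a one-particle QLGA sector. A minimal dictionary-based sketch:

```python
import math
from collections import defaultdict

def hadamard_walk(steps):
    """Coined quantum walk on the line (history-independent baseline).
    State: complex amplitudes psi[(position, coin)], coin 0 = left-mover,
    coin 1 = right-mover; symmetric initial coin (|0> + i|1>)/sqrt(2)."""
    inv_sqrt2 = 1.0 / math.sqrt(2.0)
    psi = {(0, 0): complex(inv_sqrt2, 0.0), (0, 1): complex(0.0, inv_sqrt2)}
    for _ in range(steps):
        nxt = defaultdict(complex)
        for (x, c), a in psi.items():
            # Hadamard coin: |0> -> (|0>+|1>)/sqrt(2), |1> -> (|0>-|1>)/sqrt(2),
            # then shift conditioned on the new coin state.
            nxt[(x - 1, 0)] += inv_sqrt2 * a
            nxt[(x + 1, 1)] += inv_sqrt2 * a * (1.0 if c == 0 else -1.0)
        psi = dict(nxt)
    prob = defaultdict(float)
    for (x, _), a in psi.items():
        prob[x] += abs(a) ** 2
    return dict(prob)

p = hadamard_walk(20)   # position distribution after 20 steps
```

The history-dependent variants studied in the paper enlarge the state with a record of previous coin or position values; in the QLGA picture those records become extra geometrical degrees of freedom on the lattice.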
NASA Astrophysics Data System (ADS)
Linh, Dang Khanh; Khanh, Nguyen Quoc
2018-03-01
We calculate the zero-temperature conductivity of bilayer graphene (BLG) impacted by Coulomb impurity scattering using four different screening models: unscreened, Thomas-Fermi (TF), overscreened and random phase approximation (RPA). We also calculate the conductivity and thermal conductance of BLG using TF, zero- and finite-temperature RPA screening functions. We find large differences between the results of the models and show that TF and finite-temperature RPA give similar results for diffusion thermopower Sd. Using the finite-temperature RPA, we calculate temperature and density dependence of Sd in BLG on SiO2, HfO2 substrates and suspended BLG for different values of interlayer distance c and distance between the first layer and the substrate d.
Effects of motivation on car-following
NASA Technical Reports Server (NTRS)
Boesser, T.
1982-01-01
Speed and distance control by automobile drivers is best described by linear models when the leading vehicle's speed varies randomly and the driver is motivated to keep a large distance. A car-following experiment required subjects to follow at a 'safe' or at a 'close' distance. The transfer characteristics of the driver were extended by one octave when following 'closely'. Nonlinear properties of drivers' control movements are assumed to reflect different motivation-dependent control strategies.
Exploration properties of biased evanescent random walkers on a one-dimensional lattice
NASA Astrophysics Data System (ADS)
Esguerra, Jose Perico; Reyes, Jelian
2017-08-01
We investigate the combined effects of bias and evanescence on the characteristics of random walks on a one-dimensional lattice. We calculate the time-dependent return probability, eventual return probability, conditional mean return time, and the time-dependent mean number of visited sites of biased immortal and evanescent discrete-time random walkers on a one-dimensional lattice. We then extend the calculations to the case of a continuous-time step-coupled biased evanescent random walk on a one-dimensional lattice with an exponential waiting time distribution.
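A Monte Carlo sketch of a biased evanescent walker illustrates the quantities computed above. For the unbiased case, the eventual return probability has the closed form 1 - sqrt(1 - q^2), a standard first-return generating-function result, giving about 0.801 for per-step survival q = 0.98:

```python
import random

random.seed(11)

def return_probability(p_right=0.5, survive=0.98, walkers=20000, max_steps=500):
    """Monte Carlo estimate of the eventual return probability to the origin
    for a discrete-time walker on the integers that survives each step with
    probability `survive` (evanescence) and steps right with p_right."""
    returns = 0
    for _ in range(walkers):
        x = 0
        for _ in range(max_steps):
            if random.random() > survive:
                break                                  # walker evanesces
            x += 1 if random.random() < p_right else -1
            if x == 0:
                returns += 1
                break
    return returns / walkers

r_unbiased = return_probability()
r_biased = return_probability(p_right=0.8)             # bias suppresses returns
```

Both evanescence and bias push the eventual return probability below the recurrent value of 1 for an immortal unbiased walker, which is the combined effect the paper quantifies analytically.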
Random walks with shape prior for cochlea segmentation in ex vivo μCT.
Ruiz Pujadas, Esmeralda; Kjer, Hans Martin; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel Angel
2016-09-01
Cochlear implantation is a safe and effective surgical procedure to restore hearing in deaf patients. However, the level of restoration achieved may vary due to differences in anatomy, implant type and surgical access. In order to reduce the variability of the surgical outcomes, we previously proposed the use of a high-resolution model built from μCT images and then adapted to patient-specific clinical CT scans. As the accuracy of the model is dependent on the precision of the original segmentation, it is extremely important to have accurate μCT segmentation algorithms. We propose a new framework for cochlea segmentation in ex vivo μCT images using random walks, in which a distance-based shape prior is combined with a region term estimated by a Gaussian mixture model. The prior is also weighted by a confidence map to adjust its influence according to the strength of the image contour. Random walks is performed iteratively, and the prior mask is aligned in every iteration. We tested the proposed approach on ten μCT data sets and compared it with other random walks-based segmentation techniques such as guided random walks (Eslami et al. in Med Image Anal 17(2):236-253, 2013) and constrained random walks (Li et al. in Advances in image and video technology. Springer, Berlin, pp 215-226, 2012). Our approach demonstrated higher accuracy due to the probability density model constituted by the region term and the shape prior information weighted by a confidence map. The weighted combination of the distance-based shape prior with a region term in random walks provides accurate segmentations of the cochlea. The experiments suggest that the proposed approach is robust for cochlea segmentation.
Prediction models for clustered data: comparison of a random intercept and standard regression model
2013-01-01
Background When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which interest centers on predictor effects at the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. Results The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept.
Conclusion The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters. PMID:23414436
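The c-index reported above is the probability that a randomly chosen event receives a higher predicted risk than a randomly chosen non-event. A minimal sketch with illustrative numbers (not the study's data) shows how adding cluster information can raise discrimination:

```python
def c_index(outcomes, risks):
    """Concordance (c) index for binary outcomes: among all (event,
    non-event) pairs, the fraction in which the event received the higher
    predicted risk; ties in risk count one half."""
    concordant, pairs = 0.0, 0
    for yi, ri in zip(outcomes, risks):
        for yj, rj in zip(outcomes, risks):
            if yi == 1 and yj == 0:
                pairs += 1
                if ri > rj:
                    concordant += 1.0
                elif ri == rj:
                    concordant += 0.5
    return concordant / pairs

# Illustrative toy predictions: using the cluster (anesthesiologist) effect
# sharpens the risk ranking for one of the events.
y = [1, 1, 0, 0, 0]
risk_with_cluster = [0.8, 0.6, 0.5, 0.3, 0.2]   # perfectly ranked
risk_marginal = [0.8, 0.4, 0.5, 0.3, 0.2]       # one event under-ranked
```

Here the cluster-informed risks rank every event above every non-event (c-index 1.0), while the marginal risks misrank one pair (c-index 5/6), mirroring the direction of the 0.69-versus-0.66 result in the abstract.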
Bouwmeester, Walter; Twisk, Jos W R; Kappen, Teus H; van Klei, Wilton A; Moons, Karel G M; Vergouwe, Yvonne
2013-02-15
Miner, Daniel; Triesch, Jochen
2016-01-01
Understanding the structure and dynamics of cortical connectivity is vital to understanding cortical function. Experimental data strongly suggest that local recurrent connectivity in the cortex is significantly non-random, exhibiting, for example, above-chance bidirectionality and an overrepresentation of certain triangular motifs. Additional evidence suggests a significant distance dependency to connectivity over a local scale of a few hundred microns, and particular patterns of synaptic turnover dynamics, including a heavy-tailed distribution of synaptic efficacies, a power law distribution of synaptic lifetimes, and a tendency for stronger synapses to be more stable over time. Understanding how many of these non-random features simultaneously arise would provide valuable insights into the development and function of the cortex. While previous work has modeled some of the individual features of local cortical wiring, there is no model that begins to comprehensively account for all of them. We present a spiking network model of a rodent Layer 5 cortical slice which, via the interactions of a few simple biologically motivated intrinsic, synaptic, and structural plasticity mechanisms, qualitatively reproduces these non-random effects when combined with simple topological constraints. Our model suggests that mechanisms of self-organization arising from a small number of plasticity rules provide a parsimonious explanation for numerous experimentally observed non-random features of recurrent cortical wiring. Interestingly, similar mechanisms have been shown to endow recurrent networks with powerful learning abilities, suggesting that these mechanisms are central to understanding both the structure and function of cortical synaptic wiring. PMID:26866369
Luminosity distance in Swiss-cheese cosmology with randomized voids and galaxy halos
NASA Astrophysics Data System (ADS)
Flanagan, Éanna É.; Kumar, Naresh; Wasserman, Ira
2013-08-01
We study the fluctuations in luminosity distance due to gravitational lensing produced both by galaxy halos and large-scale voids. Voids are represented via a “Swiss-cheese” model consisting of a ΛCDM Friedmann-Robertson-Walker background from which a number of randomly distributed, spherical regions of comoving radius 35 Mpc are removed. A fraction of the removed mass is then placed on the shells of the spheres, in the form of randomly located halos. The halos are assumed to be nonevolving and are modeled with Navarro-Frenk-White profiles of a fixed mass. The remaining mass is placed in the interior of the spheres, either smoothly distributed or as randomly located halos. We compute the distribution of magnitude shifts using a variant of the method of Holz and Wald [Phys. Rev. D 58, 063501 (1998)], which includes the effect of lensing shear. In the two models we consider, the standard deviation of this distribution is 0.065 and 0.072 magnitudes and the mean is -0.0010 and -0.0013 magnitudes, for voids of radius 35 Mpc and the sources at redshift 1.5, with the voids chosen so that 90% of the mass is on the shell today. The standard deviation due to voids and halos is a factor ˜3 larger than that due to 35 Mpc voids alone with a 1 Mpc shell thickness, which we studied in our previous work. We also study the effect of the existence of evacuated voids, by comparing to a model where all the halos are randomly distributed in the interior of the sphere with none on its surface. This does not significantly change the variance but does significantly change the demagnification tail. To a good approximation, the variance of the distribution depends only on the mean column density of halos (halo mass divided by its projected area), the concentration parameter of the halos, and the fraction of the mass density that is in the form of halos (as opposed to smoothly distributed); it is independent of how the halos are distributed in space. 
We derive an approximate analytic formula for the variance that agrees with our numerical results to ≲20% out to z≃1.5, and that can be used to study the dependence on halo parameters.
Context-dependent effects of background colour in free recall with spatially grouped words.
Sakai, Tetsuya; Isarida, Toshiko K; Isarida, Takeo
2010-10-01
Three experiments investigated context-dependent effects of background colour in free recall with groups of items. Undergraduates (N=113) intentionally studied 24 words presented in blocks of 6 on a computer screen with two different background colours. The two background colours were changed screen-by-screen randomly (random condition) or alternately (alternation condition) during the study period. A 30-second filled retention interval was imposed before an oral free-recall test. A signal for free recall was presented throughout the test on one of the colour background screens presented at study. Recalled words were classified as same- or different-context words according to whether the background colours at study and test were the same or different. The random condition produced significant context-dependent effects, whereas the alternation condition showed no context-dependent effects, regardless of whether the words were presented once or twice. Furthermore, the words presented on the same screen were clustered in recall, whereas the words presented against the same background colour but on different screens were not clustered. The present results imply: (1) background colours can cue spatially massed words; (2) background colours act as temporally local context; and (3) predictability of the next background colour modulates the context-dependent effect.
To, Minh-Son; Prakash, Shivesh; Poonnoose, Santosh I; Bihari, Shailesh
2018-05-01
The study uses meta-regression analysis to quantify the dose-dependent effects of statin pharmacotherapy on vasospasm, delayed ischemic neurologic deficits (DIND), and mortality in aneurysmal subarachnoid hemorrhage. Prospective and retrospective observational studies and randomized controlled trials (RCTs) were retrieved by a systematic database search. Summary estimates were expressed as absolute risk (AR) for a given statin dose or control (placebo). Meta-regression using inverse variance weighting and robust variance estimation was performed to assess the effect of statin dose on transformed AR in a random effects model. The dose dependence of the predicted AR with 95% confidence interval (CI) was recovered by using Miller's Freeman-Tukey inverse. The database search and study selection criteria yielded 18 studies (2594 patients) for analysis. These included 12 RCTs, 4 retrospective observational studies, and 2 prospective observational studies. Twelve studies investigated simvastatin, whereas the remaining studies investigated atorvastatin, pravastatin, or pitavastatin, with simvastatin-equivalent doses ranging from 20 to 80 mg. Meta-regression revealed dose-dependent reductions in the Freeman-Tukey-transformed AR of vasospasm (slope coefficient -0.00404, 95% CI -0.00720 to -0.00087; P = 0.0321), DIND (slope coefficient -0.00316, 95% CI -0.00586 to -0.00047; P = 0.0392), and mortality (slope coefficient -0.00345, 95% CI -0.00623 to -0.00067; P = 0.0352). The present meta-regression provides weak evidence for dose-dependent reductions in vasospasm, DIND and mortality associated with acute statin use after aneurysmal subarachnoid hemorrhage. However, the analysis was limited by substantial heterogeneity among individual studies. Higher-dose strategies are a potential consideration for future RCTs.
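The Freeman-Tukey transform and Miller's inverse used in this meta-regression can be sketched directly. The formulas below follow the standard double-arcsine transform and Miller's (1978) back-transform, applied here to illustrative counts rather than the study's data:

```python
import math

def freeman_tukey(x, n):
    """Freeman-Tukey double-arcsine transform of x events in n subjects."""
    return (math.asin(math.sqrt(x / (n + 1.0)))
            + math.asin(math.sqrt((x + 1.0) / (n + 1.0))))

def miller_inverse(t, n):
    """Miller's (1978) inverse of the Freeman-Tukey transform, recovering
    an absolute risk from a transformed value t at sample size n."""
    s = math.sin(t)
    sign = 1.0 if math.cos(t) >= 0.0 else -1.0
    return 0.5 * (1.0 - sign * math.sqrt(1.0 - (s + (s - 1.0 / s) / n) ** 2))

t = freeman_tukey(10, 50)      # illustrative: 10 events in 50 subjects
risk = miller_inverse(t, 50)   # back-transform recovers roughly 10/50 = 0.2
```

The transform stabilizes the variance of proportions across studies; in the meta-regression the back-transform is applied to fitted values (with a pooled n) to report dose-dependent absolute risks on the original scale.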
Pareto genealogies arising from a Poisson branching evolution model with selection.
Huillet, Thierry E
2014-02-01
We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet(α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
Particle dynamics in a viscously decaying cat's eye: The effect of finite Schmidt numbers
NASA Astrophysics Data System (ADS)
Newton, P. K.; Meiburg, Eckart
1991-05-01
The dynamics and mixing of passive marker particles for the model problem of a decaying cat's eye flow is studied. The flow field corresponds to Stuart's one-parameter family of solutions [J. Fluid Mech. 29, 417 (1967)]. It is time dependent as a result of viscosity, which is modeled by allowing the free parameter to depend on time according to the self-similar solution of the Navier-Stokes equations for an isolated point vortex. Particle diffusion is numerically simulated by a random walk model. While earlier work had shown that, for small values of time over Reynolds number t/Re ≪ 1, the interval length characterizing the formation of lobes of fluid escaping from the cat's eye scales as Re^(-1/2), the present study shows that, for the case of diffusive effects and t/Pe ≪ 1, the scaling follows Pe^(-1/4). A simple argument, taking into account streamline convergence and divergence in different parts of the flow field, explains the Pe^(-1/4) scaling.
Opinion formation and distribution in a bounded-confidence model on various networks
NASA Astrophysics Data System (ADS)
Meng, X. Flora; Van Gorder, Robert A.; Porter, Mason A.
2018-02-01
In the social, behavioral, and economic sciences, it is important to predict which individual opinions eventually dominate in a large population, whether there will be a consensus, and how long it takes for a consensus to form. Such ideas have been studied heavily both in physics and in other disciplines, and the answers depend strongly both on how one models opinions and on the network structure on which opinions evolve. One model that was created to study consensus formation quantitatively is the Deffuant model, in which the opinion distribution of a population evolves via sequential random pairwise encounters. To consider heterogeneity of interactions in a population along with social influence, we study the Deffuant model on various network structures (deterministic synthetic networks, random synthetic networks, and social networks constructed from Facebook data). We numerically simulate the Deffuant model and conduct regression analyses to investigate the dependence of the time to reach steady states on various model parameters, including a confidence bound for opinion updates, the number of participating entities, and their willingness to compromise. We find that network structure and parameter values both have important effects on the convergence time and the number of steady-state opinion groups. For some network architectures, we observe that the relationship between the convergence time and model parameters undergoes a transition at a critical value of the confidence bound. For some networks, the steady-state opinion distribution also changes from consensus to multiple opinion groups at this critical value.
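The sequential pairwise update rule of the Deffuant model described above can be sketched in a few lines; this is a generic illustration on a well-mixed population (a complete graph), not the paper's network configurations, and the parameter values are arbitrary:

```python
import random

def deffuant(opinions, eps, mu, steps, seed=1):
    """Deffuant bounded-confidence dynamics: at each step a random pair
    interacts, and two opinions closer than the confidence bound eps
    move toward each other by a fraction mu of their difference."""
    rng = random.Random(seed)
    x = list(opinions)
    n = len(x)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j and abs(x[i] - x[j]) < eps:   # confidence bound
            shift = mu * (x[j] - x[i])          # willingness to compromise
            x[i] += shift
            x[j] -= shift
    return x

# opinions initially spread uniformly over [0, 1]
start = [k / 99 for k in range(100)]
# eps larger than the opinion range: every encounter interacts,
# so the population contracts to a single consensus group
final = deffuant(start, eps=1.1, mu=0.5, steps=20000)
```

Shrinking eps below the opinion range is what produces multiple steady-state opinion groups instead of consensus, which is the transition the abstract refers to.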
Wang, Junqi; Qin, Lan
2016-06-27
This meta-analysis was performed to compare radioiodine therapy with antithyroid drugs in terms of clinical outcomes, including development or worsening of ophthalmopathy, hyperthyroid cure rate, hypothyroidism, relapse rate and adverse events. Randomized controlled trials (RCTs) published in PubMed, Embase, Web of Science, SinoMed and National Knowledge Infrastructure, China, were systematically reviewed to compare the effects of radioiodine therapy with antithyroid drugs in patients with Graves' disease. Results were expressed as risk ratio with 95% confidence intervals (CIs) and weighted mean differences with 95% CIs. Pooled estimates were calculated using a fixed-effects model or random-effects model, depending on the heterogeneity among studies. 17 RCTs involving 4024 patients met the inclusion criteria and were included. Results showed that radioiodine treatment carries an increased risk of new ophthalmopathy, development or worsening of ophthalmopathy, and hypothyroidism. Compared with antithyroid drugs, however, radioiodine treatment appears to have a higher hyperthyroid cure rate, lower recurrence rate and lower incidence of adverse events. Radioiodine therapy is associated with a higher hyperthyroid cure rate and lower relapse rate compared with antithyroid drugs. However, it also increases the risk of ophthalmopathy and hypothyroidism. Considering that antithyroid drug treatment can be associated with unsatisfactory control of hyperthyroidism, we would recommend radioiodine therapy as the treatment of choice for patients with Graves' disease.
Boligon, A A; Baldi, F; Mercadante, M E Z; Lobo, R B; Pereira, R J; Albuquerque, L G
2011-06-28
We quantified the potential increase in accuracy of expected breeding value for weights of Nelore cattle, from birth to mature age, using multi-trait and random regression models on Legendre polynomials and B-spline functions. A total of 87,712 weight records from 8144 females were used, recorded every three months from birth to mature age from the Nelore Brazil Program. For random regression analyses, all female weight records from birth to eight years of age (data set I) were considered. From this general data set, a subset was created (data set II), which included only nine weight records: at birth, weaning, 365 and 550 days of age, and 2, 3, 4, 5, and 6 years of age. Data set II was analyzed using random regression and multi-trait models. The model of analysis included the contemporary group as fixed effects and age of dam as a linear and quadratic covariable. In the random regression analyses, average growth trends were modeled using a cubic regression on orthogonal polynomials of age. Residual variances were modeled by a step function with five classes. Legendre polynomials of fourth and sixth order were utilized to model the direct genetic and animal permanent environmental effects, respectively, while third-order Legendre polynomials were considered for maternal genetic and maternal permanent environmental effects. Quadratic polynomials were applied to model all random effects in random regression models on B-spline functions. Direct genetic and animal permanent environmental effects were modeled using three segments or five coefficients, and genetic maternal and maternal permanent environmental effects were modeled with one segment or three coefficients in the random regression models on B-spline functions. For both data sets (I and II), animals ranked differently according to expected breeding value obtained by random regression or multi-trait models. 
With random regression models, the highest gains in accuracy were obtained at ages with a low number of weight records. The results indicate that random regression models provide more accurate expected breeding values than traditional finite-dimensional multi-trait models. Thus, higher genetic responses are expected for beef cattle growth traits by replacing a multi-trait model with random regression models for genetic evaluation. B-spline functions could be applied as an alternative to Legendre polynomials to model covariance functions for weights from birth to mature age.
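As a sketch of one building block of such a random regression model, the code below (an illustration, not the authors' implementation) evaluates Legendre polynomial covariates at ages standardized to [-1, 1], which is the conventional design matrix for modeling growth trajectories; the ages used are arbitrary:

```python
import numpy as np

def legendre_covariates(ages, order):
    """Rows of Legendre polynomial values P_0..P_order evaluated at
    ages linearly standardized to [-1, 1], as used in random
    regression models for longitudinal weight records."""
    a = np.asarray(ages, float)
    x = 2.0 * (a - a.min()) / (a.max() - a.min()) - 1.0
    # coefficient vector [0]*k + [1] selects the k-th Legendre polynomial
    return np.column_stack(
        [np.polynomial.legendre.legval(x, [0] * k + [1]) for k in range(order + 1)]
    )

# illustrative ages in days: birth, weaning, 550 d, 2 yr, 8 yr
Z = legendre_covariates([0, 240, 550, 730, 2920], order=3)
```

Each animal's random deviation from the mean growth curve is then modeled as a linear combination of these columns with animal-specific random coefficients.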
A Bimodal Hybrid Model for Time-Dependent Probabilistic Seismic Hazard Analysis
NASA Astrophysics Data System (ADS)
Yaghmaei-Sabegh, Saman; Shoaeifar, Nasser; Shoaeifar, Parva
2018-03-01
The evaluation of evidence provided by geological studies and historical catalogs indicates that in some seismic regions and faults, multiple large earthquakes occur in clusters. The occurrence of large earthquakes then gives way to quiescence, during which only small-to-moderate earthquakes take place. Clustering of large earthquakes is the most distinguishable departure from the assumption of constant hazard of random occurrence of earthquakes in conventional seismic hazard analysis. In the present study, a time-dependent recurrence model is proposed to consider a series of large earthquakes that occurs in clusters. The model is flexible enough to better reflect the quasi-periodic behavior of large earthquakes with long-term clustering, which can be used in time-dependent probabilistic seismic hazard analysis for engineering purposes. In this model, the time-dependent hazard results are estimated by a hazard function comprising three parts: a decreasing hazard from the last large-earthquake cluster, an increasing hazard toward the next large-earthquake cluster, and a constant hazard of random occurrence of small-to-moderate earthquakes. In the final part of the paper, the time-dependent seismic hazard of the New Madrid Seismic Zone at different time intervals has been calculated for illustrative purposes.
Buss, Arne; Wolf-Ostermann, Karin; Dassen, Theo; Lahmann, Nils; Strupeit, Steve
2016-04-01
Facilitating and maintaining functional status (FS) and quality of life (QoL) and avoiding care dependency (CD) are and will increasingly become major tasks of nursing. Educational nursing home visits may have positive effects on FS and QoL in older adults. The aim of this study was to determine the effectiveness of educational home visits on FS, QoL and CD in older adults with mobility impairments. We performed a randomized controlled trial. The study was conducted in the living environments of 123 participants with functional impairments living in Hamburg, Germany. The intervention group received an additional nursing education intervention on mobility and QoL; the control group received care as usual. Data were collected from August 2011 to December 2012 at baseline, 6 months and 12 months of follow-up. The main outcomes were FS (Barthel Index), QoL (WHOQOL-BREF) and CD (Care Dependency Scale). Data were analyzed using descriptive statistics and generalized linear models. In total, 113 participants (57 in the intervention and 56 in the control group) were included in the study. The intervention had no statistically significant effect on FS, QoL or CD. The intervention did not show the benefits that we assumed. Further studies on the effects of educational nursing interventions should be performed using different concepts and rigorous research methods. © 2015 John Wiley & Sons, Ltd.
Effective ergodicity breaking in an exclusion process with varying system length
NASA Astrophysics Data System (ADS)
Schultens, Christoph; Schadschneider, Andreas; Arita, Chikashi
2015-09-01
Stochastic processes of interacting particles in systems with varying length are relevant e.g. for several biological applications. We explore what new physical effects can be expected in such systems. As an example, we extend the exclusive queueing process, which can be viewed as a one-dimensional exclusion process with varying length, by introducing Langmuir kinetics. This process can be interpreted as an effective model for a queue that interacts with other queues by allowing customers to enter and leave in the bulk. We find surprising indications for breaking of ergodicity in a certain parameter regime, where the asymptotic growth behavior depends on the initial length. We show that a random walk with site-dependent hopping probabilities exhibits qualitatively the same behavior.
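A minimal sketch of the kind of site-dependent random walk invoked in the last sentence (the site-dependent hopping profile and boundary conditions here are illustrative, not the paper's reduced model):

```python
import random

def site_dependent_walk(p_right, start, steps, seed=0):
    """Random walk on sites 0..len(p_right)-1 where the probability of
    hopping right depends on the current site; reflecting boundaries."""
    rng = random.Random(seed)
    pos = start
    n = len(p_right)
    for _ in range(steps):
        if rng.random() < p_right[pos]:
            pos = min(pos + 1, n - 1)   # hop right (reflect at the wall)
        else:
            pos = max(pos - 1, 0)       # hop left (reflect at the wall)
    return pos

# uniform rightward bias: the walker drifts to the right boundary
p = [0.9] * 50
end = site_dependent_walk(p, start=0, steps=2000)
```

Making `p_right` depend on position (e.g. bias pointing toward opposite ends in two regions) is what can trap the walker near its starting side, the mechanism behind the initial-condition dependence described above.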
Relationships between nonlinear normal modes and response to random inputs
Schoneman, Joseph D.; Allen, Matthew S.; Kuether, Robert J.
2016-07-25
The ability to model nonlinear structures subject to random excitation is of key importance in designing hypersonic aircraft and other advanced aerospace vehicles. When a structure is linear, superposition can be used to construct its response to a known spectrum in terms of its linear modes. Superposition does not hold for a nonlinear system, but several works have shown that a system's dynamics can still be understood qualitatively in terms of its nonlinear normal modes (NNMs). This work investigates the connection between a structure's undamped nonlinear normal modes and the spectrum of its response to high amplitude random forcing. Two examples are investigated: a spring-mass system and a clamped-clamped beam modeled within a geometrically nonlinear finite element package. In both cases, an intimate connection is observed between the smeared peaks in the response spectrum and the frequency-energy dependence of the nonlinear normal modes. In order to understand the role of coupling between the underlying linear modes, reduced order models with and without modal coupling terms are used to separate the effect of each NNM's backbone from the nonlinear couplings that give rise to internal resonances. In the cases shown here, uncoupled, single-degree-of-freedom nonlinear models are found to predict major features in the response with reasonable accuracy; a highly inexpensive approximation such as this could be useful in design and optimization studies. More importantly, the results show that a reduced order model can be expected to give accurate results only if it is also capable of accurately predicting the frequency-energy dependence of the nonlinear modes that are excited.
Pérez-Del-Olmo, A; Montero, F E; Fernández, M; Barrett, J; Raga, J A; Kostadinova, A
2010-10-01
We address the effect of spatial scale and temporal variation on model generality when forming predictive models for fish assignment by applying a new data mining approach, Random Forests (RF), to variable biological markers (parasite community data). Models were implemented for a fish host-parasite system sampled along the Mediterranean and Atlantic coasts of Spain and were validated using independent datasets. We considered 2 basic classification problems in evaluating the importance of variations in parasite infracommunities for assignment of individual fish to their populations of origin: a multiclass task (2-5 population models, using 2 seasonal replicates from each of the populations) and a 2-class task (using 4 seasonal replicates from 1 Atlantic and 1 Mediterranean population each). The main results are that (i) RF are well suited for multiclass population assignment using parasite communities in non-migratory fish; (ii) RF provide an efficient means for model cross-validation on the baseline data, and this allows sample size limitations in parasite tag studies to be tackled effectively; (iii) the performance of RF is dependent on the complexity and spatial extent/configuration of the problem; and (iv) the development of predictive models is strongly influenced by seasonal change, and this stresses the importance of both temporal replication and model validation in parasite tagging studies.
Effects of deterministic and random refuge in a prey-predator model with parasite infection.
Mukhopadhyay, B; Bhattacharyya, R
2012-09-01
Most natural ecosystem populations suffer from various infectious diseases, and the resulting host-pathogen dynamics depends on the host's characteristics. On the other hand, empirical evidence shows that for most host-pathogen systems, a part of the host population always forms a refuge. To study the role of refuge in the host-pathogen interaction, we study a predator-prey-pathogen model where the susceptible and the infected prey can enter refugia of constant size to evade predator attack. The stability aspects of the model system are investigated from a local and global perspective. The study reveals that the refuge sizes for the susceptible and the infected prey are the key parameters that control possible predator extinction as well as species co-existence. Next we perform a global study of the model system using Lyapunov functions and show the existence of a global attractor. Finally we perform a stochastic extension of the basic model to study the phenomenon of random refuge arising from various intrinsic, habitat-related and environmental factors. The stochastic model is analyzed for exponential mean square stability. Numerical study of the stochastic model shows that increasing the refuge rates has a stabilizing effect on the stochastic dynamics. Copyright © 2012 Elsevier Inc. All rights reserved.
de Bejczy, Andrea; Nations, Kari R; Szegedi, Armin; Schoemaker, Joep; Ruwe, Frank; Söderpalm, Bo
2014-09-01
Org 25935 is a glycine transporter inhibitor that increases extracellular glycine levels and attenuates alcohol-induced dopaminergic activity in the nucleus accumbens. In animal models, Org 25935 has dose-dependent effects on ethanol intake, preference, and relapse-like behavior without tolerance. The current study aimed to translate these animal findings to humans by examining whether Org 25935 prevents relapse in detoxified alcohol-dependent patients. This was a multicenter, randomized, double-blind, placebo-controlled clinical trial. Adult patients diagnosed with alcohol dependence were randomly assigned to receive Org 25935 12 mg twice a day or placebo for 84 days. The primary end point was percentage heavy drinking days (defined as ≥ 5 standard drinks per day for men and ≥ 4 for women). Secondary end points included other measures of relapse-related drinking behavior (e.g., drinks per day, time to relapse), as well as measures of global functioning, alcohol-related thoughts and cravings, and motivation. A total of 140 subjects were included in the intent-to-treat analysis. The trial was stopped approximately midway after a futility analysis showed that the likelihood of detecting a signal at study completion was <40%. There was no significant difference between Org 25935 and placebo on percentage heavy drinking days or any other measure of relapse-related drinking behavior. Org 25935 showed no safety issues and was fairly well tolerated, with fatigue, dizziness, and transient visual events as the most commonly occurring side effects. Org 25935 demonstrated no benefit over placebo in preventing alcohol relapse. Study limitations and implications are discussed. Copyright © 2014 by the Research Society on Alcoholism.
Generated effect modifiers (GEMs) in randomized clinical trials
Petkova, Eva; Tarpey, Thaddeus; Su, Zhe; Ogden, R. Todd
2017-01-01
In a randomized clinical trial (RCT), it is often of interest not only to estimate the effect of various treatments on the outcome, but also to determine whether any patient characteristic has a different relationship with the outcome, depending on treatment. In regression models for the outcome, if there is a non-zero interaction between treatment and a predictor, that predictor is called an “effect modifier”. Identification of such effect modifiers is crucial as we move towards precision medicine, that is, optimizing individual treatment assignment based on patient measurements assessed when presenting for treatment. In most settings, there will be several baseline predictor variables that could potentially modify the treatment effects. This article proposes optimal methods of constructing a composite variable (defined as a linear combination of pre-treatment patient characteristics) in order to generate an effect modifier in an RCT setting. Several criteria are considered for generating effect modifiers and their performance is studied via simulations. An example from an RCT is provided for illustration. PMID:27465235
Random waves in the brain: Symmetries and defect generation in the visual cortex
NASA Astrophysics Data System (ADS)
Schnabel, M.; Kaschube, M.; Löwel, S.; Wolf, F.
2007-06-01
How orientation maps in the visual cortex of the brain develop is a matter of long-standing debate. Experimental and theoretical evidence suggests that their development represents an activity-dependent self-organization process. Theoretical analysis [1] exploring this hypothesis predicted that maps at an early developmental stage are realizations of Gaussian random fields exhibiting a rigorous lower bound for their densities of topological defects, called pinwheels. As a consequence, lower pinwheel densities, if observed in adult animals, are predicted to develop through the motion and annihilation of pinwheel pairs. Despite being valid for a large class of developmental models, this result depends on the symmetries of the models and thus of the predicted random field ensembles. In [1], invariance of the orientation map's statistical properties under independent space rotations and orientation shifts was assumed. However, full rotation symmetry appears to be broken by interactions of cortical neurons, e.g. selective couplings between groups of neurons with collinear orientation preferences [2]. A recently proposed new symmetry, called shift-twist symmetry [3], stating that spatial rotations have to occur together with orientation shifts in order to be an appropriate symmetry transformation, is more consistent with this organization. Here we generalize our random field approach to this important symmetry class. We propose a new class of shift-twist symmetric Gaussian random fields and derive the general correlation functions of this ensemble. It turns out that despite strong effects of the shift-twist symmetry on the structure of the correlation functions and on the map layout, the lower bound on the pinwheel densities remains unaffected, predicting pinwheel annihilation in systems with low pinwheel densities.
Baldi, F; Alencar, M M; Albuquerque, L G
2010-12-01
The objective of this work was to estimate covariance functions using random regression models on B-spline functions of animal age, for weights from birth to adult age in Canchim cattle. Data comprised 49,011 records on 2435 females. The model of analysis included fixed effects of contemporary groups, age of dam as quadratic covariable and the population mean trend taken into account by a cubic regression on orthogonal polynomials of animal age. Residual variances were modelled through a step function with four classes. The direct and maternal additive genetic effects, and animal and maternal permanent environmental effects were included as random effects in the model. A total of seventeen analyses, considering linear, quadratic and cubic B-spline functions and up to seven knots, were carried out. B-spline functions of the same order were considered for all random effects. Random regression models on B-spline functions were compared to a random regression model on Legendre polynomials and with a multitrait model. Results from different models of analyses were compared using the REML form of the Akaike information criterion and Schwarz's Bayesian information criterion. In addition, the variance components and genetic parameters estimated for each random regression model were also used as criteria to choose the most adequate model to describe the covariance structure of the data. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic effect and animal permanent environmental effect and two knots for the maternal additive genetic effect and maternal permanent environmental effect, was the most adequate to describe the covariance structure of the data. Random regression models using B-spline functions as base functions fitted the data better than Legendre polynomials, especially at mature ages, but a higher number of parameters needs to be estimated with B-spline functions. © 2010 Blackwell Verlag GmbH.
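The B-spline bases referred to above can be evaluated with the Cox-de Boor recursion. The sketch below is an illustration with an arbitrary knot vector, not the paper's fitted model; it computes all basis functions of a given degree at one point:

```python
import numpy as np

def bspline_basis(x, knots, degree):
    """All B-spline basis functions of the given degree at scalar x,
    via the Cox-de Boor recursion on a clamped (open) knot vector."""
    t = np.concatenate([[knots[0]] * degree, knots, [knots[-1]] * degree])
    # degree-0 basis: indicator of each half-open knot span
    B = np.where((t[:-1] <= x) & (x < t[1:]), 1.0, 0.0)
    if x == knots[-1]:  # close the last non-empty span at the right end
        B[np.nonzero(t[:-1] < t[1:])[0][-1]] = 1.0
    for d in range(1, degree + 1):
        new = np.zeros(len(B) - 1)
        for i in range(len(new)):
            left = right = 0.0
            if t[i + d] > t[i]:
                left = (x - t[i]) / (t[i + d] - t[i]) * B[i]
            if t[i + d + 1] > t[i + 1]:
                right = (t[i + d + 1] - x) / (t[i + d + 1] - t[i + 1]) * B[i + 1]
            new[i] = left + right
        B = new
    return B

# quadratic basis on four knots / three segments, echoing the chosen model
basis = bspline_basis(0.5, knots=[0.0, 1.0, 2.0, 3.0], degree=2)
```

The basis functions are non-negative and sum to one at every age (partition of unity), and each is non-zero on only a few adjacent segments, which is why B-splines behave better than global Legendre polynomials at the extremes of the age range.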
Population gratings in saturable optical fibers with randomly oriented rare-earth ions
NASA Astrophysics Data System (ADS)
Stepanov, S.; Martinez, L. M.; Hernandez, E. H.; Agruzov, P.; Shamray, A.
2015-07-01
Formation of the dynamic population gratings in optical fibers with randomly oriented rare-earth ions is analyzed with a special interest in the grating component for readout with the orthogonal light polarization. It is shown that, as compared with a simple model case of collinearly oriented dipole-like centers, their random orientation leads to approximately 2-times growth of the effective saturation power P_sat when it is estimated from the incident power dependence of the fiber absorption or from that of the fluorescence intensity. An optimal incident power, for which the maximum of the dynamic population grating amplitude for collinear light polarization is observed, also follows this change in P_sat, while formation of the grating for orthogonal polarization needs essentially higher light power. The reduced anisotropy of the active centers, which is responsible for the experimentally observed weakening of the polarization hole burning (PHB) and of the fluorescence polarization, compensates in some way the effect of random ion orientation. The ratio between the maximum conventional (i.e. for collinear polarizations of the interacting waves) two-wave mixing (TWM) amplitude and the initial non-saturable fiber optical density proves to be, however, nearly the same as in the model case of collinearly oriented dipoles. The ratio between the PHB effect and the amplitude of the anisotropic grating, which is responsible for TWM of the orthogonally polarized waves, is also not influenced significantly by the reduced anisotropy of ions.
Extracting random numbers from quantum tunnelling through a single diode.
Bernardo-Gavito, Ramón; Bagci, Ibrahim Ethem; Roberts, Jonathan; Sexton, James; Astbury, Benjamin; Shokeir, Hamzah; McGrath, Thomas; Noori, Yasir J; Woodhead, Christopher S; Missous, Mohamed; Roedig, Utz; Young, Robert J
2017-12-19
Random number generation is crucial in many aspects of everyday life, as online security and privacy depend ultimately on the quality of random numbers. Many current implementations are based on pseudo-random number generators, but information security requires true random numbers for sensitive applications like key generation in banking, defence or even social media. True random number generators are systems whose outputs cannot be determined, even if their internal structure and response history are known. Sources of quantum noise are thus ideal for this application due to their intrinsic uncertainty. In this work, we propose using resonant tunnelling diodes as practical true random number generators based on a quantum mechanical effect. The output of the proposed devices can be directly used as a random stream of bits or can be further distilled using randomness extraction algorithms, depending on the application.
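As an example of the distillation step mentioned in the last sentence, the classic von Neumann extractor (a standard randomness-extraction technique, not one prescribed by the paper) removes bias from independent raw bits such as a tunnelling-diode output:

```python
def von_neumann_extract(bits):
    """Von Neumann debiasing: read raw bits in non-overlapping pairs,
    map (0,1) -> 0 and (1,0) -> 1, and discard (0,0) and (1,1).
    If the raw bits are i.i.d., the output is unbiased whatever the
    bias of the source, at the cost of a reduced bit rate."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out
```

For example, the raw stream `[0, 1, 1, 0, 0, 0, 1, 1]` yields `[0, 1]`: the first two pairs disagree and are kept, the last two agree and are discarded.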
NASA Astrophysics Data System (ADS)
Moslemipour, Ghorbanali
2018-07-01
This paper proposes a quadratic assignment-based mathematical model for the stochastic dynamic facility layout problem. In this problem, product demands are assumed to be dependent normally distributed random variables with known probability density function and covariance that change from period to period at random. To solve the proposed model, a novel hybrid intelligent algorithm is proposed by combining the simulated annealing and clonal selection algorithms. The proposed model and the hybrid algorithm are verified and validated using design-of-experiment and benchmark methods. The results show that the hybrid algorithm has an outstanding performance in terms of both solution quality and computational time. Moreover, the proposed model can be used in both stochastic and deterministic situations.
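A minimal simulated-annealing sketch for a static quadratic assignment objective is shown below; the clonal selection component and the stochastic demand model are omitted, and the cooling parameters and the small flow/distance instance are illustrative, not from the paper:

```python
import math
import random

def anneal_qap(flow, dist, t0=10.0, cooling=0.995, iters=5000, seed=0):
    """Simulated annealing for a quadratic assignment problem: find a
    permutation p minimizing sum_ij flow[i][j] * dist[p[i]][p[j]],
    using random swap moves and a geometric cooling schedule."""
    rng = random.Random(seed)
    n = len(flow)
    p = list(range(n))

    def cost(perm):
        return sum(flow[i][j] * dist[perm[i]][perm[j]]
                   for i in range(n) for j in range(n))

    cur_cost = cost(p)
    best, best_cost, t = list(p), cur_cost, t0
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        p[i], p[j] = p[j], p[i]                  # propose a swap move
        new_cost = cost(p)
        # accept improvements always, uphill moves with Metropolis probability
        if new_cost <= cur_cost or rng.random() < math.exp((cur_cost - new_cost) / t):
            cur_cost = new_cost
            if cur_cost < best_cost:
                best, best_cost = list(p), cur_cost
        else:
            p[i], p[j] = p[j], p[i]              # reject: undo the swap
        t *= cooling
    return best, best_cost

# tiny illustrative instance: two pairs of strongly interacting facilities
flow = [[0, 5, 1, 1], [5, 0, 1, 1], [1, 1, 0, 5], [1, 1, 5, 0]]
dist = [[0, 1, 3, 3], [1, 0, 3, 3], [3, 3, 0, 1], [3, 3, 1, 0]]
best, best_cost = anneal_qap(flow, dist)
```

In the hybrid scheme described above, a population-based clonal selection step would maintain and mutate several such permutations rather than a single annealing chain.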
Zhu, Mingming; Xu, Xitao; Nie, Fang; Tong, Jinlu; Xiao, Shudong; Ran, Zhihua
2011-08-01
The use of selective leukocytapheresis for the treatment of ulcerative colitis (UC) has been evaluated in several open and controlled trials, with varying outcomes. A meta-analysis was performed to better assess the efficacy and safety of selective leukocytapheresis as supplemental therapy compared with conventional pharmacotherapy in patients with UC. All randomized trials comparing selective leukocytapheresis supplementation with conventional pharmacotherapy were included from electronic databases and reference lists. A meta-analysis that pooled the outcome effects of leukocytapheresis and pharmacotherapy was performed. A fixed-effect model or random-effect model was selected depending on the heterogeneity test of the trials. Nine randomized controlled trials met the inclusion criteria, contributing a total of 686 participants. Compared with conventional pharmacotherapy, leukocytapheresis supplementation provided a significant benefit in the response rate (OR, 2.88; 95% CI, 1.60-5.18) and remission rate (OR, 2.04; 95% CI, 1.36-3.07), together with significantly higher steroid-sparing effects (OR, 10.49; 95% CI, 3.44-31.93), in patients with active moderate-to-severe UC by intention-to-treat analysis. Leukocytapheresis was more effective in maintaining clinical remission for asymptomatic UC patients than conventional therapy (OR, 8.14; 95% CI, 2.22-29.90). The incidence of mild-moderate adverse effects was much less frequent in the leukocytapheresis groups than in the conventional pharmacotherapy groups (OR, 0.16; 95% CI, 0.04-0.60). Few severe adverse events were observed. Current data indicate that leukocytapheresis supplementation may be more efficacious than conventional pharmacotherapy at improving response and remission rates and tapering corticosteroid dosage, with excellent tolerability and safety, in patients with UC. In addition, more high-quality randomized controlled trials are required to confirm the higher efficacy of leukocytapheresis in patients with UC.
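The fixed- versus random-effects choice described above typically hinges on Cochran's Q heterogeneity statistic. A minimal DerSimonian-Laird random-effects pooling sketch follows (an illustration with hypothetical effects, not the software or data used in the meta-analysis):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling: Cochran's Q gives the method-of-moments
    between-study variance tau^2, which is added to each within-study
    variance before inverse-variance weighting."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)   # truncated at zero
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, math.sqrt(1.0 / sum(w_re)), tau2

# hypothetical study-level log odds ratios and within-study variances
pooled, se, tau2 = dersimonian_laird([0.1, 0.3, 0.5], [0.01, 0.01, 0.01])
```

When the studies are homogeneous, tau^2 is truncated to zero and the estimator reduces to the fixed-effect inverse-variance pool, which is exactly the model-selection behavior the abstract describes.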
Zuur, J Karel; Muller, Sara H; Sinaasappel, Michiel; Hart, Guus A M; van Zandwijk, Nico; Hilgers, Frans J M
2007-12-01
High-resistance heat and moisture exchangers (HMEs) have been reported to increase transcutaneous oxygenation (tcpO2) values in laryngectomized individuals and to negatively influence patient compliance. The goal of the present study was to validate earlier published results on short-term transcutaneous oxygenation changes by high-resistance HMEs. We conducted a randomized crossover study, monitoring the influence of an HME on tcpO2 over a 2-hour time interval in 20 subjects. No evidence of an immediate HME effect (95% CI: -14.9 to 13.3 mm Hg, p = .91), or a time-dependent HME effect (95% CI: -0.121 to 0.172 mm Hg/minute, p = .74), on tcpO2 was found. After fitting the statistical model without time dependency, again no evidence of an HME effect was seen (95% CI: -0.5 to 3.6 mm Hg, p = .15). In contrast to earlier suggestions, there is no evidence of increased tcpO2 levels by high-resistance HMEs in laryngectomized individuals. Thus, using such HMEs has no added clinical value in this respect.
Stochastic species abundance models involving special copulas
NASA Astrophysics Data System (ADS)
Huillet, Thierry E.
2018-01-01
Copulas offer a very general tool to describe the dependence structure of random variables supported by the hypercube. Inspired by problems of species abundances in biology, we study three distinct toy models where copulas play a key role. In the first, a Marshall-Olkin copula arises in a species extinction model with catastrophe. In the second, a quasi-copula problem arises in a flagged species abundance model. In the third, we study completely random species abundance models in the hypercube, namely those that are not of product type, have uniform margins and are singular. These can be understood from a singular copula supported by an inflated simplex. An exchangeable singular Dirichlet copula is also introduced, together with its induced completely random species abundance vector.
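A Marshall-Olkin copula like the one in the first toy model can be sampled by the usual common-shock construction: two exponential component shocks plus a shared catastrophe shock, pushed to uniform margins through the marginal survival functions. The rates below are illustrative, not the paper's:

```python
import math
import random

def marshall_olkin_sample(lam1, lam2, lam12, n, seed=0):
    """Sample (U, V) pairs from a Marshall-Olkin survival copula:
    component shocks at rates lam1, lam2 and a common (catastrophe)
    shock at rate lam12 kill the two components."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        e1 = rng.expovariate(lam1)
        e2 = rng.expovariate(lam2)
        e12 = rng.expovariate(lam12)
        x, y = min(e1, e12), min(e2, e12)   # lifetimes of the two components
        # marginal survival functions are exponential, so these are uniform
        u = math.exp(-(lam1 + lam12) * x)
        v = math.exp(-(lam2 + lam12) * y)
        out.append((u, v))
    return out

pairs = marshall_olkin_sample(1.0, 1.0, 2.0, n=2000)
```

The common shock makes both lifetimes end simultaneously with positive probability, so the sample puts mass on the diagonal; this is the singular component that distinguishes the Marshall-Olkin copula from absolutely continuous families.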
Recharge characteristics of an unconfined aquifer from the rainfall-water table relationship
NASA Astrophysics Data System (ADS)
Viswanathan, M. N.
1984-02-01
Recharge levels of unconfined aquifers recharged entirely by rainfall are determined by developing a model of the aquifer that estimates water-table levels from the history of rainfall observations and past water-table levels. In the present analysis, the model parameters that influence recharge were assumed not only to be time dependent but also to vary at different rates. Such a model is solved by a recursive least-squares method, with the variable-rate parameter variation incorporated using a random walk model. Field tests conducted at the Tomago Sandbeds, Newcastle, Australia, showed that assuming variable rates of time dependency for the recharge parameters produced better estimates of water-table levels than constant recharge parameters. Considerable recharge due to rainfall occurred on the same day as the rainfall, while the increase in water-table level was insignificant on subsequent days. The level of recharge depends strongly on the intensity and history of rainfall. Isolated rainfalls, even of the order of 25 mm day⁻¹, had no significant effect on water-table levels.
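The recursive least-squares scheme with random-walk parameter variation described above can be sketched as follows; the synthetic rainfall/water-table data, the drift level q, and the two-parameter state vector are illustrative assumptions, not the Tomago field model:

```python
import numpy as np

def rls_random_walk(X, y, q=1e-4):
    """Recursive least squares where the coefficients drift as a random
    walk: the covariance P is inflated by q*I before every update, so
    older observations are gradually discounted."""
    n, p = X.shape
    theta = np.zeros(p)
    P = np.eye(p) * 1e3                 # diffuse prior on the parameters
    for t in range(n):
        x = X[t]
        P += q * np.eye(p)              # random-walk drift of the parameters
        k = P @ x / (1.0 + x @ P @ x)   # gain vector
        theta += k * (y[t] - x @ theta) # correct by the prediction error
        P -= np.outer(k, x @ P)         # shrink the covariance
    return theta

# Synthetic check: water-table rise = 0.8 * rainfall + 0.1 (illustrative)
rng = np.random.default_rng(0)
rain = rng.uniform(0, 25, size=300)
X = np.column_stack([rain, np.ones_like(rain)])
y = 0.8 * rain + 0.1 + rng.normal(0, 0.01, size=300)
theta = rls_random_walk(X, y)
```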
NASA Astrophysics Data System (ADS)
Dubreuil, S.; Salaün, M.; Rodriguez, E.; Petitjean, F.
2018-01-01
This study investigates the construction and identification of the probability distribution of random modal parameters (natural frequencies and effective parameters) in structural dynamics. As these parameters present various types of dependence structures, the retained approach is based on pair copula construction (PCC). A literature review leads us to choose a D-Vine model for constructing the probability distributions of the modal parameters. Identification of this model is based on likelihood maximization, which makes it sensitive to the dimension of the distribution, namely the number of considered modes in our context. In this respect, a mode selection preprocessing step is proposed; it allows the selection of the relevant random modes for a given transfer function. The second point addressed in this study concerns the choice of the D-Vine model. Indeed, the D-Vine model is not uniquely defined. Two strategies are proposed and compared: the first is based on the context of the study, whereas the second is purely based on statistical considerations. Finally, the proposed approaches are numerically studied and compared with respect to their capabilities, first in identifying the probability distribution of random modal parameters and second in estimating the 99% quantiles of some transfer functions.
Modeling missing data in knowledge space theory.
de Chiusole, Debora; Stefanutti, Luca; Anselmi, Pasquale; Robusto, Egidio
2015-12-01
Missing data are a well known issue in statistical inference, because some responses may be missing, even when data are collected carefully. The problem that arises in these cases is how to deal with missing data. In this article, the missingness is analyzed in knowledge space theory, and in particular when the basic local independence model (BLIM) is applied to the data. Two extensions of the BLIM to missing data are proposed: The former, called ignorable missing BLIM (IMBLIM), assumes that missing data are missing completely at random; the latter, called missing BLIM (MissBLIM), introduces specific dependencies of the missing data on the knowledge states, thus assuming that the missing data are missing not at random. The IMBLIM and the MissBLIM modeled the missingness in a satisfactory way, in both a simulation study and an empirical application, depending on the process that generates the missingness: If the missing data-generating process is of type missing completely at random, then either IMBLIM or MissBLIM provide adequate fit to the data. However, if the pattern of missingness is functionally dependent upon unobservable features of the data (e.g., missing answers are more likely to be wrong), then only a correctly specified model of the missingness distribution provides an adequate fit to the data. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Modeling climate change impacts on water trading.
Luo, Bin; Maqsood, Imran; Gong, Yazhen
2010-04-01
This paper presents a new method of evaluating the impacts of climate change on the long-term performance of water trading programs, through designing an indicator to measure the mean of periodic water volume that can be released by trading through a water-use system. The indicator is computed with a stochastic optimization model which can reflect the random uncertainty of water availability. The developed method was demonstrated in the Swift Current Creek watershed of Prairie Canada under two future scenarios simulated by a Canadian Regional Climate Model, in which total water availabilities under future scenarios were estimated using a monthly water balance model. Frequency analysis was performed to obtain the best probability distributions for both observed and simulated water quantity data. Results from the case study indicate that the performance of a trading system is highly scenario-dependent in future climate, with trading effectiveness highly optimistic or undesirable under different future scenarios. Trading effectiveness also largely depends on trading costs, with high costs resulting in failure of the trading program. (c) 2010 Elsevier B.V. All rights reserved.
A stochastic hybrid systems based framework for modeling dependent failure processes
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods. PMID:28231313
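The FOSM reliability estimate and the Markov-inequality lower bound mentioned above reduce to simple formulas once the conditional moments are available. A minimal sketch with assumed moment values (not taken from the paper's case studies):

```python
import math

def fosm_reliability(mean_margin, std_margin):
    """FOSM estimate: reliability index beta = mu/sigma of the safety
    margin, then R ~ Phi(beta) under a normal approximation."""
    beta = mean_margin / std_margin
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))

def markov_lower_bound(mean_degradation, threshold):
    """Markov inequality: P(D >= L) <= E[D]/L, hence reliability
    R = P(D < L) >= 1 - E[D]/L (crude but distribution-free)."""
    return 1.0 - mean_degradation / threshold

# Illustrative moments: safety margin with mean 2.0 and std 1.0 -> beta = 2
r_fosm = fosm_reliability(2.0, 1.0)
# Illustrative degradation mean 0.4 against failure level 1.0
r_bound = markov_lower_bound(0.4, 1.0)
```

In the paper's framework the moments themselves come from the derived differential equations; here they are simply assumed numbers.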
A null model for Pearson coexpression networks.
Gobbi, Andrea; Jurman, Giuseppe
2015-01-01
Gene coexpression networks inferred by correlation from high-throughput profiling such as microarray data represent simple but effective structures for discovering and interpreting linear gene relationships. In recent years, several approaches have been proposed to tackle the problem of deciding when the resulting correlation values are statistically significant. This is most crucial when the number of samples is small, yielding a non-negligible chance that even high correlation values are due to random effects. Here we introduce a novel hard thresholding solution based on the assumption that a coexpression network inferred by randomly generated data is expected to be empty. The threshold is theoretically derived by means of an analytic approach and, as a deterministic independent null model, it depends only on the dimensions of the starting data matrix, with assumptions on the skewness of the data distribution compatible with the structure of gene expression levels data. We show, on synthetic and array datasets, that the proposed threshold is effective in eliminating all false positive links, with an offsetting cost in terms of false negative detected edges.
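The paper derives its threshold analytically; as an illustrative stand-in (not the paper's formula), the same principle — a network inferred from randomly generated data should be empty — can be checked empirically by recording the largest absolute correlation ever observed in pure-noise data of the same dimensions. Dimensions and repetition count below are assumptions:

```python
import numpy as np

def empirical_null_threshold(n_samples, n_genes, n_reps=200, seed=0):
    """Empirical stand-in for a hard threshold: the largest absolute
    Pearson correlation seen in pure-noise data of the given shape.
    A network built from random data and cut at this value is, by
    construction, empty."""
    rng = np.random.default_rng(seed)
    max_r = 0.0
    for _ in range(n_reps):
        data = rng.standard_normal((n_samples, n_genes))
        r = np.corrcoef(data, rowvar=False)   # gene-by-gene correlations
        np.fill_diagonal(r, 0.0)              # ignore self-correlation
        max_r = max(max_r, float(np.abs(r).max()))
    return max_r

t = empirical_null_threshold(n_samples=20, n_genes=50)
# With few samples, even noise produces large correlations, so t lands
# far above the usual "r > 0.5" rules of thumb.
```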
Baird, Rachel; Maxwell, Scott E
2016-06-01
Time-varying predictors in multilevel models are a useful tool for longitudinal research, whether they are the research variable of interest or they are controlling for variance to allow greater power for other variables. However, standard recommendations to fix the effect of time-varying predictors may make an assumption that is unlikely to hold in reality and may influence results. A simulation study illustrates that treating the time-varying predictor as fixed may allow analyses to converge, but the analyses have poor coverage of the true fixed effect when the time-varying predictor has a random effect in reality. A second simulation study shows that treating the time-varying predictor as random may have poor convergence, except when allowing negative variance estimates. Although negative variance estimates are uninterpretable, results of the simulation show that estimates of the fixed effect of the time-varying predictor are as accurate for these cases as for cases with positive variance estimates, and that treating the time-varying predictor as random and allowing negative variance estimates performs well whether the time-varying predictor is fixed or random in reality. Because of the difficulty of interpreting negative variance estimates, 2 procedures are suggested for selection between fixed-effect and random-effect models: comparing between fixed-effect and constrained random-effect models with a likelihood ratio test or fitting a fixed-effect model when an unconstrained random-effect model produces negative variance estimates. The performance of these 2 procedures is compared. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Royston, Patrick; Parmar, Mahesh K B
2014-08-07
Most randomized controlled trials with a time-to-event outcome are designed and analysed under the proportional hazards assumption, with a target hazard ratio for the treatment effect in mind. However, the hazards may be non-proportional. We address how to design a trial under such conditions, and how to analyse the results. We propose to extend the usual approach, a logrank test, to also include the Grambsch-Therneau test of proportional hazards. We test the resulting composite null hypothesis using a joint test for the hazard ratio and for time-dependent behaviour of the hazard ratio. We compute the power and sample size for the logrank test under proportional hazards, and from that we compute the power of the joint test. For the estimation of relevant quantities from the trial data, various models could be used; we advocate adopting a pre-specified flexible parametric survival model that supports time-dependent behaviour of the hazard ratio. We present the mathematics for calculating the power and sample size for the joint test. We illustrate the methodology in real data from two randomized trials, one in ovarian cancer and the other in treating cellulitis. We show selected estimates and their uncertainty derived from the advocated flexible parametric model. We demonstrate in a small simulation study that when a treatment effect either increases or decreases over time, the joint test can outperform the logrank test in the presence of both patterns of non-proportional hazards. Those designing and analysing trials in the era of non-proportional hazards need to acknowledge that a more complex type of treatment effect is becoming more common. Our method for the design of the trial retains the tools familiar in the standard methodology based on the logrank test, and extends it to incorporate a joint test of the null hypothesis with power against non-proportional hazards. 
For the analysis of trial data, we propose the use of a pre-specified flexible parametric model that can represent a time-dependent hazard ratio if one is present.
Frequency-dependent learning achieved using semiconducting polymer/electrolyte composite cells
NASA Astrophysics Data System (ADS)
Dong, W. S.; Zeng, F.; Lu, S. H.; Liu, A.; Li, X. J.; Pan, F.
2015-10-01
Frequency-dependent learning has been achieved using semiconducting polymer/electrolyte composite cells. The cells composed of polymer/electrolyte double layers realized the conventional spike-rate-dependent plasticity (SRDP) learning model. These cells responded to depression upon low-frequency stimulation and to potentiation upon high-frequency stimulation and presented long-term memory. The transition threshold θm from depression to potentiation varied depending on the previous stimulations. A nanostructure resembling a bio-synapse in its transport passages was demonstrated and a random channel model was proposed to describe the ionic kinetics at the polymer/electrolyte interface during and after stimulations with various frequencies, accounting for the observed SRDP. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr02891d
Clerkin, Elise M; Magee, Joshua C; Wells, Tony T; Beard, Courtney; Barnett, Nancy P
2016-12-01
Attention biases may be an important treatment target for both alcohol dependence and social anxiety. This is the first ABM trial to investigate two (vs. one) targets of attention bias within a sample with co-occurring symptoms of social anxiety and alcohol dependence. Additionally, we used trial-level bias scores (TL-BS) to capture the phenomena of attention bias in a more ecologically valid, dynamic way compared to traditional attention bias scores. Adult participants (N = 86; 41% Female; 52% African American; 40% White) with elevated social anxiety symptoms and alcohol dependence were randomly assigned to an 8-session training condition in this 2 (Social Anxiety ABM vs. Social Anxiety Control) by 2 (Alcohol ABM vs. Alcohol Control) design. Symptoms of social anxiety, alcohol dependence, and attention bias were assessed across time. Multilevel models estimated the trajectories for each measure within individuals, and tested whether these trajectories differed according to the randomized training conditions. Across time, there were significant or trending decreases in all attention TL-BS parameters (but not traditional attention bias scores) and most symptom measures. However, there were not significant differences in the trajectories of change between any ABM and control conditions for any symptom measures. These findings add to previous evidence questioning the robustness of ABM and point to the need to extend the effects of ABM to samples that are racially diverse and/or have co-occurring psychopathology. The results also illustrate the potential importance of calculating trial-level attention bias scores rather than only including traditional bias scores. Copyright © 2016 Elsevier Ltd. All rights reserved.
Saville, Benjamin R.; Herring, Amy H.; Kaufman, Jay S.
2013-01-01
Racial/ethnic disparities in birthweight are a large source of differential morbidity and mortality worldwide and have remained largely unexplained in epidemiologic models. We assess the impact of maternal ancestry and census tract residence on infant birth weights in New York City and the modifying effects of race and nativity by incorporating random effects in a multilevel linear model. Evaluating the significance of these predictors involves the test of whether the variances of the random effects are equal to zero. This is problematic because the null hypothesis lies on the boundary of the parameter space. We generalize an approach for assessing random effects in the two-level linear model to a broader class of multilevel linear models by scaling the random effects to the residual variance and introducing parameters that control the relative contribution of the random effects. After integrating over the random effects and variance components, the resulting integrals needed to calculate the Bayes factor can be efficiently approximated with Laplace’s method. PMID:24082430
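Laplace's method, used above to approximate the Bayes-factor integrals, replaces the log-integrand by a quadratic at its mode. A generic one-dimensional sketch (the Stirling approximation of n! is the classic instance; this is not the paper's multilevel integral):

```python
import math

def laplace_log_integral(log_f, x_mode, neg_curvature):
    """Laplace's method for I = ∫ exp(log_f(x)) dx in one dimension:
    log I ≈ log_f(x*) + (1/2) log(2π) - (1/2) log(-log_f''(x*)),
    where neg_curvature = -log_f''(x*) > 0 at the mode x*."""
    return (log_f(x_mode)
            + 0.5 * math.log(2.0 * math.pi)
            - 0.5 * math.log(neg_curvature))

# n! = ∫ x^n e^{-x} dx; log-integrand h(x) = n log x - x has its mode
# at x* = n, with -h''(x*) = 1/n. Laplace then gives Stirling's formula.
n = 20
log_fact = laplace_log_integral(lambda x: n * math.log(x) - x, n, 1.0 / n)
exact = math.lgamma(n + 1)   # log(20!)
# The error is about 1/(12n), so the approximation is already tight.
```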
Dwisaptarini, A P; Suebnukarn, S; Rhienmora, P; Haddawy, P; Koontongkaew, S
This work presents a multilayered caries model with a visuo-tactile virtual reality simulator, and a randomized controlled trial protocol to determine the effectiveness of the simulator in training for minimally invasive caries removal. A three-dimensional, multilayered caries model was reconstructed from 10 micro-computed tomography (micro-CT) images of deeply carious extracted human teeth before and after caries removal. On the full 0-255 grey scale, median grey-scale ranges of 0-9, 10-18, 19-25, 26-52, and 53-80 corresponded to dental pulp, infected carious dentin, affected carious dentin, normal dentin, and normal enamel, respectively. The simulator was connected to two haptic devices, for a handpiece and a mouth mirror. The visuo-tactile feedback during the operation varied depending on the grey scale. Sixth-year dental students underwent a pretraining assessment of caries removal on extracted teeth. The students were then randomly assigned to train on either the simulator (n=16) or conventional extracted teeth (n=16) for 3 days, after which the assessment was repeated. The posttraining performance of caries removal improved compared with pretraining in both groups (Wilcoxon, p<0.05). The equivalence test for proportional differences (two 1-sided t-tests) with a 0.2 margin confirmed that the participants in both groups had identical posttraining performance scores (95% CI=0.92, 1; p=0.00). In conclusion, training on the micro-CT multilayered caries model with the visuo-tactile virtual reality simulator and training on conventional extracted teeth had equivalent effects on improving performance of minimally invasive caries removal.
Syed Ali, M; Vadivel, R; Saravanakumar, R
2018-06-01
This study examines the problem of robust reliable control for Takagi-Sugeno (T-S) fuzzy Markovian jumping delayed neural networks with probabilistic actuator faults and leakage terms, using an event-triggered communication scheme. First, the randomly occurring actuator faults and their failure rates are governed by two sets of unrelated random variables; based on the probabilistic failures of every actuator, a new type of distribution-based event-triggered fault model is proposed that accounts for the effect of transmission delay. Second, a Takagi-Sugeno (T-S) fuzzy model is adopted for the neural networks, and the randomness of actuator failures is modeled in a Markov jump framework. Third, to guarantee that the considered closed-loop system is exponentially mean-square stable with a prescribed reliable control performance, a Markov jump event-triggered scheme is designed, which is the main purpose of our study. Fourth, by constructing an appropriate Lyapunov-Krasovskii functional and employing the Newton-Leibniz formulation and integral inequalities, several delay-dependent criteria for the solvability of the addressed problem are derived. The obtained stability criteria are stated in terms of linear matrix inequalities (LMIs), which can be checked numerically using the effective LMI toolbox in MATLAB. Finally, numerical examples are given to illustrate the effectiveness and reduced conservatism of the proposed results over existing ones; one example is supported by a real-life application of the benchmark problem. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Correlation effects during liquid infiltration into hydrophobic nanoporous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borman, V. D., E-mail: vdborman@mephi.ru; Belogorlov, A. A.; Byrkin, V. A.
To explain the thermal effects observed during the infiltration of a nonwetting liquid into a disordered nanoporous medium, we have constructed a model that includes correlation effects in a disordered medium. It is based on analytical methods of percolation theory. The infiltration of a porous medium is considered as the infiltration of pores in an infinite cluster of interconnected pores. Using the model of randomly situated spheres (RSS), we have been able to take into account the correlation effect of the spatial arrangement and connectivity of pores in the medium. The other correlation effect, of the mutual arrangement of filled and empty pores on the shell of an infinite percolation cluster of filled pores, determines the infiltration fluctuation probability. This probability has been calculated analytically. Allowance for these correlation effects during infiltration and defiltration makes it possible to suggest a physical mechanism for the contact angle hysteresis and to calculate the dependences of the contact angles on the degree of infiltration, porosity of the medium, and temperature. Based on the suggested model, we have managed to describe in a unified way the temperature dependences of the infiltration and defiltration pressures and the thermal effects that accompany the absorption of energy by disordered porous medium-nonwetting liquid systems with various porosities.
Random walks exhibiting anomalous diffusion: elephants, urns and the limits of normality
NASA Astrophysics Data System (ADS)
Kearney, Michael J.; Martin, Richard J.
2018-01-01
A random walk model is presented which exhibits a transition from standard to anomalous diffusion as a parameter is varied. The model is a variant on the elephant random walk and differs in respect of the treatment of the initial state, which in the present work consists of a given number N of fixed steps. This also links the elephant random walk to other types of history dependent random walk. As well as being amenable to direct analysis, the model is shown to be asymptotically equivalent to a non-linear urn process. This provides fresh insights into the limiting form of the distribution of the walker’s position at large times. Although the distribution is intrinsically non-Gaussian in the anomalous diffusion regime, it gradually reverts to normal form when N is large under quite general conditions.
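The variant described above — a given number N of fixed initial steps followed by memory-driven steps — can be simulated in a few lines. The memory parameter p, the value of N, and the horizon below are illustrative assumptions:

```python
import random

def elephant_walk(t_max, p=0.75, n_init=5, rng=None):
    """Elephant random walk: after n_init fixed +1 steps, each new step
    copies a uniformly chosen past step with probability p and reverses
    it with probability 1 - p. p > 1/2 is the superdiffusive regime."""
    rng = rng or random.Random()
    steps = [1] * n_init                  # fixed initial history
    for _ in range(t_max):
        past = rng.choice(steps)          # recall a uniformly random past step
        steps.append(past if rng.random() < p else -past)
    return sum(steps)                     # final position of the walker

rng = random.Random(1)
finals = [elephant_walk(200, p=0.75, n_init=5, rng=rng) for _ in range(500)]
mean_pos = sum(finals) / len(finals)
# The all-(+1) initial history biases the walk, so the ensemble mean
# displacement is positive; the distribution itself is non-Gaussian here.
```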
Relationship between urbanization and CO2 emissions depends on income level and policy.
Ponce de Leon Barido, Diego; Marshall, Julian D
2014-04-01
We investigate empirically how national-level CO2 emissions are affected by urbanization and environmental policy. We use statistical modeling to explore panel data on annual CO2 emissions from 80 countries for the period 1983-2005. Random- and fixed-effects models indicate that, on the global average, the urbanization-emission elasticity value is 0.95 (i.e., a 1% increase in urbanization correlates with a 0.95% increase in emissions). Several regions display a statistically significant, positive elasticity for fixed- and random-effects models: lower-income Europe, India and the Sub-Continent, Latin America, and Africa. Using two proxies for environmental policy/outcomes (ratification status for the Kyoto Protocol; the Yale Environmental Performance Index), we find that in countries with stronger environmental policy/outcomes, urbanization has a more beneficial (or, a less negative) impact on emissions. Specifically, elasticity values are -1.1 (0.21) for higher-income (lower-income) countries with strong environmental policy, versus 0.65 (1.3) for higher-income (lower-income) countries with weak environmental policies. Our finding that the urbanization-emissions elasticity may depend on the strength of a country's environmental policy, not just marginal increases in income, is in contrast to the idea of universal urban scaling laws that can ignore local context. Most global population growth in the coming decades is expected to occur in urban areas of lower-income countries, which underscores the importance of these findings.
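The elasticity quoted above is the slope of a log-log panel regression. A fixed-effects (within) estimator can be sketched on noiseless synthetic data with the elasticity set to 0.95; the countries, intercepts, and urbanization values are made up for illustration:

```python
import math
from collections import defaultdict

# Synthetic panel: 3 "countries", each with its own intercept a_i, and
# ln(CO2) = a_i + 0.95 * ln(urban share). True elasticity is 0.95.
panel = []
for i, a in enumerate([1.0, 2.5, -0.5]):
    for urban in [20.0, 30.0, 45.0, 60.0]:
        panel.append((i, math.log(urban), a + 0.95 * math.log(urban)))

groups = defaultdict(list)
for i, x, y in panel:
    groups[i].append((x, y))

# Within transformation: demean x and y inside each country, which sweeps
# out the country fixed effect a_i; then take the pooled OLS slope.
num = den = 0.0
for obs in groups.values():
    mx = sum(x for x, _ in obs) / len(obs)
    my = sum(y for _, y in obs) / len(obs)
    for x, y in obs:
        num += (x - mx) * (y - my)
        den += (x - mx) ** 2
elasticity = num / den   # recovers 0.95 on this noiseless toy panel
```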
General Framework for Effect Sizes in Cluster Randomized Experiments
ERIC Educational Resources Information Center
VanHoudnos, Nathan
2016-01-01
Cluster randomized experiments are ubiquitous in modern education research. Although a variety of modeling approaches are used to analyze these data, perhaps the most common methodology is a normal mixed effects model where some effects, such as the treatment effect, are regarded as fixed, and others, such as the effect of group random assignment…
Modeling cometary photopolarimetric characteristics with Sh-matrix method
NASA Astrophysics Data System (ADS)
Kolokolova, L.; Petrov, D.
2017-12-01
Cometary dust is dominated by particles of complex shape and structure, which are often considered as fractal aggregates. Rigorous modeling of light scattering by such particles, even using parallelized codes and NASA supercomputer resources, is very computer time and memory consuming. We are presenting a new approach to modeling cometary dust that is based on the Sh-matrix technique (e.g., Petrov et al., JQSRT, 112, 2012). This method is based on the T-matrix technique (e.g., Mishchenko et al., JQSRT, 55, 1996) and was developed after it had been found that the shape-dependent factors could be separated from the size- and refractive-index-dependent factors and presented as a shape matrix, or Sh-matrix. Size and refractive index dependences are incorporated through analytical operations on the Sh-matrix to produce the elements of the T-matrix. The Sh-matrix method keeps all the advantages of the T-matrix method, including analytical averaging over particle orientation. Moreover, the surface integrals describing the Sh-matrix elements themselves are solvable analytically for particles of any shape. This makes the Sh-matrix approach an effective technique to simulate light scattering by particles of complex shape and surface structure. In this paper, we present cometary dust as an ensemble of Gaussian random particles. The shape of these particles is described by a log-normal distribution of their radius length and direction (Muinonen, EMP, 72, 1996). By changing one of the parameters of this distribution, the correlation angle, from 0 to 90 deg., we can model a variety of particles from spheres to particles of random complex shape. We survey the angular and spectral dependencies of intensity and polarization resulting from light scattering by such particles, studying how they depend on particle shape, size, and composition (including porous particles to simulate aggregates) to find the best fit to the cometary observations.
Flow and diffusion of high-stakes test scores.
Marder, M; Bansal, D
2009-10-13
We apply visualization and modeling methods for convective and diffusive flows to public school mathematics test scores from Texas. We obtain plots that show the most likely future and past scores of students, the effects of random processes such as guessing, and the rate at which students appear in and disappear from schools. We show that student outcomes depend strongly upon economic class, and identify the grade levels where flows of different groups diverge most strongly. Changing the effectiveness of instruction in one grade naturally leads to strongly nonlinear effects on student outcomes in subsequent grades.
ERIC Educational Resources Information Center
Williams, Geoffrey C.; Niemiec, Christopher P.; Patrick, Heather; Ryan, Richard M.; Deci, Edward L.
2016-01-01
A pragmatic comparative effectiveness trial examined whether extending the duration of a cost-effective, intensive tobacco-dependence intervention designed to support autonomy will facilitate long-term tobacco abstinence. Participants were randomly assigned to one of three tobacco-dependence interventions based on self-determination theory,…
Tan, Ziwen; Qin, Guoyou; Zhou, Haibo
2016-01-01
Outcome-dependent sampling (ODS) designs have been well recognized as a cost-effective way to enhance study efficiency in both statistical literature and biomedical and epidemiologic studies. A partially linear additive model (PLAM) is widely applied in real problems because it allows for a flexible specification of the dependence of the response on some covariates in a linear fashion and other covariates in a nonlinear non-parametric fashion. Motivated by an epidemiological study investigating the effect of prenatal polychlorinated biphenyls exposure on children's intelligence quotient (IQ) at age 7 years, we propose a PLAM in this article to investigate a more flexible non-parametric inference on the relationships among the response and covariates under the ODS scheme. We propose the estimation method and establish the asymptotic properties of the proposed estimator. Simulation studies are conducted to show the improved efficiency of the proposed ODS estimator for PLAM compared with that from a traditional simple random sampling design with the same sample size. The data of the above-mentioned study is analyzed to illustrate the proposed method. PMID:27006375
Nelson, Jon P
2014-01-01
Precise estimates of price elasticities are important for alcohol tax policy. Using meta-analysis, this paper corrects average beer elasticities for heterogeneity, dependence, and publication selection bias. A sample of 191 estimates is obtained from 114 primary studies. Simple and weighted means are reported. Dependence is addressed by restricting the number of estimates per study, author-restricted samples, and author-specific variables. Publication bias is addressed using the funnel graph, trim-and-fill, and Egger's intercept model. Heterogeneity and selection bias are examined jointly in meta-regressions containing moderator variables for econometric methodology, primary data, and precision of estimates. Results for fixed- and random-effects regressions are reported. Country-specific effects and sample time periods are unimportant, but several methodology variables help explain the dispersion of estimates. In models that correct for selection bias and heterogeneity, the average beer price elasticity is about -0.20, which is about 50% less elastic than values commonly used in alcohol tax policy simulations. Copyright © 2013 Elsevier B.V. All rights reserved.
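The random-effects pooling that such meta-regressions build on can be sketched with the DerSimonian-Laird estimator. The five elasticity estimates and variances below are invented for illustration; they are not drawn from the paper's 191-estimate sample.

```python
import math

def dersimonian_laird(estimates, variances):
    """Pool study estimates with a DerSimonian-Laird random-effects model."""
    k = len(estimates)
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Hypothetical beer price elasticities from five studies (illustrative only).
est = [-0.15, -0.30, -0.22, -0.45, -0.10]
var = [0.002, 0.004, 0.003, 0.008, 0.002]
pooled, se, tau2 = dersimonian_laird(est, var)
print(round(pooled, 2), round(tau2, 4))
```

When the heterogeneity statistic `q` exceeds its degrees of freedom, `tau2` is positive and the random-effects weights pull the pooled estimate toward the less precise studies.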
De Lara, Michel
2006-05-01
In their 1990 paper "Optimal reproductive efforts and the timing of reproduction of annual plants in randomly varying environments", Amir and Cohen considered stochastic environments consisting of i.i.d. sequences in an optimal allocation discrete-time model. We suppose here that the sequence of environmental factors is more generally described by a Markov chain. Moreover, we discuss the connection between the time interval of the discrete-time dynamic model and the ability of the plant to rebuild its vegetative body completely (from reserves). We formulate a stochastic optimization problem covering the so-called linear and logarithmic fitness (corresponding to variation within and between years), which yields optimal strategies. For "linear maximizers", we analyse how optimal strategies depend upon the environmental variability type: constant, random stationary, random i.i.d., random monotonous. We provide general patterns in terms of targets and thresholds, including both determinate and indeterminate growth. We also provide a partial result on the comparison between "linear maximizers" and "log maximizers". Numerical simulations are provided, giving a hint of the effect of the different mathematical assumptions.
Cariveau, Tom; Kodak, Tiffany
2017-01-01
Low levels of academic engagement may impede students' acquisition of skills. Intervening on student behavior using group contingencies may be a feasible way to increase academic engagement during group instruction. The current study examined the effect of a randomized dependent group contingency on levels of academic engagement for second-grade participants receiving small-group reading and writing instruction. The results showed that a randomized dependent group contingency increased the academic engagement of primary participants and several of the other participants during small-group instruction. The findings also showed that high levels of academic engagement were maintained when common stimuli were present and the dependent group contingency was withdrawn. © 2016 Society for the Experimental Analysis of Behavior.
Modeling of chromosome intermingling by partially overlapping uniform random polygons.
Blackstone, T; Scharein, R; Borgo, B; Varela, R; Diao, Y; Arsuaga, J
2011-03-01
During the early phase of the cell cycle the eukaryotic genome is organized into chromosome territories. The geometry of the interface between any two chromosomes remains a matter of debate and may have important functional consequences. The Interchromosomal Network model (introduced by Branco and Pombo) proposes that territories intermingle along their periphery. In order to partially quantify this concept we here investigate the probability that two chromosomes form an unsplittable link. We use the uniform random polygon as a crude model for chromosome territories and we model the interchromosomal network as the common spatial region of two overlapping uniform random polygons. This simple model allows us to derive some rigorous mathematical results as well as to perform computer simulations easily. We find that the probability that a uniform random polygon of length n partially overlapping a fixed polygon forms an unsplittable link with it is bounded below by 1 − O(1/√n). We use numerical simulations to estimate the dependence of the linking probability of two uniform random polygons (of lengths n and m, respectively) on the amount of overlapping. The degree of overlapping is parametrized by a parameter ε such that ε = 0 indicates no overlapping and ε = 1 indicates total overlapping. We propose that this dependence relation may be modeled as f(ε, m, n) = [Formula: see text]. Numerical evidence shows that this model works well when ε is relatively large (ε ≥ 0.5). We then use these results to model the data published by Branco and Pombo and observe that for the amount of overlapping observed experimentally the URPs have a non-zero probability of forming an unsplittable link.
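A minimal sketch of the uniform random polygon model: vertices drawn i.i.d. uniformly from the unit cube and joined in cyclic order. The `overlap_fraction` helper below is a hypothetical stand-in for an overlap measure, not the paper's definition of ε, and the linking-number computation is omitted entirely.

```python
import random

def uniform_random_polygon(n, seed=None):
    """Vertices drawn i.i.d. uniformly from the unit cube, joined in cyclic order."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random(), rng.random()) for _ in range(n)]

def overlap_fraction(polygon, eps):
    """Fraction of vertices inside a corner sub-cube of side eps
    (a crude illustrative proxy, not the paper's overlap parameter)."""
    inside = sum(1 for (x, y, z) in polygon if x < eps and y < eps and z < eps)
    return inside / len(polygon)

poly = uniform_random_polygon(1000, seed=42)
print(round(overlap_fraction(poly, 0.5), 3))  # ≈ 0.125, the eps**3 volume fraction
```

Monte Carlo estimates of linking probabilities would then repeat such constructions many times and test each pair of polygons for a non-trivial link.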
A Gompertzian model with random effects to cervical cancer growth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazlan, Mazma Syahidatul Ayuni; Rosli, Norhayati
2015-05-15
In this paper, a Gompertzian model with random effects is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via maximum likelihood estimation. We apply a 4-stage stochastic Runge-Kutta scheme (SRK4) to solve the stochastic model numerically. The efficiency of the mathematical model is measured by comparing the simulated results with the clinical data of cervical cancer growth. Low values of the root mean-square error (RMSE) of the Gompertzian model with random effects indicate good fits.
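A sketch of one common Gompertzian SDE form, dX = (β − α ln X) X dt + σ X dW, integrated here with Euler-Maruyama for brevity rather than the paper's SRK4 scheme; the parameter values are invented for illustration, and the RMSE helper shows how a fit would be scored against data.

```python
import math
import random

def gompertz_em(x0, alpha, beta, sigma, dt, steps, seed=0):
    """Euler-Maruyama path of dX = (beta - alpha*ln X) X dt + sigma X dW."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x += (beta - alpha * math.log(x)) * x * dt + sigma * x * dw
        x = max(x, 1e-9)          # guard against the log blowing up
        path.append(x)
    return path

def rmse(simulated, observed):
    """Root mean-square error between a simulated path and observations."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed))
                     / len(observed))

path = gompertz_em(x0=0.5, alpha=0.3, beta=0.6, sigma=0.05, dt=0.1, steps=200)
print(round(path[-1], 2))   # settles near the deterministic plateau exp(beta/alpha) ≈ 7.39
```

With small σ the path tracks the deterministic Gompertz curve; larger σ produces the scatter that the random-effects formulation is meant to capture.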
Assessing age-dependent susceptibility to measles in Japan.
Kinoshita, Ryo; Nishiura, Hiroshi
2017-06-05
Routine vaccination against measles in Japan started in 1978. Whereas measles elimination was verified in 2015, multiple chains of measles transmission were observed in 2016. We aimed to reconstruct the age-dependent susceptibility to measles in Japan so that future vaccination strategies can be elucidated. An epidemiological model was used to quantify the age-dependent immune fraction using datasets of vaccination coverage and seroepidemiological surveys. The second dose was interpreted in two different scenarios, i.e., booster and random shots. The effective reproduction number, the average number of secondary cases generated by a single infected individual, and the age at infection were explored using the age-dependent transmission model and the next generation matrix. While the herd immunity threshold of measles likely ranges from 90% to 95%, assuming that the basic reproduction number ranges from 10 to 20, the estimated immune fraction in Japan was below those thresholds in 2016, despite the fact that the estimates were above 80% for all ages. If the second dose completely acted as a booster shot, a proportion immune above 90% was achieved only among those aged 5 years or below in 2016. Alternatively, if the second dose was randomly distributed regardless of primary vaccination status, a proportion immune over 90% was achieved among those aged below 25 years. The effective reproduction number was estimated to range from 1.50 to 3.01 and from 1.50 to 3.00, respectively, for scenarios 1 and 2 in 2016; if the current vaccination schedule were continued, the reproduction number is projected to range from 1.50 to 3.01 and 1.39 to 2.78, respectively, in 2025. Japan continues to be prone to imported cases of measles. Supplementary vaccination among adults aged 20-49 years would be effective if the chains of transmission continue to be observed in that age group. Copyright © 2017 Elsevier Ltd. All rights reserved.
Slowdowns in diversification rates from real phylogenies may not be real.
Cusimano, Natalie; Renner, Susanne S
2010-07-01
Studies of diversification patterns often find a slowing in lineage accumulation toward the present. This seemingly pervasive pattern of rate downturns has been taken as evidence for adaptive radiations, density-dependent regulation, and metacommunity species interactions. The significance of rate downturns is evaluated with statistical tests (the gamma statistic and Monte Carlo constant rates (MCCR) test; birth-death likelihood models and Akaike Information Criterion [AIC] scores) that rely on null distributions, which assume that the included species are a random sample of the entire clade. Sampling in real phylogenies, however, often is nonrandom because systematists try to include early-diverging species or representatives of previous intrataxon classifications. We studied the effects of biased sampling, structured sampling, and random sampling by experimentally pruning simulated trees (60 and 150 species) as well as a completely sampled empirical tree (58 species) and then applying the gamma statistic/MCCR test and birth-death likelihood models/AIC scores to assess rate changes. For trees with random species sampling, the true model (i.e., the one fitting the complete phylogenies) could be inferred in most cases. Oversampling deep nodes, however, strongly biases inferences toward downturns, with simulations of structured and biased sampling suggesting that this occurs when sampling percentages drop below 80%. The magnitude of the effect and the sensitivity of diversification rate models is such that a useful rule of thumb may be not to infer rate downturns from real trees unless they have >80% species sampling.
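The gamma statistic used above (Pybus & Harvey) can be computed directly from a tree's internode intervals; a minimal sketch, with a toy interval set illustrating how late-clustered time produces the negative values read as slowdowns:

```python
import math

def gamma_statistic(g):
    """Pybus & Harvey gamma statistic. g[j] is the internode interval during
    which the reconstructed tree has j+2 lineages (so g[0] covers 2 lineages
    and g[-1] covers n lineages, for a tree with n = len(g) + 1 tips)."""
    n = len(g) + 1
    weighted = [(j + 2) * gj for j, gj in enumerate(g)]   # lineage-count-weighted times
    T = sum(weighted)
    acc, partial = 0.0, 0.0
    for i in range(len(g) - 1):        # cumulative sums T_2, ..., T_{n-1}
        acc += weighted[i]
        partial += acc
    mean_partial = partial / (n - 2)
    return (mean_partial - T / 2.0) / (T * math.sqrt(1.0 / (12.0 * (n - 2))))

# A long final interval (most time spent with all 6 lineages present) means the
# nodes cluster near the root: gamma is negative, the signature of a slowdown.
print(round(gamma_statistic([1.0, 1.0, 1.0, 1.0, 5.0]), 3))  # → -2.283
```

Under a constant-rate pure-birth process gamma is asymptotically standard normal, which is what the MCCR test adjusts when species sampling is incomplete.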
Rosell-Murphy, Magdalena; Bonet-Simó, Josep M; Baena, Esther; Prieto, Gemma; Bellerino, Eva; Solé, Francesc; Rubio, Montserrat; Krier, Ilona; Torres, Pascuala; Mimoso, Sonia
2014-03-25
Despite the existence of formal professional support services, informal support (mainly family members) continues to be the main source of eldercare, especially for those who are dependent or disabled. Professionals in primary health care are the ideal choice to educate, provide psychological support, and help to mobilize the social resources available to the informal caregiver. Controversy remains concerning the efficiency of multiple interventions, taking a holistic approach to both the patient and caregiver, and optimum utilization of the available community resources. For this reason our goal is to assess whether an intervention designed to improve the social support for caregivers effectively decreases caregivers' burden and improves their quality of life. Controlled, multicentre, community intervention trial, with patients and their caregivers randomized to the intervention or control group according to their assigned Primary Health Care Team (PHCT). Primary Health Care network (9 PHCTs). Primary informal caregivers of patients receiving home health care from participating PHCTs. The required sample size is 282 caregivers (141 from PHCTs randomized to the intervention group and 141 from PHCTs randomized to the control group). a) PHCT professionals: standardized training to implement the caregiver intervention. b) Caregivers: 1 individualized counselling session, 1 family session, and 4 educational group sessions conducted by participating PHCT professionals; in addition to usual home health care visits, periodic telephone follow-up contact and unlimited telephone support.
Caregivers and dependent patients: usual home health care, consisting of bimonthly scheduled visits, follow-up as needed, and additional attention upon request. Data analysis: Dependent variables: caregiver burden (short-form Zarit test), caregivers' social support (Medical Outcomes Study), and caregivers' reported quality of life (SF-12). Independent variables: a) Caregiver: sociodemographic data, Goldberg Scale, Apgar family questionnaire, Holmes and Rahe Psychosocial Stress Scale, number of chronic diseases. b) Dependent patient: sociodemographic data, level of dependency (Barthel Index), cognitive impairment (Pfeiffer test). If the intervention intended to improve social and family support is effective in reducing the burden on primary informal caregivers of dependent patients, this model can be readily applied throughout usual PHCT clinical practice. Clinical trials registry: NCT02065427.
Model Selection with the Linear Mixed Model for Longitudinal Data
ERIC Educational Resources Information Center
Ryoo, Ji Hoon
2011-01-01
Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…
ERIC Educational Resources Information Center
Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien
2013-01-01
The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…
Random Resistor Network Model of Minimal Conductivity in Graphene
NASA Astrophysics Data System (ADS)
Cheianov, Vadim V.; Fal'Ko, Vladimir I.; Altshuler, Boris L.; Aleiner, Igor L.
2007-10-01
Transport in undoped graphene is related to percolating current patterns in the networks of n- and p-type regions reflecting the strong bipolar charge density fluctuations. Finite transparency of the p-n junctions is vital in establishing the macroscopic conductivity. We propose a random resistor network model to analyze scaling dependencies of the conductance on the doping and disorder, the quantum magnetoresistance and the corresponding dephasing rate.
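A toy random resistor network of this kind can be solved by writing Kirchhoff's current law as a lattice Laplacian system; the two-valued bond conductances, grid size, and boundary setup below are arbitrary illustrations, not the paper's parametrization of p-n junction transparency.

```python
import numpy as np

def grid_conductance(L, p_high=0.5, g_low=0.1, g_high=1.0, seed=0):
    """Effective conductance of an L x L lattice of random two-valued resistors,
    measured between the left column (held at 1 V) and the right column (0 V)."""
    rng = np.random.default_rng(seed)
    idx = lambda r, c: r * L + c
    n = L * L
    G = np.zeros((n, n))                      # Kirchhoff (Laplacian) matrix
    for r in range(L):
        for c in range(L):
            for dr, dc in ((0, 1), (1, 0)):   # bonds to the right and below
                r2, c2 = r + dr, c + dc
                if r2 < L and c2 < L:
                    g = g_high if rng.random() < p_high else g_low
                    i, j = idx(r, c), idx(r2, c2)
                    G[i, i] += g; G[j, j] += g
                    G[i, j] -= g; G[j, i] -= g
    left = [idx(r, 0) for r in range(L)]
    right = [idx(r, L - 1) for r in range(L)]
    fixed = set(left) | set(right)
    free = [i for i in range(n) if i not in fixed]
    v = np.zeros(n)
    v[left] = 1.0                             # right column stays at 0 V
    # Solve Kirchhoff's current law at the interior nodes.
    v[free] = np.linalg.solve(G[np.ix_(free, free)],
                              -G[np.ix_(free, left)] @ v[left])
    return float((G @ v)[left].sum())         # total current out of the source

print(round(grid_conductance(8), 3))
```

Sweeping `p_high` (the fraction of high-conductance bonds) would trace out the percolation-like scaling dependence of the macroscopic conductance on disorder.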
Magee, Joshua C; Lewis, Daniel F; Winhusen, Theresa
2016-05-01
Smoking is highly prevalent in substance dependence, but smoking-cessation treatment (SCT) is more challenging in this population. To increase the success of smoking cessation services, it is important to understand potential therapeutic targets like nicotine craving that have meaningful but highly variable relationships with smoking outcomes. This study characterized the presence, magnitude, and specificity of nicotine craving as a mediator of the relationship between SCT and smoking abstinence in the context of stimulant-dependence treatment. This study was a secondary analysis of a randomized, 10-week trial conducted at 12 outpatient SUD treatment programs. Adults with cocaine and/or methamphetamine dependence (N = 538) were randomized to SUD treatment as usual (TAU) or TAU+SCT. Participants reported nicotine craving, nicotine withdrawal symptoms, and substance use in the week following a uniform quit attempt of the TAU+SCT group, and self-reported smoking 7-day point prevalence abstinence (verified by carbon monoxide) at end-of-treatment. Bootstrapped regression models indicated that, as expected, nicotine craving following a quit attempt mediated the relationship between SCT and end-of-treatment smoking point prevalence abstinence (mediation effect = 0.09, 95% CI = 0.04 to 0.14, P < .05, 14% of total effect). Nicotine withdrawal symptoms and substance use were not significant mediators (Ps > .05, <1% of total effect). This pattern held for separate examinations of cocaine and methamphetamine dependence. Nicotine craving accounts for a small but meaningful portion of the relationship between smoking-cessation treatment and smoking abstinence during SUD treatment. Nicotine craving following a quit attempt may be a useful therapeutic target for increasing the effectiveness of smoking-cessation treatment in substance dependence. © The Author 2015. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. 
All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
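A percentile-bootstrap mediation analysis of this general shape can be sketched as follows. The data are simulated and both models are linear for brevity (the study's abstinence outcome is binary), so this illustrates only the bootstrap logic for an indirect effect a*b, not the paper's exact models.

```python
import random
import statistics

def fit_slope(x, y):
    """OLS slope of y on x (with intercept)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def residuals(y, x):
    """Residuals of y after regressing out x."""
    s = fit_slope(x, y)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return [b - (my + s * (a - mx)) for a, b in zip(x, y)]

def bootstrap_indirect(treat, mediator, outcome, n_boot=500, seed=1):
    """Percentile-bootstrap CI for the indirect effect a*b of a single-mediator
    model (treatment -> mediator -> outcome), all linear for simplicity."""
    rng = random.Random(seed)
    n, effects = len(treat), []
    for _ in range(n_boot):
        pick = [rng.randrange(n) for _ in range(n)]
        t = [treat[i] for i in pick]
        m = [mediator[i] for i in pick]
        y = [outcome[i] for i in pick]
        a = fit_slope(t, m)                              # treatment -> mediator
        b = fit_slope(residuals(m, t), residuals(y, t))  # mediator -> outcome | treatment
        effects.append(a * b)
    effects.sort()
    return effects[int(0.025 * n_boot)], effects[int(0.975 * n_boot) - 1]

# Simulated data (illustrative only): treatment lowers craving, craving lowers abstinence,
# so the true indirect effect is (-0.8) * (-0.5) = 0.4.
rng = random.Random(0)
treat = [rng.randint(0, 1) for _ in range(200)]
craving = [-0.8 * t + rng.gauss(0, 1) for t in treat]
abstain = [-0.5 * m + 0.3 * t + rng.gauss(0, 1) for m, t in zip(craving, treat)]
lo, hi = bootstrap_indirect(treat, craving, abstain)
print(round(lo, 2), round(hi, 2))
```

A CI that excludes zero is the usual evidence for mediation; the `residuals` step is the Frisch-Waugh trick for obtaining the partial slope of the outcome on the mediator.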
ERIC Educational Resources Information Center
McKay, James R.; Lynch, Kevin G.; Coviello, Donna; Morrison, Rebecca; Cary, Mark S.; Skalina, Lauren; Plebani, Jennifer
2010-01-01
Objective: The effects of cognitive-behavioral relapse prevention (RP), contingency management (CM), and their combination (CM + RP) were evaluated in a randomized trial with 100 cocaine-dependent patients (58% female, 89% African American) who were engaged in treatment for at least 2 weeks and had an average of 44 days of abstinence at baseline.…
Model's sparse representation based on reduced mixed GMsFE basis methods
NASA Astrophysics Data System (ADS)
Jiang, Lijian; Li, Qiuqi
2017-06-01
In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. 
In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
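The proper orthogonal decomposition step used to build such reduced bases can be sketched with an SVD of a snapshot matrix; the synthetic two-parameter snapshots below stand in for PDE solutions at sampled random parameters, and the 0.99 energy threshold is an illustrative choice.

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Proper orthogonal decomposition of a snapshot matrix (columns = solution
    samples at different random parameters). Keeps the leading left singular
    vectors capturing the requested fraction of the total energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r], s

# Synthetic snapshots: a field built from two fixed modes with random weights,
# so the snapshot matrix is (numerically) rank 2.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
snaps = np.column_stack([
    a * np.sin(np.pi * x) + b * np.cos(2 * np.pi * x)
    for a, b in rng.uniform(0.5, 1.5, size=(50, 2))
])
basis, s = pod_basis(snaps)
print(basis.shape[1])   # expected to report 2 modes for this rank-2 data
```

Online evaluations then project onto `basis`, which is what makes the reduced model cheap once the offline SVD is paid for.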
NASA Astrophysics Data System (ADS)
Cheraghalizadeh, J.; Najafi, M. N.; Dashti-Naserabadi, H.; Mohammadzadeh, H.
2017-11-01
The self-organized criticality on random fractal networks has many motivations, like the movement pattern of fluid in porous media. In addition to the randomness, introducing correlation between neighboring portions of the porous media has some nontrivial effects. In this paper, we consider Ising-like interactions between the active sites as the simplest method to bring correlations into the porous media, and we investigate the statistics of the BTW model in it. These correlations are controlled by the artificial "temperature" T and the sign of the Ising coupling. Based on our numerical results, we propose that at the Ising critical temperature T_c the model is compatible with the universality class of the two-dimensional (2D) self-avoiding walk (SAW). In particular, the fractal dimension of the loops, which are defined as the external frontier of the avalanches, is very close to D_f^SAW = 4/3. Also, the corresponding open curves have conformal invariance with root-mean-square distance R_rms ~ t^(3/4) (t being the parametrization of the curve), in accordance with the 2D SAW. In the finite-size study, we observe that at T = T_c the model has some aspects compatible with the 2D BTW model (e.g., the 1/log(L) dependence of the exponents of the distribution functions) and some in accordance with the Ising model (e.g., the 1/L dependence of the fractal dimensions). The finite-size scaling theory is tested and shown to be fulfilled for all statistical observables at T = T_c. At off-critical temperatures in the close vicinity of T_c, the exponents show some additional power-law behaviors in terms of T − T_c, with exponents that are reported in the text. The spanning cluster probability at the critical temperature also scales with L^(1/2), which is different from the regular 2D BTW model.
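The BTW sandpile dynamics underlying these avalanche statistics can be sketched as follows, on a plain square lattice with uncorrelated sites, i.e., without the Ising-correlated porous medium studied in the paper; lattice size and drop count are arbitrary.

```python
import random

def btw_avalanche(grid, L, r, c):
    """Drop one grain at (r, c) on an L x L BTW sandpile (toppling threshold 4,
    open boundaries) and relax; returns the avalanche size in topplings."""
    grid[r][c] += 1
    size = 0
    unstable = [(r, c)] if grid[r][c] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4
        size += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L:      # grains falling off are lost
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
        if grid[i][j] >= 4:
            unstable.append((i, j))
    return size

rng = random.Random(0)
L = 20
grid = [[0] * L for _ in range(L)]
sizes = [btw_avalanche(grid, L, rng.randrange(L), rng.randrange(L))
         for _ in range(5000)]
print(all(h < 4 for row in grid for h in row))  # → True: stable after every avalanche
```

After enough drops the pile self-organizes into the critical state, and the avalanche-size histogram develops the power-law tail whose exponents the paper tracks as a function of T.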
Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk
2014-01-01
We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job-splitting property. The first contribution of this paper is to introduce novel algorithms that perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that satisfy the adaptation of the results of local search into the genetic algorithm with a minimum relocation operation of the genes' random key numbers. This is the second contribution of the paper. The third contribution is three new MIP models that perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms.
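The random-key chromosome idea can be illustrated with a common decoding convention: the integer part of each key selects the machine and the fractional part orders the jobs on it. This is a generic sketch, not the paper's exact GAspLA encoding, and job splitting is omitted.

```python
import random

def decode_random_keys(keys, machines):
    """Decode a random-key chromosome into per-machine job sequences:
    int(key) picks the machine, the fractional part sorts the jobs
    assigned to that machine (smaller fraction runs earlier)."""
    schedule = {m: [] for m in range(machines)}
    for job, key in enumerate(keys):
        machine = min(int(key), machines - 1)   # guard the key == machines edge
        schedule[machine].append((key - int(key), job))
    return {m: [job for _, job in sorted(pairs)]
            for m, pairs in schedule.items()}

rng = random.Random(3)
n_jobs, n_machines = 8, 3
chromosome = [rng.uniform(0, n_machines) for _ in range(n_jobs)]
schedule = decode_random_keys(chromosome, n_machines)
print(sorted(j for jobs in schedule.values() for j in jobs))  # → [0, 1, 2, 3, 4, 5, 6, 7]
```

Because any real-valued vector decodes to a feasible schedule, standard crossover and mutation operators never produce invalid offspring, which is the main appeal of random keys that the hybrid local search then has to preserve.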
Modeling a secular trend by Monte Carlo simulation of height biased migration in a spatial network.
Groth, Detlef
2017-04-01
Background: In a recent Monte Carlo simulation, the clustering of body height of Swiss military conscripts within a spatial network with characteristic features of the natural Swiss geography was investigated. In this study I examined the effect of migration of tall individuals into network hubs on the dynamics of body height within the whole spatial network. The aim of this study was to simulate height trends. Material and methods: Three networks were used for modeling: a regular rectangular fishing-net-like network, a real-world example based on the geographic map of Switzerland, and a random network. All networks contained between 144 and 148 districts and between 265 and 307 road connections. Around 100,000 agents were initially released with an average height of 170 cm and a height standard deviation of 6.5 cm. The simulation was started with the a priori assumption that height variation within a district is limited and also depends on the height of neighboring districts (community effect on height). In addition to a neighborhood influence factor, which simulates a community effect, body-height-dependent migration of conscripts between adjacent districts in each Monte Carlo simulation was used to re-calculate next-generation body heights. In order to determine the direction of migration for taller individuals, various centrality measures for the evaluation of district importance within the spatial network were applied. Taller individuals were favored to migrate more into network hubs, defined by the importance of a district within the spatial network; backward migration of the same number of individuals was random, not biased towards body height. In the null model there were no road connections, so height information could not be exchanged between the districts.
Results: Due to the favored migration of tall individuals into network hubs, the average body height of the hubs, and later of the whole network, increased by up to 0.1 cm per iteration, depending on the network model. The general increase in height within the network depended on connectedness and on the amount of height information that was exchanged between neighboring districts. If higher amounts of neighborhood height information were exchanged, the general increase in height within the network was large (strong secular trend). The trend in the homogeneous fishnet-like network was lowest; the trend in the random network was highest. Yet, some network properties, such as the heteroscedasticity and autocorrelations of the migration simulation models, differed greatly from the natural features observed in Swiss military conscript networks. Autocorrelations of district heights, for instance, were much higher in the migration models. Conclusion: This study confirmed that secular height trends can be modeled by preferred migration of tall individuals into network hubs. However, basic network properties of the migration simulation models differed greatly from the natural features observed in Swiss military conscripts. Similar network-based data from other countries should be explored to better investigate height trends with the Monte Carlo migration approach.
Shirazi, M; Zeinaloo, A A; Parikh, S V; Sadeghi, M; Taghva, A; Arbabi, M; Kashani, A Sabouri; Alaeddini, F; Lonka, K; Wahlström, R
2008-04-01
The Prochaska model of readiness to change has been proposed for use in educational interventions to improve medical care. To evaluate the impact on readiness to change of an educational intervention on the management of depressive disorders, based on a modified version of the Prochaska model, in comparison with a standard programme of continuing medical education (CME). This is a randomized controlled trial within primary care practices in southern Tehran, Iran. The participants were 192 general physicians working in primary care (GPs), recruited after random selection and randomized to intervention (96) and control (96). The intervention consisted of interactive, learner-centred educational methods in large and small group settings, depending on the GPs' stages of readiness to change. Change in stage of readiness to change, measured by the modified version of the Prochaska questionnaire, was the main outcome measure. The final number of participants was 78 (81%) in the intervention arm and 81 (84%) in the control arm. Significantly (P < 0.01) more GPs (57/96 = 59% versus 12/96 = 12%) in the intervention group changed to higher stages of readiness to change. The intervention effect was 46 percentage points (P < 0.001) and 50 percentage points (P < 0.001) in the large and small group settings, respectively. Educational formats that suit different stages of learning can support primary care doctors to reach higher stages of behavioural change in the topic of depressive disorders. Our findings have practical implications for conducting CME programmes in Iran and are possibly also applicable in other parts of the world.
Phase diagram for the Kuramoto model with van Hemmen interactions.
Kloumann, Isabel M; Lizarraga, Ian M; Strogatz, Steven H
2014-01-01
We consider a Kuramoto model of coupled oscillators that includes quenched random interactions of the type used by van Hemmen in his model of spin glasses. The phase diagram is obtained analytically for the case of zero noise and a Lorentzian distribution of the oscillators' natural frequencies. Depending on the size of the attractive and random coupling terms, the system displays four states: complete incoherence, partial synchronization, partial antiphase synchronization, and a mix of antiphase and ordinary synchronization.
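The mean-field dynamics described above can be sketched numerically. The following is an illustrative simulation, not the paper's analysis: it assumes the simplified pairwise coupling K + J·ξ_i·ξ_j with quenched spins ξ_i = ±1 (a common reduction of the van Hemmen form), zero noise, and Lorentzian natural frequencies, and measures the standard Kuramoto order parameter r.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
omega = rng.standard_cauchy(N) * 0.05        # Lorentzian natural frequencies
xi = rng.choice([-1.0, 1.0], size=N)         # quenched random "spins" (assumed form)
K, J = 2.0, 0.5                              # attractive and random coupling strengths
theta = rng.uniform(0.0, 2.0 * np.pi, N)

dt = 0.05
for _ in range(2000):
    z = np.exp(1j * theta)
    mean_field = z.mean()                    # r * exp(i*psi), ordinary synchronization
    spin_field = (xi * z).mean()             # order parameter of the random-coupling part
    # dtheta_i/dt = omega_i + (1/N) sum_j (K + J xi_i xi_j) sin(theta_j - theta_i)
    dtheta = omega + np.imag((K * mean_field + J * xi * spin_field) * np.conj(z))
    theta = (theta + dt * dtheta) % (2.0 * np.pi)

r = np.abs(np.exp(1j * theta).mean())        # near 0: incoherence; near 1: synchronization
```

With K well above the synchronization threshold and J small, r settles near 1; sweeping (K, J), and monitoring spin_field alongside r, is one way to distinguish the four states listed in the abstract.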
Stasiewicz, Paul R.; Brandon, Thomas H.; Bradizza, Clara M.
2013-01-01
Pavlovian conditioning models have led to cue-exposure treatments for drug abuse. However, conditioned responding to drug stimuli can return (be renewed) following treatment. Animal research and a previous study of social drinkers indicated that extinction is highly context dependent but that renewal could be reduced by the inclusion of a cue from the extinction context. This study extends this research to a clinical sample. Alcohol-dependent outpatients (N = 143) completed an extinction trial to reduce craving and salivation responses to alcohol cues. They were then randomized to renewal tests in either the same context as extinction, a different context, the different context containing an extinction cue, or the different context with cue plus a manipulation to increase the salience of the cue. Contrary to predictions, the different context did not produce the expected renewal effect. Although the generalization of extinction effects beyond the cue-exposure context is a positive clinical finding, it is inconsistent with basic research findings on the context dependence of extinction. Possible explanations for this inconsistency are discussed. PMID:17563145
NASA Astrophysics Data System (ADS)
Sombun, S.; Steinheimer, J.; Herold, C.; Limphirat, A.; Yan, Y.; Bleicher, M.
2018-02-01
We study the dependence of the normalized moments of the net-proton multiplicity distributions on the definition of centrality in relativistic nuclear collisions at a beam energy of √s_NN = 7.7 GeV. Using the ultrarelativistic quantum molecular dynamics (UrQMD) model as event generator, we find that the centrality definition has a large effect on the extracted cumulant ratios. Furthermore, we find that the finite efficiency of the centrality determination introduces an additional systematic uncertainty. Finally, we quantitatively investigate the effects of event pile-up and other possible spurious effects which may change the measured proton number. We find that pile-up alone is not sufficient to describe the data and show that a random double counting of events, adding significantly to the measured proton number, affects mainly the higher-order cumulants in most central collisions.
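The cumulant ratios referred to above can be estimated from an event-by-event multiplicity sample. A minimal sketch (not the UrQMD analysis itself), using sample central moments and a Poisson toy distribution, for which all cumulants equal the mean:

```python
import numpy as np

def cumulant_ratios(n):
    """Cumulants C1..C4 of a multiplicity sample and the ratios
    commonly used in net-proton fluctuation analyses."""
    n = np.asarray(n, float)
    d = n - n.mean()
    c1 = n.mean()
    c2 = np.mean(d**2)                 # variance
    c3 = np.mean(d**3)                 # third central moment
    c4 = np.mean(d**4) - 3.0 * c2**2   # fourth cumulant
    return {"C2/C1": c2 / c1, "C3/C2": c3 / c2, "C4/C2": c4 / c2}

# Toy check: for a Poisson multiplicity all cumulants equal the mean,
# so every ratio should come out close to 1.
rng = np.random.default_rng(0)
ratios = cumulant_ratios(rng.poisson(10.0, 200000))
```

In a real analysis, the sample would be built per centrality class, which is exactly where the abstract's centrality-definition dependence enters.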
Carrieri, Patrizia Maria; Michel, Laurent; Lions, Caroline; Cohen, Julien; Vray, Muriel; Mora, Marion; Marcellin, Fabienne; Spire, Bruno; Morel, Alain; Roux, Perrine
2014-01-01
Methadone coverage is poor in many countries due in part to methadone induction being possible only in specialized care (SC). This multicenter pragmatic trial compared the effectiveness of methadone treatment between two induction models: primary care (PC) and SC. In this study, registered at ClinicalTrials.Gov (NCT00657397), opioid-dependent individuals not on methadone treatment for at least one month or receiving buprenorphine but needing to switch were randomly assigned to start methadone in PC (N = 155) or in SC (N = 66) in 10 sites in France. Visits were scheduled at months M0, M3, M6 and M12. The primary outcome was self-reported abstinence from street-opioids at 12 months (M12) (with an underlying 15% non-inferiority hypothesis for PC). Secondary outcomes were abstinence during follow-up, engagement in treatment (i.e. completing the induction period), retention and satisfaction with the explanations provided by the physician. Primary analysis used intention to treat (ITT). Mixed models and the log-rank test were used to assess the arm effect (PC vs. SC) on the course of abstinence and retention, respectively. In the ITT analysis (n = 155 in PC, 66 in SC), which compared the proportions of street-opioid abstinent participants, 85/155 (55%) and 22/66 (33%) of the participants were classified as street-opioid abstinent at M12 in PC and SC, respectively. This ITT analysis showed the non-inferiority of PC (21.5 [7.7; 35.3]). Engagement in treatment and satisfaction with the explanations provided by the physician were significantly higher in PC than SC. Retention in methadone and abstinence during follow-up were comparable in both arms (p = 0.47, p = 0.39, respectively). Under appropriate conditions, methadone induction in primary care is feasible and acceptable to both physicians and patients. It is as effective as induction in specialized care in reducing street-opioid use and ensuring engagement and retention in treatment for opioid dependence. 
Number Eudract 2008-001338-28; ClinicalTrials.gov: NCT00657397; International Standard Randomized Controlled Trial Number Register ISRCTN31125511.
Murphy, Philip N; Erwin, Philip G; Maciver, Linda; Fisk, John E; Larkin, Derek; Wareing, Michelle; Montgomery, Catharine; Hilton, Joanne; Tames, Frank J; Bradley, Belinda; Yanulevitch, Kate; Ralley, Richard
2011-10-01
This study aimed to examine the relationship between the consumption of ecstasy (3,4-methylenedioxymethamphetamine (MDMA)) and cannabis and performance on the random letter generation task, which generates dependent variables drawing upon executive inhibition and access to semantic long-term memory (LTM). Participant group was a between-participants independent variable: users of both ecstasy and cannabis (E/C group, n = 15), users of cannabis but not ecstasy (CA group, n = 13) and controls with no exposure to these drugs (CO group, n = 12). Dependent variables measured violations of randomness: number of repeat sequences, number of alphabetical sequences (both drawing upon inhibition) and redundancy (drawing upon access to semantic LTM). E/C participants showed significantly higher redundancy than CO participants but did not differ from CA participants. There were no significant effects for the other dependent variables. A regression model comprising intelligence measures and estimates of ecstasy and cannabis consumption predicted redundancy scores, but only cannabis consumption contributed significantly to this prediction. Impaired access to semantic LTM may be related to cannabis consumption, although the involvement of ecstasy and other stimulant drugs cannot be excluded here. Executive inhibitory functioning, as measured by the random letter generation task, is unrelated to ecstasy and cannabis consumption. Copyright © 2011 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Cheung, Mike W.-L.; Cheung, Shu Fai
2016-01-01
Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…
Modeling pattern in collections of parameters
Link, W.A.
1999-01-01
Wildlife management is increasingly guided by analyses of large and complex datasets. The description of such datasets often requires a large number of parameters, among which certain patterns might be discernible. For example, one may consider a long-term study producing estimates of annual survival rates; of interest is the question whether these rates have declined through time. Several statistical methods exist for examining pattern in collections of parameters. Here, I argue for the superiority of 'random effects models' in which parameters are regarded as random variables, with distributions governed by 'hyperparameters' describing the patterns of interest. Unfortunately, implementation of random effects models is sometimes difficult. Ultrastructural models, in which the postulated pattern is built into the parameter structure of the original data analysis, are approximations to random effects models. However, this approximation is not completely satisfactory: failure to account for natural variation among parameters can lead to overstatement of the evidence for pattern among parameters. I describe quasi-likelihood methods that can be used to improve the approximation of random effects models by ultrastructural models.
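For the normal-normal setting Link describes, a standard closed-form way to estimate the among-parameter variance (the "hyperparameter") and the corresponding overall mean is the DerSimonian-Laird moment estimator. The sketch below uses hypothetical annual survival estimates for illustration; it is one common estimator, not the quasi-likelihood method the paper develops.

```python
import numpy as np

def dersimonian_laird(y, v):
    """Moment estimator of the among-parameter variance tau^2 and the
    random-effects pooled mean, for estimates y with known
    within-estimate variances v (normal-normal model)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                  # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe) ** 2)             # Cochran's Q statistic
    k = len(y)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)           # truncate at zero
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    return mu_re, tau2

# hypothetical annual survival rate estimates with sampling variances
y = [0.52, 0.48, 0.60, 0.41, 0.55]
v = [0.002, 0.003, 0.002, 0.004, 0.003]
mu, tau2 = dersimonian_laird(y, v)
```

A positive tau2 quantifies exactly the "natural variation among parameters" that, per the abstract, ultrastructural models can fail to account for.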
NASA Astrophysics Data System (ADS)
Chen, C. W.; Chung, H. Y.; Chiang, H.-P.; Lu, J. Y.; Chang, R.; Tsai, D. P.; Leung, P. T.
2010-10-01
The optical properties of composites with metallic nanoparticles are studied, taking into account the effects due to the nonlocal dielectric response of the metal and the coalescing of the particles to form clusters. An approach based on various effective medium theories is followed, and the modeling results are compared with those from the cases with local response and particles randomly distributed through the host medium. Possible observations of our modeling results are illustrated via a calculation of the transmission of light through a thin film made of these materials. It is found that the nonlocal effects are particularly significant when the particles coalesce, leading to blue-shifted resonances and slightly lower values in the dielectric functions. The dependence of these effects on the volume fraction and fractal dimension of the metal clusters is studied in detail.
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.; Wood, Scott J.; Jain, Varsha
2008-01-01
Astronauts show degraded balance control immediately after spaceflight. To assess this change, astronauts' ability to maintain a fixed stance under several challenging stimuli on a movable platform is quantified by "equilibrium" scores (EQs) on a scale of 0 to 100, where 100 represents perfect control (sway angle of 0) and 0 represents data loss where no sway angle is observed because the subject has to be restrained from falling. By comparing post- to pre-flight EQs for actual astronauts vs. controls, we built a classifier for deciding when an astronaut has recovered. Future diagnostic performance depends both on the sampling distribution of the classifier as well as the distribution of its input data. Taking this into consideration, we constructed a predictive ROC by simulation after modeling P(EQ = 0) in terms of a latent EQ-like beta-distributed random variable with random effects.
Assessment of wear dependence parameters in complex model of cutting tool wear
NASA Astrophysics Data System (ADS)
Antsev, A. V.; Pasko, N. I.; Antseva, N. V.
2018-03-01
This paper addresses the wear dependence of the generic efficient life period of cutting tools, treated as an aggregate of the law of tool wear rate distribution and the dependence of this law's parameters on the cutting mode, factoring in the random factor, as exemplified by the complex model of wear. The complex model of wear takes into account the variance of cutting properties within one batch of tools, the variance in machinability within one batch of workpieces, and the stochastic nature of the wear process itself. A technique for assessing wear dependence parameters in a complex model of cutting tool wear is provided. The technique is supported by a numerical example.
Qin, Lan
2016-01-01
Objective: This meta-analysis was performed to compare radioiodine therapy with antithyroid drugs in terms of clinical outcomes, including development or worsening of ophthalmopathy, hyperthyroid cure rate, hypothyroidism, relapse rate and adverse events. Methods: Randomized controlled trials (RCTs) published in PubMed, Embase, Web of Science, SinoMed and National Knowledge Infrastructure, China, were systematically reviewed to compare the effects of radioiodine therapy and antithyroid drugs in patients with Graves' disease. Results were expressed as risk ratios with 95% confidence intervals (CIs) and weighted mean differences with 95% CIs. Pooled estimates were obtained using a fixed-effects model or random-effects model, depending on the heterogeneity among studies. Results: 17 RCTs involving 4024 patients met the inclusion criteria and were included. Results showed that radioiodine treatment carries an increased risk of new ophthalmopathy, development or worsening of ophthalmopathy, and hypothyroidism. However, compared with antithyroid drugs, radioiodine treatment appears to have a higher hyperthyroid cure rate, lower recurrence rate and lower incidence of adverse events. Conclusion: Radioiodine therapy is associated with a higher hyperthyroid cure rate and lower relapse rate compared with antithyroid drugs. However, it also increases the risk of ophthalmopathy and hypothyroidism. Advances in knowledge: Considering that antithyroid drug treatment can be associated with unsatisfactory control of hyperthyroidism, we would recommend radioiodine therapy as the treatment of choice for patients with Graves' disease. PMID:27266544
Conditions for the Emergence of Shared Norms in Populations with Incompatible Preferences
Helbing, Dirk; Yu, Wenjian; Opp, Karl-Dieter; Rauhut, Heiko
2014-01-01
Understanding norms is a key challenge in sociology. Nevertheless, there is a lack of dynamical models explaining how one of several possible behaviors is established as a norm and under what conditions. Analysing an agent-based model, we identify interesting parameter dependencies that imply when two behaviors will coexist or when a shared norm will emerge in a heterogeneous society, where different populations have incompatible preferences. Our model highlights the importance of randomness, spatial interactions, non-linear dynamics, and self-organization. It can also explain the emergence of unpopular norms that do not maximize the collective benefit. Furthermore, we compare behavior-based with preference-based punishment and find interesting results concerning hypocritical punishment. Strikingly, pressuring others to perform the same public behavior as oneself is more effective in promoting norms than pressuring others to meet one’s own private preference. Finally, we show that adaptive group pressure exerted by randomly occurring local majorities may create norms under conditions where different behaviors would normally coexist. PMID:25166137
Magari, Robert T
2002-03-01
The effect of different lot-to-lot variability levels on the prediction of stability is studied based on two statistical models for estimating degradation in real-time and accelerated stability tests. Lot-to-lot variability is considered random in both models and is attributed to two sources: variability at time zero and variability of the degradation rate. Real-time stability tests are modeled as a function of time, while accelerated stability tests are modeled as a function of time and temperature. Several data sets were simulated, and a maximum likelihood approach was used for estimation. The 95% confidence intervals for the degradation rate depend on the amount of lot-to-lot variability. When lot-to-lot degradation rate variability is relatively large (CV ≥ 8%), the estimated confidence intervals do not represent the trend for individual lots. In such cases it is recommended to analyze each lot individually. Copyright 2002 Wiley-Liss, Inc. and the American Pharmaceutical Association J Pharm Sci 91: 893-899, 2002
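The two variance sources described above can be illustrated with a small simulation, a sketch under assumed parameter values rather than the paper's fitted models: each lot gets a random intercept and a random degradation rate, and the spread of per-lot fitted slopes grows with the rate CV, which is why a pooled confidence interval stops representing individual lots when the CV is large.

```python
import numpy as np

rng = np.random.default_rng(0)

def lot_slope_spread(cv_rate, n_lots=8, times=np.arange(0.0, 25.0, 3.0)):
    """Simulate potency under a random-intercept, random-slope
    degradation model, y_ij = (a + a_i) + (b + b_i) t_j + e_ij,
    and return the spread (std) of per-lot fitted degradation rates.
    All parameter values here are hypothetical."""
    a, b = 100.0, -0.5                               # nominal level and rate
    a_i = rng.normal(0.0, 0.5, n_lots)               # lot effect at time zero
    b_i = rng.normal(0.0, abs(b) * cv_rate, n_lots)  # lot effect on the rate
    slopes = []
    for i in range(n_lots):
        y = (a + a_i[i]) + (b + b_i[i]) * times + rng.normal(0.0, 0.3, times.size)
        slopes.append(np.polyfit(times, y, 1)[0])    # per-lot fitted rate
    return float(np.std(slopes))

spread_low = lot_slope_spread(0.02)   # CV 2%: lots degrade almost alike
spread_high = lot_slope_spread(0.20)  # CV 20%: per-lot analysis warranted
```

The contrast between spread_low and spread_high mirrors the abstract's CV ≥ 8% recommendation: at high rate variability, lot-specific slopes scatter far beyond the pooled estimate.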
Benford's law and continuous dependent random variables
NASA Astrophysics Data System (ADS)
Becker, Thealexa; Burt, David; Corcoran, Taylor C.; Greaves-Tunnell, Alec; Iafrate, Joseph R.; Jing, Joy; Miller, Steven J.; Porfilio, Jaclyn D.; Ronan, Ryan; Samranvedhya, Jirapat; Strauch, Frederick W.; Talbut, Blaine
2018-01-01
Many mathematical, man-made and natural systems exhibit a leading-digit bias, where a first digit (base 10) of 1 occurs not 11% of the time, as one would expect if all digits were equally likely, but rather 30%. This phenomenon is known as Benford's Law. Analyzing which datasets adhere to Benford's Law and how quickly Benford behavior sets in are the two most important problems in the field. Most previous work studied systems of independent random variables, and relied on the independence in their analyses. Inspired by natural processes such as particle decay, we study the dependent random variables that emerge from models of decomposition of conserved quantities. We prove that in many instances the distribution of lengths of the resulting pieces converges to Benford behavior as the number of divisions grows, and give several conjectures for other fragmentation processes. The main difficulty is that the resulting random variables are dependent. We handle this by using tools from Fourier analysis and irrationality exponents to obtain quantified convergence rates as well as introducing and developing techniques to measure and control the dependencies. The construction of these tools is one of the major motivations of this work, as our approach can be applied to many other dependent systems. As an example, we show that the n! entries in the determinant expansions of n × n matrices with entries independently drawn from nice random variables converge to Benford's Law.
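The decomposition mechanism can be illustrated with a toy fragmentation of a conserved length, a hypothetical stand-in for the processes studied in the paper: repeatedly splitting every piece at an independent uniform proportion drives the first-digit distribution of the piece lengths toward Benford's Law, even though the lengths are dependent.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy decomposition of a conserved quantity: split every piece at an
# independent uniform proportion, 14 times (2**14 = 16384 pieces).
pieces = np.array([1.0])
for _ in range(14):
    p = rng.uniform(size=pieces.size)
    pieces = np.concatenate([pieces * p, pieces * (1.0 - p)])

# First significant digit of each piece length.
digits = (pieces / 10.0 ** np.floor(np.log10(pieces))).astype(int)
freq = np.bincount(digits, minlength=10)[1:10] / pieces.size

# Benford's Law: P(d) = log10(1 + 1/d) for d = 1..9.
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
max_dev = np.abs(freq - benford).max()   # shrinks as divisions accumulate
```

Intuitively, each piece's log-length is a sum of many independent log-uniform factors, so its mantissa equidistributes; the dependence between pieces (shared ancestry) is the part the paper's Fourier-analytic tools control rigorously.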
Estimation of the Nonlinear Random Coefficient Model when Some Random Effects Are Separable
ERIC Educational Resources Information Center
du Toit, Stephen H. C.; Cudeck, Robert
2009-01-01
A method is presented for marginal maximum likelihood estimation of the nonlinear random coefficient model when the response function has some linear parameters. This is done by writing the marginal distribution of the repeated measures as a conditional distribution of the response given the nonlinear random effects. The resulting distribution…
Mak, Chi H; Pham, Phuong; Afif, Samir A; Goodman, Myron F
2015-09-01
Enzymes that rely on random walk to search for substrate targets in a heterogeneously dispersed medium can leave behind complex spatial profiles of their catalyzed conversions. The catalytic signatures of these random-walk enzymes are the result of two coupled stochastic processes: scanning and catalysis. Here we develop analytical models to understand the conversion profiles produced by these enzymes, comparing an intrusive model, in which scanning and catalysis are tightly coupled, against a loosely coupled passive model. Diagrammatic theory and path-integral solutions of these models revealed clearly distinct predictions. Comparison to experimental data from catalyzed deaminations deposited on single-stranded DNA by the enzyme activation-induced deoxycytidine deaminase (AID) demonstrates that catalysis and diffusion are strongly intertwined, where the chemical conversions give rise to new stochastic trajectories that were absent if the substrate DNA was homogeneous. The C→U deamination profiles in both analytical predictions and experiments exhibit a strong contextual dependence, where the conversion rate of each target site is strongly contingent on the identities of other surrounding targets, with the intrusive model showing an excellent fit to the data. These methods can be applied to deduce sequence-dependent catalytic signatures of other DNA modification enzymes, with potential applications to cancer, gene regulation, and epigenetics.
Computational Study of the Blood Flow in Three Types of 3D Hollow Fiber Membrane Bundles
Zhang, Jiafeng; Chen, Xiaobing; Ding, Jun; Fraser, Katharine H.; Ertan Taskin, M.; Griffith, Bartley P.; Wu, Zhongjun J.
2013-01-01
The goal of this study is to develop a computational fluid dynamics (CFD) modeling approach to better estimate the blood flow dynamics in the bundles of the hollow fiber membrane based medical devices (i.e., blood oxygenators, artificial lungs, and hemodialyzers). Three representative types of arrays, square, diagonal, and random with the porosity value of 0.55, were studied. In addition, a 3D array with the same porosity was studied. The flow fields between the individual fibers in these arrays at selected Reynolds numbers (Re) were simulated with CFD modeling. Hemolysis is not significant in the fiber bundles but the platelet activation may be essential. For each type of array, the average wall shear stress is linearly proportional to the Re. For the same Re but different arrays, the average wall shear stress also exhibits a linear dependency on the pressure difference across arrays, while Darcy's law prescribes a power-law relationship, therefore underestimating the shear stress level. For the same Re, the average wall shear stress of the diagonal array is approximately 3.1, 1.8, and 2.0 times larger than that of the square, random, and 3D arrays, respectively. A coefficient C is suggested to correlate the CFD predicted data with the analytical solution, and C is 1.16, 1.51, and 2.05 for the square, random, and diagonal arrays in this paper, respectively. It is worth noting that C is strongly dependent on the array geometrical properties, whereas it is weakly dependent on the flow field. Additionally, the 3D fiber bundle simulation results show that the three-dimensional effect is not negligible. Specifically, velocity and shear stress distribution can vary significantly along the fiber axial direction. PMID:24141394
Universal statistics of vortex tangles in three-dimensional random waves
NASA Astrophysics Data System (ADS)
Taylor, Alexander J.
2018-02-01
The tangled nodal lines (wave vortices) in random, three-dimensional wavefields are studied as an exemplar of a fractal loop soup. Their statistics are a three-dimensional counterpart to the characteristic random behaviour of nodal domains in quantum chaos, but in three dimensions the filaments can wind around one another to give distinctly different large scale behaviours. By tracing numerically the structure of the vortices, their conformations are shown to follow recent analytical predictions for random vortex tangles with periodic boundaries, where the local disorder of the model ‘averages out’ to produce large scale power law scaling relations whose universality classes do not depend on the local physics. These results explain previous numerical measurements in terms of an explicit effect of the periodic boundaries, where the statistics of the vortices are strongly affected by the large scale connectedness of the system even at arbitrarily high energies. The statistics are investigated primarily for static (monochromatic) wavefields, but the analytical results are further shown to directly describe the reconnection statistics of vortices evolving in certain dynamic systems, or occurring during random perturbations of the static configuration.
How preview space/time translates into preview cost/benefit for fixation durations during reading.
Kliegl, Reinhold; Hohenstein, Sven; Yan, Ming; McDonald, Scott A
2013-01-01
Eye-movement control during reading depends on foveal and parafoveal information. If the parafoveal preview of the next word is suppressed, reading is less efficient. A linear mixed model (LMM) reanalysis of McDonald (2006) confirmed his observation that preview benefit may be limited to parafoveal words that have been selected as the saccade target. Going beyond the original analyses, in the same LMM, we examined how the preview effect (i.e., the difference in single-fixation duration, SFD, between random-letter and identical preview) depends on the gaze duration on the pretarget word and on the amplitude of the saccade moving the eye onto the target word. There were two key results: (a) The shorter the saccade amplitude (i.e., the larger preview space), the shorter a subsequent SFD with an identical preview; this association was not observed with a random-letter preview. (b) However, the longer the gaze duration on the pretarget word, the longer the subsequent SFD on the target, with the difference between random-letter string and identical previews increasing with preview time. A third pattern, increasing cost of a random-letter string in the parafovea associated with shorter saccade amplitudes, was observed for target gaze durations. Thus, LMMs revealed that preview effects, which are typically summarized under "preview benefit", are a complex mixture of preview cost and preview benefit and vary with preview space and preview time. The consequence for reading is that parafoveal preview may not only facilitate, but also interfere with lexical access.
Computer-Assisted Dieting: Effects of a Randomized Nutrition Intervention
ERIC Educational Resources Information Center
Schroder, Kerstin E. E.
2011-01-01
Objectives: To compare the effects of a computer-assisted dieting intervention (CAD) with and without self-management training on dieting among 55 overweight and obese adults. Methods: Random assignment to a single-session nutrition intervention (CAD-only) or a combined CAD plus self-management group intervention (CADG). Dependent variables were…
Sequential change detection and monitoring of temporal trends in random-effects meta-analysis.
Dogo, Samson Henry; Clark, Allan; Kulinskaya, Elena
2017-06-01
Temporal changes in magnitude of effect sizes reported in many areas of research are a threat to the credibility of the results and conclusions of meta-analysis. Numerous sequential methods for meta-analysis have been proposed to detect changes and monitor trends in effect sizes so that meta-analysis can be updated when necessary and interpreted based on the time it was conducted. The difficulties of sequential meta-analysis under the random-effects model are caused by dependencies in increments introduced by the estimation of the heterogeneity parameter τ². In this paper, we propose the use of a retrospective cumulative sum (CUSUM)-type test with bootstrap critical values. This method allows retrospective analysis of the past trajectory of cumulative effects in random-effects meta-analysis and its visualization on a chart similar to a CUSUM chart. Simulation results show that the new method demonstrates good control of Type I error regardless of the number or size of the studies and the amount of heterogeneity. Application of the new method is illustrated on two examples of medical meta-analyses. © 2016 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
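A retrospective CUSUM trajectory of study effects can be sketched as follows. This is an illustration with made-up effect sizes, not the authors' procedure: it standardizes deviations from the overall mean, whereas the paper's test additionally handles the dependence induced by estimating τ² and calibrates critical values by bootstrap.

```python
import numpy as np

def cusum_effects(y):
    """Retrospective CUSUM of study effect sizes: cumulative sum of
    deviations from the overall mean, scaled by the sample standard
    deviation. A sustained drift away from zero suggests a temporal
    shift in effect sizes. Illustrative only."""
    y = np.asarray(y, float)
    resid = y - y.mean()
    return np.cumsum(resid) / (y.std(ddof=1) * np.sqrt(len(y)))

# hypothetical effect sizes, in publication order
stable = cusum_effects([0.10, 0.20, 0.15, 0.12, 0.18, 0.14])   # no trend
shifted = cusum_effects([0.10, 0.12, 0.11, 0.50, 0.55, 0.52])  # mid-series jump
```

Plotting these trajectories gives the chart "similar to a CUSUM chart" the abstract describes: the stable series hovers near zero, while the shifted series dips and rebounds around the change point.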
Multi-Cellular Logistics of Collective Cell Migration
Yamao, Masataka; Naoki, Honda; Ishii, Shin
2011-01-01
During development, the formation of biological networks (such as organs and neuronal networks) is controlled by multicellular transportation phenomena based on cell migration. In multi-cellular systems, cellular locomotion is restricted by physical interactions with other cells in a crowded space, similar to passengers pushing others out of their way on a packed train. The motion of individual cells is intrinsically stochastic and may be viewed as a type of random walk. However, this walk takes place in a noisy environment because the cell interacts with its randomly moving neighbors. Despite this randomness and complexity, development is highly orchestrated and precisely regulated, following genetic (and even epigenetic) blueprints. Although individual cell migration has long been studied, the manner in which stochasticity affects multi-cellular transportation within the precisely controlled process of development remains largely unknown. To explore the general principles underlying multicellular migration, we focus on the migration of neural crest cells, which migrate collectively and form streams. We introduce a mechanical model of multi-cellular migration. Simulations based on the model show that the migration mode depends on the relative strengths of the noise from migratory and non-migratory cells. Strong noise from migratory cells and weak noise from surrounding cells causes “collective migration,” whereas strong noise from non-migratory cells causes “dispersive migration.” Moreover, our theoretical analyses reveal that migratory cells attract each other over long distances, even without direct mechanical contacts. This effective interaction depends on the stochasticity of the migratory and non-migratory cells. On the basis of these findings, we propose that stochastic behavior at the single-cell level works effectively and precisely to achieve collective migration in multi-cellular systems. PMID:22205934
NASA Astrophysics Data System (ADS)
Sathyan, Dhanya; Anand, K. B.; Jose, Chinnu; Aravind, N. R.
2018-02-01
Superplasticizers (SPs) are added to concrete to improve its workability without changing the water-cement ratio. The properties of fresh concrete are mainly governed by the cement paste, which in turn depend on the dispersion of the cement particles; the dispersive ability of an SP depends on its dosage and family. The mini-slump spread diameter at different SP dosages and families is taken as the workability characteristic of the cement paste and used as the measure of its rheological properties. The main purposes of this study are to measure the dispersive ability of different families of SP by conducting mini-slump tests, and to model the mini-slump spread diameter of superplasticized Portland Pozzolana Cement (PPC) paste using a regularized least squares (RLS) approach combined with the random kitchen sink (RKS) algorithm. To prepare test and training data for the model, 287 different mixes were prepared in the laboratory at a water-cement ratio of 0.37, using four locally available brands of PPC and SPs belonging to four different families. Water content, cement weight, and amount of SP (treated as seven separate inputs according to family and brand) were the input parameters, and the mini-slump spread diameter was the output parameter. The predicted and measured spread diameters were compared and validated. The study shows that the model can effectively predict the mini-slump spread of cement paste.
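The RKS-plus-RLS pipeline described above can be sketched as random Fourier features followed by a ridge solve. The synthetic data, feature count, bandwidth, and penalty below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the paste data: 9 inputs (water, cement, 7 SP
# dosages by family/brand), one output (mini-slump spread); 287 "mixes".
n, d = 287, 9
X = rng.uniform(0.0, 1.0, size=(n, d))
true_w = rng.normal(size=d)
y = np.sin(X @ true_w) + 0.05 * rng.normal(size=n)

# Random kitchen sinks: project inputs through random Fourier features
# approximating an RBF kernel, then solve a regularized least squares.
D = 300                                   # number of random features (assumed)
gamma = 1.0                               # kernel bandwidth (assumed)
W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
b = rng.uniform(0, 2 * np.pi, size=D)

def features(X):
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

Z = features(X)
lam = 1e-3                                # ridge penalty (assumed)
alpha = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)

pred = features(X) @ alpha
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

Because the random features approximate a kernel method, the model remains a linear solve regardless of how nonlinear the dosage-spread relationship is.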
NASA Astrophysics Data System (ADS)
Begnaud, M. L.; Anderson, D. N.; Phillips, W. S.; Myers, S. C.; Ballard, S.
2016-12-01
The Regional Seismic Travel Time (RSTT) tomography model has been developed to improve travel time predictions for regional phases (Pn, Sn, Pg, Lg) in order to increase seismic location accuracy, especially for explosion monitoring. The RSTT model is specifically designed to exploit regional phases for location, especially when combined with teleseismic arrivals. The latest RSTT model (version 201404um) has been released (http://www.sandia.gov/rstt). Travel time uncertainty estimates for RSTT are determined using one-dimensional (1D), distance-dependent error models, which have the benefit of being very fast to use in standard location algorithms but account neither for path-dependent variations in error nor for structural inadequacy of the RSTT model (e.g., model error). Although global in extent, the RSTT tomography model is only defined in areas where data exist, and a simple 1D error model does not accurately describe areas where RSTT has not been calibrated. We are developing and validating a new error model for RSTT phase arrivals by mathematically deriving this multivariate model directly from a unified model of RSTT embedded into a statistical random effects model that captures distance, path, and model error effects. An initial method we developed is a two-dimensional, path-distributed method using residuals. The goals for any RSTT uncertainty method are that it be readily usable by the standard RSTT user and that it improve travel time uncertainty estimates for location. We have successfully tested the new error model for Pn phases and will demonstrate the method and the validation of the error model for Sn, Pg, and Lg phases.
Robustness of spatial micronetworks
NASA Astrophysics Data System (ADS)
McAndrew, Thomas C.; Danforth, Christopher M.; Bagrow, James P.
2015-04-01
Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.
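The percolation rule described above, in which each link fails with probability proportional to its spatial length, can be sketched on a small random geometric graph. The network construction, failure intensity, and scaling are illustrative assumptions, not the paper's exact model:

```python
from collections import Counter
import numpy as np

rng = np.random.default_rng(1)

# Small spatial network: N random points in the unit square, linked
# whenever closer than radius r (a random geometric graph stand-in).
N, r = 60, 0.25
pts = rng.uniform(size=(N, 2))
edges = [(i, j, float(np.hypot(*(pts[i] - pts[j]))))
         for i in range(N) for j in range(i + 1, N)
         if np.hypot(*(pts[i] - pts[j])) < r]

def largest_component(n, edge_list):
    # union-find with path halving over the surviving edges
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j, _ in edge_list:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
    return max(Counter(find(k) for k in range(n)).values())

# Length-dependent failure: each link fails with probability proportional
# to its length (scaled by the longest link so probabilities stay in [0,1]).
lmax = max(l for _, _, l in edges)
f = 0.5                                   # overall failure intensity (assumed)
survive_spatial = [e for e in edges if rng.random() > f * e[2] / lmax]

# Uniform failure at the same expected rate, for comparison.
p_fail = f * np.mean([l for _, _, l in edges]) / lmax
survive_uniform = [e for e in edges if rng.random() > p_fail]

giant_spatial = largest_component(N, survive_spatial)
giant_uniform = largest_component(N, survive_uniform)
```

Averaging `giant_spatial` and `giant_uniform` over many realizations is the kind of comparison that exposes the extra fragility of spatial failures, since long links are often the ones bridging otherwise separate regions.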
Theory of Random Copolymer Fractionation in Columns
NASA Astrophysics Data System (ADS)
Enders, Sabine
Random copolymers show polydispersity both with respect to molecular weight and with respect to chemical composition, and their physical and chemical properties depend on both polydispersities. For special applications, the two-dimensional distribution function must be adjusted to the application purpose. The adjustment can be achieved by polymer fractionation. From the thermodynamic point of view, the distribution function can be adjusted by the successive establishment of liquid-liquid equilibria (LLE) for suitable solutions of the polymer to be fractionated. The fractionation column is divided into theoretical stages. Assuming an LLE on each theoretical stage, the polymer fractionation can be modeled using phase equilibrium thermodynamics. As examples, simulations of stepwise fractionation in one direction, cross-fractionation in two directions, and two different column fractionations (Baker-Williams fractionation and continuous polymer fractionation) have been investigated. The simulation delivers the distribution according to the molecular weight and chemical composition in every obtained fraction, depending on the operative properties, and is able to optimize the fractionation effectively.
Wave Propagation in Non-Stationary Statistical Mantle Models at the Global Scale
NASA Astrophysics Data System (ADS)
Meschede, M.; Romanowicz, B. A.
2014-12-01
We study the effect of statistically distributed heterogeneities that are smaller than the resolution of current tomographic models on seismic waves that propagate through the Earth's mantle at teleseismic distances. Current global tomographic models are missing small-scale structure as evidenced by the failure of even accurate numerical synthetics to explain enhanced coda in observed body and surface waveforms. One way to characterize small scale heterogeneity is to construct random models and confront observed coda waveforms with predictions from these models. Statistical studies of the coda typically rely on models with simplified isotropic and stationary correlation functions in Cartesian geometries. We show how to construct more complex random models for the mantle that can account for arbitrary non-stationary and anisotropic correlation functions as well as for complex geometries. Although this method is computationally heavy, model characteristics such as translational, cylindrical or spherical symmetries can be used to greatly reduce the complexity such that this method becomes practical. With this approach, we can create 3D models of the full spherical Earth that can be radially anisotropic, i.e. with different horizontal and radial correlation functions, and radially non-stationary, i.e. with radially varying model power and correlation functions. Both of these features are crucial for a statistical description of the mantle in which structure depends to first order on the spherical geometry of the Earth. We combine different random model realizations of S velocity with current global tomographic models that are robust at long wavelengths (e.g. Meschede and Romanowicz, 2014, GJI submitted), and compute the effects of these hybrid models on the wavefield with a spectral element code (SPECFEM3D_GLOBE). We finally analyze the resulting coda waves for our model selection and compare our computations with observations. 
Based on these observations, we make predictions about the strength of unresolved small-scale structure and extrinsic attenuation.
Santana, Mário L; Bignardi, Annaiza Braga; Pereira, Rodrigo Junqueira; Menéndez-Buxadera, Alberto; El Faro, Lenira
2016-02-01
The present study had the following objectives: to compare random regression models (RRM) considering the time-dependent (days in milk, DIM) and/or temperature × humidity-dependent (THI) covariate for genetic evaluation; to identify the effect of genotype by environment interaction (G×E) due to heat stress on milk yield; and to quantify the loss of milk yield due to heat stress across lactation of cows under tropical conditions. A total of 937,771 test-day records from 3603 first lactations of Brazilian Holstein cows obtained between 2007 and 2013 were analyzed. An important reduction in milk yield due to heat stress was observed for THI values above 66 (-0.23 kg/day/THI). Three phases of milk yield loss were identified during lactation, the most damaging one at the end of lactation (-0.27 kg/day/THI). Using the most complex RRM, the additive genetic variance could be altered simultaneously as a function of both DIM and THI values. This model could be recommended for the genetic evaluation taking into account the effect of G×E. The response to selection in the comfort zone (THI ≤ 66) is expected to be higher than that obtained in the heat stress zone (THI > 66) of the animals. The genetic correlations between milk yield in the comfort and heat stress zones were less than unity at opposite extremes of the environmental gradient. Thus, the best animals for milk yield in the comfort zone are not necessarily the best in the zone of heat stress and, therefore, G×E due to heat stress should not be neglected in the genetic evaluation.
Random diffusion and leverage effect in financial markets.
Perelló, Josep; Masoliver, Jaume
2003-03-01
We prove that Brownian market models with random diffusion coefficients provide an exact measure of the leverage effect [J-P. Bouchaud et al., Phys. Rev. Lett. 87, 228701 (2001)]. This empirical fact asserts that past returns are anticorrelated with the future diffusion coefficient. Several models with random diffusion have been suggested, but without a quantitative study of the leverage effect. Our analysis allows us to fully estimate all parameters involved and permits a deeper study of correlated random diffusion models that may have practical implications for many aspects of financial markets.
Niël-Weise, Barbara S; Stijnen, Theo; van den Broek, Peterhans J
2010-06-01
In this systematic review, we assessed the effect of in-line filters on infusion-related phlebitis associated with peripheral IV catheters. The study was designed as a systematic review and meta-analysis of randomized controlled trials. We used MEDLINE and the Cochrane Controlled Trial Register up to August 10, 2009. Two reviewers independently assessed trial quality and extracted data. Data on phlebitis were combined when appropriate, using a random-effects model. The impact of the risk of phlebitis in the control group (baseline risk) on the effect of in-line filters was studied by using meta-regression based on the bivariate meta-analysis model. The quality of the evidence was determined by using the GRADE (Grading of Recommendations Assessment, Development, and Evaluation) method. Eleven trials (1633 peripheral catheters) were included in this review to compare the effect of in-line filters on the incidence of phlebitis in hospitalized patients. Baseline risks across trials ranged from 23% to 96%. Meta-analysis of all trials showed that in-line filters reduced the risk of infusion-related phlebitis (relative risk, 0.66; 95% confidence interval, 0.43-1.00). This benefit, however, is very uncertain, because the trials had serious methodological shortcomings and meta-analysis revealed marked unexplained statistical heterogeneity (P < 0.0000, I(2) = 90.4%). The estimated benefit did not depend on baseline risk. In-line filters in peripheral IV catheters cannot be recommended routinely, because evidence of their benefit is uncertain.
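The random-effects pooling used in such meta-analyses is commonly the DerSimonian-Laird estimator; a minimal sketch on hypothetical per-trial data (illustrative values, not the eleven filter trials from the review) might look like:

```python
import numpy as np

# Hypothetical per-trial log relative risks and their variances.
log_rr = np.array([-0.65, -0.10, -0.90, 0.05, -0.40, -0.55])
var = np.array([0.09, 0.04, 0.16, 0.06, 0.08, 0.12])

# Fixed-effect (inverse-variance) pooling and Cochran's Q.
w = 1.0 / var
theta_fe = np.sum(w * log_rr) / np.sum(w)
Q = np.sum(w * (log_rr - theta_fe) ** 2)
k = len(log_rr)

# DerSimonian-Laird moment estimate of the between-trial variance tau^2.
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects pooling with the heterogeneity-inflated variances.
w_re = 1.0 / (var + tau2)
theta_re = np.sum(w_re * log_rr) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
rr = np.exp(theta_re)
lo, hi = np.exp(theta_re - 1.96 * se_re), np.exp(theta_re + 1.96 * se_re)

# I^2: share of total variability attributable to heterogeneity.
I2 = max(0.0, (Q - (k - 1)) / Q) * 100.0
```

A large I² like the 90.4% reported in the review is exactly the situation where the random-effects interval widens and the pooled relative risk should be read cautiously.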
Application of Poisson random effect models for highway network screening.
Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer
2014-02-01
In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data became popular in traffic safety research. This study employs random effect Poisson Log-Normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. Potential for Safety Improvement (PSI) was adopted as the measure of crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson Log-Normal model. A series of method examination tests were conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only a temporal random effect, and both are superior to the conventional Poisson Log-Normal model (PLN) and the EB model in fitting the crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification. Copyright © 2013 Elsevier Ltd. All rights reserved.
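A simplified, non-Bayesian sketch of the PSI measure: combine each site's observed count with its safety performance function (SPF) prediction via empirical Bayes and rank sites by the excess. The counts, predictions, and overdispersion parameter below are illustrative assumptions, not the Central Florida data:

```python
import numpy as np

# Hypothetical site data: observed crash counts and SPF-predicted
# expected counts over the same study period (illustrative only).
observed = np.array([12, 3, 25, 7, 18], dtype=float)
predicted = np.array([8.0, 4.0, 15.0, 9.0, 10.0])
phi = 2.5                                 # SPF overdispersion parameter (assumed)

# Empirical Bayes combination: weight between the SPF norm and the
# site's own history, shrinking noisy counts toward the prediction.
w = 1.0 / (1.0 + predicted / phi)
eb = w * predicted + (1.0 - w) * observed

# Potential for Safety Improvement: EB estimate minus the SPF norm.
# Positive PSI flags hotspot candidates; sites are ranked by PSI.
psi = eb - predicted
ranking = np.argsort(psi)[::-1]
```

The fully Bayesian PTSRE model in the study replaces this closed-form shrinkage with posterior expectations that also borrow strength across neighboring sites and years.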
NASA Technical Reports Server (NTRS)
Johnson, R. A.; Wehrly, T.
1976-01-01
Population models for dependence between two angular measurements and for dependence between an angular and a linear observation are proposed. The method of canonical correlations first leads to new population and sample measures of dependence in this latter situation. An example relating wind direction to the level of a pollutant is given. Next, applied to pairs of angular measurements, the method yields previously proposed sample measures in some special cases and a new sample measure in general.
Adam, Mary B.
2014-01-01
We measured the effectiveness of a human immunodeficiency virus (HIV) prevention program developed in Kenya and carried out among university students. A total of 182 student volunteers were randomized into an intervention group who received a 32-hour training course as HIV prevention peer educators and a control group who received no training. Repeated measures assessed HIV-related attitudes, intentions, knowledge, and behaviors four times over six months. Data were analyzed by using linear mixed models to compare the rate of change on 13 dependent variables that examined sexual risk behavior. Based on multi-level models, the slope coefficients for four variables showed reliable change in the hoped for direction: abstinence from oral, vaginal, or anal sex in the last two months, condom attitudes, HIV testing, and refusal skill. The intervention demonstrated evidence of non-zero slope coefficients in the hoped for direction on 12 of 13 dependent variables. The intervention reduced sexual risk behavior. PMID:24957544
Adam, Mary B
2014-09-01
We measured the effectiveness of a human immunodeficiency virus (HIV) prevention program developed in Kenya and carried out among university students. A total of 182 student volunteers were randomized into an intervention group who received a 32-hour training course as HIV prevention peer educators and a control group who received no training. Repeated measures assessed HIV-related attitudes, intentions, knowledge, and behaviors four times over six months. Data were analyzed by using linear mixed models to compare the rate of change on 13 dependent variables that examined sexual risk behavior. Based on multi-level models, the slope coefficients for four variables showed reliable change in the hoped for direction: abstinence from oral, vaginal, or anal sex in the last two months, condom attitudes, HIV testing, and refusal skill. The intervention demonstrated evidence of non-zero slope coefficients in the hoped for direction on 12 of 13 dependent variables. The intervention reduced sexual risk behavior. © The American Society of Tropical Medicine and Hygiene.
Effective dynamics of a random walker on a heterogeneous ring: Exact results
NASA Astrophysics Data System (ADS)
Masharian, S. R.
2018-07-01
In this paper, by considering a biased random walker hopping on a one-dimensional lattice with a ring geometry, we investigate the fluctuations of the speed of the random walker. We assume that the lattice is heterogeneous, i.e., the hopping rate of the random walker between the first and the last lattice sites is different from the hopping rate between the other links of the lattice. Assuming that the average speed of the random walker in the steady state is v∗, we have been able to find the unconditional effective dynamics of the random walker for which the average speed is -v∗. Using a perturbative method in the large-system-size limit, we have also been able to show that the effective hopping rates of the random walker near the defective link are highly site-dependent.
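A direct simulation of such a biased walker on a ring with one defective link gives the empirical steady-state speed. The ring size and hopping rates below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

L = 20                      # ring size
p, q = 0.7, 0.3             # forward/backward hopping on ordinary links
p_def, q_def = 0.2, 0.1     # slower rates across the defective link L-1 <-> 0 (assumed)

def step(x):
    # the hop L-1 -> 0 (forward) and 0 -> L-1 (backward) crosses the defect
    fwd = p_def if x == L - 1 else p
    bwd = q_def if x == 0 else q
    u = rng.random()
    if u < fwd:
        return (x + 1) % L, 1
    if u < fwd + bwd:
        return (x - 1) % L, -1
    return x, 0              # otherwise the walker waits

T = 200_000
x, disp = 0, 0
for _ in range(T):
    x, dx = step(x)
    disp += dx

v_star = disp / T            # empirical steady-state speed
```

Recording the full distribution of `disp / T` over many runs, rather than a single mean, is what gives access to the speed fluctuations studied in the paper.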
Limit Theory for Panel Data Models with Cross Sectional Dependence and Sequential Exogeneity.
Kuersteiner, Guido M; Prucha, Ingmar R
2013-06-01
The paper derives a general Central Limit Theorem (CLT) and asymptotic distributions for sample moments related to panel data models with large n. The results allow for the data to be cross sectionally dependent, while at the same time allowing the regressors to be only sequentially rather than strictly exogenous. The setup is sufficiently general to accommodate situations where cross sectional dependence stems from spatial interactions and/or from the presence of common factors. The latter leads to the need for random norming. The limit theorem for sample moments is derived by showing that the moment conditions can be recast such that a martingale difference array central limit theorem can be applied. We prove such a central limit theorem by first extending results for stable convergence in Hall and Heyde (1980) to non-nested martingale arrays relevant for our applications. We illustrate our result by establishing a generalized estimation theory for GMM estimators of a fixed effect panel model without imposing i.i.d. or strict exogeneity conditions. We also discuss a class of Maximum Likelihood (ML) estimators that can be analyzed using our CLT.
Generated effect modifiers (GEM's) in randomized clinical trials.
Petkova, Eva; Tarpey, Thaddeus; Su, Zhe; Ogden, R Todd
2017-01-01
In a randomized clinical trial (RCT), it is often of interest not only to estimate the effect of various treatments on the outcome, but also to determine whether any patient characteristic has a different relationship with the outcome, depending on treatment. In regression models for the outcome, if there is a non-zero interaction between treatment and a predictor, that predictor is called an "effect modifier". Identification of such effect modifiers is crucial as we move towards precision medicine, that is, optimizing individual treatment assignment based on patient measurements assessed when presenting for treatment. In most settings, there will be several baseline predictor variables that could potentially modify the treatment effects. This article proposes optimal methods of constructing a composite variable (defined as a linear combination of pre-treatment patient characteristics) in order to generate an effect modifier in an RCT setting. Several criteria are considered for generating effect modifiers and their performance is studied via simulations. An example from a RCT is provided for illustration. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
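One simple criterion in this spirit, treating the interaction coefficients from a treatment-by-covariate regression as the weights of the composite variable, can be sketched as follows. The simulated trial and the true modifier direction are assumptions for illustration, and this is not necessarily one of the paper's proposed optimal criteria:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated RCT: randomized treatment t, three baseline covariates X,
# and an assumed true modifier direction (1, -1, 0) scaling the effect.
n = 5000
X = rng.normal(size=(n, 3))
t = rng.integers(0, 2, size=n)
true_dir = np.array([1.0, -1.0, 0.0])
y = 0.5 * X[:, 0] + t * (X @ true_dir) + rng.normal(size=n)

# Fit intercept + treatment + main effects + treatment-by-covariate
# interactions; the interaction coefficients become the GEM weights.
D = np.column_stack([np.ones(n), t, X, t[:, None] * X])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
gem_weights = beta[-3:]                  # coefficients on t*X

# The generated effect modifier: a single composite baseline score.
score = X @ gem_weights
```

The point of the composite is that a clinician can then inspect a single score-by-treatment interaction instead of many unstable covariate-specific ones.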
NASA Astrophysics Data System (ADS)
Yuste, S. B.; Abad, E.; Baumgaertner, A.
2016-07-01
We address the problem of diffusion on a comb whose teeth display varying lengths. Specifically, the length ℓ of each tooth is drawn from a probability distribution displaying power law behavior at large ℓ ,P (ℓ ) ˜ℓ-(1 +α ) (α >0 ). To start with, we focus on the computation of the anomalous diffusion coefficient for the subdiffusive motion along the backbone. This quantity is subsequently used as an input to compute concentration recovery curves mimicking fluorescence recovery after photobleaching experiments in comblike geometries such as spiny dendrites. Our method is based on the mean-field description provided by the well-tested continuous time random-walk approach for the random-comb model, and the obtained analytical result for the diffusion coefficient is confirmed by numerical simulations of a random walk with finite steps in time and space along the backbone and the teeth. We subsequently incorporate retardation effects arising from binding-unbinding kinetics into our model and obtain a scaling law characterizing the corresponding change in the diffusion coefficient. Finally, we show that recovery curves obtained with the help of the analytical expression for the anomalous diffusion coefficient cannot be fitted perfectly by a model based on scaled Brownian motion, i.e., a standard diffusion equation with a time-dependent diffusion coefficient. However, differences between the exact curves and such fits are small, thereby providing justification for the practical use of models relying on scaled Brownian motion as a fitting procedure for recovery curves arising from particle diffusion in comblike systems.
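A finite-step random walk on a random comb, of the kind used to confirm the analytical diffusion coefficient, can be sketched as follows. The comb size, tail exponent, tooth-length cap, and step rules are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Random comb: a backbone of B sites, each carrying a tooth whose length
# is drawn from a power law P(l) ~ l^-(1+alpha); alpha = 0.5 is assumed.
B, alpha = 200, 0.5
teeth = np.floor(rng.pareto(alpha, size=B) + 1).astype(int)
teeth = np.minimum(teeth, 500)           # cap lengths for a finite simulation

T, walkers = 2000, 300
sq_disp = np.zeros(T)
for _ in range(walkers):
    x, y = B // 2, 0                      # backbone position, depth on tooth
    for t in range(T):
        if y == 0:
            # on the backbone: step left, step right, or enter the tooth
            move = rng.integers(4)        # fourth option: wait
            if move == 0 and x > 0:
                x -= 1
            elif move == 1 and x < B - 1:
                x += 1
            elif move == 2:
                y = 1
        else:
            # inside a tooth: move back toward or away from the backbone
            if rng.random() < 0.5:
                y -= 1
            elif y < teeth[x]:
                y += 1
        sq_disp[t] += (x - B // 2) ** 2
msd = sq_disp / walkers                   # backbone mean-squared displacement
```

Fitting `msd` against `t` on log-log axes recovers the subdiffusive exponent along the backbone, the quantity that feeds the anomalous diffusion coefficient in the recovery-curve calculation.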
NASA Astrophysics Data System (ADS)
Zulvia, Pepi; Kurnia, Anang; Soleh, Agus M.
2017-03-01
Individuals and their environment form a hierarchical structure consisting of units grouped at different levels. Hierarchical data structures are analyzed across several levels, with the lowest level nested within the highest; this is commonly called multilevel modeling. Multilevel modeling is widely used in education research, for example to model the average score on the National Examination (UN). In Indonesia, the UN for high school students is divided into natural science and social science streams. The purpose of this research is to develop multilevel and panel data modeling using linear mixed models on educational data. The first step is data exploration and identification of the relationships between the independent and dependent variables by checking correlation coefficients and variance inflation factors (VIF). We then use a simple model in which the highest level of the hierarchy (level 2) is the regency/city and the school is the lowest level (level 1). The best model was determined by comparing goodness of fit and by checking assumptions from residual plots and predictions for each model. We find that, for both natural science and social science, the regression with random effects of regency/city and fixed effects of time, i.e., the multilevel model, performs better than the linear mixed model in explaining the variability of the dependent variable, the average UN score.
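The VIF screening step mentioned above can be sketched directly: regress each predictor on the others and compute 1/(1 − R²). The synthetic, deliberately collinear data below stand in for the UN-score predictors:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative predictor matrix: x2 is nearly a linear combination of
# x0 and x1, so its VIF should be large (synthetic data, not UN scores).
n = 300
x0 = rng.normal(size=n)
x1 = rng.normal(size=n)
x2 = 0.8 * x0 + 0.6 * x1 + 0.05 * rng.normal(size=n)
X = np.column_stack([x0, x1, x2])

def vif(X, j):
    # regress column j on the remaining columns; VIF_j = 1 / (1 - R^2)
    y = X[:, j]
    Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

vifs = [vif(X, j) for j in range(X.shape[1])]
```

A common rule of thumb flags predictors with VIF above 5 or 10 for removal or combination before fitting the mixed model.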
Design approaches to experimental mediation
Pirlott, Angela G.; MacKinnon, David P.
2016-01-01
Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., “measurement-of-mediation” designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable. PMID:27570259
Design approaches to experimental mediation.
Pirlott, Angela G; MacKinnon, David P
2016-09-01
Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., "measurement-of-mediation" designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable.
Testing homogeneity in Weibull-regression models.
Bolfarine, Heleno; Valença, Dione M
2005-10-01
In survival studies with families or geographical units, it may be of interest to test whether such groups are homogeneous for given explanatory variables. In this paper we consider score-type tests for group homogeneity based on a mixing model in which the group effect is modelled as a random variable. As opposed to hazard-based frailty models, this model yields survival times that, conditioned on the random effect, have an accelerated failure time representation. The test statistic requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model, and in the uncensored situation a closed form is obtained for the test statistic. A simulation study is used to compare the power of the tests. The proposed tests are applied to real data sets with censored data.
Two-Component Structure in the Entanglement Spectrum of Highly Excited States
NASA Astrophysics Data System (ADS)
Yang, Zhi-Cheng; Chamon, Claudio; Hamma, Alioscia; Mucciolo, Eduardo R.
2015-12-01
We study the entanglement spectrum of highly excited eigenstates of two known models that exhibit a many-body localization transition, namely the one-dimensional random-field Heisenberg model and the quantum random energy model. Our results indicate that the entanglement spectrum shows a "two-component" structure: a universal part that is associated with random matrix theory, and a nonuniversal part that is model dependent. The nonuniversal part manifests the deviation of the highly excited eigenstate from a true random state even in the thermalized phase where the eigenstate thermalization hypothesis holds. The fraction of the spectrum containing the universal part decreases as one approaches the critical point and vanishes in the localized phase in the thermodynamic limit. We use the universal part fraction to construct an order parameter for measuring the degree of randomness of a generic highly excited state, which is also a promising candidate for studying the many-body localization transition. Two toy models based on Rokhsar-Kivelson type wave functions are constructed and their entanglement spectra are shown to exhibit the same structure.
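The entanglement spectrum of a pure state is obtained from the Schmidt (singular value) decomposition across the bipartition. A sketch for a Haar-like random state, a stand-in for the highly excited eigenstates studied above, is:

```python
import numpy as np

rng = np.random.default_rng(5)

# A random pure state of 2n qubits, reshaped as a dim x dim matrix over
# the two halves of the bipartition (Haar-like, not a model eigenstate).
n = 5                                     # qubits per half
dim = 2 ** n
psi = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
psi /= np.linalg.norm(psi)

# Schmidt decomposition: the squared singular values s^2 are the
# eigenvalues of the reduced density matrix, and the entanglement
# spectrum is conventionally {-2 log s_k}.
s = np.linalg.svd(psi, compute_uv=False)
probs = s ** 2
entanglement_spectrum = -2.0 * np.log(s)
entropy = -np.sum(probs * np.log(probs))  # von Neumann entanglement entropy
```

For a truly random state the `probs` follow random-matrix (Marchenko-Pastur-like) statistics; the paper's "nonuniversal part" is the model-dependent deviation of an actual eigenstate's spectrum from that baseline.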
Neighborhood Structural Inequality, Collective Efficacy, and Sexual Risk Behavior among Urban Youth
BROWNING, CHRISTOPHER R.; BURRINGTON, LORI A.; LEVENTHAL, TAMA; BROOKS-GUNN, JEANNE
2011-01-01
We draw on collective efficacy theory to extend a contextual model of early adolescent sexual behavior. Specifically, we hypothesize that neighborhood structural disadvantage—as measured by levels of concentrated poverty, residential instability, and aspects of immigrant concentration—and diminished collective efficacy have consequences for the prevalence of early adolescent multiple sexual partnering. Findings from random effects multinomial logistic regression models of the number of sexual partners among a sample of youth, age 11 to 16, from the Project on Human Development in Chicago Neighborhoods (N = 768) reveal evidence of neighborhood effects on adolescent higher-risk sexual activity. Collective efficacy is negatively associated with having two or more sexual partners versus one (but not zero versus one) sexual partner. The effect of collective efficacy is dependent upon age: The regulatory effect of collective efficacy increases for older adolescents. PMID:18771063
Analysis of effective thermal conductivity of fibrous materials
NASA Technical Reports Server (NTRS)
Futschik, Michael W.; Witte, Larry C.
1993-01-01
The objective of this research is to gain a better understanding of the various mechanisms of heat transfer through fibrous materials and to gain insight into how fill-gas pressure influences the effective thermal conductivity. By way of first principles and some empiricism, two mathematical models are constructed to correlate experimental data. The data are obtained from a test series measuring the effective thermal conductivity of Nomex using a two-sided guarded hot-plate heater apparatus. Tests are conducted for certain mean temperatures and fill-gases over a range of pressures varying from vacuum to atmospheric conditions. The models are then evaluated to determine their effectiveness in representing the effective thermal conductivity of a fibrous material. The models presented herein predict the effective thermal conductivity of Nomex extremely well. Since the influence of gas conduction is determined to be the most influential component in predicting the effective thermal conductivity of a fibrous material, an improved representation of gas conduction is developed. Finally, some recommendations for extension to other random-oriented fiber materials are made concerning the usefulness of each model depending on their advantages and disadvantages.
Random attractor of non-autonomous stochastic Boussinesq lattice system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Min, E-mail: zhaomin1223@126.com; Zhou, Shengfan, E-mail: zhoushengfan@yahoo.com
2015-09-15
In this paper, we first consider the existence of a tempered random attractor for a second-order non-autonomous stochastic lattice dynamical system of nonlinear Boussinesq equations with time-dependent coupling coefficients, deterministic forces, and multiplicative white noise. Then, we establish the upper semicontinuity of random attractors as the intensity of the noise approaches zero.
Neugebauer, Romain; Fireman, Bruce; Roy, Jason A; Raebel, Marsha A; Nichols, Gregory A; O'Connor, Patrick J
2013-08-01
Clinical trials are unlikely to ever be launched for many comparative effectiveness research (CER) questions. Inferences from hypothetical randomized trials may however be emulated with marginal structural modeling (MSM) using observational data, but success in adjusting for time-dependent confounding and selection bias typically relies on parametric modeling assumptions. If these assumptions are violated, inferences from MSM may be inaccurate. In this article, we motivate the application of a data-adaptive estimation approach called super learning (SL) to avoid reliance on arbitrary parametric assumptions in CER. Using the electronic health records data from adults with new-onset type 2 diabetes, we implemented MSM with inverse probability weighting (IPW) estimation to evaluate the effect of three oral antidiabetic therapies on the worsening of glomerular filtration rate. Inferences from IPW estimation were noticeably sensitive to the parametric assumptions about the associations between both the exposure and censoring processes and the main suspected source of confounding, that is, time-dependent measurements of hemoglobin A1c. SL was successfully implemented to harness flexible confounding and selection bias adjustment from existing machine learning algorithms. Erroneous IPW inference about clinical effectiveness because of arbitrary and incorrect modeling decisions may be avoided with SL. Copyright © 2013 Elsevier Inc. All rights reserved.
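The weighting step of an MSM can be illustrated with a point-treatment toy example. This is only a sketch: the study itself handles time-dependent confounding and censoring, and the binary confounder, propensities, and effect sizes below are hypothetical, with the true propensities assumed known.

```python
import random

random.seed(6)

# Simulate a binary confounder C that drives both treatment A and outcome Y.
# True causal effect of A on Y is 1.0; confounding inflates the naive contrast.
n = 20000
rows = []
for _ in range(n):
    c = random.random() < 0.5
    p_a = 0.8 if c else 0.2                      # treatment propensity given C
    a = random.random() < p_a
    y = 1.0 * a + 2.0 * c + random.gauss(0, 1)   # outcome model
    rows.append((c, a, y))

p_marg = sum(a for _, a, _ in rows) / n          # marginal P(A=1), for stabilization

def sw(c, a):
    """Stabilized inverse-probability weight (true propensities assumed known)."""
    pc = 0.8 if c else 0.2
    return p_marg / pc if a else (1 - p_marg) / (1 - pc)

def wmean(sel):
    num = sum(sw(c, a) * y for c, a, y in rows if a == sel)
    den = sum(sw(c, a) for c, a, y in rows if a == sel)
    return num / den

naive = (sum(y for _, a, y in rows if a) / sum(1 for _, a, _ in rows if a)
         - sum(y for _, a, y in rows if not a) / sum(1 for _, a, _ in rows if not a))
ipw = wmean(True) - wmean(False)   # weighting recovers the ~1.0 effect
```

The naive contrast absorbs the confounder's contribution, while the weighted contrast does not; super learning replaces the known propensities here with data-adaptive estimates.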
ANCOVA Versus CHANGE From Baseline in Nonrandomized Studies: The Difference.
van Breukelen, Gerard J P
2013-11-01
The pretest-posttest control group design can be analyzed with the posttest as dependent variable and the pretest as covariate (ANCOVA) or with the difference between posttest and pretest as dependent variable (CHANGE). These 2 methods can give contradictory results if groups differ at pretest, a phenomenon that is known as Lord's paradox. Literature claims that ANCOVA is preferable if treatment assignment is based on randomization or on the pretest and questionable for preexisting groups. Some literature suggests that Lord's paradox has to do with measurement error in the pretest. This article shows two new things: First, the claims are confirmed by proving the mathematical equivalence of ANCOVA to a repeated measures model without group effect at pretest. Second, correction for measurement error in the pretest is shown to lead back to ANCOVA or to CHANGE, depending on the assumed absence or presence of a true group difference at pretest. These two new theoretical results are illustrated with multilevel (mixed) regression and structural equation modeling of data from two studies.
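Lord's paradox can be reproduced numerically. In this sketch (hypothetical numbers: a pretest group gap of one SD, a within-group pre-post slope of 0.5, and no true treatment effect), ANCOVA is computed via the standard identity that its group effect equals the posttest difference minus the pooled within-group slope times the pretest difference:

```python
import random

random.seed(1)

def mean(v):
    return sum(v) / len(v)

# Two preexisting groups that differ at pretest; no true treatment effect.
n = 2000
pre0 = [random.gauss(0.0, 1.0) for _ in range(n)]   # group 0 pretest
pre1 = [random.gauss(1.0, 1.0) for _ in range(n)]   # group 1 pretest (higher)
b_true = 0.5                                        # within-group pre-post slope
post0 = [b_true * x + random.gauss(0, 1) for x in pre0]
post1 = [b_true * x + random.gauss(0, 1) for x in pre1]

# Pooled within-group regression slope of posttest on pretest.
mx0, my0, mx1, my1 = mean(pre0), mean(post0), mean(pre1), mean(post1)
sxy = (sum((x - mx0) * (y - my0) for x, y in zip(pre0, post0))
       + sum((x - mx1) * (y - my1) for x, y in zip(pre1, post1)))
sxx = sum((x - mx0) ** 2 for x in pre0) + sum((x - mx1) ** 2 for x in pre1)
b_w = sxy / sxx

# ANCOVA group effect: posttest gap adjusted for the pretest gap (near 0 here).
ancova = (my1 - my0) - b_w * (mx1 - mx0)
# CHANGE group effect: difference in mean change scores (near -0.5 here,
# driven by regression to the mean in the higher-scoring group).
change = (my1 - mx1) - (my0 - mx0)
```

Under this data-generating process ANCOVA finds no group effect while CHANGE reports a spurious one; which answer is right depends on the assignment mechanism, exactly the article's point.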
Modeling hard clinical end-point data in economic analyses.
Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V
2013-11-01
The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states (<7). Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data are reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high-risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are more appropriate to accurately reflect the trial data.
Analog model for quantum gravity effects: phonons in random fluids.
Krein, G; Menezes, G; Svaiter, N F
2010-09-24
We describe an analog model for quantum gravity effects in condensed matter physics. The situation discussed is that of phonons propagating in a fluid described by a wave equation with a random velocity. We consider random fluctuations in the reciprocal of the bulk modulus of the system and study free phonons in the presence of Gaussian colored noise with zero mean. We show that, in this model, after performing the random averages over the noise function, a conventional scalar quantum field theory describing free phonons becomes a self-interacting model.
NASA Technical Reports Server (NTRS)
1981-01-01
Statistical methods were applied to recorded ozone measurements. A long-term depletion of ozone at the magnitudes predicted by the NAS would be harmful to most forms of life. Empirical prewhitening filters, whose derivation is independent of the underlying physical mechanisms, were analyzed. Statistical analysis serves a checks-and-balances role: time series filtering separates variations into systematic and random parts, ensures errors are uncorrelated, and identifies significant phase-lag dependencies. The use of time series modeling to enhance the capability of detecting trends is discussed.
PCSK9 inhibitor valuation: A science‐based review of the two recent models
Cannon, Christopher P.
2018-01-01
Low‐density lipoprotein cholesterol (LDL‐C) has been extensively evaluated. Prospective cohort studies, randomized controlled trials, biology, pathophysiology, genetics, and Mendelian randomization studies, have clearly taught us that LDL‐C causes atherosclerotic cardiovascular disease. The newest class of drugs to lower LDL‐C, the proprotein convertase subtilisin/kexin type 9 (PCSK9) monoclonal antibodies, have been found to safely reduce LDL‐C approximately 60% when added to high‐intensity statin therapy. Because their cost is much greater than that of the currently available agents, their value has been questioned. In late August, 2017, two groups assessed the value of this class of drugs looking at cost‐effectiveness; however, the Institute for Clinical and Economic Review and Fonarow and colleagues found disparate results when assessing PCSK9 valuation. Herein, we review the evolution of LDL‐C from hypothesis to fact, and then attempt to adjudicate the 2 models, shedding light on the complex modeling process. We find that models of cost‐effectiveness are helpful adjuncts to decision making, but that their conclusions depend on many assumptions. Ultimately, clinician judgment regarding their clinical benefit, balanced by some estimation of cost, may be more productive to target the right patients for whom the benefits can be well‐justified. PMID:29512936
Nonlinear Dynamics Used to Classify Effects of Mild Traumatic Brain Injury
2012-01-11
The methods used included fluctuation analysis to evaluate random fractal characteristics, and scale-dependent Lyapunov exponents (SDLE) to evaluate chaotic characteristics; both Shannon and Renyi entropy were also computed. The scaling exponent in the scaling law is often called the Hurst parameter [32].
Handling Correlations between Covariates and Random Slopes in Multilevel Models
ERIC Educational Resources Information Center
Bates, Michael David; Castellano, Katherine E.; Rabe-Hesketh, Sophia; Skrondal, Anders
2014-01-01
This article discusses estimation of multilevel/hierarchical linear models that include cluster-level random intercepts and random slopes. Viewing the models as structural, the random intercepts and slopes represent the effects of omitted cluster-level covariates that may be correlated with included covariates. The resulting correlations between…
HOMAIE RAD, Enayatollah; HADIAN, Mohamad; GHOLAMPOOR, Hanie
2014-01-01
Abstract Background A skilled labor force is very important for economic growth. Workers become skilled when they are healthy and able to be educated and to work. In this study, we estimated the effects of health indicators on labor supply, using the labor force participation rate as the indicator of labor supply. We split this indicator into female and male labor force participation rates and compared the results of each estimate with the other. Methods This study covered eastern Mediterranean countries between 1995 and 2011. We used a panel cointegration approach for estimating the models, applying the Pesaran cross-sectional dependence test, the Pesaran unit root test, and the Westerlund panel cointegration test. Finally, after confirming that random effects models were appropriate, we estimated the models with random effects. Results Increasing the fertility rate decreased the female labor supply but increased the male labor supply. Public health expenditures, in contrast, increased the female labor supply but decreased the male labor supply because of substitution effects. Similar results were found for urbanization. Gross domestic product had a positive relationship with female labor supply, but not with male labor supply. Out-of-pocket health expenditures had a negative relationship with male labor supply but no significant relationship with female labor supply. Conclusion The effects of the health variables were stronger in the female labor supply model than in the male model. Countries must pay closer attention to women's health to change the labor supply. PMID:26060746
Dana, Saswati; Nakakuki, Takashi; Hatakeyama, Mariko; Kimura, Shuhei; Raha, Soumyendu
2011-01-01
Mutation and/or dysfunction of signaling proteins in the mitogen activated protein kinase (MAPK) signal transduction pathway are frequently observed in various kinds of human cancer. Consistent with this fact, in the present study, we experimentally observe that the epidermal growth factor (EGF) induced activation profile of MAP kinase signaling is not straightforwardly dose-dependent in PC3 prostate cancer cells. To find out which parameters and reactions in the pathway are involved in this departure from the normal dose-dependency, a model-based pathway analysis is performed. The pathway is mathematically modeled with 28 rate equations, yielding as many ordinary differential equations (ODEs) whose kinetic rate constants have been reported to take random values in the existing literature. This led us to treat the ODE model of the pathway's kinetics as a random differential equation (RDE) system in which the parameters are random variables. We show that our RDE model captures the uncertainty in the kinetic rate constants as seen in the behavior of the experimental data and, more importantly, upon simulation exhibits the abnormal EGF dose-dependency of the activation profile of MAP kinase signaling in PC3 prostate cancer cells. The most likely set of values of the kinetic rate constants obtained from fitting the RDE model to the experimental data is then used in a direct-transcription-based dynamic optimization method for computing the changes needed in these kinetic rate constant values for the restoration of the normal EGF dose response. The last computation identifies the parameters, i.e., the kinetic rate constants in the RDE model, that are the most sensitive to the change in the EGF dose response behavior in the PC3 prostate cancer cells. The reactions in which these most sensitive parameters participate emerge as candidate drug targets on the signaling pathway. 2011 Elsevier Ireland Ltd. All rights reserved.
Two-component Structure in the Entanglement Spectrum of Highly Excited States
NASA Astrophysics Data System (ADS)
Yang, Zhi-Cheng; Chamon, Claudio; Hamma, Alioscia; Mucciolo, Eduardo
We study the entanglement spectrum of highly excited eigenstates of two known models which exhibit a many-body localization transition, namely the one-dimensional random-field Heisenberg model and the quantum random energy model. Our results indicate that the entanglement spectrum shows a "two-component" structure: a universal part that is associated with Random Matrix Theory, and a non-universal part that is model dependent. The non-universal part manifests the deviation of the highly excited eigenstate from a true random state even in the thermalized phase where the Eigenstate Thermalization Hypothesis holds. The fraction of the spectrum containing the universal part decreases continuously as one approaches the critical point and vanishes in the localized phase in the thermodynamic limit. We use the universal part fraction to construct a new order parameter for the many-body delocalized-to-localized transition. Two toy models based on Rokhsar-Kivelson type wavefunctions are constructed and their entanglement spectra are shown to exhibit the same structure.
NASA Technical Reports Server (NTRS)
Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.
1987-01-01
Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.
Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel.
Dansirikul, Chantaratsamon; Choi, Malcolm; Duffull, Stephen B
2005-06-01
This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment model dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel spreadsheet was implemented with the use of Solver and visual basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained to a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated from the BA method were similar to those of NONMEM estimation.
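For the one-compartment IV-bolus case, the conversion between non-compartmental variables and model parameters follows textbook identities (CL = Dose/AUC, k = ln 2 / t½, V = CL/k). The sketch below illustrates those identities with made-up inputs; it is not the paper's Excel/Solver implementation, which also covers two-compartment models:

```python
import math

def back_analysis_1cmt(dose, auc, t_half):
    """Map non-compartmental variables (AUC, terminal half-life) to
    one-compartment IV-bolus parameters. Illustrative only."""
    cl = dose / auc               # clearance
    k = math.log(2) / t_half      # elimination rate constant
    v = cl / k                    # volume of distribution
    return cl, k, v

cl, k, v = back_analysis_1cmt(dose=100.0, auc=50.0, t_half=4.0)
c0 = 100.0 / v        # initial concentration; C(t) = c0 * exp(-k t)
auc_check = c0 / k    # integrating the model curve recovers the input AUC
```

The round trip (AUC in, AUC back out) is a convenient self-check on the conversion.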
Contact Time in Random Walk and Random Waypoint: Dichotomy in Tail Distribution
NASA Astrophysics Data System (ADS)
Zhao, Chen; Sichitiu, Mihail L.
Contact time (or link duration) is a fundamental factor that affects performance in Mobile Ad Hoc Networks. Previous theoretical analyses of the contact time distribution for random walk models (RW) assume that contact events can be modeled either as consecutive random walks or as direct traversals, two extreme cases of random walk that lead to two different conclusions. In this paper we conduct a comprehensive study of this topic in the hope of bridging the gap between the two extremes. The conclusions from the two extreme cases yield a power-law or an exponential tail in the contact time distribution, respectively. However, we show that the actual distribution varies between the two extremes: a power-law/sub-exponential dichotomy whose transition point depends on the average flight duration. Through simulation results we show that this conclusion also applies to random waypoint.
Bayesian exponential random graph modelling of interhospital patient referral networks.
Caimo, Alberto; Pallotti, Francesca; Lomi, Alessandro
2017-08-15
Using original data that we have collected on referral relations between 110 hospitals serving a large regional community, we show how recently derived Bayesian exponential random graph models may be adopted to illuminate core empirical issues in research on relational coordination among healthcare organisations. We show how a rigorous Bayesian computation approach supports a fully probabilistic analytical framework that alleviates well-known problems in the estimation of model parameters of exponential random graph models. We also show how the main structural features of interhospital patient referral networks that prior studies have described can be reproduced with accuracy by specifying the system of local dependencies that produce - but at the same time are induced by - decentralised collaborative arrangements between hospitals. Copyright © 2017 John Wiley & Sons, Ltd.
Regression dilution bias: tools for correction methods and sample size calculation.
Berglund, Lars
2012-08-01
Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
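The core correction can be sketched in a few lines: estimate the attenuation (reliability) ratio from a repeated measurement in a reliability study, then divide the naive slope by it. The simulated data below are hypothetical (reliability 0.5, true slope 2); the paper's tools additionally cover sample size and design choice:

```python
import random

random.seed(2)

def slope(xs, ys):
    """OLS slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

n = 5000
beta = 2.0                                    # association with the error-free exposure
true = [random.gauss(0, 1) for _ in range(n)]
x1 = [t + random.gauss(0, 1) for t in true]   # main-study measurement (noisy)
x2 = [t + random.gauss(0, 1) for t in true]   # repeat from a reliability study
y = [beta * t + random.gauss(0, 1) for t in true]

naive = slope(x1, y)       # attenuated toward 0 (about beta/2 here)
lam = slope(x1, x2)        # estimated reliability ratio (about 0.5)
corrected = naive / lam    # regression-dilution-corrected slope (about 2)
```

Regressing the repeat on the first measurement estimates the same attenuation factor that biases the naive slope, which is why the ratio restores the association.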
Moderation analysis with missing data in the predictors.
Zhang, Qian; Wang, Lijuan
2017-12-01
The most widely used statistical model for conducting moderation analysis is the moderated multiple regression (MMR) model. In MMR modeling, missing data could pose a challenge, mainly because the interaction term is a product of two or more variables and thus is a nonlinear function of the involved variables. In this study, we consider a simple MMR model, where the effect of the focal predictor X on the outcome Y is moderated by a moderator U. The primary interest is to find ways of estimating and testing the moderation effect with the existence of missing data in X. We mainly focus on cases when X is missing completely at random (MCAR) and missing at random (MAR). Three methods are compared: (a) Normal-distribution-based maximum likelihood estimation (NML); (b) Normal-distribution-based multiple imputation (NMI); and (c) Bayesian estimation (BE). Via simulations, we found that NML and NMI could lead to biased estimates of moderation effects under MAR missingness mechanism. The BE method outperformed NMI and NML for MMR modeling with missing data in the focal predictor, missingness depending on the moderator and/or auxiliary variables, and correctly specified distributions for the focal predictor. In addition, more robust BE methods are needed in terms of the distribution mis-specification problem of the focal predictor. An empirical example was used to illustrate the applications of the methods with a simple sensitivity analysis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Patterson, Michelle L.; Moniruzzaman, Akm; Frankish, C. James; Somers, Julian
2013-01-01
Objectives. We examined the relationship between substance dependence and residential stability in homeless adults with current mental disorders 12 months after randomization to Housing First programs or treatment as usual (no housing or support through the study). Methods. The Vancouver At Home study in Canada included 2 randomized controlled trials of Housing First interventions. Eligible participants met the criteria for homelessness or precarious housing, as well as a current mental disorder. Residential stability was defined as the number of days in stable residences 12 months after randomization. We used negative binomial regression modeling to examine the independent association between residential stability and substance dependence. Results. We recruited 497 participants, and 58% (n = 288) met the criteria for substance dependence. We found no significant association between substance dependence and residential stability (adjusted incidence rate ratio = 0.97; 95% confidence interval = 0.69, 1.35) after adjusting for housing intervention, employment, sociodemographics, chronic health conditions, mental disorder severity, psychiatric symptoms, and lifetime duration of homelessness. Conclusions. People with mental disorders might achieve similar levels of housing stability from Housing First regardless of whether they experience concurrent substance dependence. PMID:24148035
ERIC Educational Resources Information Center
Ipek, Ismail
2010-01-01
The purpose of this study was to investigate the effects of CBI lesson sequence type and cognitive style of field dependence on learning from Computer-Based Cooperative Instruction (CBCI) on the Web, using the dependent measures achievement, reading comprehension, and reading rate. Eighty-seven college undergraduate students were randomly assigned to…
A Structural Modeling Approach to a Multilevel Random Coefficients Model.
ERIC Educational Resources Information Center
Rovine, Michael J.; Molenaar, Peter C. M.
2000-01-01
Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)
Covariance functions for body weight from birth to maturity in Nellore cows.
Boligon, A A; Mercadante, M E Z; Forni, S; Lôbo, R B; Albuquerque, L G
2010-03-01
The objective of this study was to estimate (co)variance functions using random regression models on Legendre polynomials for the analysis of repeated measures of BW from birth to adult age. A total of 82,064 records from 8,145 females were analyzed. Different models were compared. The models included additive direct and maternal effects, and animal and maternal permanent environmental effects as random terms. Contemporary group and dam age at calving (linear and quadratic effect) were included as fixed effects, and orthogonal Legendre polynomials of animal age (cubic regression) were considered as random covariables. Eight models with polynomials of third to sixth order were used to describe additive direct and maternal effects, and animal and maternal permanent environmental effects. Residual effects were modeled using 1 (i.e., assuming homogeneity of variances across all ages) or 5 age classes. The model with 5 classes was the best to describe the trajectory of residuals along the growth curve. The model including fourth- and sixth-order polynomials for additive direct and animal permanent environmental effects, respectively, and third-order polynomials for maternal genetic and maternal permanent environmental effects was the best. Estimates of (co)variance obtained with the multi-trait and random regression models were similar. Direct heritability estimates obtained with the random regression models followed a trend similar to that obtained with the multi-trait model. The largest estimates of maternal heritability were those of BW taken close to 240 d of age. In general, estimates of correlation between BW from birth to 8 yr of age decreased with increasing distance between ages.
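The random-covariable construction used in such models is mechanical: map each age onto [-1, 1] and evaluate the Legendre polynomials there. The sketch below mirrors the kind of setup described (the birth-to-8-yr range and cubic order are taken from the abstract; the standardization itself is the conventional linear map, not necessarily the paper's exact coding):

```python
def legendre(order, t):
    """P_0..P_order at t in [-1, 1] via the Bonnet recurrence
    (n+1) P_{n+1}(t) = (2n+1) t P_n(t) - n P_{n-1}(t)."""
    p = [1.0, t]
    for n in range(1, order):
        p.append(((2 * n + 1) * t * p[n] - n * p[n - 1]) / (n + 1))
    return p[:order + 1]

def standardize(age, a_min, a_max):
    """Linearly map an age onto [-1, 1]."""
    return -1.0 + 2.0 * (age - a_min) / (a_max - a_min)

# Row of random-regression covariables for a weight record taken at
# 240 d of age, on a birth-to-8-yr (2920 d) trajectory:
t = standardize(240.0, 0.0, 2920.0)
phi = legendre(3, t)   # [P0(t), P1(t), P2(t), P3(t)]
```

Each animal's random-effect curve is then a linear combination of these covariables with animal-specific coefficients.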
Fu, Yu-Xuan; Kang, Yan-Mei; Xie, Yong
2018-01-01
The FitzHugh–Nagumo model is improved to consider the effect of the electromagnetic induction on single neuron. On the basis of investigating the Hopf bifurcation behavior of the improved model, stochastic resonance in the stochastic version is captured near the bifurcation point. It is revealed that a weak harmonic oscillation in the electromagnetic disturbance can be amplified through stochastic resonance, and it is the cooperative effect of random transition between the resting state and the large amplitude oscillating state that results in the resonant phenomenon. Using the noise dependence of the mean of interburst intervals, we essentially suggest a biologically feasible clue for detecting weak signal by means of neuron model with subcritical Hopf bifurcation. These observations should be helpful in understanding the influence of the magnetic field to neural electrical activity. PMID:29467642
An overview of longitudinal data analysis methods for neurological research.
Locascio, Joseph J; Atri, Alireza
2011-01-01
The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.
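The summary-measure approach in point (2) can be seen in miniature: compute one slope per subject, then average across subjects, which discards between-subject intercept differences much as a subject-effect model would. All numbers in this simulation are hypothetical:

```python
import random

random.seed(5)

# Each subject gets their own intercept; the time slope (0.3) is shared.
n_subj, n_time = 200, 5
subjects = []
for _ in range(n_subj):
    u = random.gauss(0, 1.0)                                   # subject intercept
    ys = [u + 0.3 * t + random.gauss(0, 0.5) for t in range(n_time)]
    subjects.append(ys)

def ols_slope(ts, ys):
    mt, my = sum(ts) / len(ts), sum(ys) / len(ys)
    return (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
            / sum((t - mt) ** 2 for t in ts))

times = list(range(n_time))
# One summary number per subject (their slope), then averaged:
slopes = [ols_slope(times, ys) for ys in subjects]
beta_time = sum(slopes) / len(slopes)   # close to the true 0.3
```

Mixed-effects models generalize this idea, weighting subjects appropriately and handling unbalanced visit schedules, which is why the article recommends them for most longitudinal analyses.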
Memory effects on a resonate-and-fire neuron model subjected to Ornstein-Uhlenbeck noise
NASA Astrophysics Data System (ADS)
Paekivi, S.; Mankin, R.; Rekker, A.
2017-10-01
We consider a generalized Langevin equation with an exponentially decaying memory kernel as a model for the firing process of a resonate-and-fire neuron. The effect of temporally correlated random neuronal input is modeled as Ornstein-Uhlenbeck noise. In the noise-induced spiking regime of the neuron, we derive exact analytical formulas for the dependence of some statistical characteristics of the output spike train, such as the probability distribution of the interspike intervals (ISIs) and the survival probability, on the parameters of the input stimulus. Particularly, on the basis of these exact expressions, we have established sufficient conditions for the occurrence of memory-time-induced transitions between unimodal and multimodal structures of the ISI density and a critical damping coefficient which marks a dynamical transition in the behavior of the system.
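The Ornstein-Uhlenbeck input itself is straightforward to simulate with an Euler-Maruyama step (the parameters below are arbitrary; the paper derives exact analytical formulas rather than relying on simulation):

```python
import math
import random

random.seed(4)

# dX = -theta * X dt + sigma dW: exponentially correlated noise with
# correlation time 1/theta and stationary variance sigma^2 / (2 theta).
theta, sigma, dt = 1.0, 1.0, 0.01
x, xs = 0.0, []
for _ in range(200_000):
    x += -theta * x * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
    xs.append(x)

tail = xs[5_000:]                                   # drop the initial transient
m = sum(tail) / len(tail)                           # near 0
var = sum((v - m) ** 2 for v in tail) / len(tail)   # near 0.5 here
```

Feeding such a trajectory into a resonate-and-fire neuron model would give the temporally correlated input the paper analyzes.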
Nikoloulopoulos, Aristidis K
2017-10-01
A bivariate copula mixed model has been recently proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement on the trivariate generalized linear mixed model in fit to data and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite their three dimensionality.
CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.
Cooley, Richard L.; Vecchia, Aldo V.
1987-01-01
A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
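The simple Monte Carlo step described reduces to: draw parameters from their assumed extreme ranges, add draws of the dependent-variable error when prediction (rather than confidence) intervals are wanted, and read off empirical quantiles. The model function, ranges, and error level below are placeholders, not the paper's application:

```python
import random

random.seed(3)

def model_output(k1, k2):
    # Placeholder nonlinear output (stands in for, e.g., a simulated
    # head at one location in a ground-water flow model).
    return k1 / (k1 + k2)

draws = []
for _ in range(10_000):
    k1 = random.uniform(0.5, 2.0)    # assumed extreme range, parameter 1
    k2 = random.uniform(0.1, 1.0)    # assumed extreme range, parameter 2
    eps = random.gauss(0.0, 0.05)    # dependent-variable error (prediction only)
    draws.append(model_output(k1, k2) + eps)

draws.sort()
lo = draws[int(0.025 * len(draws))]  # 95% prediction interval, lower bound
hi = draws[int(0.975 * len(draws))]  # 95% prediction interval, upper bound
```

Dropping the eps term turns the same computation into a confidence interval on the model output, which illustrates the abstract's point that including dependent-variable error widens the prediction interval.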
Murakami, Takashi; Li, Shukuan; Han, Qinghong; Tan, Yuying; Kiyuna, Tasuku; Igarashi, Kentaro; Kawaguchi, Kei; Hwang, Ho Kyoung; Miyake, Kentaro; Singh, Arun S.; Nelson, Scott D.; Dry, Sarah M.; Li, Yunfeng; Hiroshima, Yukihiko; Lwin, Thinzar M.; DeLong, Jonathan C.; Chishima, Takashi; Tanaka, Kuniya; Bouvet, Michael; Endo, Itaru; Eilber, Fritz C.; Hoffman, Robert M.
2017-01-01
Methionine dependence is due to the overuse of methionine for aberrant transmethylation reactions in cancer. Methionine dependence may be the only general metabolic defect in cancer. In order to exploit methionine dependence for therapy, our laboratory previously cloned L-methionine α-deamino-γ-mercaptomethane lyase (EC 4.4.1.11). The cloned methioninase, termed recombinant methioninase, or rMETase, has been tested in mouse models of human cancer cell lines. Ewing's sarcoma is a recalcitrant disease even though the development of multimodal therapy has improved patients' outcomes. Here we report efficacy of rMETase against Ewing's sarcoma in a patient-derived orthotopic xenograft (PDOX) model. The Ewing's sarcoma was implanted in the right chest wall of nude mice to establish a PDOX model. Eight Ewing's sarcoma PDOX mice were randomized into an untreated control group (n = 4) and an rMETase treatment group (n = 4). rMETase (100 units) was injected intraperitoneally (i.p.) every 24 hours for 14 consecutive days. All mice were sacrificed on day 15, 24 hours after the last rMETase administration. rMETase effectively reduced tumor growth compared to the untreated control. Methionine levels in both plasma and supernatants derived from sonicated tumors were lower in the rMETase group. Body weight did not differ significantly at any time point between the 2 groups. The present study is the first demonstrating rMETase efficacy in a PDOX model, suggesting potential clinical development, especially in recalcitrant cancers such as Ewing's sarcoma. PMID:28404944
Compensation for Lithography Induced Process Variations during Physical Design
NASA Astrophysics Data System (ADS)
Chin, Eric Yiow-Bing
This dissertation addresses the challenge of designing robust integrated circuits in the deep sub-micron regime in the presence of lithography process variability. By extending and combining existing process and circuit analysis techniques, flexible software frameworks are developed to provide detailed studies of circuit performance in the presence of lithography variations such as focus and exposure. Applications of these software frameworks to select circuits demonstrate the electrical impact of these variations and provide insight into variability-aware compact models that capture the process-dependent circuit behavior. These variability-aware timing models abstract lithography variability from the process level to the circuit level and are used to estimate path-level circuit performance with high accuracy and very little runtime overhead. The Interconnect Variability Characterization (IVC) framework maps lithography-induced geometrical variations at the interconnect level to electrical delay variations. This framework is applied to one-dimensional repeater circuits patterned with both 90nm single patterning and 32nm double patterning technologies, under the presence of focus, exposure, and overlay variability. Studies indicate that single and double patterning layouts generally exhibit small variations in delay (between 1-3%) due to self-compensating RC effects associated with dense layouts, with larger variations due to overlay errors for layouts without self-compensating RC effects. The delay response of each double-patterned interconnect structure is fit with a second-order polynomial model in focus, exposure, and misalignment parameters with 12 coefficients and residuals of less than 0.1ps. The IVC framework is also applied to a repeater circuit with cascaded interconnect structures to emulate more complex layout scenarios, and it is observed that the variations on each segment average out to reduce the overall delay variation.
The Standard Cell Variability Characterization (SCVC) framework advances existing layout-level lithography-aware circuit analysis by extending it to cell-level applications, using a physically accurate approach that integrates process simulation, compact transistor models, and circuit simulation to characterize electrical cell behavior. This framework is applied to combinational and sequential cells in the Nangate 45nm Open Cell Library, and the timing response of these cells to lithography focus and exposure variations demonstrates Bossung-like behavior. This behavior permits the process-parameter-dependent response to be captured in a nine-term variability-aware compact model based on Bossung fitting equations. For a two-input NAND gate, the variability-aware compact model captures the simulated response to an accuracy of 0.3%. The SCVC framework is also applied to investigate advanced process effects including misalignment and layout proximity. The abstraction of process variability from the layout level to the cell level opens up an entirely new realm of circuit analysis and optimization and provides a foundation for path-level variability analysis without the computationally expensive costs associated with joint process and circuit simulation. The SCVC framework is used with slight modification to illustrate the speedup and accuracy tradeoffs of using compact models. With variability-aware compact models, the process-dependent performance of a three-stage logic circuit can be estimated to an accuracy of 0.7% with a speedup of over 50,000. Path-level variability analysis also provides an accurate estimate (within 1%) of ring oscillator period in well under a second. Another significant advantage of variability-aware compact models is that they can be easily incorporated into existing design methodologies for design optimization. This is demonstrated by applying cell swapping on a logic circuit to reduce the overall delay variability along a circuit path.
By including these variability-aware compact models in cell characterization libraries, design metrics such as circuit timing, power, area, and delay variability can be quickly assessed to optimize for the correct balance of all design metrics, including delay variability. Deterministic lithography variations can be easily captured using the variability-aware compact models described in this dissertation. However, another prominent source of variability is random dopant fluctuation, which affects transistor threshold voltage and in turn circuit performance. The SCVC framework is utilized to investigate the interactions between deterministic lithography variations and random dopant fluctuations. Monte Carlo studies show that the output delay distribution in the presence of random dopant fluctuations is dependent on lithography focus and exposure conditions, with a 3.6 ps change in standard deviation across the focus-exposure process window. This indicates that the electrical impact of random variations depends on systematic lithography variations, and this dependency should be included for precise analysis.
Persistence of physical activity in middle age: a nonlinear dynamic panel approach.
Kumagai, Narimasa; Ogura, Seiritsu
2014-09-01
No prior investigation has considered the effects of state dependence and unobserved heterogeneity on the relationship between regular physical activity (RPA) and latent health stock (LHS). Accounting for state dependence corrects the possible overestimation of the impact of socioeconomic factors. We estimated the degree of state dependence of RPA and LHS among middle-aged Japanese workers. The five years of longitudinal data used in this study were taken from the Longitudinal Survey of Middle and Elderly Persons. Individual heterogeneity was found for both RPA and LHS, and the dynamic random-effects probit model provided the best specification. A smoking habit, low educational attainment, longer work hours, and longer commuting time had negative effects on RPA participation. RPA had positive effects on LHS, taking into consideration the possibility of confounding with other lifestyle variables. The degree of state dependence of LHS was positive and significant. Increasing the intensity of RPA had positive effects on LHS and caused individuals with RPA to exhibit greater persistence of LHS compared to individuals without RPA. This result implies that policy interventions that promote RPA, such as smoking cessation, have lasting consequences. We concluded that smoking cessation is an important health policy for increasing both participation in RPA and LHS.
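The interplay of state dependence and unobserved heterogeneity in a dynamic probit can be simulated in a few lines. The sketch below uses hypothetical parameter values and illustrates why raw persistence overstates state dependence: the gap between P(active | active last period) and P(active | inactive last period) mixes true state dependence (gamma) with individual heterogeneity (alpha), which is exactly what a dynamic random-effects probit model disentangles.

```python
import numpy as np

def simulate_dynamic_probit(n_id=2000, T=5, gamma=0.8, sigma_alpha=0.7, seed=0):
    """Simulate y_it = 1{gamma * y_{i,t-1} + alpha_i + eps_it > 0}:
    gamma is true state dependence, alpha_i is unobserved heterogeneity."""
    rng = np.random.default_rng(seed)
    alpha = rng.normal(0, sigma_alpha, n_id)       # individual random effects
    y = np.zeros((n_id, T), dtype=int)
    y[:, 0] = (alpha + rng.normal(size=n_id) > 0).astype(int)
    for t in range(1, T):
        y[:, t] = (gamma * y[:, t - 1] + alpha + rng.normal(size=n_id) > 0).astype(int)
    return y

y = simulate_dynamic_probit()
prev, cur = y[:, :-1].ravel(), y[:, 1:].ravel()
p1 = cur[prev == 1].mean()   # P(active | active last period)
p0 = cur[prev == 0].mean()   # P(active | inactive last period)
```

Even with `gamma = 0`, heterogeneity alone would make `p1 > p0`, which is why naive persistence measures overestimate state dependence.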
Rényi Entropies from Random Quenches in Atomic Hubbard and Spin Models.
Elben, A; Vermersch, B; Dalmonte, M; Cirac, J I; Zoller, P
2018-02-02
We present a scheme for measuring Rényi entropies in generic atomic Hubbard and spin models using single copies of a quantum state and for partitions in arbitrary spatial dimensions. Our approach is based on the generation of random unitaries from random quenches, implemented using engineered time-dependent disorder potentials, and standard projective measurements, as realized by quantum gas microscopes. By analyzing the properties of the generated unitaries and the role of statistical errors, with respect to the size of the partition, we show that the protocol can be realized in existing quantum simulators and used to measure, for instance, area law scaling of entanglement in two-dimensional spin models or the entanglement growth in many-body localized systems.
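The statistical idea behind randomized measurements can be shown in a minimal single-subsystem sketch. For Haar-random unitaries, the second moment of the Born probabilities satisfies E_U[Σ_s P(s)²] = (1 + Tr ρ²)/(d + 1), so the purity, and hence the Rényi-2 entropy, is recoverable from measurement statistics alone. The paper generates approximate random unitaries from random quenches; here they are drawn exactly via QR decomposition, and the test state is hypothetical.

```python
import numpy as np

def haar_unitary(d, rng):
    """Haar-random unitary via QR of a complex Gaussian matrix."""
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # phase fix for Haar measure

def renyi2_from_random_unitaries(rho, n_u=3000, seed=1):
    """Estimate S2 = -log2 Tr(rho^2) from Born probabilities under random
    unitaries, using E_U[sum_s P(s)^2] = (1 + Tr rho^2) / (d + 1)."""
    d = rho.shape[0]
    rng = np.random.default_rng(seed)
    second_moments = []
    for _ in range(n_u):
        u = haar_unitary(d, rng)
        p = np.real(np.diag(u @ rho @ u.conj().T))  # projective-measurement probs
        second_moments.append(np.sum(p ** 2))
    purity = (d + 1) * np.mean(second_moments) - 1.0
    return -np.log2(purity)

rho = np.diag([0.7, 0.1, 0.1, 0.1])                # mixed state, Tr rho^2 = 0.52
s2_est = renyi2_from_random_unitaries(rho)
```

In an experiment the probabilities would themselves be estimated from a finite number of projective measurements per unitary, adding the statistical errors analyzed in the paper.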
Gaps between avalanches in one-dimensional random-field Ising models
NASA Astrophysics Data System (ADS)
Nampoothiri, Jishnu N.; Ramola, Kabir; Sabhapandit, Sanjib; Chakraborty, Bulbul
2017-09-01
We analyze the statistics of gaps (ΔH) between successive avalanches in one-dimensional random-field Ising models (RFIMs) in an external field H at zero temperature. In the first part of the paper we study the nearest-neighbor ferromagnetic RFIM. We map the sequence of avalanches in this system to a nonhomogeneous Poisson process with an H-dependent rate ρ(H). We use this to analytically compute the distribution of gaps P(ΔH) between avalanches as the field is increased monotonically from -∞ to +∞. We show that P(ΔH) tends to a constant C(R) as ΔH → 0⁺, which displays a nontrivial behavior with the strength of disorder R. We verify our predictions with numerical simulations. In the second part of the paper, motivated by avalanche gap distributions in driven disordered amorphous solids, we study a long-range antiferromagnetic RFIM. This model displays a gapped behavior, P(ΔH) = 0, up to a system-size-dependent offset value ΔH_off, and P(ΔH) ~ (ΔH - ΔH_off)^θ as ΔH → ΔH_off⁺. We perform numerical simulations on this model and determine θ ≈ 0.95(5). We also discuss mechanisms which would lead to a nonzero exponent θ for general spin models with quenched random fields.
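The nonhomogeneous Poisson picture can be checked numerically with a thinning simulation: for a process with rate ρ(H), the gap density at ΔH → 0⁺ approaches ∫ρ² dH / ∫ρ dH. The Gaussian rate profile below is a hypothetical stand-in for the model's actual ρ(H).

```python
import numpy as np

def poisson_gaps(rate, lam_max, lo, hi, n_real=200, seed=0):
    """Gaps between events of a nonhomogeneous Poisson process on [lo, hi],
    simulated by thinning a homogeneous process of intensity lam_max."""
    rng = np.random.default_rng(seed)
    gaps = []
    for _ in range(n_real):
        n = rng.poisson(lam_max * (hi - lo))
        h = np.sort(rng.uniform(lo, hi, n))
        keep = h[rng.uniform(0, lam_max, n) < rate(h)]   # thinning step
        gaps.append(np.diff(keep))
    return np.concatenate(gaps)

rho0 = 50.0
rate = lambda h: rho0 * np.exp(-h ** 2 / 2)              # hypothetical rate profile
gaps = poisson_gaps(rate, rho0, -4.0, 4.0)

# small-gap density should approach C = (∫ rho^2 dH) / (∫ rho dH) = rho0 / sqrt(2)
C_pred = rho0 / np.sqrt(2)
C_emp = np.mean(gaps < 0.005) / 0.005
```

The constant small-gap limit is the Poisson-process signature; the gapped power law of the antiferromagnetic model would instead show zero density up to ΔH_off.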
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krasnobaeva, L. A., E-mail: kla1983@mail.ru; Siberian State Medical University Moscowski Trakt 2, Tomsk, 634050; Shapovalov, A. V.
Within the formalism of the Fokker–Planck equation, the influence of a nonstationary external force, a random force, and dissipation effects on the dynamics of local conformational perturbations (kinks) propagating along the DNA molecule is investigated. Such waves play an important role in the regulation of important biological processes in living systems at the molecular level. A modified sine-Gordon equation, simulating the rotational oscillations of bases in one of the DNA chains, was used as a dynamic model of DNA. The equation of evolution of the kink momentum is obtained in the form of a stochastic differential equation in the Stratonovich sense within the framework of the well-known McLaughlin and Scott energy approach. The corresponding Fokker–Planck equation for the momentum distribution function coincides with the equation describing the Ornstein–Uhlenbeck process with a regular nonstationary external force. The influence of nonlinear stochastic effects on the kink dynamics is considered with the help of the nonlinear Fokker–Planck equation with the shift coefficient dependent on the first moment of the kink momentum distribution function. Expressions are derived for the average value and variance of the momentum. Examples are considered which demonstrate the influence of the external regular and random forces on the evolution of the average value and variance of the kink momentum.
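The Ornstein–Uhlenbeck momentum equation can be sketched with an Euler-Maruyama simulation, checking the ensemble variance against the stationary value σ²/(2γ). The damping, noise strength, and sinusoidal driving force below are hypothetical, not fitted DNA parameters.

```python
import numpy as np

def simulate_ou(gamma=1.0, sigma=0.5, F=lambda t: 0.2 * np.sin(t),
                p0=0.0, T=10.0, dt=1e-3, n_paths=2000, seed=0):
    """Euler-Maruyama paths of the Ornstein-Uhlenbeck momentum equation
    dp = (-gamma * p + F(t)) dt + sigma dW, with a regular nonstationary
    external force F(t); returns the momenta at the final time."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    p = np.full(n_paths, p0)
    for k in range(n_steps):
        t = k * dt
        drift = (-gamma * p + F(t)) * dt
        p = p + drift + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
    return p

p = simulate_ou()
var_stationary = 0.5 ** 2 / (2 * 1.0)   # sigma^2 / (2 gamma)
```

The regular force shifts only the mean momentum; the variance relaxes to σ²/(2γ) regardless of F(t), mirroring the moment equations derived in the paper.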
Ozmutlu, H. Cenk
2014-01-01
We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that allow the results of local search to be adapted back into the genetic algorithm with a minimum of relocation operations on the genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms. PMID:24977204
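The random-key chromosome idea can be illustrated with the classic decoding scheme: the integer part of each key assigns a job to a machine and the fractional part orders jobs within machines. This is a generic sketch of random-key decoding only; the paper's GAspLA additionally handles job splitting, setup times, and local search, none of which appear here.

```python
import numpy as np

def decode_random_keys(keys, machines):
    """Decode a random-key chromosome into machine-wise job sequences:
    floor(key) mod machines assigns each job to a machine, and sorting by
    the fractional part orders the jobs (classic random-key decoding)."""
    assign = np.floor(keys).astype(int) % machines
    order = np.argsort(keys - np.floor(keys))      # jobs sorted by fractional part
    schedule = {m: [] for m in range(machines)}
    for job in order:
        schedule[assign[job]].append(int(job))
    return schedule

# 5 jobs on 2 machines; keys would be drawn (and evolved) in [0, machines)
keys = np.array([0.31, 1.07, 0.85, 1.52, 0.44])
sched = decode_random_keys(keys, machines=2)
```

Because any real vector decodes to a feasible schedule, crossover and mutation never produce invalid offspring, which is the main appeal of random keys; the paper's contribution is keeping this property while local search rearranges the decoded schedule.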
A Mixed Effects Randomized Item Response Model
ERIC Educational Resources Information Center
Fox, J.-P.; Wyrick, Cheryl
2008-01-01
The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before they are observed, so that only so-called randomized item responses are recorded. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…
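The randomization step can be illustrated with a classic forced-response design (a standard randomized response variant, not necessarily the exact scheme used in the model above): each respondent answers truthfully with probability p and otherwise gives a forced yes/no with equal probability, and the observed "yes" rate is unwound analytically.

```python
import numpy as np

def forced_response_estimate(answers, p_truth):
    """Unbiased prevalence estimate under a forced-response design:
    P(yes) = p_truth * pi + (1 - p_truth) / 2, solved for pi."""
    lam = np.mean(answers)                       # observed 'yes' rate
    return (lam - (1 - p_truth) / 2) / p_truth

rng = np.random.default_rng(0)
pi_true, p_truth, n = 0.30, 0.7, 100_000         # hypothetical design values
truthful = rng.random(n) < p_truth               # randomizing device outcome
trait = rng.random(n) < pi_true                  # true (sensitive) item responses
forced_yes = rng.random(n) < 0.5                 # forced answers
answers = np.where(truthful, trait, forced_yes)  # only these are observed
pi_hat = forced_response_estimate(answers, p_truth)
```

The mixed effects randomized item response model embeds this known response-randomization relationship inside an item response model, rather than estimating a single prevalence.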
Fluorescence quenching near small metal nanoparticles.
Pustovit, V N; Shahbazyan, T V
2012-05-28
We develop a microscopic model for fluorescence of a molecule (or semiconductor quantum dot) near a small metal nanoparticle. When a molecule is situated close to a metal surface, its fluorescence is quenched due to energy transfer to the metal. We perform quantum-mechanical calculations of energy transfer rates for nanometer-sized Au nanoparticles and find that nonlocal and quantum-size effects significantly enhance dissipation in the metal as compared to that predicted by semiclassical electromagnetic models. However, the dependence of transfer rates on the molecule's distance to the metal nanoparticle surface, d, is significantly weaker than the d^(-4) behavior for a flat metal surface with a sharp boundary predicted by previous calculations within the random phase approximation.
Tigers on trails: occupancy modeling for cluster sampling.
Hines, J E; Nichols, J D; Royle, J A; MacKenzie, D I; Gopalaswamy, A M; Kumar, N Samba; Karanth, K U
2010-07-01
Occupancy modeling focuses on inference about the distribution of organisms over space, using temporal or spatial replication to allow inference about the detection process. Inference based on spatial replication strictly requires that replicates be selected randomly and with replacement, but the importance of these design requirements is not well understood. This paper focuses on an increasingly popular sampling design based on spatial replicates that are not selected randomly and that are expected to exhibit Markovian dependence. We develop two new occupancy models for data collected under this sort of design, one based on an underlying Markov model for spatial dependence and the other based on a trap response model with Markovian detections. We then simulated data under the model for Markovian spatial dependence and fit the data to standard occupancy models and to the two new models. Bias of occupancy estimates was substantial for the standard models, smaller for the new trap response model, and negligible for the new spatial process model. We also fit these models to data from a large-scale tiger occupancy survey recently conducted in Karnataka State, southwestern India. In addition to providing evidence of a positive relationship between tiger occupancy and habitat, model selection statistics and estimates strongly supported the use of the model with Markovian spatial dependence. This new model provides another tool for the decomposition of the detection process, which is sometimes needed for proper estimation and which may also permit interesting biological inferences. In addition to designs employing spatial replication, we note the likely existence of temporal Markovian dependence in many designs using temporal replication. The models developed here will be useful either directly, or with minor extensions, for these designs as well. 
We believe that these new models represent important additions to the suite of modeling tools now available for occupancy estimation in conservation monitoring. More generally, this work represents a contribution to the topic of cluster sampling for situations in which there is a need for specific modeling (e.g., reflecting dependence) for the distribution of the variable(s) of interest among subunits.
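The bias mechanism discussed above can be reproduced in a small simulation: detections on occupied sites follow a Markov chain along spatial replicates, and the data are then fit with the standard occupancy likelihood that wrongly assumes independent replicates. All parameter values are hypothetical, and this is a simplified caricature of the paper's trail-segment design, not its actual estimators.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_markov_detections(n_sites=3000, K=5, psi=0.6, p11=0.7, p01=0.15, seed=0):
    """Detection histories with Markovian dependence along spatial replicates:
    on occupied sites, P(det_k = 1 | det_{k-1} = 1) = p11 and
    P(det_k = 1 | det_{k-1} = 0) = p01; unoccupied sites yield all zeros."""
    rng = np.random.default_rng(seed)
    occ = rng.random(n_sites) < psi
    y = np.zeros((n_sites, K), dtype=int)
    p_stat = p01 / (p01 + 1 - p11)               # stationary detection probability
    y[:, 0] = occ & (rng.random(n_sites) < p_stat)
    for k in range(1, K):
        p_k = np.where(y[:, k - 1] == 1, p11, p01)
        y[:, k] = occ & (rng.random(n_sites) < p_k)
    return y

def naive_occupancy_mle(y):
    """Standard occupancy MLE assuming independent replicates (constant p)."""
    n, K = y.shape
    d = y.sum(axis=1)
    def nll(theta):
        psi, p = 1 / (1 + np.exp(-theta))        # logit parameterization
        lik = psi * p ** d * (1 - p) ** (K - d) + (1 - psi) * (d == 0)
        return -np.sum(np.log(lik))
    res = minimize(nll, [0.0, 0.0], method="Nelder-Mead")
    return 1 / (1 + np.exp(-res.x))

y = simulate_markov_detections()
psi_hat, p_hat = naive_occupancy_mle(y)          # true psi is 0.6
```

Positive serial dependence clumps detections and inflates the share of all-zero histories on occupied sites, which the standard model misreads as absence; the paper's Markov spatial-process model removes this bias by modeling the dependence explicitly.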
ERIC Educational Resources Information Center
Häfner, Isabelle; Flunger, Barbara; Dicke, Anna-Lena; Gaspard, Hanna; Brisson, Brigitte M.; Nagengast, Benjamin; Trautwein, Ulrich
2017-01-01
Using a cluster randomized field trial, the present study tested whether 2 relevance interventions affected students' value beliefs, self-concept, and effort in math differently depending on family background (socioeconomic status, family interest (FI), and parental utility value). Eighty-two classrooms were randomly assigned to either 1 of 2…
Ma, Xiaoye; Chen, Yong; Cole, Stephen R; Chu, Haitao
2016-12-01
To account for between-study heterogeneity in meta-analysis of diagnostic accuracy studies, bivariate random effects models have been recommended to jointly model the sensitivities and specificities. As study design and population vary, the definition of disease status or severity could differ across studies. Consequently, sensitivity and specificity may be correlated with disease prevalence. To account for this dependence, a trivariate random effects model had been proposed. However, the proposed approach can only include cohort studies with information estimating study-specific disease prevalence. In addition, some diagnostic accuracy studies only select a subset of samples to be verified by the reference test. It is known that ignoring unverified subjects may lead to partial verification bias in the estimation of prevalence, sensitivities, and specificities in a single study. However, the impact of this bias on a meta-analysis has not been investigated. In this paper, we propose a novel hybrid Bayesian hierarchical model combining cohort and case-control studies and correcting partial verification bias at the same time. We investigate the performance of the proposed methods through a set of simulation studies. Two case studies on assessing the diagnostic accuracy of gadolinium-enhanced magnetic resonance imaging in detecting lymph node metastases and of adrenal fluorine-18 fluorodeoxyglucose positron emission tomography in characterizing adrenal masses are presented. © The Author(s) 2014.
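The single-study partial verification bias mentioned above is easy to demonstrate: if verification by the reference test depends on the index test result, the complete-case sensitivity is biased, and inverse-probability-of-verification weighting (the Begg-Greenes correction) removes the bias when the verification probabilities are known. All rates below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, prev, se, sp = 200_000, 0.2, 0.8, 0.9
disease = rng.random(n) < prev
test_pos = np.where(disease, rng.random(n) < se, rng.random(n) < 1 - sp)
# verification depends on the index test: all positives, 30% of negatives
verified = np.where(test_pos, True, rng.random(n) < 0.3)

v, d, t = verified, disease, test_pos
# complete-case ("naive") sensitivity uses only verified subjects
se_naive = (v & d & t).sum() / (v & d).sum()
# Begg-Greenes correction: weight by inverse probability of verification
w = np.where(t, 1.0, 1 / 0.3)
se_corr = (w * (v & d & t)).sum() / (w * (v & d)).sum()
```

Test-negative diseased subjects are under-verified, so false negatives are under-counted and the naive sensitivity is inflated (here roughly 0.93 versus a true 0.80); a meta-analysis pooling such studies inherits the bias, which motivates the correction built into the hybrid model.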
Mancino, M.; McGaugh, J.; Feldman, Z.; Poling, J.; Oliveto, A.
2012-01-01
This randomized clinical trial retrospectively examined the effect of Post-Traumatic Stress Disorder (PTSD) and contingency management (CM) on cocaine use in opioid- and cocaine-dependent individuals maintained on high- or low-dose LAAM and randomly assigned to CM or a yoked-control condition. Cocaine-positive urines decreased more rapidly over time in those without PTSD versus those with PTSD in the non-contingency condition. In participants with PTSD, CM resulted in fewer cocaine-positive urines compared to the non-contingent condition. This suggests that CM may help improve the potentially worse outcomes in opioid- and cocaine-dependent individuals with PTSD compared to those without PTSD. PMID:20163389
Structure of a randomly grown 2-d network.
Ajazi, Fioralba; Napolitano, George M; Turova, Tatyana; Zaurbek, Izbassar
2015-10-01
We introduce a growing random network on a plane as a model of a growing neuronal network. The properties of the structure of the induced graph are derived. We compare our results with available data. In particular, it is shown that, depending on the parameters of the model, the structure of the system passes through different phases over time. We conclude with a possible explanation of some empirical data on the connections between neurons. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Serinaldi, Francesco; Kilsby, Chris G.
2013-06-01
The information contained in hyetographs and hydrographs is often synthesized by using key properties such as the peak or maximum value Xp, volume V, duration D, and average intensity I. These variables play a fundamental role in hydrologic engineering as they are used, for instance, to define design hyetographs and hydrographs as well as to model and simulate the rainfall and streamflow processes. Given their inherent variability and the empirical evidence of the presence of a significant degree of association, such quantities have been studied as correlated random variables suitable to be modeled by multivariate joint distribution functions. The advent of copulas in the geosciences simplified inference procedures by allowing the analysis of the marginal distributions to be separated from the study of the so-called dependence structure, or copula. However, the attention paid to the modeling task has come at the expense of a more thorough study of the true nature and origin of the relationships that link Xp, V, D, and I. In this study, we apply a set of ad hoc bootstrap algorithms to investigate these aspects by analyzing the hyetographs and hydrographs extracted from 282 daily rainfall series from central eastern Europe, three 5 min rainfall series from central Italy, 80 daily streamflow series from the continental United States, and two sets of 200 simulated universal multifractal time series. Our results show that all the pairwise dependence structures between Xp, V, D, and I exhibit some key properties that can be reproduced by simple bootstrap algorithms that rely on a standard univariate resampling without resort to multivariate techniques.
Therefore, the strong similarities between the observed dependence structures and the agreement between the observed and bootstrap samples suggest the existence of a numerical generating mechanism based on the superposition of the effects of sampling data at finite time steps and the process of summing realizations of independent random variables over random durations. We also show that the pairwise dependence structures are weakly dependent on the internal patterns of the hyetographs and hydrographs, meaning that the temporal evolution of the rainfall and runoff events marginally influences the mutual relationships of Xp, V, D, and I. Finally, our findings point out that subtle and often overlooked deterministic relationships between the properties of the event hyetographs and hydrographs exist. Confusing these relationships with genuine stochastic relationships can lead to an incorrect application of multivariate distributions and copulas and to misleading results.
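The proposed generating mechanism, summing independent random variables over random durations, can be sketched directly: synthetic events built this way already exhibit strong pairwise dependence among (Xp, V, D, I) without any multivariate machinery. The geometric duration and exponential intensity distributions below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import kendalltau

def synthetic_events(n_events, rng):
    """Generate event properties (peak Xp, volume V, duration D, intensity I)
    by summing iid positive increments over a random number of time steps --
    the simple numerical generating mechanism suggested by the bootstrap."""
    props = []
    for _ in range(n_events):
        d = rng.geometric(0.2)                  # random duration (time steps)
        x = rng.exponential(1.0, d)             # iid intensities at finite steps
        props.append((x.max(), x.sum(), d, x.mean()))
    return np.array(props)

rng = np.random.default_rng(0)
xp, v, d, i = synthetic_events(2000, rng).T
tau_vd, _ = kendalltau(v, d)                    # volume-duration dependence
tau_xv, _ = kendalltau(xp, v)                   # peak-volume dependence
```

The strong rank correlations arise purely from the summation-over-random-duration construction, mirroring the paper's point that such dependence need not reflect a genuinely multivariate stochastic mechanism.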
A Bayesian, generalized frailty model for comet assays.
Ghebretinsae, Aklilu Habteab; Faes, Christel; Molenberghs, Geert; De Boeck, Marlies; Geys, Helena
2013-05-01
This paper proposes a flexible modeling approach for so-called comet assay data regularly encountered in preclinical research. While such data consist of non-Gaussian outcomes in a multilevel hierarchical structure, traditional analyses typically completely or partly ignore this hierarchical nature by summarizing measurements within a cluster. Non-Gaussian outcomes are often modeled using exponential family models. This is true not only for binary and count data, but also, for example, for time-to-event outcomes. Two important reasons for extending this family are (1) the possible occurrence of overdispersion, meaning that the variability in the data may not be adequately described by the models, which often exhibit a prescribed mean-variance link, and (2) the accommodation of a hierarchical structure in the data, owing to clustering in the data. The first issue is dealt with through so-called overdispersion models. Clustering is often accommodated through the inclusion of random subject-specific effects. Though not always, one conventionally assumes such random effects to be normally distributed. In the case of time-to-event data, one encounters, for example, the gamma frailty model (Duchateau and Janssen, 2007). While both of these issues may occur simultaneously, models combining both are uncommon. Molenberghs et al. (2010) proposed a broad class of generalized linear models accommodating overdispersion and clustering through two separate sets of random effects. Here, we use this method to model data from a comet assay with a three-level hierarchical structure. Although a conjugate gamma random effect is used for the overdispersion random effect, both gamma and normal random effects are considered for the hierarchical random effect. Apart from model formulation, we place emphasis on Bayesian estimation.
Our proposed method has an upper hand over the traditional analysis in that it (1) uses the appropriate distribution stipulated in the literature; (2) deals with the complete hierarchical nature; and (3) uses all information instead of summary measures. The fit of the model to the comet assay is compared against the background of more conventional model fits. Results indicate the toxicity of 1,2-dimethylhydrazine dihydrochloride at different dose levels (low, medium, and high).
Engen, Steinar; Saether, Bernt-Erik
2014-03-01
We analyze the stochastic components of the Robertson-Price equation for the evolution of quantitative characters, which enables decomposition of the selection differential into components due to demographic and environmental stochasticity. We show how these two types of stochasticity affect the evolution of multivariate quantitative characters by defining demographic and environmental variances as components of individual fitness. The exact covariance formula for selection is decomposed into three components: the deterministic mean value and the stochastic demographic and environmental components. We show that demographic and environmental stochasticity generate random genetic drift and fluctuating selection, respectively. This provides a common theoretical framework for linking ecological and evolutionary processes. Demographic stochasticity can cause random variation in selection differentials independent of the fluctuating selection caused by environmental variation. We use this model of selection to illustrate that the effect of random variation in individual fitness on the expected selection differential depends on population size, and that the strength of fluctuating selection is affected by how environmental variation affects the covariance in Malthusian fitness between individuals with different phenotypes. Thus, our approach enables us to partition out the effects of fluctuating selection from the effects of selection due to random variation in individual fitness caused by demographic stochasticity. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.
Nonlinear consolidation in randomly heterogeneous highly compressible aquitards
NASA Astrophysics Data System (ADS)
Zapata-Norberto, Berenice; Morales-Casique, Eric; Herrera, Graciela S.
2018-05-01
Severe land subsidence due to groundwater extraction may occur in multiaquifer systems where highly compressible aquitards are present. The highly compressible nature of the aquitards leads to nonlinear consolidation, in which the groundwater flow parameters are stress-dependent. The case is further complicated by the heterogeneity of the hydrogeologic and geotechnical properties of the aquitards. The effect of realistic vertical heterogeneity of hydrogeologic and geotechnical parameters on the consolidation of highly compressible aquitards is investigated by means of one-dimensional Monte Carlo numerical simulations, in which the lower boundary represents the effect of an instantaneous drop in hydraulic head due to groundwater pumping. Two thousand realizations are generated for each of the following parameters: hydraulic conductivity (K), compression index (Cc), void ratio (e), and m (an empirical parameter relating hydraulic conductivity and void ratio). The correlation structure, mean, and variance for each parameter were obtained from a literature review of field studies in the lacustrine sediments of Mexico City. The results indicate that, among the parameters considered, random K has the largest effect on the ensemble average behavior of the system when compared to a nonlinear consolidation model with deterministic initial parameters. The deterministic solution underestimates the ensemble average of total settlement when initial K is random. In addition, random K leads to the largest variance (and therefore the largest uncertainty) of total settlement, groundwater flux, and time to reach steady-state conditions.
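The finding that the deterministic run underestimates the ensemble average when K is random is, at heart, a Jensen's-inequality effect for responses that are nonlinear (here, convex) in K. A minimal sketch, using a toy response t ~ H²/K and illustrative lognormal K statistics rather than the paper's consolidation model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear response: a consolidation-time-like quantity scaling as
# H**2 / K.  The thickness H and the lognormal parameters of K are
# assumptions for illustration only.
H = 10.0
K = rng.lognormal(mean=np.log(1e-9), sigma=1.0, size=2000)  # 2000 realizations

t_mc = (H**2 / K).mean()   # Monte Carlo ensemble average of the response
t_det = H**2 / K.mean()    # deterministic run at the mean parameter
print(t_mc > t_det)        # the deterministic run underestimates the mean
```

Because 1/K is convex, the sample average of H²/K always exceeds H²/mean(K), mirroring the underestimation reported in the abstract.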
The feasibility and stability of large complex biological networks: a random matrix approach.
Stone, Lewi
2018-05-29
In the 1970s, Robert May demonstrated that complexity creates instability in generic models of ecological networks having random interaction matrices A. Similar random matrix models have since been applied in many disciplines. Central to assessing stability is the "circular law," since it describes the eigenvalue distribution for an important class of random matrices A. However, despite widespread adoption, the circular law does not apply for ecological systems in which density dependence operates (i.e., where a species' growth is determined by its density). Instead, one needs to study the far more complicated eigenvalue distribution of the community matrix S = DA, where D is a diagonal matrix of population equilibrium values. Here we obtain this eigenvalue distribution. We show that if the random matrix A is locally stable, the community matrix S = DA will also be locally stable, provided the system is feasible (i.e., all species have positive equilibria, D > 0). This helps explain why, unusually, nearly all feasible systems studied here are locally stable. Large complex systems may thus be even more fragile than May predicted, given the difficulty of assembling a feasible system. It was also found that the degree of stability, or resilience, of a system depended on the minimum equilibrium population.
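The stability claim can be illustrated numerically. The sketch below is not Stone's derivation; it constructs A via a sufficient (Lyapunov diagonal stability) condition, A + Aᵀ negative definite, under which S = DA is provably stable for any positive diagonal D. Matrix size, scaling, and the equilibrium range are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# Random interaction matrix with the diagonal shifted so that A + A^T is
# negative definite (a sufficient, not necessary, route to local stability).
B = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
shift = np.linalg.eigvalsh(B + B.T).max() / 2 + 0.1
A = B - shift * np.eye(n)

# Feasible system: positive equilibrium abundances on the diagonal of D.
D = np.diag(rng.uniform(0.1, 2.0, size=n))
S = D @ A  # community matrix

# Both A and S = DA have all eigenvalues in the left half-plane.
print(np.linalg.eigvals(A).real.max(), np.linalg.eigvals(S).real.max())
```

The Lyapunov argument: with V = xᵀD⁻¹x for ẋ = DAx, one gets V̇ = xᵀ(A + Aᵀ)x < 0, so feasibility (D > 0) plus stability of A carries over to S.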
Zhang, Hai-Feng; Wu, Zhi-Xi; Tang, Ming; Lai, Ying-Cheng
2014-07-11
How effective are governmental incentives in achieving widespread vaccination coverage so as to prevent epidemic outbreaks? The answer largely depends on the complex interplay among the type of incentive, individual behavioral responses, and the intrinsic epidemic dynamics. By incorporating evolutionary games into epidemic dynamics, we investigate the effects of two types of incentive strategies: a partial-subsidy policy, in which a certain fraction of the cost of vaccination is offset, and a free-subsidy policy, in which donees are randomly selected and vaccinated at no cost. Through mean-field analysis and computations, we find that, under the partial-subsidy policy, the vaccination coverage depends monotonically on the sensitivity of individuals to payoff differences, whereas the dependence is non-monotonic for the free-subsidy policy. Because donees act as role models for relatively irrational individuals and retain their strategies when individuals are rational, the free-subsidy policy can in general lead to higher vaccination coverage. Our findings indicate that any disease-control policy should be exercised with extreme care: its success depends on the complex interplay among the intrinsic mathematical rules of epidemic spreading, governmental policies, and the behavioral responses of individuals.
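Strategy updating in vaccination-game models of this kind is commonly driven by a Fermi imitation rule, in which the "sensitivity to payoff difference" appears as a temperature-like parameter. A minimal sketch; the payoff values, cost, and subsidy below are invented for illustration and this is not the authors' exact model:

```python
import math

def adoption_probability(payoff_other, payoff_self, kappa):
    """Fermi rule: probability of imitating a neighbor's strategy;
    kappa plays the role of the sensitivity to payoff difference."""
    return 1.0 / (1.0 + math.exp(-(payoff_other - payoff_self) / kappa))

# A partial subsidy lowers the vaccination cost, raising the payoff of a
# vaccinated role model and hence the probability of imitating them.
cost, subsidy = 1.0, 0.4
p_no_subsidy = adoption_probability(-cost, -0.5, kappa=0.5)
p_subsidy = adoption_probability(-(cost - subsidy), -0.5, kappa=0.5)
print(p_no_subsidy < p_subsidy)  # True: the subsidy makes imitation likelier
```

Small kappa makes imitation nearly deterministic (rational individuals); large kappa makes it nearly random, which is one route to the non-monotonic coverage behavior the abstract describes.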
Surface roughness formation during shot peen forming
NASA Astrophysics Data System (ADS)
Koltsov, V. P.; Vinh, Le Tri; Starodubtseva, D. A.
2018-03-01
Shot peen forming (SPF) is used for forming panels and skins, and for hardening. As a rule, shot peen forming is performed after milling. The resulting surface roughness has a complex structure: a combination of the original microrelief and shot peen forming indentations of different depths, distributed chaotically along the surface. Since shot peen forming is a random process, the surface roughness resulting from milling and shot peen forming is random too. During roughness monitoring, it is difficult to determine the basic surface area that would ensure accurate results. It can be assumed that the basic area depends on the random roughness, which is characterized by the degree of shot peen forming coverage. Analysis of the depth and distribution of shot peen forming indentations along the surface made it possible to identify the shift of the original profile center plane and to create a mathematical model for the arithmetic mean deviation of the profile. Experimental testing proved the model's validity and established an inversely proportional dependence of the basic area on the degree of coverage.
NASA Astrophysics Data System (ADS)
Kozubal, Janusz; Tomanovic, Zvonko; Zivaljevic, Slobodan
2016-09-01
In the present study, a numerical model of a pile embedded in marl, described by a time-dependent constitutive model based on laboratory tests, is proposed. The solution extends the state of knowledge on monopiles loaded by a horizontal force at the head, accounting for the random variability of the load over time. The reliability problem investigated is defined by the union of failure events, each corresponding to excessive maximum horizontal displacement of the pile head during a given load period. Abaqus was used to model the task, with a two-layered viscoplastic model for the marl. The mechanical parameters of both parts of the model, plastic and rheological, were calibrated against creep laboratory test results. An important aspect of the problem is the reliability analysis of a monopile in a complex environment under random sequences of loads, which helps in understanding the role of viscosity in rock foundations. Owing to the lack of analytical solutions, the computations were performed with the response surface method in conjunction with a wavelet neural network, a method recommended for time-sequence processes and the description of nonlinear phenomena.
NASA Astrophysics Data System (ADS)
Le Doussal, Pierre; Petković, Aleksandra; Wiese, Kay Jörg
2012-06-01
We study the motion of an elastic object driven in a disordered environment in the presence of both dissipation and inertia. We consider random forces with the statistics of random walks and reduce the problem to a single degree of freedom. It is the extension of the mean-field Alessandro-Beatrice-Bertotti-Montorsi (ABBM) model in the presence of an inertial mass m. While the ABBM model can be solved exactly, its extension to inertia exhibits complicated history dependence due to oscillations and backward motion. The characteristic scales for avalanche motion are studied from numerics and qualitative arguments. To make analytical progress, we consider two variants which coincide with the original model whenever the particle moves only forward. Using a combination of analytical and numerical methods together with simulations, we characterize the distributions of instantaneous acceleration and velocity, and compare them in these three models. We show that for large driving velocity, all three models share the same large-deviation function for positive velocities, which is obtained analytically for small and large m, as well as for m = 6/25. The effect of small additional thermal and quantum fluctuations can be treated within an approximate method.
Non-consensus opinion model with a neutral view on complex networks
NASA Astrophysics Data System (ADS)
Tian, Zihao; Dong, Gaogao; Du, Ruijin; Ma, Jing
2016-05-01
A nonconsensus opinion (NCO) model was introduced recently, which allows the stable coexistence of minority and majority opinions. However, owing to disparities in knowledge, experience, personality, or self-protection, agents often remain neutral when faced with certain opinions in real scenarios. To address this issue, we propose a general non-consensus opinion model with a neutral view (NCON) and define its dynamic opinion-change process. We applied the NCON model to different network topologies and studied the formation of opinion clusters. For random graphs, random regular networks, and scale-free (SF) networks, we found that in the steady state the system moved from a continuous to a discontinuous phase transition as the connectivity density decreased and as the exponent λ of the SF network increased, respectively. Moreover, the initial proportion of neutral opinions was found to have little effect on the proportional structure of opinions at the steady state. These results suggest that the majority choice between positive and negative opinions depends on the initial proportion of each opinion. The NCON model may have potential applications for decision makers.
NASA Technical Reports Server (NTRS)
Rajagopal, Kadambi R.; DebChaudhury, Amitabha; Orient, George
2000-01-01
This report describes a probabilistic structural analysis performed to determine the probabilistic structural response under fluctuating random pressure loads for the Space Shuttle Main Engine (SSME) turnaround vane. It uses a newly developed frequency- and distance-dependent correlation model that can represent the decay phenomena along the flow and across the flow, with the capability to introduce a phase delay. The analytical results are compared using two computer codes, SAFER (Spectral Analysis of Finite Element Responses) and NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), and with experimentally observed strain gage data. The computer code NESSUS, with an interface to a subset of the Composite Load Spectra (CLS) code, is used for the probabilistic analysis. A fatigue code was used to calculate fatigue damage due to the random pressure excitation. The random variables modeled include engine system primitive variables that influence the operating conditions, the convection velocity coefficient, the stress concentration factor, structural damping, and the thickness of the inner and outer vanes. The need for an appropriate correlation model, in addition to the magnitude of the PSD, is emphasized. The study demonstrates that correlation characteristics, even under random pressure loads, are capable of causing resonance-like effects for some modes. The study identifies the important variables that contribute to the structural alternating stress response and drive the fatigue damage for the new design. Since the alternating stress for the redesign is less than the endurance limit of the material, the damage due to high-cycle fatigue is negligible.
Wang, Yan; Chen, Xinguang
2015-01-01
Objective: Little research has been done on alcohol use and dependence among rural residents in China, a sub-population that might be under increased stress due to the rapid modernization and urbanization processes. We aimed to assess rural residents' levels of stress, negative emotions, resilience, and alcohol use/dependence, and the complex relationships among them. Methods: Survey data from a large random sample (n = 1145, mean age = 35.9, SD = 7.7, 50.7% male) of rural residents in Wuhan, China were collected using Audio Computer-Assisted Self-Interview. Results: The sample had a high prevalence of frequently perceived stress (47%) and high prevalences of ever (54.4%), past-30-day (40.4%), and binge drinking (13.8%). Approximately 11% met the criterion for intermediate to severe alcohol dependence. Mediation analysis indicated that the association between perceived stress (predictor) and alcohol dependence (outcome) was fully mediated by anxiety (indirect effect = .203, p < .01) and depression (indirect effect = .158, p < .05); moderation analysis indicated that the association between stress and the two negative emotions (mediators) was significantly modified by resilience (moderator); an integrative moderated mediation analysis indicated that the indirect effect from stress to alcohol dependence through negative emotions was also moderated by resilience. Conclusions: Negative emotions play a key role in bridging stress and alcohol dependence, while resilience significantly buffers the impact of stress on depression, reducing the risk of alcohol dependence. Resilience training may be an effective component of alcohol interventions in rural China. PMID:26342628
Recalculated probability of M ≥ 7 earthquakes beneath the Sea of Marmara, Turkey
Parsons, T.
2004-01-01
New earthquake probability calculations are made for the Sea of Marmara region and the city of Istanbul, providing a revised forecast and an evaluation of time-dependent interaction techniques. Calculations incorporate newly obtained bathymetric images of the North Anatolian fault beneath the Sea of Marmara [Le Pichon et al., 2001; Armijo et al., 2002]. Newly interpreted fault segmentation enables an improved regional A.D. 1500-2000 earthquake catalog and interevent model, which form the basis for time-dependent probability estimates. Calculations presented here also employ detailed models of coseismic and postseismic slip associated with the 17 August 1999 M = 7.4 Izmit earthquake to investigate effects of stress transfer on seismic hazard. Probability changes caused by the 1999 shock depend on Marmara Sea fault-stressing rates, which are calculated with a new finite element model. The combined 2004-2034 regional Poisson probability of M ≥ 7 earthquakes is ~38%, the regional time-dependent probability is 44 ± 18%, and incorporation of stress transfer raises it to 53 ± 18%. The most important effect of adding time dependence and stress transfer to the calculations is an increase in the 30-year probability of a M ≥ 7 earthquake affecting Istanbul. The 30-year Poisson probability at Istanbul is 21%, and the addition of time dependence and stress transfer raises it to 41 ± 14%. The ranges given on probability values are sensitivities of the calculations to input parameters determined by Monte Carlo analysis; 1000 calculations are made using parameters drawn at random from distributions. Sensitivities are large relative to mean probability values and enhancements caused by stress transfer, reflecting a poor understanding of large-earthquake aperiodicity.
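The Monte Carlo sensitivity procedure, 1000 calculations with parameters drawn at random, can be sketched for the simplest (Poisson) case, where the exposure-window probability is P = 1 - exp(-T/τ). The recurrence-interval distribution below is an illustrative assumption, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical sensitivity analysis: the 30-yr Poisson probability
# recomputed 1000 times with the mean recurrence interval tau drawn from
# a distribution (location/scale here are made up for illustration).
T = 30.0
tau = rng.normal(loc=140.0, scale=40.0, size=1000).clip(min=50.0)
P = 1.0 - np.exp(-T / tau)
print(f"P = {P.mean():.2f} +/- {P.std():.2f}")  # mean probability and sensitivity
```

The spread of P across draws is exactly the kind of "range on probability values" the abstract reports; time-dependent (renewal) models replace the exponential with, e.g., a Brownian passage-time distribution.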
Simultaneous stochastic inversion for geomagnetic main field and secular variation. II - 1820-1980
NASA Technical Reports Server (NTRS)
Bloxham, Jeremy; Jackson, Andrew
1989-01-01
With the aim of producing readable time-dependent maps of the geomagnetic field at the core-mantle boundary, the method of simultaneous stochastic inversion for the geomagnetic main field and secular variation, described by Bloxham (1987), was applied to survey data from the period 1820-1980 to yield two time-dependent geomagnetic-field models, one for the period 1900-1980 and the other for 1820-1900. Particular consideration was given to the effect of crustal fields on observations. It was found that the existing methods of accounting for these fields as sources of random noise are inadequate in two circumstances: (1) when sequences of measurements are made at one particular site, and (2) for measurements made at satellite altitude. The present model shows many of the features in the earth's magnetic field at the core-mantle boundary described by Bloxham and Gubbins (1985) and supports many of their earlier conclusions.
On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo
NASA Astrophysics Data System (ADS)
Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl
2016-09-01
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy, and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity, and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, and extrapolation and post-processing techniques.
The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of Representative Elementary Volume size for arbitrary physics.
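A minimal multilevel Monte Carlo estimator can be sketched on a toy problem (Euler-Maruyama for geometric Brownian motion rather than pore-scale physics): the telescoping sum combines many cheap coarse-level samples with a few expensive fine-level corrections. All parameters and sample counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Estimate E[S_T] for dS = mu*S dt + sig*S dW via Euler-Maruyama,
# with level l using 2**l timesteps.
mu, sig, S0, T = 0.05, 0.2, 1.0, 1.0

def level_estimator(l, n_samples):
    """Mean of P_l - P_{l-1} on coupled paths (P_{-1} := 0)."""
    nf = 2 ** l
    dt = T / nf
    dW = rng.normal(scale=np.sqrt(dt), size=(n_samples, nf))
    Sf = np.full(n_samples, S0)
    for i in range(nf):                       # fine path
        Sf = Sf * (1 + mu * dt + sig * dW[:, i])
    if l == 0:
        return Sf.mean()
    Sc = np.full(n_samples, S0)
    dWc = dW[:, 0::2] + dW[:, 1::2]           # coarse increments from fine ones
    for i in range(nf // 2):                  # coupled coarse path
        Sc = Sc * (1 + mu * 2 * dt + sig * dWc[:, i])
    return (Sf - Sc).mean()

# Fewer samples at the finer (more expensive) levels:
samples = [40_000, 10_000, 2_500, 600]
estimate = sum(level_estimator(l, n) for l, n in enumerate(samples))
print(estimate)  # close to the exact value S0*exp(mu*T) ~ 1.051
```

Because the level corrections have small variance, they need far fewer samples than a single-level estimator at the finest resolution, which is the source of the cost reduction the abstract describes.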
Polaronic effects at finite temperatures in the B850 ring of the LH2 complex.
Chorošajev, Vladimir; Rancova, Olga; Abramavicius, Darius
2016-03-21
Energy transfer and relaxation dynamics in the B850 ring of LH2 molecular aggregates are described, taking into account the polaronic effects, by a stochastic time-dependent variational approach. We explicitly include the finite temperature effects in the model by sampling the initial conditions of the vibrational states randomly. This is in contrast to previous applications of the variational approach, which consider only the zero-temperature case. The method allows us to obtain both the microscopic dynamics at the single-wavefunction level and the thermally averaged picture of excitation relaxation over a wide range of temperatures. Spectroscopic observables such as temperature dependent absorption and time-resolved fluorescence spectra are calculated. Microscopic wavefunction evolution is quantified by introducing the exciton participation (localization) length and the exciton coherence length. Their asymptotic temperature dependence demonstrates that the environmental polaronic effects range from exciton self-trapping and excitonic polaron formation at low temperatures to thermally induced state delocalization and decoherence at high temperatures. While the transition towards the polaronic state can be observed on the wavefunction level, it does not produce a discernible effect on the calculated spectroscopic observables.
Quantifying Biomass from Point Clouds by Connecting Representations of Ecosystem Structure
NASA Astrophysics Data System (ADS)
Hendryx, S. M.; Barron-Gafford, G.
2017-12-01
Quantifying terrestrial ecosystem biomass is an essential part of monitoring carbon stocks and fluxes within the global carbon cycle and optimizing natural resource management. Point cloud data such as from lidar and structure from motion can be effective for quantifying biomass over large areas, but significant challenges remain in developing effective models that allow for such predictions. Inference models that estimate biomass from point clouds are established in many environments, yet, are often scale-dependent, needing to be fitted and applied at the same spatial scale and grid size at which they were developed. Furthermore, training such models typically requires large in situ datasets that are often prohibitively costly or time-consuming to obtain. We present here a scale- and sensor-invariant framework for efficiently estimating biomass from point clouds. Central to this framework, we present a new algorithm, assignPointsToExistingClusters, that has been developed for finding matches between in situ data and clusters in remotely-sensed point clouds. The algorithm can be used for assessing canopy segmentation accuracy and for training and validating machine learning models for predicting biophysical variables. We demonstrate the algorithm's efficacy by using it to train a random forest model of above ground biomass in a shrubland environment in Southern Arizona. We show that by learning a nonlinear function to estimate biomass from segmented canopy features we can reduce error, especially in the presence of inaccurate clusterings, when compared to a traditional, deterministic technique to estimate biomass from remotely measured canopies. Our random forest on cluster features model extends established methods of training random forest regressions to predict biomass of subplots but requires significantly less training data and is scale invariant. 
The random forest on cluster features model reduced mean absolute error, when evaluated on all test data in leave one out cross validation, by 40.6% from deterministic mesquite allometry and 35.9% from the inferred ecosystem-state allometric function. Our framework should allow for the inference of biomass more efficiently than common subplot methods and more accurately than individual tree segmentation methods in densely vegetated environments.
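The "random forest on cluster features" idea can be sketched with synthetic data: per-cluster canopy features feed a random forest regressor, which is compared against a single-feature allometric power law. The feature names, data-generating function, and all values below are invented for illustration; this is not the authors' dataset or pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)

# Synthetic per-cluster canopy features and a nonlinear biomass relation.
n = 2000
area = rng.uniform(0.5, 20.0, n)     # canopy area per cluster (m^2, assumed)
height = rng.uniform(0.3, 4.0, n)    # canopy height (m, assumed)
biomass = 2.0 * area**0.9 * height**0.6 + rng.normal(0, 1.0, n)

X = np.column_stack([area, height])
train, test = slice(0, 1500), slice(1500, n)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[train], biomass[train])
mae_rf = np.abs(rf.predict(X[test]) - biomass[test]).mean()

# Deterministic allometry fitted on area alone (log-log least squares):
a, b = np.polyfit(np.log(area[train]),
                  np.log(np.clip(biomass[train], 0.1, None)), 1)
mae_allom = np.abs(np.exp(b) * area[test]**a - biomass[test]).mean()
print(mae_rf, mae_allom)  # the multi-feature forest has the lower error
```

The forest can exploit all cluster features jointly, whereas the power law is locked to one predictor, which mirrors the error reductions reported above.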
Proactive tobacco treatment and population-level cessation: a pragmatic randomized clinical trial.
Fu, Steven S; van Ryn, Michelle; Sherman, Scott E; Burgess, Diana J; Noorbaloochi, Siamak; Clothier, Barbara; Taylor, Brent C; Schlede, Carolyn M; Burke, Randy S; Joseph, Anne M
2014-05-01
Current tobacco use treatment approaches require smokers to request treatment or depend on the provider to initiate smoking cessation care and are therefore reactive. Most smokers do not receive evidence-based treatments for tobacco use that include both behavioral counseling and pharmacotherapy. To assess the effect of a proactive, population-based tobacco cessation care model on use of evidence-based tobacco cessation treatments and on population-level smoking cessation rates (ie, abstinence among all smokers including those who use and do not use treatment) compared with usual care among a diverse population of current smokers. The Veterans Victory Over Tobacco Study, a pragmatic randomized clinical trial involving a population-based registry of current smokers aged 18 to 80 years. A total of 6400 current smokers, identified using the Department of Veterans Affairs (VA) electronic medical record, were randomized prior to contact to evaluate both the reach and effectiveness of the proactive care intervention. Current smokers were randomized to usual care or proactive care. Proactive care combined (1) proactive outreach and (2) offer of choice of smoking cessation services (telephone or in-person). Proactive outreach included mailed invitations followed by telephone outreach to motivate smokers to seek treatment with choice of services. The primary outcome was 6-month prolonged smoking abstinence at 1 year and was assessed by a follow-up survey among all current smokers regardless of interest in quitting or treatment utilization. A total of 5123 participants were included in the primary analysis. The follow-up survey response rate was 66%. The population-level, 6-month prolonged smoking abstinence rate at 1 year was 13.5% for proactive care compared with 10.9% for usual care (P = .02). Logistic regression mixed model analysis showed a significant effect of the proactive care intervention on 6-month prolonged abstinence (odds ratio [OR], 1.27 [95% CI, 1.03-1.57]). 
In analyses accounting for nonresponse using likelihood-based not-missing-at-random models, the effect of proactive care on 6-month prolonged abstinence persisted (OR, 1.33 [95% CI, 1.17-1.51]). Proactive, population-based tobacco cessation care using proactive outreach to connect smokers to evidence-based telephone or in-person smoking cessation services is effective for increasing long-term population-level cessation rates. clinicaltrials.gov Identifier: NCT00608426.
NASA Astrophysics Data System (ADS)
Shah, Nita H.; Shah, Arpan D.
2014-04-01
The article analyzes the economic order quantity for a retailer who must handle imperfect product quality and units subject to deterioration at a constant rate. To control deterioration of the units in inventory, the retailer deploys advanced preservation technology. Another challenge for the retailer is to supply a perfect-quality product, which requires mandatory inspection during the production process. The model is developed under the condition of a random fraction of defective items. It is assumed that, after inspection, the screened defective items are sold instantly at a discounted rate. Demand is considered to be price-sensitive and stock-dependent. The model incorporates the effect of inflation, which is a critical factor globally. The objective is to maximize the retailer's profit with respect to preservation technology investment, order quantity, and cycle time. A numerical example is given to validate the proposed model, and a sensitivity analysis is carried out to address managerial issues.
On a phase diagram for random neural networks with embedded spike timing dependent plasticity.
Turova, Tatyana S; Villa, Alessandro E P
2007-01-01
This paper presents an original mathematical framework based on graph theory, which is a first attempt to investigate the dynamics of a model of neural networks with embedded spike timing dependent plasticity. The neurons correspond to integrate-and-fire units located at the vertices of a finite subset of a 2D lattice. There are two types of vertices, corresponding to the inhibitory and the excitatory neurons. The edges are directed and labelled by the discrete values of the synaptic strength. We assume that there is an initial firing pattern corresponding to a subset of units that generate a spike. The number of externally activated vertices is a small fraction of the entire network. The model presented here describes how such a pattern propagates throughout the network as a random walk on a graph. Several results are compared with computational simulations, and new data are presented for identifying critical parameters of the model.
ERIC Educational Resources Information Center
Wei, Liew Tze; Su-Mae, Tan; Wi, Tay Nuo
2014-01-01
The main objective of this study was to ascertain if the effectiveness of conversational narrations and non-conversational narrations in multimedia environment will be mediated by learners' field dependence and gender. 53 participants (25 field dependent and 28 field independent subjects) were randomly divided to interact with either one of…
Mixed models approaches for joint modeling of different types of responses.
Ivanova, Anna; Molenberghs, Geert; Verbeke, Geert
2016-01-01
In many biomedical studies, one jointly collects longitudinal continuous, binary, and survival outcomes, possibly with some observations missing. Random-effects models, sometimes called shared-parameter models or frailty models, have received a lot of attention. In such models, the corresponding variance components can be employed to capture the association between the various sequences. In some cases, random effects are considered common to various sequences, perhaps up to a scaling factor; in others, there are different but correlated random effects. Even though a variety of data types has been considered in the literature, less attention has been devoted to ordinal data. For univariate longitudinal or hierarchical data, the proportional odds mixed model (POMM) is an instance of the generalized linear mixed model (GLMM; Breslow and Clayton, 1993). Ordinal data are conveniently replaced by a parsimonious set of dummies, which in the longitudinal setting leads to a repeated set of dummies. When ordinal longitudinal data are part of a joint model, the complexity increases further. This is the setting considered in this paper. We formulate a random-effects-based model that, in addition, allows for overdispersion. Using two case studies, it is shown that combining random effects to capture association with a further correction for overdispersion can improve the model's fit considerably, and that the resulting models make it possible to answer research questions that could not be addressed otherwise. Parameters can be estimated in a fairly straightforward way using the SAS procedure NLMIXED.
Generalized random sign and alert delay models for imperfect maintenance.
Dijoux, Yann; Gaudoin, Olivier
2014-04-01
This paper considers the modelling of the process of corrective and condition-based preventive maintenance for complex repairable systems. In order to take into account the dependency between both types of maintenance and the possibility of imperfect maintenance, Generalized Competing Risks models have been introduced in Doyen and Gaudoin (J Appl Probab 43:825-839, 2006). In this paper, we study two classes of these models, the Generalized Random Sign and Generalized Alert Delay models. A Generalized Competing Risks model can be built as a generalization of a particular Usual Competing Risks model, either by using a virtual age framework or not. The models' properties are studied and their parameterizations are discussed. Finally, simulation results and an application to real data are presented.
Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2015-11-01
The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reductions in spatial and temporal scales at higher Reynolds numbers lead to notable changes in the optimal pressure evaluation parameters. The effect of smaller scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
Random walk to a nonergodic equilibrium concept
NASA Astrophysics Data System (ADS)
Bel, G.; Barkai, E.
2006-01-01
Random walk models, such as the trap model, continuous time random walks, and comb models, exhibit weak ergodicity breaking, when the average waiting time is infinite. The open question is, what statistical mechanical theory replaces the canonical Boltzmann-Gibbs theory for such systems? In this paper a nonergodic equilibrium concept is investigated, for a continuous time random walk model in a potential field. In particular we show that in the nonergodic phase the distribution of the occupation time of the particle in a finite region of space approaches U- or W-shaped distributions related to the arcsine law. We show that when conditions of detailed balance are applied, these distributions depend on the partition function of the problem, thus establishing a relation between the nonergodic dynamics and canonical statistical mechanics. In the ergodic phase the distribution function of the occupation times approaches a δ function centered on the value predicted based on standard Boltzmann-Gibbs statistics. The relation of our work to single-molecule experiments is briefly discussed.
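A closely related, well-known case can make the U-shaped occupation-time statistics concrete: for an ordinary symmetric random walk, the fraction of time spent on one side of the origin follows Lévy's arcsine law, whose density diverges near 0 and 1. A minimal numpy sketch (the step count, number of realizations, and binning are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def occupation_fraction(n_steps):
    """Fraction of time a symmetric +/-1 random walk spends at or above zero."""
    walk = np.cumsum(rng.choice([-1, 1], size=n_steps))
    return float(np.mean(walk >= 0))

# Over many realizations the fraction follows Levy's arcsine law:
# a U-shaped density with most of the mass near 0 and 1.
fracs = np.array([occupation_fraction(1000) for _ in range(2000)])
hist, _ = np.histogram(fracs, bins=10, range=(0.0, 1.0))
```

The edge bins of `hist` dominate the central ones, the opposite of the δ-like concentration expected in the ergodic phase.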
Modeling Local Interactions during the Motion of Cyanobacteria
Galante, Amanda; Wisen, Susanne; Bhaya, Devaki; Levy, Doron
2012-01-01
Synechocystis sp., a common unicellular freshwater cyanobacterium, has been used as a model organism to study phototaxis, an ability to move in the direction of a light source. This microorganism displays a number of additional characteristics such as delayed motion, surface dependence, and a quasi-random motion, where cells move in a seemingly disordered fashion instead of in the direction of the light source, a global force on the system. These unexplained motions are thought to be modulated by local interactions between cells such as intercellular communication. In this paper, we consider only local interactions of these phototactic cells in order to mathematically model this quasi-random motion. We analyze an experimental data set to illustrate the presence of quasi-random motion and then derive a stochastic dynamic particle system modeling interacting phototactic cells. The simulations of our model are consistent with experimentally observed phototactic motion. PMID:22713858
The Coalescent Process in Models with Selection
Kaplan, N. L.; Darden, T.; Hudson, R. R.
1988-01-01
Statistical properties of the process describing the genealogical history of a random sample of genes are obtained for a class of population genetics models with selection. For models with selection, in contrast to models without selection, the distribution of this process, the coalescent process, depends on the distribution of the frequencies of alleles in the ancestral generations. If the ancestral frequency process can be approximated by a diffusion, then the mean and the variance of the number of segregating sites due to selectively neutral mutations in random samples can be numerically calculated. The calculations are greatly simplified if the frequencies of the alleles are tightly regulated. If the mutation rates between alleles maintained by balancing selection are low, then the number of selectively neutral segregating sites in a random sample of genes is expected to substantially exceed the number predicted under a neutral model. PMID:3066685
Micro-Randomized Trials: An Experimental Design for Developing Just-in-Time Adaptive Interventions
Klasnja, Predrag; Hekler, Eric B.; Shiffman, Saul; Boruvka, Audrey; Almirall, Daniel; Tewari, Ambuj; Murphy, Susan A.
2015-01-01
Objective This paper presents an experimental design, the micro-randomized trial, developed to support optimization of just-in-time adaptive interventions (JITAIs). JITAIs are mHealth technologies that aim to deliver the right intervention components at the right times and locations to optimally support individuals’ health behaviors. Micro-randomized trials offer a way to optimize such interventions by enabling modeling of causal effects and time-varying effect moderation for individual intervention components within a JITAI. Methods The paper describes the micro-randomized trial design, enumerates research questions that this experimental design can help answer, and provides an overview of the data analyses that can be used to assess the causal effects of studied intervention components and investigate time-varying moderation of those effects. Results Micro-randomized trials enable causal modeling of proximal effects of the randomized intervention components and assessment of time-varying moderation of those effects. Conclusions Micro-randomized trials can help researchers understand whether their interventions are having intended effects, when and for whom they are effective, and what factors moderate the interventions’ effects, enabling creation of more effective JITAIs. PMID:26651463
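The core of the design can be sketched in a few lines: micro-randomization assigns an intervention component independently at every decision point for every participant. The participant count, decision-point schedule, and randomization probability below are hypothetical, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical trial: 40 participants, 5 decision points/day for 42 days.
n_participants, n_points = 40, 5 * 42
p_treat = 0.5                 # randomization probability per decision point

# Each cell is one micro-randomization: True = deliver the component then.
treatment = rng.random((n_participants, n_points)) < p_treat

# Every participant contributes n_points randomizations, which is what
# lets the design estimate proximal, time-varying causal effects.
```

Because each person is randomized many times, within-person contrasts between treated and untreated decision points become available for causal modeling.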
NASA Astrophysics Data System (ADS)
Xu, Chong-yu; Tunemar, Liselotte; Chen, Yongqin David; Singh, V. P.
2006-06-01
The sensitivity of hydrological models to input data errors has been reported in the literature for particular models on a single catchment or a few catchments. A more important issue, namely how a model's response to input data errors changes as catchment conditions change, has not been addressed previously. This study investigates the seasonal and spatial effects of precipitation data errors on the performance of conceptual hydrological models. For this study, a monthly conceptual water balance model, NOPEX-6, was applied to 26 catchments in the Mälaren basin in Central Sweden. Both systematic and random errors were considered. For the systematic errors, 5-15% of mean monthly precipitation values were added to the original precipitation to form the corrupted input scenarios. Random values were generated by Monte Carlo simulation and were assumed to be (1) independent between months, and (2) distributed according to a Gaussian law of zero mean and constant standard deviation, taken as 5, 10, 15, 20, and 25% of the mean monthly standard deviation of precipitation. The results show that the response of the model parameters and model performance depends, among other factors, on the type of error, the magnitude of the error, the physical characteristics of the catchment, and the season of the year. In particular, the model appears less sensitive to the random error than to the systematic error. The catchments with smaller values of runoff coefficients were more influenced by input data errors than were the catchments with higher values. Dry months were more sensitive to precipitation errors than were wet months. Recalibration of the model with erroneous data compensated in part for the data errors by altering the model parameters.
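The input-corruption scheme described above can be sketched as follows. The precipitation record is synthetic and the gamma parameters are arbitrary placeholders, but the two error types mirror the study design: a fixed percentage of the mean monthly value for systematic errors, and zero-mean Gaussian noise scaled by the monthly standard deviation for random errors:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic monthly precipitation (mm): 10 years x 12 months.
precip = rng.gamma(shape=2.0, scale=30.0, size=(10, 12))

def systematic_scenario(p, pct):
    """Add pct% of the mean monthly precipitation to every value."""
    return p + (pct / 100.0) * p.mean(axis=0)

def random_scenario(p, pct):
    """Add independent zero-mean Gaussian noise whose standard deviation
    is pct% of the mean monthly standard deviation of precipitation."""
    sigma = (pct / 100.0) * p.std(axis=0).mean()
    return p + rng.normal(0.0, sigma, size=p.shape)

corrupted_sys = systematic_scenario(precip, 10)   # +10% systematic error
corrupted_rnd = random_scenario(precip, 15)       # 15% random error
```

Feeding such corrupted scenarios to a calibrated model and comparing performance is the Monte Carlo experiment the study runs per catchment and season.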
A Bayesian ridge regression analysis of congestion's impact on urban expressway safety.
Shi, Qi; Abdel-Aty, Mohamed; Lee, Jaeyoung
2016-03-01
With the rapid growth of traffic in urban areas, concerns about congestion and traffic safety have been heightened. This study leveraged both Automatic Vehicle Identification (AVI) system and Microwave Vehicle Detection System (MVDS) installed on an expressway in Central Florida to explore how congestion impacts the crash occurrence in urban areas. Multiple congestion measures from the two systems were developed. To ensure more precise estimates of the congestion's effects, the traffic data were aggregated into peak and non-peak hours. Multicollinearity among traffic parameters was examined. The results showed the presence of multicollinearity especially during peak hours. As a response, ridge regression was introduced to cope with this issue. Poisson models with uncorrelated random effects, correlated random effects, and both correlated random effects and random parameters were constructed within the Bayesian framework. It was proven that correlated random effects could significantly enhance model performance. The random parameters model has similar goodness-of-fit compared with the model with only correlated random effects. However, by accounting for the unobserved heterogeneity, more variables were found to be significantly related to crash frequency. The models indicated that congestion increased crash frequency during peak hours while during non-peak hours it was not a major crash contributing factor. Using the random parameter model, the three congestion measures were compared. It was found that all congestion indicators had similar effects while Congestion Index (CI) derived from MVDS data was a better congestion indicator for safety analysis. Also, analyses showed that the segments with higher congestion intensity could not only increase property damage only (PDO) crashes, but also more severe crashes. 
In addition, the necessity of incorporating a specific congestion indicator for congestion's effects on safety, and of addressing multicollinearity between explanatory variables, was also discussed. Including a specific congestion indicator significantly improved model performance. When comparing models with and without ridge regression, the magnitude of the coefficients was altered in the presence of multicollinearity. These conclusions suggest that the use of an appropriate congestion measure and consideration of multicollinearity among the variables would improve the models and our understanding of the effects of congestion on traffic safety.
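Ridge regression's role in coping with multicollinearity can be illustrated with a small closed-form sketch. The data are synthetic and the penalty value arbitrary; this is the textbook estimator, not the authors' full Bayesian random-effects model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two strongly collinear traffic covariates (x2 nearly duplicates x1).
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

def ridge(X, y, lam):
    """Closed-form ridge estimate; lam = 0 reduces to ordinary least
    squares. The intercept is left unpenalized."""
    penalty = lam * np.eye(X.shape[1])
    penalty[0, 0] = 0.0
    return np.linalg.solve(X.T @ X + penalty, X.T @ y)

beta_ols = ridge(X, y, 0.0)     # typically noisy, offsetting slopes on x1, x2
beta_ridge = ridge(X, y, 10.0)  # shrunken, stabilized slopes
```

The penalty bounds the L2 norm of the slope estimates, which is exactly what tames the offsetting coefficients that collinear predictors produce under OLS.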
Statistical model for speckle pattern optimization.
Su, Yong; Zhang, Qingchuan; Gao, Zeren
2017-11-27
Image registration is the key technique of optical metrologies such as digital image correlation (DIC), particle image velocimetry (PIV), and speckle metrology. Its performance depends critically on the quality of the image pattern, and thus pattern optimization attracts extensive attention. In this article, a statistical model is built to optimize speckle patterns that are composed of randomly positioned speckles. It is found that the process of speckle pattern generation is essentially a filtered Poisson process. The dependence of measurement errors (including systematic errors, random errors, and overall errors) upon speckle pattern generation parameters is characterized analytically. By minimizing the errors, formulas for the optimal speckle radius are presented. Although the primary motivation is from the field of DIC, we believe that scholars in other optical measurement communities, such as PIV and speckle metrology, will benefit from these discussions.
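A filtered Poisson process of the kind described can be sketched by drawing a Poisson number of speckle centers uniformly at random and summing a Gaussian blob at each center (the image size, speckle density, and radius below are illustrative assumptions, not the paper's optimized values):

```python
import numpy as np

rng = np.random.default_rng(7)

def speckle_pattern(size=128, density=0.02, radius=2.0):
    """Generate a speckle image as a filtered Poisson process: a Poisson
    number of speckle centers, uniformly placed, each filtered with a
    Gaussian kernel of the given radius."""
    n_speckles = rng.poisson(density * size * size)
    ys, xs = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size))
    for cx, cy in rng.uniform(0.0, size, size=(n_speckles, 2)):
        img += np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * radius ** 2))
    return img

pattern = speckle_pattern()
```

In the paper's framing, `density` and `radius` are the generation parameters whose effect on registration error is characterized analytically.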
NASA Astrophysics Data System (ADS)
Kim, Y. W.; Cress, R. P.
2016-11-01
Disordered binary alloys are modeled as a randomly close-packed assembly of nanocrystallites intermixed with randomly positioned atoms, i.e., glassy-state matter. The nanocrystallite size distribution is measured in a simulated macroscopic medium in two dimensions. We have also defined, and measured, the degree of crystallinity as the probability of a particle being a member of nanocrystallites. Both the distribution function and the degree of crystallinity are found to be determined by alloy composition. When heated, the nanocrystallites become smaller in size due to increasing thermal fluctuation. We have modeled this phenomenon as a case of thermal dissociation by means of the law of mass action. The crystallite size distribution function is computed for AuCu3 as a function of temperature by solving some 12 000 coupled algebraic equations for the alloy. The results show that linear thermal expansion of the specimen has contributions from the temperature dependence of the degree of crystallinity, in addition to respective thermal expansions of the nanocrystallites and glassy-state matter.
The Influence of Mark-Recapture Sampling Effort on Estimates of Rock Lobster Survival
Kordjazi, Ziya; Frusher, Stewart; Buxton, Colin; Gardner, Caleb; Bird, Tomas
2016-01-01
Five annual capture-mark-recapture surveys on Jasus edwardsii were used to evaluate the effect of sample size and fishing effort on the precision of estimated survival probability. Datasets of different numbers of individual lobsters (ranging from 200 to 1,000 lobsters) were created by random subsampling from each annual survey. This process of random subsampling was also used to create 12 datasets of different levels of effort based on three levels of the number of traps (15, 30 and 50 traps per day) and four levels of the number of sampling-days (2, 4, 6 and 7 days). The most parsimonious Cormack-Jolly-Seber (CJS) model for estimating survival probability shifted from a constant model towards sex-dependent models with increasing sample size and effort. A sample of 500 lobsters or 50 traps used on four consecutive sampling-days was required to obtain precise survival estimates for males and females separately. Reduced sampling effort of 30 traps over four sampling days was sufficient if a survival estimate for both sexes combined was adequate for management of the fishery. PMID:26990561
Gong, Zheng; Chen, Tianrun; Ratilal, Purnima; Makris, Nicholas C
2013-11-01
An analytical model derived from normal mode theory for the accumulated effects of range-dependent multiple forward scattering is applied to estimate the temporal coherence of the acoustic field forward propagated through a continental-shelf waveguide containing random three-dimensional internal waves. The modeled coherence time scale of narrow-band low-frequency acoustic field fluctuations after propagating through a continental-shelf waveguide is shown to decay with range following a power law of exponent -1/2 beyond roughly 1 km and to decrease with increasing internal wave energy, consistent with measured acoustic coherence time scales. The model should provide a useful prediction of the acoustic coherence time scale as a function of internal wave energy in continental-shelf environments. The acoustic coherence time scale is an important parameter in remote sensing applications because it determines (i) the time window within which standard coherent processing such as matched filtering may be conducted, and (ii) the number of statistically independent fluctuations in a given measurement period, which determines the variance reduction possible by stationary averaging.
NASA Astrophysics Data System (ADS)
Roman, H. E.; Porto, M.; Dose, C.
2008-10-01
We analyze daily log-returns data for a set of 1200 stocks, taken from US stock markets, over a period of 2481 trading days (January 1996-November 2005). We estimate the degree of non-stationarity in daily market volatility employing a polynomial fit, used as a detrending function. We find that the autocorrelation function of absolute detrended log-returns departs strongly from the corresponding original data autocorrelation function, while the observed leverage effect depends only weakly on trends. Such an effect is shown to occur when both skewness and long-time memory are simultaneously present. A fractional derivative random walk model is discussed, yielding quantitative agreement with the empirical results.
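The detrending-then-autocorrelation procedure can be sketched on synthetic data. The volatility trend, Student-t returns, and polynomial degree below are assumptions for illustration, not the authors' exact choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical daily log-returns with slowly varying (non-stationary) volatility.
n = 2481                               # matches the study's trading-day count
t = np.arange(n)
vol_trend = 1.0 + 0.5 * (t / n) ** 2
returns = vol_trend * rng.standard_t(df=5, size=n)

# Fit a polynomial to |returns| as the detrending function, then subtract it.
coeffs = np.polyfit(t, np.abs(returns), deg=4)
detrended = np.abs(returns) - np.polyval(coeffs, t)

def autocorr(x, lag):
    """Sample autocorrelation at a given positive lag."""
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

acf_raw = [autocorr(np.abs(returns), k) for k in (1, 5, 20)]
acf_detrended = [autocorr(detrended, k) for k in (1, 5, 20)]
```

Comparing `acf_raw` and `acf_detrended` shows how much of the apparent volatility memory is attributable to the slow trend the polynomial removes.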
Cheraghalizadeh, J; Najafi, M N; Dashti-Naserabadi, H; Mohammadzadeh, H
2017-11-01
Self-organized criticality on random fractal networks has many motivations, such as the movement of fluid in porous media. In addition to the randomness, introducing correlation between the neighboring portions of the porous media has some nontrivial effects. In this paper, we consider Ising-like interactions between the active sites as the simplest method to bring correlations into the porous media, and we investigate the statistics of the BTW model in it. These correlations are controlled by the artificial "temperature" T and the sign of the Ising coupling. Based on our numerical results, we propose that at the Ising critical temperature T_{c} the model is compatible with the universality class of the two-dimensional (2D) self-avoiding walk (SAW). In particular, the fractal dimension of the loops, which are defined as the external frontier of the avalanches, is very close to D_{f}^{SAW}=4/3. Also, the corresponding open curves have conformal invariance with the root-mean-square distance R_{rms}∼t^{3/4} (t being the parametrization of the curve), in accordance with the 2D SAW. In the finite-size study, we observe that at T=T_{c} the model has some aspects compatible with the 2D BTW model (e.g., the 1/log(L)-dependence of the exponents of the distribution functions) and some in accordance with the Ising model (e.g., the 1/L-dependence of the fractal dimensions). The finite-size scaling theory is tested and shown to be fulfilled for all statistical observables at T=T_{c}. At off-critical temperatures in the close vicinity of T_{c}, the exponents show additional power-law behaviors in terms of T-T_{c}, with exponents reported in the text. The spanning cluster probability at the critical temperature also scales with L^{1/2}, which is different from the regular 2D BTW model.
NASA Astrophysics Data System (ADS)
Koskinen, Johan; Lomi, Alessandro
2013-05-01
We study the evolution of the network of foreign direct investment (FDI) in the international electricity industry during the period 1994-2003. We assume that the ties in the network of investment relations between countries are created and deleted in continuous time, according to a conditional Gibbs distribution. This assumption allows us to simultaneously take into account the aggregate predictions of the well-established gravity model of international trade as well as local dependencies between network ties connecting the countries in our sample. According to the modified version of the gravity model that we specify, the probability of observing an investment tie between two countries depends on the mass of the economies involved, their physical distance, and the tendency of the network to self-organize into local configurations of network ties. While the limiting distribution of the data generating process is an exponential random graph model, we do not assume the system to be in equilibrium. We find evidence of the effects of the standard gravity model of international trade on the evolution of the global FDI network. However, we also provide evidence of significant dyadic and extra-dyadic dependencies between investment ties that are typically ignored in available research. We show that local dependencies between national electricity industries are sufficient for explaining global properties of the network of foreign direct investments. We also show, however, that network dependencies vary significantly over time, giving rise to a time-heterogeneous localized process of network evolution.
Shah, Anoop D.; Bartlett, Jonathan W.; Carpenter, James; Nicholas, Owen; Hemingway, Harry
2014-01-01
Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The “true” imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001–2010) with complete data on all covariates. Variables were artificially made “missing at random,” and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data. PMID:24589914
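The chained-equations idea can be sketched in a single-variable toy version. A least-squares regressor stands in for the random forest here (in practice a flexible learner would take its place, e.g. scikit-learn's IterativeImputer with a RandomForestRegressor), and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated data: two fully observed covariates and one variable that is
# ~30% missing at random.
n = 500
X = rng.normal(size=(n, 2))
y = 1.0 + X @ np.array([0.8, -0.5]) + 0.3 * rng.normal(size=n)
missing = rng.random(n) < 0.3
y_obs = y.copy()
y_obs[missing] = np.nan

def chained_impute(X, y_obs, n_iter=5):
    """Single-variable sketch of MICE: regress the incomplete variable on
    the complete covariates and redraw imputations with residual noise.
    (With several incomplete variables the loop would cycle over them.)"""
    obs = ~np.isnan(y_obs)
    A = np.column_stack([np.ones(len(y_obs)), X])
    y_imp = y_obs.copy()
    y_imp[~obs] = y_obs[obs].mean()                  # crude initial fill
    for _ in range(n_iter):
        beta, *_ = np.linalg.lstsq(A[obs], y_obs[obs], rcond=None)
        resid_sd = np.std(y_obs[obs] - A[obs] @ beta)
        draw = A @ beta + rng.normal(0.0, resid_sd, size=len(y_obs))
        y_imp[~obs] = draw[~obs]
    return y_imp

y_completed = chained_impute(X, y_obs)
```

Drawing imputations with residual noise, rather than plugging in the regression prediction, preserves the variability that downstream analyses rely on.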
ERIC Educational Resources Information Center
Titus, Janet C.; Dennis, Michael L.; Diamond, Guy; Godley, Susan H.; Babor, Thomas; Donaldson, Jean; Herrell, James; Tims, Frank; Webb, Charles
The Cannabis Youth Treatment (CYT) study is a multi-site randomized field experiment examining five outpatient treatment protocols for adolescents who abuse or are dependent on marijuana. The purpose of the CYT project is twofold: (a) to test the relative clinical effectiveness and cost-effectiveness of five promising interventions targeted at…
ERIC Educational Resources Information Center
Rhoads, Christopher
2011-01-01
Researchers planning a randomized field trial to evaluate the effectiveness of an educational intervention often face the following dilemma. They plan to recruit schools to participate in their study. The question is, "Should the researchers randomly assign individuals (either students or teachers, depending on the intervention) within schools to…
Thermal rectification in anharmonic chains under an energy-conserving noise.
Guimarães, Pedro H; Landi, Gabriel T; de Oliveira, Mário J
2015-12-01
Systems in which the heat flux depends on the direction of the flow are said to present thermal rectification. This effect has attracted much theoretical and experimental interest in recent years. However, in most theoretical models the effect is found to vanish in the thermodynamic limit, in disagreement with experiment. The purpose of this paper is to show that the rectification may be restored by including an energy-conserving noise which randomly flips the velocity of the particles with a certain rate λ. It is shown that as long as λ is nonzero, the rectification remains finite in the thermodynamic limit. This is illustrated in a classical harmonic chain subject to a quartic pinning potential (the Φ(4) model) and coupled to heat baths by Langevin equations.
Yen, A M-F; Liou, H-H; Lin, H-L; Chen, T H-H
2006-01-01
The study aimed to develop a predictive model for data fraught with heterogeneity that cannot be explained by sampling variation or measured covariates. The random-effect Poisson regression model was first proposed to deal with over-dispersion for data fraught with heterogeneity after making allowance for measured covariates. A Bayesian acyclic graphical model in conjunction with the Markov chain Monte Carlo (MCMC) technique was then applied to estimate the parameters of both the relevant covariates and the random effect. Predictive distributions were then generated to compare the predicted with the observed for the Bayesian model with and without the random effect. Data from repeated measurements of episodes among 44 patients with intractable epilepsy were used as an illustration. Applying Poisson regression to the epilepsy data without taking heterogeneity into account yielded a large value of heterogeneity (heterogeneity factor = 17.90, deviance = 1485, degrees of freedom (df) = 83). After taking the random effect into account, the heterogeneity factor was greatly reduced (heterogeneity factor = 0.52, deviance = 42.5, df = 81). The Pearson chi-square statistics for the comparison between the expected and observed seizure frequencies at two and three months were 34.27 (p = 1.00) for the model with the random effect and 1799.90 (p < 0.0001) for the model without, respectively. The Bayesian acyclic graphical model using the MCMC method was demonstrated to have great potential for disease prediction when data show over-dispersion attributable either to correlated observations or to subject-to-subject variability.
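The kind of over-dispersion a random effect absorbs can be demonstrated with a gamma-Poisson simulation. The patient count echoes the example, but the rate and frailty parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(11)

# 44 patients, 3 monthly seizure counts each; a multiplicative frailty
# (random effect) with mean 1 creates between-patient heterogeneity.
n_patients, n_months = 44, 3
frailty = rng.gamma(shape=0.5, scale=2.0, size=n_patients)
counts = rng.poisson(5.0 * frailty[:, None], size=(n_patients, n_months))

# A plain Poisson model forces variance == mean; the heterogeneity
# factor (Pearson statistic / df) measures how badly that assumption fails.
mu_hat = counts.mean()
pearson = ((counts - mu_hat) ** 2 / mu_hat).sum()
heterogeneity_factor = pearson / (counts.size - 1)
```

A heterogeneity factor far above 1 (as here) is the signature that a random effect, not just covariates, is needed, mirroring the drop from 17.90 to 0.52 reported in the abstract.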
Toward a Probabilistic Automata Model of Some Aspects of Code-Switching.
ERIC Educational Resources Information Center
Dearholt, D. W.; Valdes-Fallis, G.
1978-01-01
The purpose of the model is to select either Spanish or English as the language to be used; its goals at this stage of development include modeling code-switching for lexical need, apparently random code-switching, dependency of code-switching upon sociolinguistic context, and code-switching within syntactic constraints. (EJS)
Competition in a Social Structure
NASA Astrophysics Data System (ADS)
Legara, Erika Fille; Longjas, Anthony; Batac, Rene
Complex adaptive agents develop strategies in the presence of competition. In modern human societies, there is an inherent sense of locality when describing inter-agent dynamics because of its network structure. One then wonders whether the traditional advertising schemes that are globally publicized and target random individuals are as effective in attracting a larger portion of the population as those that take advantage of local neighborhoods, such as "word-of-mouth" marketing schemes. Here, we demonstrate using a differential equation model that schemes targeting local cliques within the network are more successful at gaining a larger share of the population than those that target users randomly at a global scale (e.g., television commercials, print ads, etc.). This suggests that success in the competition is dependent not only on the number of individuals in the population but also on how they are connected in the network. We further show that the model is general in nature by considering examples of competition dynamics, particularly those of business competition and language death.
Thermal transport in binary colloidal glasses: Composition dependence and percolation assessment
NASA Astrophysics Data System (ADS)
Ruckdeschel, Pia; Philipp, Alexandra; Kopera, Bernd A. F.; Bitterlich, Flora; Dulle, Martin; Pech-May, Nelson W.; Retsch, Markus
2018-02-01
The combination of various types of materials is often used to create superior composites that outperform the pure phase components. For any rational design, the thermal conductivity of the composite as a function of the volume fraction of the filler component needs to be known. When approaching the nanoscale, the homogeneous mixture of various components poses an additional challenge. Here, we investigate binary nanocomposite materials based on polymer latex beads and hollow silica nanoparticles. These form randomly mixed colloidal glasses on a sub-μm scale. We focus on the heat transport properties through such binary assembly structures. The thermal conductivity can be well described by effective medium theory. However, film formation of the soft polymer component leads to phase segregation and a mismatch with existing mixing models. We confirm our experimental data by finite element modeling. This additionally allowed us to assess the onset of thermal transport percolation in such random particulate structures. Our study contributes to a better understanding of thermal transport through heterostructured particulate assemblies.
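One common effective-medium choice for a random binary mixture is the symmetric Bruggeman rule; the abstract does not say which mixing rule was used, so the following is a generic sketch with hypothetical conductivities for the two components:

```python
import numpy as np

def bruggeman_k(phi1, k1, k2):
    """Effective thermal conductivity of a random binary mixture from
    symmetric Bruggeman effective-medium theory, solved by bisection:
    phi1*(k1-ke)/(k1+2*ke) + phi2*(k2-ke)/(k2+2*ke) = 0."""
    phi2 = 1.0 - phi1

    def f(ke):
        return (phi1 * (k1 - ke) / (k1 + 2 * ke)
                + phi2 * (k2 - ke) / (k2 + 2 * ke))

    lo, hi = min(k1, k2), max(k1, k2)   # root is bracketed by the pure phases
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical values: polymer beads ~0.2 W/(m K), hollow silica ~0.03 W/(m K).
k_eff = bruggeman_k(0.5, 0.2, 0.03)
```

Sweeping `phi1` from 0 to 1 traces the composition dependence that such composite studies compare against measured conductivities.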
Dynamics of the Random Field Ising Model
NASA Astrophysics Data System (ADS)
Xu, Jian
The Random Field Ising Model (RFIM) is a general tool to study disordered systems. Crackling noise is generated when disordered systems are driven by external forces, spanning a broad range of sizes. Systems with different microscopic structures such as disordered magnets and Earth's crust have been studied under the RFIM. In this thesis, we investigated the domain dynamics and critical behavior in two dipole-coupled Ising ferromagnets Nd2Fe14B and LiHoxY1-xF4. With Tc well above room temperature, Nd2Fe14B has shown reversible disorder when exposed to an external transverse field and crosses between two universality classes in the strong and weak disorder limits. Besides tunable disorder, LiHoxY1-xF4 has shown quantum tunneling effects arising from quantum fluctuations, providing another mechanism for domain reversal. Universality within and beyond power law dependence on avalanche size and energy were studied in LiHo0.65Y0.35F4.
Adjusting for multiple prognostic factors in the analysis of randomised trials
2013-01-01
Background When multiple prognostic factors are adjusted for in the analysis of a randomised trial, it is unclear (1) whether it is necessary to account for each of the strata formed by all combinations of the prognostic factors (stratified analysis) when randomisation has been balanced within each stratum (stratified randomisation), or whether adjusting for the main effects alone will suffice, and (2) which method of adjustment is best in terms of type I error rate and power, irrespective of the randomisation method. Methods We used simulation to (1) determine whether a stratified analysis is necessary after stratified randomisation, and (2) compare different methods of adjustment in terms of power and type I error rate. We considered the following methods of analysis: adjusting for covariates in a regression model, adjusting for each stratum using either fixed or random effects, and Mantel-Haenszel or a stratified Cox model depending on the outcome. Results A stratified analysis is required after stratified randomisation to maintain correct type I error rates when (a) there are strong interactions between prognostic factors, and (b) there are approximately equal numbers of patients in each stratum. However, simulations based on real trial data found that type I error rates were unaffected by the method of analysis (stratified vs unstratified), indicating these conditions were not met in real datasets. Comparison of different analysis methods found that with small sample sizes and a binary or time-to-event outcome, most analysis methods led to either inflated type I error rates or a reduction in power; the lone exception was a stratified analysis using random effects for strata, which gave nominal type I error rates and adequate power. Conclusions It is unlikely that a stratified analysis is necessary after stratified randomisation except in extreme scenarios. Therefore, the method of analysis (accounting for the strata, or adjusting only for the covariates) will not generally need to depend on the method of randomisation used. Most methods of analysis work well with large sample sizes; however, treating strata as random effects should be the analysis method of choice with binary or time-to-event outcomes and a small sample size. PMID:23898993
Spreading Effect in Industrial Complex Network Based on Revised Structural Holes Theory
Xing, Lizhi; Ye, Qing; Guan, Jun
2016-01-01
This paper analyzed the spreading effect of industrial sectors with a complex network model from the perspective of econophysics. Input-output analysis, as an important research tool, focuses mostly on static analysis. However, the fundamental aim of industry analysis is to figure out how interaction between different industries affects economic development, which is a dynamic process. Thus, an industrial complex network based on input-output tables from WIOD is proposed as a bridge connecting accurate static quantitative analysis with comparable dynamic analysis. Applying revised structural holes theory, flow betweenness and random walk centrality were chosen to evaluate industrial sectors' long-term and short-term spreading effects, respectively. It shows that industries with higher flow betweenness or random walk centrality bring about a more intensive spreading effect along the industrial chains they stand in, because the value-stream transmission of an industrial sector depends on how many products or services it can obtain from the others, and such sectors act as brokers with greater information advantages and larger intermediary benefits. PMID:27218468
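Flow betweenness and random-walk centrality both require matrix computations; as a simpler stand-in that still captures the "broker" idea on a toy sector network, here is ordinary shortest-path betweenness via Brandes' algorithm. The sector names are invented:

```python
from collections import deque

def betweenness(adj):
    """Brandes' shortest-path betweenness for an unweighted graph given as
    {node: [neighbors]}; a simpler proxy for the flow-betweenness and
    random-walk centralities used in the paper."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                        # BFS counting shortest paths
            v = q.popleft(); stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                    # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# A tiny "industrial chain": the hub sector brokers all inter-sector flows
adj = {'mining': ['steel'], 'steel': ['mining', 'autos', 'construction'],
       'autos': ['steel'], 'construction': ['steel']}
bc = betweenness(adj)
print(max(bc, key=bc.get))  # the hub, 'steel'
```

The hub sector scores highest because every shortest path between the peripheral sectors passes through it, which is exactly the structural-holes notion of a broker with intermediary advantage.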
Inhomogeneous diffusion and ergodicity breaking induced by global memory effects
NASA Astrophysics Data System (ADS)
Budini, Adrián A.
2016-11-01
We introduce a class of discrete random-walk models driven by global memory effects. At any time, the right-left transitions depend on the whole previous history of the walker, being defined by an urn-like memory mechanism. The characteristic function is calculated in an exact way, which allows us to demonstrate that the ensemble of realizations is ballistic. Asymptotically, each realization is equivalent to that of a biased Markovian diffusion process with transition rates that strongly differ from one trajectory to another. Using this "inhomogeneous diffusion" feature, the ergodic properties of the dynamics are analytically studied through the time-averaged moments. Even in the long-time regime, they remain random objects. While their average over realizations recovers the corresponding ensemble averages, the departure between time and ensemble averages is explicitly shown through their probability densities. For the density of the second time-averaged moment, the ergodic limit and the limit of infinite lag times do not commute. All these effects are induced by the global memory. A generalized Einstein fluctuation-dissipation relation is also obtained for the time-averaged moments.
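One concrete urn-like memory rule (not necessarily the paper's exact transition law) is to repeat a uniformly chosen past step, a Pólya-urn scheme whose time-averaged velocity converges to a different random value in each realization:

```python
import random

def memory_walk(steps, seed):
    """Walk with global memory: the first step is +/-1 at random; every
    later step repeats one of the walker's own past steps chosen uniformly
    at random (an urn-like rule). Returns the time-averaged velocity."""
    rng = random.Random(seed)
    history = [rng.choice((-1, 1))]
    for _ in range(steps - 1):
        history.append(rng.choice(history))
    return sum(history) / len(history)

# Each realization settles on its own drift: the dynamics look like a
# biased Markovian walk whose rate differs from trajectory to trajectory.
print([round(memory_walk(5000, s), 2) for s in range(4)])
```

Because the fraction of +1 steps in a Pólya urn converges to a random (Beta-distributed) limit, time averages remain random objects even at long times, which is the ergodicity breaking described in the abstract.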
Cure fraction model with random effects for regional variation in cancer survival.
Seppä, Karri; Hakulinen, Timo; Kim, Hyon-Jung; Läärä, Esa
2010-11-30
Assessing regional differences in the survival of cancer patients is important but difficult when separate regions are small or sparsely populated. In this paper, we apply a mixture cure fraction model with random effects to cause-specific survival data of female breast cancer patients collected by the population-based Finnish Cancer Registry. Two sets of random effects were used to capture the regional variation in the cure fraction and in the survival of the non-cured patients, respectively. This hierarchical model was implemented in a Bayesian framework using a Metropolis-within-Gibbs algorithm. To avoid poor mixing of the Markov chain, when the variance of either set of random effects was close to zero, posterior simulations were based on a parameter-expanded model with tailor-made proposal distributions in Metropolis steps. The random effects allowed the fitting of the cure fraction model to the sparse regional data and the estimation of the regional variation in 10-year cause-specific breast cancer survival with a parsimonious number of parameters. Before 1986, the capital of Finland clearly stood out from the rest, but since then all the 21 hospital districts have achieved approximately the same level of survival. Copyright © 2010 John Wiley & Sons, Ltd.
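The fixed-parameter skeleton of a mixture cure model is simple: a cured fraction pi never experiences the event, so the survival curve plateaus at pi. A sketch using an exponential distribution for the non-cured patients (the paper adds region-specific random effects to both parts, omitted here):

```python
import math

def cure_survival(t, pi, rate):
    """Mixture cure model survival S(t) = pi + (1 - pi) * S_u(t): a cured
    fraction pi never experiences the event; the non-cured fraction
    follows, here, an exponential survival with the given rate."""
    return pi + (1.0 - pi) * math.exp(-rate * t)

# Survival plateaus at the cure fraction instead of falling to zero.
for t in (0, 5, 10, 50):
    print(t, round(cure_survival(t, pi=0.7, rate=0.3), 3))
```

In the hierarchical version, pi and the parameters of S_u(t) carry region-specific random effects, which is what lets sparse regional data borrow strength from the whole country.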
Willingness to treat drug dependence and depression: comparisons of future health professionals
Ahmedani, Brian K; Kubiak, Sheryl Pimlott; Rios-Bedoya, Carlos F; Mickus, Maureen; Anthony, James C
2011-01-01
Purpose Stigma-related feelings, including enthusiasm and willingness to work with alcohol, drug, and mental disorder (ADM) patients, as well as anticipated success in such work, will be required for the United States to be successful in its new initiatives for ADM screening, brief intervention, and effective referral to treatment and rehabilitation services (SBIRT). This study investigates students of medicine and social work with respect to their stigma-related feelings and degree of enthusiasm or willingness to treat patients affected by alcohol dependence, nicotine dependence, or major depression. Inference is strengthened by an anonymous online survey approach, with use of randomized reinforcers to gain at least partial experimental control of the nonparticipation biases that are otherwise present in student survey data. Material and methods All students on required course rosters were asked to participate in a two-part in-class and online assessment; 222 participated, with a gradient of participation induced via randomly drawn reinforcers for online survey participation. Between-group comparisons were made with a multivariate generalized linear model and generalized estimating equations approach that adjusts for covariates. Results Medical and social work students did not differ from each other with respect to their willingness to treat patients affected by major depression, alcohol dependence, or nicotine dependence, but together were less willing to treat nicotine- and alcohol-dependence-affected patients as compared to depression-affected patients. Personal history was not associated with the students' willingness to treat, but men were less willing to treat. Drawing strength from the randomized reinforcer experimental design nested within this survey approach, the study evidence suggests potential nonparticipation bias in standard surveys on this topic.
Conclusion These results indicate that future health professionals may prefer to treat depression as opposed to drug dependence conditions. For SBIRT success, curriculum change with educational interventions may be needed to increase willingness to treat patients with neuropsychiatric conditions such as drug dependence. Future research requires attention to a possible problem of nonparticipation bias in surveys of this type. PMID:21731413
Solvency supervision based on a total balance sheet approach
NASA Astrophysics Data System (ADS)
Pitselis, Georgios
2009-11-01
In this paper we investigate the adequacy of the own funds a company requires in order to remain healthy and avoid insolvency. Two methods are applied here; the quantile regression method and the method of mixed effects models. Quantile regression is capable of providing a more complete statistical analysis of the stochastic relationship among random variables than least squares estimation. The estimated mixed effects line can be considered as an internal industry equation (norm), which explains a systematic relation between a dependent variable (such as own funds) with independent variables (e.g. financial characteristics, such as assets, provisions, etc.). The above two methods are implemented with two data sets.
Random unitary evolution model of quantum Darwinism with pure decoherence
NASA Astrophysics Data System (ADS)
Balanesković, Nenad
2015-10-01
We study the behavior of Quantum Darwinism [W.H. Zurek, Nat. Phys. 5, 181 (2009)] within the iterative, random unitary operations qubit model of pure decoherence [J. Novotný, G. Alber, I. Jex, New J. Phys. 13, 053052 (2011)]. We conclude that Quantum Darwinism, which describes the quantum mechanical evolution of an open system S from the point of view of its environment E, is not a generic phenomenon, but depends on the specific form of the input states and on the type of S-E interactions. Furthermore, we show that within the random unitary model the concept of Quantum Darwinism enables one to explicitly construct and specify artificial input states of the environment E that allow one to store information about an open system S of interest with maximal efficiency.
ERIC Educational Resources Information Center
Aydin, Burak; Leite, Walter L.; Algina, James
2016-01-01
We investigated methods of including covariates in two-level models for cluster randomized trials to increase power to detect the treatment effect. We compared multilevel models that included either an observed cluster mean or a latent cluster mean as a covariate, as well as the effect of including Level 1 deviation scores in the model. A Monte…
Anomalous Growth of Aging Populations
NASA Astrophysics Data System (ADS)
Grebenkov, Denis S.
2016-04-01
We consider a discrete-time population dynamics with age-dependent structure. At every time step, one of the alive individuals from the population is chosen randomly and removed with probability q_k depending on its age, whereas a new individual of age 1 is born with probability r. The model can also describe a single queue in which the service order is random while the service efficiency depends on a customer's "age" in the queue. We propose a mean field approximation to investigate the long-time asymptotic behavior of the mean population size. The age dependence is shown to lead to anomalous power-law growth of the population at the critical regime. The scaling exponent is determined by the asymptotic behavior of the probabilities q_k at large k. The mean field approximation is validated by Monte Carlo simulations.
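A direct simulation of one reading of this model (the exact ordering of removal, birth, and aging is an assumption of this sketch) shows how an age-dependent removal probability q_k changes the growth of the population:

```python
import random

def aging_population(steps, r, q, seed=0):
    """One time step: a uniformly chosen individual is removed with
    age-dependent probability q(k); a new individual is born with
    probability r; everyone then ages by one step. The ordering of
    removal, birth, and aging is an assumption of this sketch."""
    rng = random.Random(seed)
    ages = [1]
    for _ in range(steps):
        if ages:
            i = rng.randrange(len(ages))
            if rng.random() < q(ages[i]):
                ages.pop(i)
        if rng.random() < r:
            ages.append(0)          # becomes age 1 after the aging step below
        ages = [a + 1 for a in ages]
    return len(ages)

# Removal probability decaying with age (q_k ~ 1/k) lets old individuals
# accumulate, in contrast with an age-independent removal probability.
print(aging_population(2000, r=0.5, q=lambda k: 1.0 / k))
print(aging_population(2000, r=0.5, q=lambda k: 0.5))
```

Varying how fast q_k decays at large k is what the mean-field analysis in the paper connects to the power-law growth exponent at the critical regime.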
Geometric Accuracy Analysis of Worlddem in Relation to AW3D30, Srtm and Aster GDEM2
NASA Astrophysics Data System (ADS)
Bayburt, S.; Kurtak, A. B.; Büyüksalih, G.; Jacobsen, K.
2017-05-01
In a project area close to Istanbul, the quality of the WorldDEM, AW3D30, SRTM DSM and ASTER GDEM2 height models has been analyzed in relation to a reference aerial LiDAR DEM and to each other. The random and the systematic height errors have been separated. The absolute offset of all height models in X, Y and Z is within expectation. The shifts were accounted for in advance to obtain a satisfactory estimate of the random error component. All height models are affected by tilts of varying size. In addition, systematic deformations can be seen that do not influence the standard deviation much. The delivery of WorldDEM includes a height error map, which is based on the interferometric phase errors and on the number and location of coverages from different orbits. A dependency of the height accuracy on the height error map information and on the number of coverages can be seen, but it is smaller than expected. WorldDEM is more accurate than the other investigated height models, and with its 10 m point spacing it includes more morphologic detail, visible in contour lines. The morphologic details are close to those based on the LiDAR digital surface model (DSM). As usual, a dependency of the accuracy on the terrain slope can be seen. In forest areas, the canopy definition of InSAR X- and C-band height models, as well as of height models based on optical satellite images, is not the same as the height definition by LiDAR. In addition, the interferometric phase uncertainty over forest areas is larger. Both effects lead to lower height accuracy in forest areas, also visible in the height error map.
NASA Astrophysics Data System (ADS)
Cugliandolo, Leticia F.; Lozano, Gustavo S.; Nessi, Nicolás; Picco, Marco; Tartaglia, Alessandro
2018-06-01
We study the Hamiltonian dynamics of the spherical spin model with fully-connected two-body random interactions. In the statistical physics framework, the potential energy is of the so-called p = 2 kind, closely linked to the scalar field theory. Most importantly for our setting, the energy conserving dynamics are equivalent to the ones of the Neumann integrable model. We take initial conditions from the Boltzmann equilibrium measure at a temperature that can be above or below the static phase transition, typical of a disordered (paramagnetic) or of an ordered (disguised ferromagnetic) equilibrium phase. We subsequently evolve the configurations with Newton dynamics dictated by a different Hamiltonian, obtained from an instantaneous global rescaling of the elements in the interaction random matrix. In the limit of infinitely many degrees of freedom, N → ∞, we identify three dynamical phases depending on the parameters that characterise the initial state and the final Hamiltonian. We next set the analysis of the system with finite number of degrees of freedom in terms of N non-linearly coupled modes. We argue that in the N → ∞ limit the modes decouple at long times. We evaluate the mode temperatures and we relate them to the frequency-dependent effective temperature measured with the fluctuation-dissipation relation in the frequency domain, similarly to what was recently proposed for quantum integrable cases. Finally, we analyse the N − 1 integrals of motion, notably, their scaling with N, and we use them to show that the system is out of equilibrium in all phases, even for parameters that show an apparent Gibbs–Boltzmann behaviour of the global observables. We elaborate on the role played by these constants of motion after the quench and we briefly discuss the possible description of the asymptotic dynamics in terms of a generalised Gibbs ensemble.
DeFulio, Anthony; Donlin, Wendy D; Wong, Conrad J; Silverman, Kenneth
2009-09-01
Due to the chronic nature of cocaine dependence, long-term maintenance treatments may be required to sustain abstinence. Abstinence reinforcement is among the most effective means of initiating cocaine abstinence. Practical and effective means of maintaining abstinence reinforcement programs over time are needed. To determine whether employment-based abstinence reinforcement can be an effective long-term maintenance intervention for cocaine dependence. Participants (n = 128) were enrolled in a 6-month job skills training and abstinence initiation program. Participants who initiated abstinence, attended regularly and developed needed job skills during the first 6 months were hired as operators in a data entry business and assigned randomly to an employment-only (control, n = 24) or abstinence-contingent employment (n = 27) group. Setting A non-profit data entry business. Participants Unemployed welfare recipients who used cocaine persistently while enrolled in methadone treatment in Baltimore. Abstinence-contingent employment participants received 1 year of employment-based contingency management, in which access to employment was contingent upon provision of drug-free urine samples under routine and then random drug testing. If a participant provided drug-positive urine or failed to provide a mandatory sample, then that participant received a temporary reduction in pay and could not work until urinalysis confirmed recent abstinence. Cocaine-negative urine samples at monthly assessments across 1 year of employment. During the 1 year of employment, abstinence-contingent employment participants provided significantly more cocaine-negative urine samples than employment-only participants [79.3% and 50.7%, respectively; P = 0.004, odds ratio (OR) = 3.73, 95% confidence interval (CI) = 1.60-8.69].
Conclusions Employment-based abstinence reinforcement that includes random drug testing is effective as a long-term maintenance intervention, and is among the most promising treatments for drug dependence. Workplaces could serve as therapeutic agents in the treatment of drug dependence by arranging long-term employment-based contingency management programs.
Humphreys, Keith; Blodgett, Janet C.; Wagner, Todd H.
2014-01-01
Background Observational studies of Alcoholics Anonymous’ (AA) effectiveness are vulnerable to self-selection bias because individuals choose whether or not to attend AA. The present study therefore employed an innovative statistical technique to derive a selection bias-free estimate of AA’s impact. Methods Six datasets from 5 National Institutes of Health-funded randomized trials (one with two independent parallel arms) of AA facilitation interventions were analyzed using instrumental variables models. Alcohol dependent individuals in one of the datasets (n = 774) were analyzed separately from the rest of sample (n = 1582 individuals pooled from 5 datasets) because of heterogeneity in sample parameters. Randomization itself was used as the instrumental variable. Results Randomization was a good instrument in both samples, effectively predicting increased AA attendance that could not be attributed to self-selection. In five of the six data sets, which were pooled for analysis, increased AA attendance that was attributable to randomization (i.e., free of self-selection bias) was effective at increasing days of abstinence at 3-month (B = .38, p = .001) and 15-month (B = 0.42, p = .04) follow-up. However, in the remaining dataset, in which pre-existing AA attendance was much higher, further increases in AA involvement caused by the randomly assigned facilitation intervention did not affect drinking outcome. Conclusions For most individuals seeking help for alcohol problems, increasing AA attendance leads to short and long term decreases in alcohol consumption that cannot be attributed to self-selection. However, for populations with high pre-existing AA involvement, further increases in AA attendance may have little impact. PMID:25421504
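The instrumental-variables idea can be sketched with the Wald estimator on synthetic data: randomization z shifts attendance x but is independent of the self-selection confounder, so cov(z, y)/cov(z, x) recovers the causal effect where naive regression does not. All numbers below are invented:

```python
import random

def cov(a, b):
    """Covariance with population normalization (fine inside a ratio)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

def iv_estimate(z, x, y):
    """Wald/IV estimator with instrument z: beta = cov(z, y) / cov(z, x)."""
    return cov(z, y) / cov(z, x)

rng = random.Random(42)
n = 20000
z = [rng.randrange(2) for _ in range(n)]                  # randomized arm
u = [rng.gauss(0, 1) for _ in range(n)]                   # self-selection
x = [1.5 * zi + ui + rng.gauss(0, 1) for zi, ui in zip(z, u)]   # attendance
y = [2.0 * xi + 3.0 * ui + rng.gauss(0, 1) for xi, ui in zip(x, u)]

print(round(iv_estimate(z, x, y), 2))    # close to the true effect, 2.0
print(round(cov(x, y) / cov(x, x), 2))   # naive regression slope, biased upward
```

The instrument works here for the same reason randomization works in the study: z is assigned at random, so it cannot be correlated with the unobserved confounder u, while it still predicts attendance.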
CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.
Shalizi, Cosma Rohilla; Rinaldo, Alessandro
2013-04-01
The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.
Rifai, Sami W; Urquiza Muñoz, José D; Negrón-Juárez, Robinson I; Ramírez Arévalo, Fredy R; Tello-Espinoza, Rodil; Vanderwel, Mark C; Lichstein, Jeremy W; Chambers, Jeffrey Q; Bohlman, Stephanie A
2016-10-01
Wind disturbance can create large forest blowdowns, which greatly reduce live biomass and add uncertainty to the strength of the Amazon carbon sink. Observational studies from within the central Amazon have quantified blowdown size and estimated total mortality but have not determined which trees are most likely to die from a catastrophic wind disturbance. Also, the impact of spatial dependence upon tree mortality from wind disturbance has seldom been quantified, which is important because wind disturbance often kills clusters of trees due to large treefalls killing surrounding neighbors. We examine (1) the causes of differential mortality between adult trees from a 300-ha blowdown event in the Peruvian region of the northwestern Amazon, (2) how accounting for spatial dependence affects mortality predictions, and (3) how incorporating both differential mortality and spatial dependence affects the landscape-level estimation of necromass produced from the blowdown. Standard regression and spatial regression models were used to estimate how stem diameter, wood density, elevation, and a satellite-derived disturbance metric influenced the probability of tree death from the blowdown event. The model parameters regarding tree characteristics, topography, and spatial autocorrelation of the field data were then used to determine the consequences of non-random mortality for landscape production of necromass through a simulation model. Tree mortality was highly non-random within the blowdown, where tree mortality rates were highest for trees that were large, had low wood density, and were located at high elevation. Of the differential mortality models, the non-spatial models overpredicted necromass, whereas the spatial model slightly underpredicted necromass.
When parameterized from the same field data, the spatial regression model with differential mortality estimated only 7.5% more dead trees across the entire blowdown than the random mortality model, yet it estimated 51% greater necromass. We suggest that predictions of forest carbon loss from wind disturbance are sensitive to not only the underlying spatial dependence of observations, but also the biological differences between individuals that promote differential levels of mortality. © 2016 by the Ecological Society of America.
2006-09-01
Effect sizes are also shown for each randomization group (i.e., effect size from pretest to posttest) and for the comparison of the two randomization...questions were answered. This study was designed to be a pilot study to quantify effect sizes of the effect of walking on quality of life...physical activity, body composition, and, depending on inclusion criteria, estrogen metabolism. Second, this study was designed to assess the degree to
Ruscito, Ilary; Darb-Esfahani, Silvia; Kulbe, Hagen; Bellati, Filippo; Zizzari, Ilaria Grazia; Rahimi Koshkaki, Hassan; Napoletano, Chiara; Caserta, Donatella; Rughetti, Aurelia; Kessler, Mirjana; Sehouli, Jalid; Nuti, Marianna; Braicu, Elena Ioana
2018-05-10
To investigate the association of cancer stem cell biomarker aldehyde dehydrogenase-1 (ALDH1) with ovarian cancer patients' prognosis and clinico-pathological characteristics. The electronic searches were performed in January 2018 through the databases PubMed, MEDLINE and Scopus by searching the terms: "ovarian cancer" AND "immunohistochemistry" AND ["aldehyde dehydrogenase-1" OR "ALDH1" OR "cancer stem cell"]. Studies evaluating the impact of ALDH1 expression on ovarian cancer survival and clinico-pathological variables were selected. 233 studies were retrieved. Thirteen studies including 1885 patients met all selection criteria. ALDH1-high expression was found to be significantly associated with poor 5-year OS (OR = 3.46; 95% CI: 1.61-7.42; P = 0.001, random effects model) and 5-year PFS (OR = 2.14; 95% CI: 1.11-4.13; P = 0.02, random effects model) in ovarian cancer patients. No correlation between ALDH1 expression and tumor histology (OR = 0.60; 95% CI: 0.36-1.02; P = 0.06, random effects model), FIGO Stage (OR = 0.65; 95% CI: 0.33-1.30; P = 0.22, random effects model), tumor grading (OR = 0.76; 95% CI: 0.40-1.45; P = 0.41, random effects model), lymph nodal status (OR = 2.05; 95% CI: 0.81-5.18; P = 0.13, random effects model) or patients' age at diagnosis (OR = 0.83; 95% CI: 0.54-1.29; P = 0.41, fixed effects model) was identified. Based on the available evidence, this meta-analysis showed that high levels of ALDH1 expression correlate with worse OS and PFS in ovarian cancer patients. Copyright © 2018. Published by Elsevier Inc.
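A standard way to pool study-level odds ratios under a random-effects model is the DerSimonian-Laird estimator; the abstract does not state which estimator was used, so this is a generic sketch with invented study data:

```python
import math

def dersimonian_laird(log_or, var):
    """Random-effects pooling of study log-odds-ratios: estimate the
    between-study variance tau2 by the method of moments, then combine
    studies with inverse-variance weights 1/(var_i + tau2)."""
    k = len(log_or)
    w = [1.0 / v for v in var]
    fe = sum(wi * yi for wi, yi in zip(w, log_or)) / sum(w)
    q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, log_or))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    ws = [1.0 / (v + tau2) for v in var]
    pooled = sum(wi * yi for wi, yi in zip(ws, log_or)) / sum(ws)
    se = math.sqrt(1.0 / sum(ws))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# Invented, deliberately heterogeneous study log-ORs and their variances
or_, lo, hi = dersimonian_laird([0.2, 1.5, -0.1, 1.8], [0.10, 0.20, 0.15, 0.08])
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

When the studies are heterogeneous, tau2 is positive and the confidence interval widens relative to a fixed-effects pooling, which is why the abstract reports a fixed-effects model only for the homogeneous age comparison.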
Limit Theory for Panel Data Models with Cross Sectional Dependence and Sequential Exogeneity
Kuersteiner, Guido M.; Prucha, Ingmar R.
2013-01-01
The paper derives a general Central Limit Theorem (CLT) and asymptotic distributions for sample moments related to panel data models with large n. The results allow for the data to be cross sectionally dependent, while at the same time allowing the regressors to be only sequentially rather than strictly exogenous. The setup is sufficiently general to accommodate situations where cross sectional dependence stems from spatial interactions and/or from the presence of common factors. The latter leads to the need for random norming. The limit theorem for sample moments is derived by showing that the moment conditions can be recast such that a martingale difference array central limit theorem can be applied. We prove such a central limit theorem by first extending results for stable convergence in Hall and Heyde (1980) to non-nested martingale arrays relevant for our applications. We illustrate our result by establishing a generalized estimation theory for GMM estimators of a fixed effect panel model without imposing i.i.d. or strict exogeneity conditions. We also discuss a class of Maximum Likelihood (ML) estimators that can be analyzed using our CLT. PMID:23794781
Ngendahimana, David K.; Fagerholm, Cara L.; Sun, Jiayang; Bruckman, Laura S.
2017-01-01
Accelerated weathering exposures were performed on poly(ethylene-terephthalate) (PET) films. Longitudinal multi-level predictive models as a function of PET grades and exposure types were developed for the change in yellowness index (YI) and haze (%). Exposures with similar change in YI were modeled using a linear fixed-effects modeling approach. Due to the complex nature of haze formation, measurement uncertainty, and the differences in the samples’ responses, the change in haze (%) depended on individual samples’ responses and a linear mixed-effects modeling approach was used. When compared to fixed-effects models, the addition of random effects in the haze formation models significantly increased the variance explained. For both modeling approaches, diagnostic plots confirmed independence and homogeneity with normally distributed residual errors. Predictive R2 values for true prediction error and predictive power of the models demonstrated that the models were not subject to over-fitting. These models enable prediction under pre-defined exposure conditions for a given exposure time (or photo-dosage in case of UV light exposure). PET degradation under cyclic exposures combining UV light and condensing humidity is caused by photolytic and hydrolytic mechanisms causing yellowing and haze formation. Quantitative knowledge of these degradation pathways enable cross-correlation of these lab-based exposures with real-world conditions for service life prediction. PMID:28498875
A Dynamic Bayesian Network Model for the Production and Inventory Control
NASA Astrophysics Data System (ADS)
Shin, Ji-Sun; Takazaki, Noriyuki; Lee, Tae-Hong; Kim, Jin-Il; Lee, Hee-Hyol
In general, production quantities and deliveries vary randomly, and the total stock therefore also varies randomly. This paper deals with production and inventory control using a dynamic Bayesian network. A Bayesian network is a probabilistic model that represents the qualitative dependences between two or more random variables by a graph structure, and the quantitative relations between individual variables by conditional probabilities. The probability distribution of the total stock is calculated through the propagation of probabilities on the network. Moreover, a rule for adjusting production quantities to keep the probabilities of the total stock falling below a lower limit or exceeding a ceiling at specified values is shown.
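The one-period stock-propagation step described above can be sketched numerically: convolve the stock distribution with a production distribution, then account for random deliveries. All distributions below are illustrative placeholders, not the paper's network.

```python
import numpy as np

def next_stock_dist(stock_p, prod_p, deliv_p):
    """One-period update of the stock pmf: stock' = max(stock + production - delivery, 0).

    stock_p, prod_p, deliv_p are pmfs over nonnegative integer quantities.
    """
    inflow = np.convolve(stock_p, prod_p)          # pmf of stock + production
    out = np.zeros(len(inflow))
    for s, ps in enumerate(inflow):
        for d, pd in enumerate(deliv_p):
            out[max(s - d, 0)] += ps * pd          # delivery cannot push stock below 0
    return out

stock0 = np.array([0.0, 0.0, 1.0])                 # start with exactly 2 units
prod   = np.array([0.5, 0.5])                      # produce 0 or 1 unit, each with prob 1/2
deliv  = np.array([0.5, 0.5])                      # deliver 0 or 1 unit, each with prob 1/2
stock1 = next_stock_dist(stock0, prod, deliv)      # pmf of next period's stock
```

Iterating this update over periods gives the probability that the stock stays within its lower limit and ceiling, which is the quantity the adjusting rule controls.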
NASA Astrophysics Data System (ADS)
Ancey, C.; Bohorquez, P.; Heyman, J.
2015-12-01
The advection-diffusion equation is one of the most widespread equations in physics. It arises quite often in the context of sediment transport, e.g., for describing time and space variations in the particle activity (the solid volume of particles in motion per unit streambed area). Phenomenological laws are usually sufficient to derive this equation and interpret its terms. Stochastic models can also be used to derive it, with the significant advantage that they provide information on the statistical properties of particle activity. These models are quite useful when sediment transport exhibits large fluctuations (typically at low transport rates), making the measurement of mean values difficult. Among these stochastic models, the most common approach consists of random walk models. For instance, they have been used to model the random displacement of tracers in rivers. Here we explore an alternative approach, which involves monitoring the evolution of the number of particles moving within an array of cells of finite length. Birth-death Markov processes are well suited to this objective. While the topic has been explored in detail for diffusion-reaction systems, the treatment of advection has received no attention. We therefore look into the possibility of deriving the advection-diffusion equation (with a source term) within the framework of birth-death Markov processes. We show that in the continuum limit (when the cell size becomes vanishingly small), we can derive an advection-diffusion equation for particle activity. Yet while this derivation is formally valid in the continuum limit, it runs into difficulty in practical applications involving cells or meshes of finite length. Indeed, within our stochastic framework, particle advection produces nonlocal effects, which are more or less significant depending on the cell size and particle velocity. 
Albeit nonlocal, these effects look like (local) diffusion and add to the intrinsic particle diffusion (dispersal due to velocity fluctuations), with the important consequence that local measurements depend on both the intrinsic properties of particle displacement and the dimensions of the measurement system.
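The cell-size-dependent apparent diffusion described above can be illustrated with a toy simulation (an assumption-laden sketch, not the paper's birth-death derivation): particles advected by random jumps between cells of finite length drift at the mean velocity but also spread diffusively, with a spreading rate that depends on the cell size.

```python
import numpy as np

rng = np.random.default_rng(0)
dx, dt, u = 1.0, 0.1, 1.0            # cell size, time step, advection velocity
p = u * dt / dx                      # jump probability per step (Courant number)
n_particles, n_steps = 20000, 100

pos = np.zeros(n_particles)          # particle positions
for _ in range(n_steps):
    pos += dx * (rng.random(n_particles) < p)   # each particle hops one cell with prob p

t = n_steps * dt
mean_drift = pos.mean()                      # approximately u * t
apparent_D = pos.var() / (2.0 * t)           # lattice-induced diffusion coefficient
theory_D = u * dx * (1.0 - p) / 2.0          # vanishes as dx -> 0 (continuum limit)
```

The measured spread depends on dx even though no intrinsic diffusion was put in, mirroring the abstract's point that local measurements mix particle properties with the dimensions of the measurement system.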
An Overview of Longitudinal Data Analysis Methods for Neurological Research
Locascio, Joseph J.; Atri, Alireza
2011-01-01
The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models. PMID:22203825
[Effectiveness of special stroke units in treatment of acute stroke].
Nikolaus, T; Jamour, M
2000-04-01
In Germany the implementation of specialized wards for the care of stroke patients is proposed. However, which type of organized inpatient stroke unit care is most effective and which group of patients will benefit most remains unclear. Based on the analyses of the Stroke Unit Trialists' Collaboration this paper reports results of randomized and quasi-randomized trials that compared organized inpatient (stroke unit) care with contemporary conventional care. The primary analyses examined death, dependency and institutionalization. Secondary outcome measures included patient quality of life, patient and carer satisfaction and length of stay in hospital and/or institution. The analysis of twenty trials with 3864 patients showed a reduction in the rate of deaths in the stroke unit group as compared with the control group (OR 0.83, 95% CI 0.71-0.97). The odds of death or institutionalized care were lower (OR 0.76, 95% CI 0.65-0.90) as were death or dependency (OR 0.75, 95% CI 0.65-0.87). The results were independent of patient age, sex, stroke severity, and type of stroke unit organization. Organized care in stroke units resulted in benefits for stroke patients with regard to survival, independence, and probability of living at home. However, these results refer exclusively to Anglo-American and Scandinavian trials. German stroke unit services are organized in a different way. No data about the effectiveness of the German model are yet available.
Glasner-Edwards, Suzette; Mooney, Larissa J; Ang, Alfonso; Garneau, Hélène Chokron; Hartwell, Emily; Brecht, Mary-Lynn; Rawson, Richard A
2017-02-01
In light of the known associations between stress, negative affect, and relapse, mindfulness strategies hold promise as a means of reducing relapse susceptibility. In a pilot randomized clinical trial, we evaluated the effects of Mindfulness Based Relapse Prevention (MBRP), relative to a health education control condition (HE), among stimulant dependent adults receiving contingency management. All participants received a 12-week contingency management (CM) intervention. Following a 4-week CM-only lead-in phase, participants were randomly assigned to concurrently receive MBRP (n=31) or HE (n=32). Participants were stimulant dependent adults aged 18 and over; the setting was a university-based clinical research center. The primary outcomes were stimulant use, measured by urine drug screens weekly during the intervention and at 1-month post-treatment; negative affect, measured by the Beck Depression Inventory and Beck Anxiety Inventory; and psychiatric severity, measured by the Addiction Severity Index. Medium effect sizes favoring MBRP were observed for negative affect and overall psychiatric severity outcomes. Depression severity changed differentially over time as a function of group, with MBRP participants reporting greater reductions through follow-up (p=0.03; effect size=0.58). Likewise, the MBRP group evidenced greater declines in psychiatric severity (p=0.01; effect size=0.61 at follow-up). Among those with depressive and anxiety disorders, MBRP was associated with lower odds of stimulant use relative to the control condition (OR=0.78, p=0.03 and OR=0.68, p=0.04). MBRP effectively reduces negative affect and psychiatric impairment, and is particularly effective in reducing stimulant use among stimulant dependent adults with mood and anxiety disorders.
van Breukelen, Gerard J P; Candel, Math J J M
2018-06-10
Cluster randomized trials evaluate the effect of a treatment on persons nested within clusters, where treatment is randomly assigned to clusters. Current equations for the optimal sample size at the cluster and person level assume that the outcome variances and/or the study costs are known and homogeneous between treatment arms. This paper presents efficient yet robust designs for cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances, and compares these with 2 practical designs. First, the maximin design (MMD) is derived, which maximizes the minimum efficiency (minimizes the maximum sampling variance) of the treatment effect estimator over a range of treatment-to-control variance ratios. The MMD is then compared with the optimal design for homogeneous variances and costs (balanced design), and with that for homogeneous variances and treatment-dependent costs (cost-considered design). The results show that the balanced design is the MMD if the treatment-to-control cost ratio is the same at both design levels (cluster, person) and within the range for the treatment-to-control variance ratio. It is still highly efficient and better than the cost-considered design if the cost ratio is within the range for the squared variance ratio. Outside that range, the cost-considered design is better and highly efficient, but it is not the MMD. An example shows sample size calculation for the MMD, and the computer code (SPSS and R) is provided as supplementary material. The MMD is recommended for trial planning if the study costs are treatment-dependent and homogeneity of variances cannot be assumed. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
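For orientation, here is a minimal sketch of the textbook calculations underlying such designs (standard formulas, not the paper's maximin derivation): the sampling variance of one arm's mean in a cluster randomized trial, and the budget-optimal split of clusters when variances and costs differ by treatment arm.

```python
import math

def arm_variance(sigma2_cluster, sigma2_person, n_per_cluster, k_clusters):
    """Sampling variance of one arm's mean with k clusters of n persons each."""
    return (sigma2_cluster + sigma2_person / n_per_cluster) / k_clusters

def optimal_cluster_split(total_budget, cost_t, cost_c, var_t, var_c):
    """Minimize var_t/k_t + var_c/k_c subject to cost_t*k_t + cost_c*k_c = budget.

    Standard Lagrangian solution: k_a proportional to sqrt(var_a / cost_a).
    """
    w_t = math.sqrt(var_t / cost_t)
    w_c = math.sqrt(var_c / cost_c)
    denom = cost_t * w_t + cost_c * w_c
    return total_budget * w_t / denom, total_budget * w_c / denom

# equal costs and variances reduce to the balanced design
k_t, k_c = optimal_cluster_split(100.0, 1.0, 1.0, 4.0, 4.0)
```

The maximin design in the paper goes further by optimizing this allocation against the worst case over a range of unknown treatment-to-control variance ratios.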
Arroyo-Morales, Manuel; Olea, Nicolas; Martínez, Marin Manuel; Hidalgo-Lozano, Amparo; Ruiz-Rodríguez, Concepción; Díaz-Rodríguez, Lourdes
2008-12-01
The aim of this study was to evaluate the effect of massage on neuromuscular recruitment, mood state, and mechanical nociceptive threshold (MNT) after high-intensity exercise. This was a prospective randomized clinical trial using between-groups design. The study was conducted at a university-based sports medicine clinic. Sixty-two (62) healthy active students age 18-26 participated. Participants, randomized into two groups, performed three 30-second Wingate tests and immediately received whole-body massage-myofascial induction or placebo (sham ultrasound/magnetotherapy) treatment. The duration (40 minutes), position, and therapist were the same for both treatments. Dependent variables were surface electromyography (sEMG) of quadriceps, profile of mood states (POMS) and mechanical nociceptive threshold (MNT) of trapezius and masseter muscles. These data were assessed at baseline and after exercise and recovery periods. Generalized estimating equations models were performed on dependent variables to assess differences between groups. Significant differences were found in effects of treatment on sEMG of Vastus Medialis (VM) (p = 0.02) and vigor subscale (p = 0.04). After the recovery period, there was a significant decrease in electromyographic (EMG) activity of VM (p = 0.02) in the myofascial-release group versus a nonsignificant increase in the placebo group (p = 0.32), and a decrease in vigor (p < 0.01) in the massage group versus no change in the placebo group (p = 0.86). Massage reduces EMG amplitude and vigor when applied as a passive recovery technique after a high-intensity exercise protocol. Massage may induce a transient loss of muscle strength or a change in the muscle fiber tension-length relationship, influenced by alterations of muscle function and a psychological state of relaxation.
NASA Astrophysics Data System (ADS)
Wiedermann, Marc; Donges, Jonathan F.; Kurths, Jürgen; Donner, Reik V.
2016-04-01
Networks with nodes embedded in a metric space have gained increasing interest in recent years. The effects of spatial embedding on the networks' structural characteristics, however, are rarely taken into account when studying their macroscopic properties. Here, we propose a hierarchy of null models to generate random surrogates from a given spatially embedded network that can preserve certain global and local statistics associated with the nodes' embedding in a metric space. Comparing the original network's and the resulting surrogates' global characteristics allows one to quantify to what extent these characteristics are already predetermined by the spatial embedding of the nodes and links. We apply our framework to various real-world spatial networks and show that the proposed models capture macroscopic properties of the networks under study much better than standard random network models that do not account for the nodes' spatial embedding. Depending on the actual performance of the proposed null models, the networks are categorized into different classes. Since many real-world complex networks are in fact spatial networks, the proposed approach is relevant for disentangling the underlying complex system structure from spatial embedding of nodes in many fields, ranging from social systems over infrastructure and neurophysiology to climatology.
Defect-induced change of temperature-dependent elastic constants in BCC iron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, N.; Setyawan, W.; Zhang, S. H.
2017-07-01
The effects of radiation-induced defects (randomly distributed vacancies, voids, and interstitial dislocation loops) on temperature-dependent elastic constants, C11, C12, and C44 in BCC iron, are studied with molecular dynamics method. The elastic constants are found to decrease with increasing temperatures for all cases containing different defects. The presence of vacancies, voids, or interstitial loops further decreases the elastic constants. For a given number of point defects, the randomly distributed vacancies show the strongest effect compared to voids or interstitial loops. All these results are expected to provide useful information to combine with experimental results for further understanding of radiation damage.
Nielsen, J D; Dean, C B
2008-09-01
A flexible semiparametric model for analyzing longitudinal panel count data arising from mixtures is presented. Panel count data refers here to count data on recurrent events collected as the number of events that have occurred within specific follow-up periods. The model assumes that the counts for each subject are generated by mixtures of nonhomogeneous Poisson processes with smooth intensity functions modeled with penalized splines. Time-dependent covariate effects are also incorporated into the process intensity using splines. Discrete mixtures of these nonhomogeneous Poisson process spline models extract functional information from underlying clusters representing hidden subpopulations. The motivating application is an experiment to test the effectiveness of pheromones in disrupting the mating pattern of the cherry bark tortrix moth. Mature moths arise from hidden, but distinct, subpopulations and monitoring the subpopulation responses was of interest. Within-cluster random effects are used to account for correlation structures and heterogeneity common to this type of data. An estimating equation approach to inference requiring only low moment assumptions is developed and the finite sample properties of the proposed estimating functions are investigated empirically by simulation.
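A nonhomogeneous Poisson process of the kind underlying this model can be simulated by Lewis-Shedler thinning; the intensity function below is an arbitrary smooth placeholder, not the paper's penalized-spline fit.

```python
import math
import random

def simulate_nhpp(intensity, t_max, lam_max, rng):
    """Event times of an NHPP with intensity(t) <= lam_max on [0, t_max].

    Candidates come from a homogeneous Poisson process of rate lam_max
    and are kept with probability intensity(t) / lam_max (thinning).
    """
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)
        if t > t_max:
            return times
        if rng.random() < intensity(t) / lam_max:
            times.append(t)

rng = random.Random(42)
intensity = lambda t: 2.0 + math.sin(t)       # smooth intensity, bounded by 3
events = simulate_nhpp(intensity, 50.0, 3.0, rng)
```

Panel counts as described in the abstract would then be the numbers of such event times falling in each follow-up interval.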
Bignardi, A B; El Faro, L; Cardoso, V L; Machado, P F; Albuquerque, L G
2009-09-01
The objective of the present study was to estimate milk yield genetic parameters applying random regression models and parametric correlation functions combined with a variance function to model animal permanent environmental effects. A total of 152,145 test-day milk yields from 7,317 first lactations of Holstein cows belonging to herds located in the southeastern region of Brazil were analyzed. Test-day milk yields were divided into 44 weekly classes of days in milk. Contemporary groups were defined by herd-test-day comprising a total of 2,539 classes. The model included direct additive genetic, permanent environmental, and residual random effects. The following fixed effects were considered: contemporary group, age of cow at calving (linear and quadratic regressions), and the population average lactation curve modeled by fourth-order orthogonal Legendre polynomial. Additive genetic effects were modeled by random regression on orthogonal Legendre polynomials of days in milk, whereas permanent environmental effects were estimated using a stationary or nonstationary parametric correlation function combined with a variance function of different orders. The structure of residual variances was modeled using a step function containing 6 variance classes. The genetic parameter estimates obtained with the model using a stationary correlation function associated with a variance function to model permanent environmental effects were similar to those obtained with models employing orthogonal Legendre polynomials for the same effect. A model using a sixth-order polynomial for additive effects and a stationary parametric correlation function associated with a seventh-order variance function to model permanent environmental effects would be sufficient for data fitting.
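The orthogonal Legendre basis used in such random regression test-day models can be constructed as follows (a sketch: days in milk are rescaled to [-1, 1], and the lactation range and polynomial order chosen here are illustrative assumptions).

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(days, order, d_min=5, d_max=310):
    """Design matrix of Legendre polynomials P_0..P_order evaluated on
    days in milk rescaled to the interval [-1, 1]."""
    x = -1.0 + 2.0 * (np.asarray(days, dtype=float) - d_min) / (d_max - d_min)
    # column k holds P_k(x); [0]*k + [1] selects the k-th Legendre polynomial
    return np.column_stack(
        [legendre.legval(x, [0] * k + [1]) for k in range(order + 1)]
    )

# basis at the start, midpoint, and end of lactation, fourth order
Z = legendre_basis([5, 157.5, 310], order=4)
```

Random regression coefficients on these columns then give each animal its own smooth deviation from the population lactation curve.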
NASA Astrophysics Data System (ADS)
Mazzoleni, Maurizio; Cortes Arevalo, Juliette; Alfonso, Leonardo; Wehn, Uta; Norbiato, Daniele; Monego, Martina; Ferri, Michele; Solomatine, Dimitri
2017-04-01
In the past years, a number of methods have been proposed to reduce uncertainty in flood prediction by means of model updating techniques. Traditional physical observations are usually integrated into hydrological and hydraulic models to improve model performance and the resulting flood predictions. Nowadays, low-cost sensors can be used for crowdsourced observations, and different types of social sensors can measure physical variables such as precipitation and water level in a more distributed way. However, these crowdsourced observations are not integrated in real time into water-system models because of their varying accuracy and random spatial-temporal coverage. We assess the effect on model performance of assimilating crowdsourced observations of water level. Our method consists of (1) implementing a Kalman filter in a cascade of hydrological and hydraulic models; (2) defining observation errors depending on the type of sensor, either physical or social, with randomly distributed errors based on accuracy ranges that improve slightly with the citizens' expertise level; and (3) using a simplified social model to realistically represent citizen engagement levels based on population density and citizens' motivation scenarios. To test our method, we synthetically derive crowdsourced observations for different citizen engagement levels from a distributed network of physical and social sensors. The observations are assimilated during a particular flood event that occurred in the Bacchiglione catchment, Italy. The results of this study demonstrate that sharing crowdsourced water level observations (often motivated by a feeling of belonging to a community of friends) can help in improving flood prediction. On the other hand, a growing participation of individual citizens or weather enthusiasts sharing hydrological observations in cities can help to improve model performance.
This study is a first step to assess the effects of crowdsourced observations in flood model predictions. Effective communication and feedback about the quality of observations from water authorities to engaged citizens are further required to minimize their intrinsic low-variable accuracy.
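The core assimilation step, a Kalman update whose observation-error variance depends on the sensor type, can be sketched in scalar form. The variance values and sensor categories below are illustrative assumptions, not the study's calibration.

```python
# assumed observation-error variances by sensor type (illustrative)
OBS_VAR = {"physical": 0.05**2, "expert_citizen": 0.15**2, "citizen": 0.40**2}

def kalman_update(x_prior, p_prior, z, sensor):
    """Assimilate a water-level observation z from the given sensor type."""
    r = OBS_VAR[sensor]
    k = p_prior / (p_prior + r)           # Kalman gain: small r -> large weight on z
    x_post = x_prior + k * (z - x_prior)
    p_post = (1.0 - k) * p_prior
    return x_post, p_post

x, p = 2.0, 0.25                           # model forecast of water level (m) and its variance
x_phys, p_phys = kalman_update(x, p, 2.4, "physical")
x_crowd, p_crowd = kalman_update(x, p, 2.4, "citizen")
```

The same observation moves the state further, and shrinks the uncertainty more, when it comes from the more accurate physical gauge than from a crowdsourced report.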
Ising Critical Behavior of Inhomogeneous Curie-Weiss Models and Annealed Random Graphs
NASA Astrophysics Data System (ADS)
Dommers, Sander; Giardinà, Cristian; Giberti, Claudio; van der Hofstad, Remco; Prioriello, Maria Luisa
2016-11-01
We study the critical behavior for inhomogeneous versions of the Curie-Weiss model, where the coupling constant J_ij(β) for the edge ij on the complete graph is given by J_ij(β) = β w_i w_j / (Σ_{k∈[N]} w_k). We call the product form of these couplings the rank-1 inhomogeneous Curie-Weiss model. This model also arises (with inverse temperature β replaced by sinh(β)) from the annealed Ising model on the generalized random graph. We assume that the vertex weights (w_i)_{i∈[N]} are regular, in the sense that their empirical distribution converges and the second moment converges as well. We identify the critical temperatures and exponents for these models, as well as a non-classical limit theorem for the total spin at the critical point. These depend sensitively on the number of finite moments of the weight distribution. When the fourth moment of the weight distribution converges, the critical behavior is the same as on the (homogeneous) Curie-Weiss model, so that the inhomogeneity is weak. When the fourth moment of the weights converges to infinity, and the weights satisfy an asymptotic power law with exponent τ with τ ∈ (3,5), the critical exponents depend sensitively on τ. In addition, at criticality, the total spin S_N satisfies that S_N / N^{(τ-2)/(τ-1)} converges in law to some limiting random variable whose distribution we explicitly characterize.
Simple cellular automaton model for traffic breakdown, highway capacity, and synchronized flow.
Kerner, Boris S; Klenov, Sergey L; Schreckenberg, Michael
2011-10-01
We present a simple cellular automaton (CA) model for two-lane roads explaining the physics of traffic breakdown, highway capacity, and synchronized flow. The model consists of the rules "acceleration," "deceleration," "randomization," and "motion" of the Nagel-Schreckenberg CA model as well as "overacceleration through lane changing to the faster lane," "comparison of vehicle gap with the synchronization gap," and "speed adaptation within the synchronization gap" of Kerner's three-phase traffic theory. We show that these few rules of the CA model can appropriately simulate fundamental empirical features of traffic breakdown and highway capacity found in traffic data measured over years in different countries, like characteristics of synchronized flow, the existence of the spontaneous and induced breakdowns at the same bottleneck, and associated probabilistic features of traffic breakdown and highway capacity. Single-vehicle data derived in model simulations show that synchronized flow first occurs and then self-maintains due to a spatiotemporal competition between speed adaptation to a slower speed of the preceding vehicle and passing of this slower vehicle. We find that the application of simple dependences of randomization probability and synchronization gap on driving situation allows us to explain the physics of moving synchronized flow patterns and the pinch effect in synchronized flow as observed in real traffic data.
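The four Nagel-Schreckenberg rules that form the backbone of this CA model can be sketched for a single-lane ring road. The three-phase extensions named in the abstract (lane changing, synchronization gap, speed adaptation) are omitted, and all parameters are illustrative.

```python
import numpy as np

def nasch_step(pos, vel, road_len, v_max, p_slow, rng):
    """One update of the Nagel-Schreckenberg CA on a ring of road_len cells."""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % road_len     # empty cells ahead of each car
    vel = np.minimum(vel + 1, v_max)                   # 1. acceleration
    vel = np.minimum(vel, gaps)                        # 2. deceleration (no collisions)
    slow = rng.random(len(vel)) < p_slow
    vel = np.maximum(vel - slow, 0)                    # 3. randomization
    pos = (pos + vel) % road_len                       # 4. motion
    return pos, vel

rng = np.random.default_rng(1)
road_len, n_cars = 200, 40
pos = np.sort(rng.choice(road_len, n_cars, replace=False))
vel = np.zeros(n_cars, dtype=int)
for _ in range(500):
    pos, vel = nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3, rng=rng)
flow = vel.mean() * n_cars / road_len                  # vehicles per cell per step
```

Scanning the density n_cars/road_len traces out the fundamental diagram; the paper's additional rules are what turn this baseline into a model of synchronized flow and breakdown at bottlenecks.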
Cooperation and charity in spatial public goods game under different strategy update rules
NASA Astrophysics Data System (ADS)
Li, Yixiao; Jin, Xiaogang; Su, Xianchuang; Kong, Fansheng; Peng, Chengbin
2010-03-01
Human cooperation can be influenced by other human behaviors, and recent years have witnessed flourishing research on the coevolution of cooperation and punishment, yet the common behavior of charity is seldom considered in game-theoretical models. In this article, we investigate the coevolution of altruistic cooperation and egalitarian charity in the spatial public goods game, treating charity as the behavior of reducing inter-individual payoff differences. In our model, in each generation of the evolution, individuals first play games and accumulate payoffs, and then each egalitarian makes a charity donation by payoff transfer in its neighborhood. To study the individual-level evolutionary dynamics, we adopt different strategy update rules and investigate their effects on charity and cooperation. These rules fall into two global rules: a random selection rule, in which individuals randomly update strategies, and a threshold selection rule, where only those with payoffs below a threshold update strategies. Simulation results show that random selection enhances the cooperation level, while threshold selection lowers the threshold of the multiplication factor needed to maintain cooperation. When charity is considered, it is incapable of promoting cooperation under random selection, whereas it promotes cooperation under threshold selection. Interestingly, the evolution of charity strongly depends on the dispersion of payoff acquisitions in the population, which agrees with previous results. Our work may shed light on understanding human egalitarianism.
Testing prediction methods: Earthquake clustering versus the Poisson model
Michael, A.J.
1997-01-01
Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry
2013-08-01
Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages-SAS GLIMMIX Laplace and SuperMix Gaussian quadrature-perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
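The Gauss-Hermite approach compared above can be sketched for the simplest case, a single random intercept in a logistic model: the conditional likelihood is integrated over the normal random effect by quadrature. Real packages use adaptive, multidimensional versions; all parameter values here are illustrative.

```python
import numpy as np

def marginal_loglik(y, x, beta0, beta1, sigma_u, n_nodes=20):
    """Log-likelihood of one cluster's binary outcomes y given covariate x,
    integrating out a N(0, sigma_u^2) random intercept by Gauss-Hermite."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    u = np.sqrt(2.0) * sigma_u * nodes            # change of variables for the normal density
    eta = beta0 + beta1 * np.asarray(x)[:, None] + u[None, :]
    p = 1.0 / (1.0 + np.exp(-eta))                # P(y=1 | u) at each quadrature node
    lik = np.prod(np.where(np.asarray(y)[:, None] == 1, p, 1.0 - p), axis=0)
    return float(np.log(np.sum(weights * lik) / np.sqrt(np.pi)))

ll  = marginal_loglik(y=[1, 0, 1], x=[0.0, 1.0, 2.0], beta0=0.2, beta1=-0.1, sigma_u=1.0)
ll0 = marginal_loglik(y=[1, 0, 1], x=[0.0, 1.0, 2.0], beta0=0.2, beta1=-0.1, sigma_u=0.0)
# with sigma_u = 0 the integral collapses to the ordinary logistic likelihood
```

With several correlated random effects the quadrature grid grows exponentially in the number of effects, which is exactly the computational burden the abstract describes.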
Hierarchical model analysis of the Atlantic Flyway Breeding Waterfowl Survey
Sauer, John R.; Zimmerman, Guthrie S.; Klimstra, Jon D.; Link, William A.
2014-01-01
We used log-linear hierarchical models to analyze data from the Atlantic Flyway Breeding Waterfowl Survey. The survey has been conducted by state biologists each year since 1989 in the northeastern United States from Virginia north to New Hampshire and Vermont. Although yearly population estimates from the survey are used by the United States Fish and Wildlife Service for estimating regional waterfowl population status for mallards (Anas platyrhynchos), black ducks (Anas rubripes), wood ducks (Aix sponsa), and Canada geese (Branta canadensis), they are not routinely adjusted to control for time of day effects and other survey design issues. The hierarchical model analysis permits estimation of year effects and population change while accommodating the repeated sampling of plots and controlling for time of day effects in counting. We compared population estimates from the current stratified random sample analysis to population estimates from hierarchical models with alternative model structures that describe year to year changes as random year effects, a trend with random year effects, or year effects modeled as 1-year differences. Patterns of population change from the hierarchical model results generally were similar to the patterns described by stratified random sample estimates, but significant visibility differences occurred between twilight and midday counts in all species. Controlling for the effects of time of day resulted in larger population estimates for all species in the hierarchical model analysis relative to the stratified random sample analysis. The hierarchical models also provided a convenient means of estimating population trend as derived statistics from the analysis. We detected significant declines in mallards and American black ducks and significant increases in wood ducks and Canada geese, a trend that had not been significant for 3 of these 4 species in the prior analysis. 
We recommend using hierarchical models for analysis of the Atlantic Flyway Breeding Waterfowl Survey.
Health-Related Quality-of-Life Findings for the Prostate Cancer Prevention Trial
2012-01-01
Background The Prostate Cancer Prevention Trial (PCPT)—a randomized placebo-controlled study of the efficacy of finasteride in preventing prostate cancer—offered the opportunity to prospectively study effects of finasteride and other covariates on the health-related quality of life of participants in a multiyear trial. Methods We assessed three health-related quality-of-life domains (measured with the Health Survey Short Form–36: Physical Functioning, Mental Health, and Vitality scales) via questionnaires completed by PCPT participants at enrollment (3 months before randomization), at 6 months after randomization, and annually for 7 years. Covariate data obtained at enrollment from patient-completed questionnaires were included in our model. Mixed-effects model analyses and a cross-sectional presentation at three time points began at 6 months after randomization. All statistical tests were two-sided. Results For the physical function outcome (n = 16 077), neither the finasteride main effect nor the finasteride interaction with time were statistically significant. The effects of finasteride on physical function were minor and accounted for less than a 1-point difference over time in Physical Functioning scores (mixed-effect estimate = 0.07, 95% confidence interval [CI] = −0.28 to 0.42, P = .71). Comorbidities such as congestive heart failure (estimate = −5.64, 95% CI = −7.96 to −3.32, P < .001), leg pain (estimate = −2.57, 95% CI = −3.04 to −2.10, P < .001), and diabetes (estimate = −1.31, 95% CI = −2.04 to −0.57, P < .001) had statistically significant negative effects on physical function, as did current smoking (estimate = −2.34, 95% CI = −2.97 to −1.71, P < .001) and time on study (estimate = −1.20, 95% CI = −1.36 to −1.03, P < .001). Finasteride did not have a statistically significant effect on the other two dependent variables, mental health and vitality, either in the mixed-effects analyses or in the cross-sectional analysis at any of the three time points. 
Conclusion Finasteride did not negatively affect SF–36 Physical Functioning, Mental Health, or Vitality scores. PMID:22972968
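The mixed-effects analyses above rest on variance components for subject-level random effects. As a minimal, hypothetical sketch (not the PCPT analysis itself; all numbers are invented for illustration), the following simulates a balanced random-intercept design and recovers the between-subject variance and intraclass correlation with one-way ANOVA moment estimators:

```python
import random

random.seed(0)

# Simulate a balanced random-intercept design: n subjects, m repeated scores.
# True values (assumptions for this sketch): between-subject SD 2, within SD 1.
n, m = 200, 8
sigma_b, sigma_e = 2.0, 1.0
data = []
for i in range(n):
    b_i = random.gauss(0.0, sigma_b)  # subject-specific random intercept
    data.append([50.0 + b_i + random.gauss(0.0, sigma_e) for _ in range(m)])

# One-way ANOVA moment estimators of the variance components.
grand = sum(sum(row) for row in data) / (n * m)
means = [sum(row) / m for row in data]
msb = m * sum((mu - grand) ** 2 for mu in means) / (n - 1)   # between-subject MS
msw = sum((y - mu) ** 2
          for row, mu in zip(data, means) for y in row) / (n * (m - 1))  # within

var_b = (msb - msw) / m        # estimated random-intercept variance (true: 4.0)
icc = var_b / (var_b + msw)    # intraclass correlation (true: 0.8)
print(round(var_b, 2), round(icc, 2))
```

The estimates should land near the true values (4.0 and 0.8); in a real longitudinal analysis one would fit the mixed model by (RE)ML rather than with moment estimators.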
Truck crash severity in New York City: An investigation of the spatial and the time of day effects.
Zou, Wei; Wang, Xiaokun; Zhang, Dapeng
2017-02-01
This paper investigates the differences between single-vehicle and multi-vehicle truck crashes in New York City. The random parameter models take into account the time-of-day effect, the heterogeneous truck weight effect and other influencing factors such as crash characteristics, driver and vehicle characteristics, built environment factors and traffic volume attributes. Based on the results from the co-location quotient analysis, a spatial generalized ordered probit model is further developed to investigate the potential spatial dependency among single-vehicle truck crashes. The sample is drawn from the state-maintained incident data, the publicly available Smart Location Data, and the BEST Practices Model (BPM) data from 2008 to 2012. The result shows that there exists a substantial difference between factors influencing single-vehicle and multi-vehicle truck crash severity. It also suggests that heterogeneity does exist in the truck weight effect, and it behaves differently in single-vehicle and multi-vehicle truck crashes. Furthermore, individual truck crashes are shown to be spatially dependent events for both single-vehicle and multi-vehicle crashes. Last but not least, significant time-of-day effects were found for the PM and nighttime slots: crashes that occurred in the afternoon and at night were less severe in single-vehicle crashes, but more severe in multi-vehicle crashes. Copyright © 2016. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Thakur, Siddharth; Neal, Chris; Mehta, Yash; Sridharan, Prasanth; Jackson, Thomas; Balachandar, S.
2017-01-01
Microscale simulations are being conducted for developing point-particle and other related models that are needed for the mesoscale and macroscale simulations of explosive dispersal of particles. These particle models are required to compute (a) the instantaneous aerodynamic force on the particle and (b) the instantaneous net heat transfer between the particle and its surroundings. A strategy for a sequence of microscale simulations has been devised that allows systematic development of the hybrid surrogate models that are applicable at conditions representative of the explosive dispersal application. The ongoing microscale simulations seek to examine particle force dependence on: (a) Mach number, (b) Reynolds number, and (c) volume fraction (different particle arrangements such as cubic, face-centered cubic (FCC), body-centered cubic (BCC) and random). Future plans include investigation of sequences of fully-resolved microscale simulations consisting of an array of particles subjected to more realistic time-dependent flows that progressively better approximate the actual problem of explosive dispersal. Additionally, the effects of particle shape, size, and number in simulation, as well as the dependence of transient particle deformation on various parameters, including (a) particle material, (b) medium material, (c) multiple particles, (d) incoming shock pressure and speed, (e) medium-to-particle impedance ratio, and (f) particle shape and orientation to shock, are being investigated.
Random regression analyses using B-spline functions to model growth of Nellore cattle.
Boligon, A A; Mercadante, M E Z; Lôbo, R B; Baldi, F; Albuquerque, L G
2012-02-01
The objective of this study was to estimate (co)variance components using random regression on B-spline functions to weight records obtained from birth to adulthood. A total of 82 064 weight records of 8145 females obtained from the data bank of the Nellore Breeding Program (PMGRN/Nellore Brazil) which started in 1987, were used. The models included direct additive and maternal genetic effects and animal and maternal permanent environmental effects as random. Contemporary group and dam age at calving (linear and quadratic effect) were included as fixed effects, and orthogonal Legendre polynomials of age (cubic regression) were considered as random covariate. The random effects were modeled using B-spline functions considering linear, quadratic and cubic polynomials for each individual segment. Residual variances were grouped in five age classes. Direct additive genetic and animal permanent environmental effects were modeled using up to seven knots (six segments). A single segment with two knots at the end points of the curve was used for the estimation of maternal genetic and maternal permanent environmental effects. A total of 15 models were studied, with the number of parameters ranging from 17 to 81. The models that used B-splines were compared with multi-trait analyses with nine weight traits and to a random regression model that used orthogonal Legendre polynomials. A model fitting quadratic B-splines, with four knots or three segments for direct additive genetic effect and animal permanent environmental effect and two knots for maternal additive genetic effect and maternal permanent environmental effect, was the most appropriate and parsimonious model to describe the covariance structure of the data. Selection for higher weight, such as at young ages, should be performed taking into account an increase in mature cow weight. Particularly, this is important in most of Nellore beef cattle production systems, where the cow herd is maintained on range conditions. 
There is limited modification of the growth curve of Nellore cattle with respect to the aim of selecting them for rapid growth at young ages while maintaining constant adult weight.
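A B-spline basis of the kind used for the individual segments above can be evaluated with the Cox-de Boor recursion. The sketch below is illustrative only: the quadratic degree matches the selected model, but the knot placement and the evaluation age are invented, not the fitted Nellore growth model.

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th degree-k B-spline at t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    right = 0.0
    if knots[i + k + 1] != knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

# Quadratic (k=2) basis on a hypothetical age axis (days) with clamped ends
# and interior knots splitting the trajectory into segments.
knots = [0, 0, 0, 240, 480, 720, 720, 720]
k = 2
n_basis = len(knots) - k - 1                 # 5 basis functions
weights = [bspline_basis(i, k, 360.0, knots) for i in range(n_basis)]
print([round(w, 3) for w in weights])
```

Inside the knot span the basis functions are nonnegative and sum to one (partition of unity), which is what makes them convenient regression covariables for random regression models.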
ERIC Educational Resources Information Center
Huang, Hung-Yu; Wang, Wen-Chung
2014-01-01
The DINA (deterministic input, noisy, and gate) model has been widely used in cognitive diagnosis tests and in the process of test development. The outcomes known as slip and guess are included in the DINA model function representing the responses to the items. This study aimed to extend the DINA model by using the random-effect approach to allow…
Improving the Validity of Activity of Daily Living Dependency Risk Assessment
Clark, Daniel O.; Stump, Timothy E.; Tu, Wanzhu; Miller, Douglas K.
2015-01-01
Objectives Efforts to prevent activity of daily living (ADL) dependency may be improved through models that assess older adults’ dependency risk. We evaluated whether cognition and gait speed measures improve the predictive validity of interview-based models. Method Participants were 8,095 self-respondents in the 2006 Health and Retirement Survey who were aged 65 years or over and independent in five ADLs. Incident ADL dependency was determined from the 2008 interview. Models were developed using random 2/3rd cohorts and validated in the remaining 1/3rd. Results Compared to a c-statistic of 0.79 in the best interview model, the model including cognitive measures had c-statistics of 0.82 and 0.80 while the best fitting gait speed model had c-statistics of 0.83 and 0.79 in the development and validation cohorts, respectively. Conclusion Two relatively brief models, one that requires an in-person assessment and one that does not, had excellent validity for predicting incident ADL dependency but did not significantly improve the predictive validity of the best fitting interview-based models. PMID:24652867
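The c-statistics reported above can be computed from predicted risks with the standard concordance definition: the probability that a randomly chosen case is scored higher than a randomly chosen non-case, with ties counting one half. A small sketch with invented risk scores (not the HRS data):

```python
def c_statistic(scores_pos, scores_neg):
    """Concordance (c-statistic / AUC) over all case vs. non-case pairs;
    ties in the predicted score count 1/2."""
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical predicted dependency risks: 4 incident cases, 5 non-cases.
pos = [0.9, 0.8, 0.6, 0.55]
neg = [0.7, 0.5, 0.4, 0.3, 0.2]
print(c_statistic(pos, neg))   # 18 concordant of 20 pairs -> 0.9
```

A value of 0.5 means the model discriminates no better than chance; values around 0.8, as in the abstract, are conventionally regarded as excellent discrimination.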
USDA-ARS?s Scientific Manuscript database
False positives in a Genome-Wide Association Study (GWAS) can be effectively controlled by a fixed effect and random effect Mixed Linear Model (MLM) that incorporates population structure and kinship among individuals to adjust association tests on markers; however, the adjustment also compromises t...
Anomalous scaling in an age-dependent branching model.
Keller-Schmidt, Stephanie; Tuğrul, Murat; Eguíluz, Víctor M; Hernández-García, Emilio; Klemm, Konstantin
2015-02-01
We introduce a one-parametric family of tree growth models, in which branching probabilities decrease with branch age τ as τ^(−α). Depending on the exponent α, the scaling of tree depth with tree size n displays a transition between the logarithmic scaling of random trees and an algebraic growth. At the transition (α = 1) tree depth grows as (log n)². This anomalous scaling is in good agreement with the trend observed in evolution of biological species, thus providing a theoretical support for age-dependent speciation and associating it to the occurrence of a critical point.
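One plausible reading of the growth rule (an assumption on our part: at each step a leaf is selected to branch with probability proportional to τ^(−α), τ being its age in steps) can be simulated directly to see the depth transition:

```python
import random

random.seed(1)

def mean_leaf_depth(n_leaves, alpha):
    """Grow a binary tree by repeated branching; a leaf of age tau is
    selected with probability proportional to tau**(-alpha)."""
    leaves = [(0, 0)]          # (depth, birth step) for each current leaf
    step = 0
    while len(leaves) < n_leaves:
        step += 1
        weights = [(step - birth) ** (-alpha) for _, birth in leaves]
        i = random.choices(range(len(leaves)), weights=weights, k=1)[0]
        depth, _ = leaves.pop(i)
        leaves += [(depth + 1, step), (depth + 1, step)]
    return sum(d for d, _ in leaves) / len(leaves)

depth_uniform = mean_leaf_depth(200, 0.0)  # alpha=0: uniform choice, log-like depth
depth_aged = mean_leaf_depth(200, 2.0)     # strong age penalty: much deeper trees
print(depth_uniform, depth_aged)
```

With α = 0 every leaf is equally likely to branch (random-tree regime, depth of order log n); a large α concentrates branching on the newest leaves, so the same number of leaves is reached at far greater depth.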
Model-independent test for scale-dependent non-Gaussianities in the cosmic microwave background.
Räth, C; Morfill, G E; Rossmanith, G; Banday, A J; Górski, K M
2009-04-03
We present a model-independent method to test for scale-dependent non-Gaussianities in combination with scaling indices as test statistics. Therefore, surrogate data sets are generated, in which the power spectrum of the original data is preserved, while the higher order correlations are partly randomized by applying a scale-dependent shuffling procedure to the Fourier phases. We apply this method to the Wilkinson Microwave Anisotropy Probe data of the cosmic microwave background and find signatures for non-Gaussianities on large scales. Further tests are required to elucidate the origin of the detected anomalies.
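The surrogate construction can be sketched in one dimension with a naive O(n²) DFT (stdlib only; this is a toy version, not the spherical-harmonic machinery actually needed for CMB maps): Fourier phases are randomized in conjugate pairs, so the power spectrum is preserved exactly while higher-order correlations are scrambled.

```python
import cmath
import math
import random

random.seed(2)

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def phase_surrogate(x):
    """Surrogate series with the same power spectrum but random phases.
    Conjugate-symmetric bin pairs get opposite phase shifts, so the
    inverse transform stays real; DC and Nyquist bins are untouched."""
    X = dft(x)
    n = len(x)
    Y = X[:]
    for k in range(1, (n + 1) // 2):
        phi = random.uniform(0.0, 2.0 * math.pi)
        Y[k] = X[k] * cmath.exp(1j * phi)
        Y[n - k] = X[n - k] * cmath.exp(-1j * phi)
    return idft(Y)

x = [random.gauss(0, 1) + 0.5 * math.sin(0.3 * t) for t in range(32)]
s = phase_surrogate(x)
power = lambda z: [abs(c) ** 2 for c in dft(z)]
print(max(abs(a - b) for a, b in zip(power(x), power(s))))  # ~0 up to roundoff
```

A scale-dependent version, as in the abstract, would apply the shuffling only to phases within a chosen band of wavenumbers.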
Differential porosimetry and permeametry for random porous media.
Hilfer, R; Lemmer, A
2015-07-01
Accurate determination of geometrical and physical properties of natural porous materials is notoriously difficult. Continuum multiscale modeling has provided carefully calibrated realistic microstructure models of reservoir rocks with floating point accuracy. Previous measurements using synthetic microcomputed tomography (μ-CT) were based on extrapolation of resolution-dependent properties for discrete digitized approximations of the continuum microstructure. This paper reports continuum measurements of volume and specific surface with full floating point precision. It also corrects an incomplete description of rotations in earlier publications. More importantly, the methods of differential permeametry and differential porosimetry are introduced as precision tools. The continuum microstructure chosen to exemplify the methods is a homogeneous, carefully calibrated and characterized model for Fontainebleau sandstone. The sample has been publicly available since 2010 on the worldwide web as a benchmark for methodical studies of correlated random media. High-precision porosimetry gives the volume and internal surface area of the sample with floating point accuracy. Continuum results with floating point precision are compared to discrete approximations. Differential porosities and differential surface area densities allow geometrical fluctuations to be discriminated from discretization effects and numerical noise. Differential porosimetry and Fourier analysis reveal subtle periodic correlations. The findings uncover small oscillatory correlations with a period of roughly 850μm, thus implying that the sample is not strictly stationary. The correlations are attributed to the deposition algorithm that was used to ensure the grain overlap constraint. Differential permeabilities are introduced and studied. Differential porosities and permeabilities provide scale-dependent information on geometry fluctuations, thereby allowing quantitative error estimates.
Effect of inhomogeneities on high precision measurements of cosmological distances
NASA Astrophysics Data System (ADS)
Peel, Austin; Troxel, M. A.; Ishak, Mustapha
2014-12-01
We study effects of inhomogeneities on distance measures in an exact relativistic Swiss-cheese model of the Universe, focusing on the distance modulus. The model has ΛCDM background dynamics, and the "holes" are nonsymmetric structures described by the Szekeres metric. The Szekeres exact solution of Einstein's equations, which is inhomogeneous and anisotropic, allows us to capture potentially relevant effects on light propagation due to nontrivial evolution of structures in an exact framework. Light beams traversing a single Szekeres structure in different ways can experience either magnification or demagnification, depending on the particular path. Consistent with expectations, we find a shift in the distance modulus μ to distant sources due to demagnification when the light beam travels primarily through the void regions of our model. Conversely, beams are magnified when they propagate mainly through the overdense regions of the structures, and we explore a small additional effect due to time evolution of the structures. We then study the probability distributions of Δμ = μ_ΛCDM − μ_SC for sources at different redshifts in various Swiss-cheese constructions, where the light beams travel through a large number of randomly oriented Szekeres holes with random impact parameters. We find for Δμ the dispersions 0.004 ≤ σ_Δμ ≤ 0.008 mag for sources with redshifts 1.0 ≤ z ≤ 1.5, which are smaller than the intrinsic dispersion of, for example, magnitudes of type Ia supernovae. The shapes of the distributions we obtain for our Swiss-cheese constructions are peculiar in the sense that they are not consistently skewed toward the demagnification side, as they are in analyses of lensing in cosmological simulations. Depending on the source redshift, the distributions for our models can be skewed to either the demagnification or the magnification side, reflecting a limitation of these constructions.
This could be the result of requiring the continuity of Einstein's equations throughout the overall spacetime patchwork, which imposes the condition that compensating overdense shells must accompany the underdense void regions in the holes. The possibility to explore other uses of these constructions that could circumvent this limitation and lead to different statistics remains open.
Cooperation and stability through periodic impulses.
Zhang, Bo-Yu; Cressman, Ross; Tao, Yi
2010-03-29
Basic games, where each individual chooses between two strategies, illustrate several issues that immediately emerge from the standard approach that applies strategic reasoning, based on rational decisions, to predict population behavior where no rationality is assumed. These include how mutual cooperation (which corresponds to the best outcome from the population perspective) can evolve when the only individually rational choice is to defect, illustrated by the Prisoner's Dilemma (PD) game, and how individuals can randomize between two strategies when neither is individually rational, illustrated by the Battle of the Sexes (BS) game that models male-female conflict over parental investment in offspring. We examine these questions from an evolutionary perspective where the evolutionary dynamics includes an impulsive effect that models sudden changes in collective population behavior. For the PD game, we show analytically that cooperation can either coexist with defection or completely take over the population, depending on the strength of the impulse. By extending these results for the PD game, we also show that males and females each evolve to a single strategy in the BS game when the impulsive effect is strong and that weak impulses stabilize the randomized strategies of this game.
Thøgersen-Ntoumani, C; Loughren, E A; Kinnafick, F-E; Taylor, I M; Duda, J L; Fox, K R
2015-12-01
Physical activity may regulate affective experiences at work, but controlled studies are needed and there has been a reliance on retrospective accounts of experience. The purpose of the present study was to examine the effect of lunchtime walks on momentary work affect at the individual and group levels. Physically inactive employees (N = 56; M age = 47.68; 92.86% female) from a large university in the UK were randomized to immediate treatment or delayed treatment (DT). The DT participants completed both a control and intervention period. During the intervention period, participants partook in three weekly 30-min lunchtime group-led walks for 10 weeks. They completed twice daily affective reports at work (morning and afternoon) using mobile phones on two randomly chosen days per week. Multilevel modeling was used to analyze the data. Lunchtime walks improved enthusiasm, relaxation, and nervousness at work, although the pattern of results differed depending on whether between-group or within-person analyses were conducted. The intervention was effective in changing some affective states and may have broader implications for public health and workplace performance. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Copula-based model for rainfall and El Niño in Banyuwangi, Indonesia
NASA Astrophysics Data System (ADS)
Caraka, R. E.; Supari; Tahmid, M.
2018-04-01
Modelling, describing and measuring the structure dependences between different random events is at the very heart of statistics. Therefore, a broad variety of varying dependence concepts has been developed in the past. Most often, practitioners rely only on the linear correlation to describe the degree of dependence between two or more variables; an approach that can lead to quite misleading conclusions, as this measure is only capable of capturing linear relationships. Copulas go beyond dependence measures and provide a sound framework for general dependence modelling. This paper introduces an application of copulas to estimate, understand, and interpret the dependence structure in a given set of El Niño data from Banyuwangi, Indonesia. In a nutshell, we demonstrate the flexibility of Archimedean copulas in rainfall modelling and in capturing El Niño phenomena in Banyuwangi, East Java, Indonesia. Also, it was found that the SSTs of the Niño 3, Niño 4, and Niño 3.4 regions are the most appropriate ENSO indicators for identifying the relationship between El Niño and rainfall.
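To make the copula idea concrete, here is a minimal sketch using the Clayton family (one common Archimedean copula; the abstract does not say which family was selected, and θ = 2 is an invented parameter). Sampling uses the standard conditional-inversion formula, and positive dependence is checked via the empirical Kendall's τ, whose theoretical value for Clayton is θ/(θ + 2) = 0.5:

```python
import random

random.seed(3)

def sample_clayton(theta, n):
    """Draw n pairs (u, v) from a Clayton copula by conditional inversion:
    v = [u**(-theta) * (w**(-theta/(1+theta)) - 1) + 1]**(-1/theta)."""
    pairs = []
    for _ in range(n):
        u, w = random.random(), random.random()
        v = (u ** (-theta) * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
        pairs.append((u, v))
    return pairs

def kendall_tau(pairs):
    """Empirical Kendall's tau: O(n^2) concordant-minus-discordant count."""
    n, s = len(pairs), 0
    for i in range(n):
        for j in range(i + 1, n):
            d = (pairs[i][0] - pairs[j][0]) * (pairs[i][1] - pairs[j][1])
            s += 1 if d > 0 else -1
    return 2 * s / (n * (n - 1))

pairs = sample_clayton(theta=2.0, n=300)
tau_hat = kendall_tau(pairs)
print(round(tau_hat, 3))   # should be near 0.5 for theta = 2
```

In an application, the uniform marginals u and v would be the probability-integral transforms of rainfall and an SST index, so the copula captures their dependence separately from their marginal distributions.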
The estimation of branching curves in the presence of subject-specific random effects.
Elmi, Angelo; Ratcliffe, Sarah J; Guo, Wensheng
2014-12-20
Branching curves are a technique for modeling curves that change trajectory at a change (branching) point. Currently, the estimation framework is limited to independent data, and smoothing splines are used for estimation. This article aims to extend the branching curve framework to the longitudinal data setting where the branching point varies by subject. If the branching point is modeled as a random effect, then the longitudinal branching curve framework is a semiparametric nonlinear mixed effects model. Given existing issues with using random effects within a smoothing spline, we express the model as a B-spline based semiparametric nonlinear mixed effects model. Simple, clever smoothness constraints are enforced on the B-splines at the change point. The method is applied to Women's Health data where we model the shape of the labor curve (cervical dilation measured longitudinally) before and after treatment with oxytocin (a labor stimulant). Copyright © 2014 John Wiley & Sons, Ltd.
An approximate generalized linear model with random effects for informative missing data.
Follmann, D; Wu, M
1995-03-01
This paper develops a class of models to deal with missing data from longitudinal studies. We assume that separate models for the primary response and missingness (e.g., number of missed visits) are linked by a common random parameter. Such models have been developed in the econometrics (Heckman, 1979, Econometrica 47, 153-161) and biostatistics (Wu and Carroll, 1988, Biometrics 44, 175-188) literature for a Gaussian primary response. We allow the primary response, conditional on the random parameter, to follow a generalized linear model and approximate the generalized linear model by conditioning on the data that describes missingness. The resultant approximation is a mixed generalized linear model with possibly heterogeneous random effects. An example is given to illustrate the approximate approach, and simulations are performed to critique the adequacy of the approximation for repeated binary data.
Total Dose Effects on Bipolar Integrated Circuits at Low Temperature
NASA Technical Reports Server (NTRS)
Johnston, A. H.; Swimm, R. T.; Thorbourn, D. O.
2012-01-01
Total dose damage in bipolar integrated circuits is investigated at low temperature, along with the temperature dependence of the electrical parameters of internal transistors. Bandgap narrowing causes the gain of npn transistors to decrease far more at low temperature compared to pnp transistors, due to the large difference in emitter doping concentration. When irradiations are done at temperatures of -140 deg C, no damage occurs until devices are warmed to temperatures above -50 deg C. After warm-up, subsequent cooling shows that damage is then present at low temperature. This can be explained by the very strong temperature dependence of dispersive transport in the continuous-time-random-walk model for hole transport. For linear integrated circuits, low temperature operation is affected by the strong temperature dependence of npn transistors along with the higher sensitivity of lateral and substrate pnp transistors to radiation damage.
Statistical dependency in visual scanning
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Stark, Lawrence
1986-01-01
A method to identify statistical dependencies in the positions of eye fixations is developed and applied to eye movement data from subjects who viewed dynamic displays of air traffic and judged future relative position of aircraft. Analysis of approximately 23,000 fixations on points of interest on the display identified statistical dependencies in scanning that were independent of the physical placement of the points of interest. Identification of these dependencies is inconsistent with random-sampling-based theories used to model visual search and information seeking.
Ates, Merih
2017-10-01
The present study aims to identify whether and how supplementary grandchild care is causally related to grandparents' self-rated health (SRH). Based on longitudinal data drawn from the German Aging Survey (DEAS; 2008-2014), I compare the results of pooled OLS, pooled OLS with lagged dependent variables (POLS-LD), and random and fixed effects (RE, FE) panel regression. The results show that there is a positive but small association between supplementary grandchild care and SRH in the POLS, POLS-LD, and RE models. However, the fixed effects model shows that intrapersonal change in grandchild care does not cause a change in grandparents' SRH. The FE findings indicate that supplementary grandchild care in Germany does not have a causal impact on grandparents' SRH, suggesting that models with between-variation components overestimate the influence of grandchild care on grandparents' health because they do not control for unobserved (time-constant) heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Markov model for the temporal dynamics of balanced random networks of finite size
Lagzi, Fereshteh; Rotter, Stefan
2014-01-01
The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics. The noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, a strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks show that a log-normal distribution of short-term spike counts is a property of balanced random networks with fixed in-degree that has not been considered before, and our model shares this statistical property.
We expect that this novel nonlinear stochastic model of the interaction between neuronal populations also opens new doors to analyze the joint dynamics of multiple interacting networks. PMID:25520644
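The two-state scheme can be sketched as a population simulation in a few lines. Everything quantitative here is an assumption for illustration: the paper identifies the transition nonlinearities from leaky integrate-and-fire simulations, whereas this sketch simply plugs in a logistic dependence of the spike probability on the current population activity.

```python
import math
import random

random.seed(4)

def step(states, p_recover=0.2, gain=4.0, rate=0.1):
    """One update of a population of two-state Markov neurons.

    Active neurons (state 1) spike, i.e. move to the refractory state,
    with a probability that depends on the current population activity;
    refractory neurons (state 0) recover with a fixed probability. The
    logistic dependence is an illustrative assumption, not the fitted
    nonlinearity of the paper."""
    frac_active = sum(states) / len(states)
    p_spike = rate / (1.0 + math.exp(-gain * (frac_active - 0.5)))
    new_states, spikes = [], 0
    for s in states:
        if s == 1 and random.random() < p_spike:
            new_states.append(0)       # spike: active -> refractory
            spikes += 1
        elif s == 0 and random.random() < p_recover:
            new_states.append(1)       # recovery: refractory -> active
        else:
            new_states.append(s)
    return new_states, spikes

states = [random.randint(0, 1) for _ in range(500)]
counts = []
for _ in range(200):
    states, k = step(states)
    counts.append(k)                   # population spike count per time step
print(min(counts), max(counts))
```

Because the spike probability depends on the population state, the fluctuations of the spike counts are themselves state-dependent, which is the key property motivating the self-consistent noise model in the abstract.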
A Model for Pharmacological Research-Treatment of Cocaine Dependence
Montoya, Ivan D.; Hess, Judith M.; Preston, Kenzie L.; Gorelick, David A.
2008-01-01
Major problems for research on pharmacological treatments for cocaine dependence are lack of comparability of results from different treatment research programs and poor validity and/or reliability of results. Double-blind, placebo-controlled, random assignment, experimental designs, using standard intake and assessment procedures help to reduce these problems. Cessation or reduction of drug use and/or craving, retention in treatment, and medical and psychosocial improvement are some of the outcome variables collected in treatment research programs. A model to be followed across different outpatient clinical trials for pharmacological treatment of cocaine dependence is presented here. This model represents an effort to standardize data collection to make results more valid and comparable. PMID:8749725
Entanglement spectrum of random-singlet quantum critical points
NASA Astrophysics Data System (ADS)
Fagotti, Maurizio; Calabrese, Pasquale; Moore, Joel E.
2011-01-01
The entanglement spectrum (i.e., the full distribution of Schmidt eigenvalues of the reduced density matrix) contains more information than the conventional entanglement entropy and has been studied recently in several many-particle systems. We compute the disorder-averaged entanglement spectrum in the form of the disorder-averaged moments Tr ρ_A^α of the reduced density matrix ρ_A for a contiguous block of many spins at the random-singlet quantum critical point in one dimension. The result compares well in the scaling limit with numerical studies on the random XX model and is also expected to describe the (interacting) random Heisenberg model. Our numerical studies on the XX case reveal that the dependence of the entanglement entropy and spectrum on the geometry of the Hilbert space partition is quite different than for conformally invariant critical points.
NASA Astrophysics Data System (ADS)
Liu, Jian; Li, Baohe; Chen, Xiaosong
2018-02-01
The space-time coupled continuous time random walk model is a stochastic framework of anomalous diffusion with many applications in physics, geology and biology. In this manuscript the time averaged mean squared displacement and nonergodic property of a space-time coupled continuous time random walk model are studied; this model is a prototype of the coupled continuous time random walk that has been presented and researched intensively with various methods. The results in the present manuscript show that the time averaged mean squared displacements increase linearly with lag time, which means ergodicity breaking occurs. Besides, we find that the diffusion coefficient is intrinsically random, showing both aging and enhancement; the analysis indicates that whether aging or enhancement occurs is determined by the competition between the correlation exponent γ and the waiting time's long-tailed index α.
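The time averaged mean squared displacement in question is δ²(Δ) = (1/(T − Δ)) Σ_t (x(t + Δ) − x(t))². As a rough illustration only, the sketch below simulates a decoupled walk with Pareto waiting times and unit jumps (a simplification: the paper's model couples jump lengths to waiting times) and evaluates the time average from a single trajectory:

```python
import random

random.seed(5)

def ctrw_trajectory(t_max, alpha=0.7, dt=1.0):
    """Position on a uniform time grid for a CTRW with Pareto(alpha)
    waiting times (minimum 1) and unit jumps; the walker holds its
    position between jumps."""
    t, x = 0.0, 0.0
    grid, next_sample = [], 0.0
    while next_sample < t_max:
        wait = random.random() ** (-1.0 / alpha)   # heavy-tailed waiting time
        t += wait
        while next_sample < t and next_sample < t_max:
            grid.append(x)                          # constant between jumps
            next_sample += dt
        x += random.choice((-1.0, 1.0))             # jump after the wait
    return grid

def tamsd(x, lag):
    """Time averaged mean squared displacement at a given lag."""
    n = len(x) - lag
    return sum((x[i + lag] - x[i]) ** 2 for i in range(n)) / n

traj = ctrw_trajectory(2000.0)
print([round(tamsd(traj, lag), 2) for lag in (1, 10, 100)])
```

Repeating this over an ensemble of trajectories would show the scatter of the time averaged curves, i.e. the random diffusion coefficient and the ergodicity breaking discussed in the abstract.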
Growth Modeling with Nonignorable Dropout: Alternative Analyses of the STAR*D Antidepressant Trial
ERIC Educational Resources Information Center
Muthen, Bengt; Asparouhov, Tihomir; Hunter, Aimee M.; Leuchter, Andrew F.
2011-01-01
This article uses a general latent variable framework to study a series of models for nonignorable missingness due to dropout. Nonignorable missing data modeling acknowledges that missingness may depend not only on covariates and observed outcomes at previous time points as with the standard missing at random assumption, but also on latent…
Two Universality Classes for the Many-Body Localization Transition
NASA Astrophysics Data System (ADS)
Khemani, Vedika; Sheng, D. N.; Huse, David A.
2017-08-01
We provide a systematic comparison of the many-body localization (MBL) transition in spin chains with nonrandom quasiperiodic versus random fields. We find evidence suggesting that these belong to two separate universality classes: the first dominated by "intrinsic" intrasample randomness, and the second dominated by external intersample quenched randomness. We show that the effects of intersample quenched randomness are strongly growing, but not yet dominant, at the system sizes probed by exact-diagonalization studies on random models. Thus, the observed finite-size critical scaling collapses in such studies appear to be in a preasymptotic regime near the nonrandom universality class, but showing signs of the initial crossover towards the external-randomness-dominated universality class. Our results provide an explanation for why exact-diagonalization studies on random models see an apparent scaling near the transition while also obtaining finite-size scaling exponents that strongly violate Harris-Chayes bounds that apply to disorder-driven transitions. We also show that the MBL phase is more stable for the quasiperiodic model as compared to the random one, and the transition in the quasiperiodic model suffers less from certain finite-size effects.
Generalised filtering and stochastic DCM for fMRI.
Li, Baojuan; Daunizeau, Jean; Stephan, Klaas E; Penny, Will; Hu, Dewen; Friston, Karl
2011-09-15
This paper is about the fitting or inversion of dynamic causal models (DCMs) of fMRI time series. It tries to establish the validity of stochastic DCMs that accommodate random fluctuations in hidden neuronal and physiological states. We compare and contrast deterministic and stochastic DCMs, which do and do not ignore random fluctuations or noise on hidden states. We then compare stochastic DCMs, which do and do not ignore conditional dependence between hidden states and model parameters (generalised filtering and dynamic expectation maximisation, respectively). We first characterise state-noise by comparing the log evidence of models with different a priori assumptions about its amplitude, form and smoothness. Face validity of the inversion scheme is then established using data simulated with and without state-noise to ensure that DCM can identify the parameters and model that generated the data. Finally, we address construct validity using real data from an fMRI study of internet addiction. Our analyses suggest the following. (i) The inversion of stochastic causal models is feasible, given typical fMRI data. (ii) State-noise has nontrivial amplitude and smoothness. (iii) Stochastic DCM has face validity, in the sense that Bayesian model comparison can distinguish between data that have been generated with high and low levels of physiological noise and model inversion provides veridical estimates of effective connectivity. (iv) Relaxing conditional independence assumptions can have greater construct validity, in terms of revealing group differences not disclosed by variational schemes. Finally, we note that the ability to model endogenous or random fluctuations on hidden neuronal (and physiological) states provides a new and possibly more plausible perspective on how regionally specific signals in fMRI are generated. Copyright © 2011. Published by Elsevier Inc.
Risk, Need, and Responsivity (RNR): It All Depends
ERIC Educational Resources Information Center
Taxman, Faye S.; Thanner, Meridith
2006-01-01
Target populations have always been a thorny issue for correctional programs. In this experiment on seamless treatment for probationers in two sites, offenders were randomly assigned to the seamless model (drug treatment incorporated into probation supervision) or to a traditional model of referral to services in the community. The experiment blocked on…
An Optimization-based Framework to Learn Conditional Random Fields for Multi-label Classification
Naeini, Mahdi Pakdaman; Batal, Iyad; Liu, Zitao; Hong, CharmGil; Hauskrecht, Milos
2015-01-01
This paper studies the multi-label classification problem, in which data instances are associated with multiple, possibly high-dimensional, label vectors. This problem is especially challenging when labels are dependent and one cannot decompose the problem into a set of independent classification problems. To address the problem and properly represent label dependencies, we propose and study a pairwise conditional random field (CRF) model. We develop a new approach for learning the structure and parameters of the CRF from data. The approach maximizes the pseudo-likelihood of observed labels and relies on fast proximal gradient descent for learning the structure and limited-memory BFGS for learning the parameters of the model. Empirical results on several datasets show that our approach outperforms several multi-label classification baselines, including recently published state-of-the-art methods. PMID:25927015
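The pseudo-likelihood objective named in the abstract scores each label by its conditional probability given all the other observed labels, which factorises and sidesteps the intractable partition function of the full CRF likelihood. A minimal sketch for binary labels follows; the unary/pairwise parameterisation is a generic pairwise-CRF form chosen for illustration, not the exact parameterisation of the paper's model.

```python
import numpy as np

def pseudo_log_likelihood(y, unary, pairwise):
    """Pseudo log-likelihood of a binary label vector y under a pairwise CRF.

    y        : (d,) array of {0, 1} labels
    unary    : (d,) array of unary potentials favouring y_i = 1
    pairwise : (d, d) symmetric matrix of pairwise couplings

    Returns sum_i log P(y_i | y_{-i}), each conditional being a logistic
    function of the local field on label i.
    """
    ll = 0.0
    for i in range(len(y)):
        # Local field on label i given the observed values of the rest.
        field = unary[i] + pairwise[i] @ y - pairwise[i, i] * y[i]
        p1 = 1.0 / (1.0 + np.exp(-field))   # P(y_i = 1 | y_{-i})
        ll += np.log(p1) if y[i] == 1 else np.log(1.0 - p1)
    return ll
```

Maximising this quantity over `unary` and `pairwise` (with a sparsity penalty on `pairwise` for structure learning, as the abstract's proximal-gradient step suggests) recovers the training approach at a sketch level.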
Encouraging the Flight of Error: Ethical Standards, Evidence Standards, and Randomized Trials
ERIC Educational Resources Information Center
Boruch, Robert
2007-01-01
Thomas Jefferson recognized the value of reason and scientific experimentation in the eighteenth century. This chapter extends the idea in contemporary ways to standards that may be used to judge the ethical propriety of randomized trials and the dependability of evidence on effects of social interventions.
Antinociceptive and pronociceptive effect of levetiracetam in tonic pain model.
Cortes-Altamirano, José Luis; Reyes-Long, Samuel; Olmos-Hernández, Adriana; Bonilla-Jaime, Herlinda; Carrillo-Mora, Paul; Bandala, Cindy; Alfaro-Rodriguez, Alfonso
2018-04-01
Levetiracetam (LEV) is a novel anticonvulsant with proven antinociceptive properties. However, the antinociceptive and pronociceptive effect of this drug has not yet been fully elucidated in a tonic pain model. Thirty-six male rats (Wistar) were randomized into six groups and underwent the formalin test as follows: rats in the control group were administered 50μL of 1% formalin in the paw; sham-group rats were administered 50μL of saline in the paw to mimic the application of formalin; the four experimental groups were administered LEV intragastrically (ig) (50, 100, 200 and 300mg/kg), and 40min later 50μL of 1% formalin was injected in the paw. LEV exhibited an antinociceptive effect in the 300mg/kg LEV group (p<0.05) and a pronociceptive effect in the 100mg/kg LEV group (p<0.05) and in the 50mg/kg LEV group (p<0.001). The antinociceptive and pronociceptive effect of LEV in a tonic pain model is dose-dependent. Copyright © 2017 Institute of Pharmacology, Polish Academy of Sciences. Published by Elsevier B.V. All rights reserved.
Simulation of the Effect of Internal Waves on Acoustic Propagation
NASA Astrophysics Data System (ADS)
Ko, D. S.
2005-05-01
An acoustic radiation transport model with a Monte Carlo solution has been developed and applied to study the effect of internal-wave-induced random oceanic fluctuations on deep-ocean acoustic propagation. Refraction in the ocean sound channel is performed by means of bi-cubic spline interpolation of discrete deterministic ray paths in the angle(energy)-range-depth coordinates. Scattering by random internal-wave fluctuations is accomplished by sampling a power-law scattering kernel using the rejection method. Results from numerical experiments show that the mean positions of acoustic rays are significantly displaced toward the sound channel axis due to the asymmetry of the scattering kernel. The spreading of ray depths and angles about the means depends strongly on frequency. The envelope of the ray displacement spreading is found to be proportional to the square root of range, which differs from the "3/2 law" found in the non-channel case. Suppression of the spreading is due to the anisotropy of the fluctuations and especially to the presence of the sound channel itself.
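The rejection method named in the abstract draws samples from a target density using a simpler proposal and an accept/reject test. A minimal sketch for a power-law kernel is below; the exponent and support are illustrative parameters, not values from the study, and a real scattering kernel would be a function of scattering angle rather than this generic form.

```python
import random

def sample_power_law(alpha, x_min, x_max, rng=random):
    """Draw one sample from p(x) ∝ x**(-alpha) on [x_min, x_max] by rejection.

    The proposal is uniform on [x_min, x_max]; a candidate x is accepted
    with probability p(x)/p(x_min), which is at most 1 because the density
    peaks at x_min for alpha > 0.
    """
    assert alpha > 0 and 0 < x_min < x_max
    while True:
        x = rng.uniform(x_min, x_max)
        if rng.random() < (x / x_min) ** (-alpha):
            return x

# Illustrative draw: samples concentrate near the lower cutoff,
# mirroring how a steep kernel favours small scattering angles.
rng = random.Random(0)
draws = [sample_power_law(2.0, 1.0, 10.0, rng=rng) for _ in range(5)]
```

For heavily peaked kernels, inverse-CDF sampling would be more efficient than rejection, but rejection is the method the abstract names and extends naturally to kernels without a closed-form CDF.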