Sample records for dependent random variables

  1. An Entropy-Based Measure of Dependence between Two Groups of Random Variables. Research Report. ETS RR-07-20

    ERIC Educational Resources Information Center

    Kong, Nan

    2007-01-01

    In multivariate statistics, the linear relationship among random variables has been fully explored in the past. This paper looks into the dependence of one group of random variables on another group of random variables using (conditional) entropy. A new measure, called the K-dependence coefficient or dependence coefficient, is defined using…
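
    The abstract truncates before the definition. As a rough illustration of the idea only (not the paper's exact K-dependence coefficient), the sketch below computes a normalized conditional-entropy coefficient for discrete samples: the fraction of H(Y) removed by conditioning on X. The name `k_dependence` and the toy data are hypothetical; groups of variables could be handled by zipping tuples of columns.

    ```python
    import numpy as np
    from collections import Counter

    def entropy(labels):
        """Shannon entropy (nats) of a discrete sample."""
        n = len(labels)
        return -sum((c / n) * np.log(c / n) for c in Counter(labels).values())

    def k_dependence(x, y):
        """Hypothetical normalized coefficient: fraction of H(Y) removed by knowing X.
        0 = empirically independent, 1 = Y fully determined by X."""
        h_y = entropy(y)
        h_x = entropy(x)
        h_xy = entropy(list(zip(x, y)))
        # conditional entropy H(Y|X) = H(X,Y) - H(X)
        return (h_y - (h_xy - h_x)) / h_y if h_y > 0 else 0.0

    rng = np.random.default_rng(0)
    x = rng.integers(0, 4, 1000)
    print(k_dependence(x, (x + rng.integers(0, 2, 1000)) % 4))  # partial dependence
    print(k_dependence(x, rng.integers(0, 4, 1000)))            # near 0
    ```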

  2. Design approaches to experimental mediation

    PubMed Central

    Pirlott, Angela G.; MacKinnon, David P.

    2016-01-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., “measurement-of-mediation” designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable. PMID:27570259

  3. Design approaches to experimental mediation.

    PubMed

    Pirlott, Angela G; MacKinnon, David P

    2016-09-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., "measurement-of-mediation" designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable.
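
    Not from the paper, but a minimal simulation of the core warning in both records above: when only X is randomized and the mediator M is merely measured, an unobserved confounder of M and Y biases the estimated M → Y path. All names and coefficients below are illustrative.

    ```python
    import numpy as np
    import numpy.linalg as la

    rng = np.random.default_rng(1)
    n = 100_000
    x = rng.integers(0, 2, n).astype(float)   # randomized treatment
    u = rng.normal(size=n)                    # unobserved confounder of M and Y
    m = 0.5 * x + u + rng.normal(size=n)      # mediator (not randomized)
    y = 0.0 * m + u + rng.normal(size=n)      # true M -> Y effect is zero

    # naive regression of Y on M and X (a "measurement-of-mediation" analysis)
    X = np.column_stack([np.ones(n), m, x])
    beta = la.lstsq(X, y, rcond=None)[0]
    print(f"estimated M->Y effect: {beta[1]:.3f} (true value 0)")  # biased by u
    ```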

  4. Benford's law and continuous dependent random variables

    NASA Astrophysics Data System (ADS)

    Becker, Thealexa; Burt, David; Corcoran, Taylor C.; Greaves-Tunnell, Alec; Iafrate, Joseph R.; Jing, Joy; Miller, Steven J.; Porfilio, Jaclyn D.; Ronan, Ryan; Samranvedhya, Jirapat; Strauch, Frederick W.; Talbut, Blaine

    2018-01-01

    Many mathematical, man-made and natural systems exhibit a leading-digit bias, where a first digit (base 10) of 1 occurs not 11% of the time, as one would expect if all digits were equally likely, but rather 30%. This phenomenon is known as Benford's Law. Analyzing which datasets adhere to Benford's Law and how quickly Benford behavior sets in are the two most important problems in the field. Most previous work studied systems of independent random variables, and relied on the independence in their analyses. Inspired by natural processes such as particle decay, we study the dependent random variables that emerge from models of decomposition of conserved quantities. We prove that in many instances the distribution of lengths of the resulting pieces converges to Benford behavior as the number of divisions grows, and give several conjectures for other fragmentation processes. The main difficulty is that the resulting random variables are dependent. We handle this by using tools from Fourier analysis and irrationality exponents to obtain quantified convergence rates, as well as by introducing and developing techniques to measure and control the dependencies. The construction of these tools is one of the major motivations of this work, as our approach can be applied to many other dependent systems. As an example, we show that the n! entries in the determinant expansions of n × n matrices with entries independently drawn from nice random variables converge to Benford's Law.
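
    A hedged sketch in the spirit of the fragmentation models described (the paper's exact processes and proofs are not reproduced): repeatedly split a conserved length at uniformly random proportions, then compare the leading digits of the pieces with the Benford probabilities log10(1 + 1/d).

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def fragment(length=1.0, levels=14):
        """Binary fragmentation: split every piece at a uniform random proportion."""
        pieces = np.array([length])
        for _ in range(levels):
            cut = rng.uniform(size=pieces.size)
            pieces = np.concatenate([pieces * cut, pieces * (1 - cut)])
        return pieces

    def first_digits(x):
        # leading base-10 digit via the mantissa
        mant = x / 10.0 ** np.floor(np.log10(x))
        return mant.astype(int)

    pieces = fragment()
    observed = np.bincount(first_digits(pieces), minlength=10)[1:] / pieces.size
    benford = np.log10(1 + 1 / np.arange(1, 10))
    for d in range(9):
        print(d + 1, f"observed {observed[d]:.3f}  Benford {benford[d]:.3f}")
    ```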

  5. Mean convergence theorems and weak laws of large numbers for weighted sums of random variables under a condition of weighted integrability

    NASA Astrophysics Data System (ADS)

    Ordóñez Cabrera, Manuel; Volodin, Andrei I.

    2005-05-01

    From the classical notion of uniform integrability of a sequence of random variables, a new concept of integrability (called h-integrability) is introduced for an array of random variables, concerning an array of constants. We prove that this concept is weaker than other previous related notions of integrability, such as Cesàro uniform integrability [Chandra, Sankhya Ser. A 51 (1989) 309-317], uniform integrability concerning the weights [Ordóñez Cabrera, Collect. Math. 45 (1994) 121-132] and Cesàro α-integrability [Chandra and Goswami, J. Theoret. Probab. 16 (2003) 655-669]. Under this condition of integrability and appropriate conditions on the array of weights, mean convergence theorems and weak laws of large numbers for weighted sums of an array of random variables are obtained when the random variables are subject to some special kinds of dependence: (a) rowwise pairwise negative dependence, (b) rowwise pairwise non-positive correlation, (c) when the sequence of random variables in every row is φ-mixing. Finally, we consider the general weak law of large numbers in the sense of Gut [Statist. Probab. Lett. 14 (1992) 49-52] under this new condition of integrability for a Banach space setting.
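
    The abstract invokes h-integrability without stating it. The following is a plausible reconstruction for an array {X_ni}, weights {a_ni}, and a function h(n) increasing to infinity; the paper's exact conditions may differ.

    ```latex
    % Reconstruction of h-integrability with respect to the weights {a_{ni}}:
    \[
      \sup_{n}\sum_{i} |a_{ni}|\,\mathbb{E}|X_{ni}| < \infty
      \quad\text{and}\quad
      \lim_{n\to\infty}\sum_{i} |a_{ni}|\,
      \mathbb{E}\bigl[|X_{ni}|\,\mathbf{1}\{|X_{ni}|>h(n)\}\bigr] = 0 .
    \]
    ```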

  6. Copula Models for Sociology: Measures of Dependence and Probabilities for Joint Distributions

    ERIC Educational Resources Information Center

    Vuolo, Mike

    2017-01-01

    Often in sociology, researchers are confronted with nonnormal variables whose joint distribution they wish to explore. Yet, assumptions of common measures of dependence can fail or estimating such dependence is computationally intensive. This article presents the copula method for modeling the joint distribution of two random variables, including…
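
    A minimal sketch of the copula idea the article introduces, assuming a Gaussian copula: correlated normals are mapped to uniforms and then pushed through arbitrary inverse CDFs, giving nonnormal marginals with a controlled dependence structure (summarized here by Kendall's tau). The marginals and ρ are invented.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    rho = 0.7
    cov = [[1.0, rho], [rho, 1.0]]

    # 1) correlated normals, 2) normal CDF -> uniforms (the copula),
    # 3) arbitrary inverse CDFs -> desired nonnormal marginals
    z = rng.multivariate_normal([0, 0], cov, size=20_000)
    u = stats.norm.cdf(z)
    x = stats.expon.ppf(u[:, 0], scale=2.0)   # exponential marginal
    y = stats.gamma.ppf(u[:, 1], a=3.0)       # gamma marginal

    tau, _ = stats.kendalltau(x, y)
    print(f"Kendall's tau: {tau:.3f}; Gaussian-copula theory: {2/np.pi*np.arcsin(rho):.3f}")
    ```

    Because Kendall's tau is invariant under monotone transformations of the marginals, the simulated value matches the Gaussian-copula formula (2/π) arcsin ρ regardless of the marginals chosen.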

  7. Complete convergence of randomly weighted END sequences and its application.

    PubMed

    Li, Penghua; Li, Xiaoqin; Wu, Kehan

    2017-01-01

    We investigate the complete convergence of partial sums of randomly weighted extended negatively dependent (END) random variables. Some results of complete moment convergence, complete convergence and the strong law of large numbers for this dependent structure are obtained. As an application, we study the convergence of the state observers of linear-time-invariant systems. Our results extend the corresponding earlier ones.

  8. Learning dependence from samples.

    PubMed

    Seth, Sohan; Príncipe, José C

    2014-01-01

    Mutual information, conditional mutual information and interaction information have been widely used in scientific literature as measures of dependence, conditional dependence and mutual dependence. However, these concepts suffer from several computational issues; they are difficult to estimate in continuous domain, the existing regularised estimators are almost always defined only for real or vector-valued random variables, and these measures address what dependence, conditional dependence and mutual dependence imply in terms of the random variables but not finite realisations. In this paper, we address the issue that given a set of realisations in an arbitrary metric space, what characteristic makes them dependent, conditionally dependent or mutually dependent. With this novel understanding, we develop new estimators of association, conditional association and interaction association. Some attractive properties of these estimators are that they do not require choosing free parameter(s), they are computationally simpler, and they can be applied to arbitrary metric spaces.
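
    The paper's parameter-free, metric-space estimators are not reproduced here; as a baseline illustration of estimating dependence from finite realisations, the sketch below uses scikit-learn's standard k-nearest-neighbour mutual information estimator on invented data.

    ```python
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(4)
    n = 2_000
    x = rng.normal(size=n)
    y_dep = np.sin(x) + 0.1 * rng.normal(size=n)   # nonlinearly dependent on x
    y_ind = rng.normal(size=n)                     # independent of x

    # k-nearest-neighbour MI estimates from finite realisations (in nats)
    print(mutual_info_regression(x.reshape(-1, 1), y_dep, random_state=0))
    print(mutual_info_regression(x.reshape(-1, 1), y_ind, random_state=0))
    ```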

  9. Polynomial chaos expansion with random and fuzzy variables

    NASA Astrophysics Data System (ADS)

    Jacquelin, E.; Friswell, M. I.; Adhikari, S.; Dessombz, O.; Sinou, J.-J.

    2016-06-01

    A dynamical uncertain system is studied in this paper. Two kinds of uncertainties are addressed, where the uncertain parameters are described through random variables and/or fuzzy variables. A general framework is proposed to deal with both kinds of uncertainty using a polynomial chaos expansion (PCE). It is shown that fuzzy variables may be expanded in terms of polynomial chaos when Legendre polynomials are used. The components of the PCE are a solution of an equation that does not depend on the nature of uncertainty. Once this equation is solved, the post-processing of the data gives the moments of the random response when the uncertainties are random or gives the response interval when the variables are fuzzy. With the PCE approach, it is also possible to deal with mixed uncertainty, when some parameters are random and others are fuzzy. The results provide a fuzzy description of the response statistical moments.
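
    A minimal sketch of the Legendre-basis expansion the paper builds on, for a scalar germ ξ uniform on [-1, 1]: project a response onto Legendre polynomials by Gauss-Legendre quadrature and read the mean and variance off the coefficients. For a fuzzy variable the same coefficients would instead be post-processed into response intervals; the response function `g` is a stand-in.

    ```python
    import numpy as np
    from numpy.polynomial.legendre import leggauss, legval

    g = lambda xi: np.exp(0.5 * xi)     # stand-in for a dynamical system output

    # project g onto Legendre polynomials P_k over the germ xi ~ U(-1, 1)
    nodes, weights = leggauss(20)
    order = 6
    coeffs = []
    for k in range(order + 1):
        pk = legval(nodes, [0] * k + [1])
        # c_k = E[g P_k] / E[P_k^2] = (2k+1)/2 * integral of g P_k
        coeffs.append((weights * g(nodes) * pk).sum() * (2 * k + 1) / 2)

    # random (uniform) germ: mean = c_0, variance = sum_{k>=1} c_k^2 / (2k+1)
    mean = coeffs[0]
    var = sum(c**2 / (2 * k + 1) for k, c in enumerate(coeffs) if k > 0)
    print(f"PCE mean {mean:.4f} (exact {np.exp(0.5) - np.exp(-0.5):.4f}), variance {var:.6f}")
    ```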

  10. On the Wigner law in dilute random matrices

    NASA Astrophysics Data System (ADS)

    Khorunzhy, A.; Rodgers, G. J.

    1998-12-01

    We consider ensembles of N × N symmetric matrices whose entries are weakly dependent random variables. We show that random dilution can change the limiting eigenvalue distribution of such matrices. We prove that under general and natural conditions the normalised eigenvalue counting function coincides with the semicircle (Wigner) distribution in the limit N → ∞. This can be explained by the observation that dilution (or more generally, random modulation) eliminates the weak dependence (or correlations) between random matrix entries. It also supports our earlier conjecture that the Wigner distribution is stable to random dilution and modulation.
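
    Illustrative only (the paper's weakly dependent entries are replaced by i.i.d. ±1 entries here): randomly diluting a symmetric sign matrix and normalising by √(np) yields an empirical spectrum close to the semicircle on [-2, 2].

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, p = 1500, 0.1                      # dilution: keep each entry with probability p

    a = rng.choice([-1.0, 1.0], size=(n, n))
    mask = rng.random((n, n)) < p
    m = np.triu(a * mask, 1)
    m = m + m.T + np.diag(np.diag(a * mask))
    m /= np.sqrt(n * p)                   # entry variance p -> semicircle on [-2, 2]

    eig = np.linalg.eigvalsh(m)
    hist, edges = np.histogram(eig, bins=30, range=(-2.2, 2.2), density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    semicircle = np.sqrt(np.clip(4 - centers**2, 0, None)) / (2 * np.pi)
    print(np.round(hist - semicircle, 2))   # deviations shrink as n grows
    ```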

  11. Algebraic Functions of H-Functions with Specific Dependency Structure.

    DTIC Science & Technology

    1984-05-01

    a study of its characteristic function. Such analysis is reproduced in books by Springer (17), Anderson (23), Feller (34,35), Mood and Graybill (52...following linearity property for expectations of jointly distributed random variables is derived. r 1 Theorem 1.1: If X and Y are real random variables...appear in American Journal of Mathematical and Management Science. 13. Mathai, A.M., and R.K. Saxena, "On linear combinations of stochastic variables

  12. Exploiting Data Missingness in Bayesian Network Modeling

    NASA Astrophysics Data System (ADS)

    Rodrigues de Morais, Sérgio; Aussem, Alex

    This paper proposes a framework built on the use of Bayesian networks (BN) for representing statistical dependencies between the existing random variables and additional dummy boolean variables, which represent the presence/absence of the respective random variable value. We show how augmenting the BN with these additional variables helps pinpoint the mechanism through which missing data contributes to the classification task. The missing data mechanism is thus explicitly taken into account to predict the class variable using the data at hand. Extensive experiments on synthetic and real-world incomplete data sets reveal that the missingness information improves classification accuracy.
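
    A sketch of just the augmentation step described above, with hypothetical column names: each partially observed variable gets a dummy boolean companion so the missingness mechanism itself becomes a predictor. The paper's BN structure learning is not reproduced.

    ```python
    import numpy as np
    import pandas as pd

    df = pd.DataFrame({
        "income":  [52_000, np.nan, 61_000, np.nan, 43_000],
        "tenure":  [4, 12, np.nan, 7, 3],
        "churned": [0, 1, 0, 1, 0],          # class variable
    })

    # dummy boolean variables encoding presence/absence of each value, so a
    # downstream model (a BN in the paper, any classifier here) can exploit
    # an informative missingness mechanism
    for col in ["income", "tenure"]:
        df[f"{col}_observed"] = df[col].notna()
    print(df)
    ```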

  13. Random effects coefficient of determination for mixed and meta-analysis models

    PubMed Central

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2011-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, Rr2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If Rr2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of Rr2 apart from 0 indicates evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of random effects is very large and random effects turn into free fixed effects—the model can be estimated using the dummy variable approach. We derive explicit formulas for Rr2 in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine. PMID:23750070
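
    A hedged sketch for the random intercept case, assuming Rr2 reduces there to the share of the conditional variance held by the intercept variance (the paper's exact formula may differ); statsmodels' MixedLM supplies the two variance components.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)
    groups = np.repeat(np.arange(40), 25)
    b = rng.normal(scale=1.0, size=40)[groups]       # random intercepts
    x = rng.normal(size=groups.size)
    y = 2.0 + 0.5 * x + b + rng.normal(scale=1.5, size=groups.size)
    df = pd.DataFrame({"y": y, "x": x, "g": groups})

    fit = smf.mixedlm("y ~ x", df, groups=df["g"]).fit()
    sigma_b2 = float(fit.cov_re.iloc[0, 0])          # random-intercept variance
    sigma_e2 = fit.scale                             # residual variance
    print(f"Rr2 ~ {sigma_b2 / (sigma_b2 + sigma_e2):.3f}")   # here ~1/(1+1.5^2) = 0.31
    ```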

  14. Properties of behavior under different random ratio and random interval schedules: A parametric study.

    PubMed

    Dembo, M; De Penfold, J B; Ruiz, R; Casalta, H

    1985-03-01

    Four pigeons were trained to peck a key under different values of a temporally defined independent variable (T) and different probabilities of reinforcement (p). Parameter T is a fixed repeating time cycle and p the probability of reinforcement for the first response of each cycle T. Two dependent variables were used: mean response rate and mean postreinforcement pause. For all values of p a critical value of the independent variable T was found (T = 1 sec) at which marked changes took place in response rate and postreinforcement pauses. Behavior typical of random ratio schedules was obtained at T < 1 sec and behavior typical of random interval schedules at T > 1 sec. Copyright © 1985. Published by Elsevier B.V.
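
    A small simulation of the schedule as described, under the assumption of a steady random responder: the first response of each cycle of length T is reinforced with probability p. Small T makes the payoff per response approach p (ratio-like); large T makes the reinforcement rate time-governed (interval-like). The response rate and durations are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def reinforcers_per_response(T, p, resp_rate=2.0, duration=10_000.0):
        """First response of each cycle of length T is reinforced with probability p."""
        t, last_cycle, resp, sr = 0.0, -1, 0, 0
        while t < duration:
            t += rng.exponential(1.0 / resp_rate)   # steady random responder
            resp += 1
            cycle = int(t // T)
            if cycle != last_cycle:                 # first response of this cycle
                last_cycle = cycle
                if rng.random() < p:
                    sr += 1
        return sr / resp

    for T in (0.1, 1.0, 10.0):
        print(T, round(reinforcers_per_response(T, p=0.2), 4))
    # small T: ~p per response (ratio-like); large T: ~p/(rate*T) (interval-like)
    ```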

  15. Random effects coefficient of determination for mixed and meta-analysis models.

    PubMed

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2012-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, Rr2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If Rr2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of Rr2 apart from 0 indicates evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of random effects is very large and random effects turn into free fixed effects-the model can be estimated using the dummy variable approach. We derive explicit formulas for Rr2 in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine.

  16. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    ERIC Educational Resources Information Center

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to account for the interdependencies of the multivariate…

  17. Nonrecurrence and Bell-like inequalities

    NASA Astrophysics Data System (ADS)

    Danforth, Douglas G.

    2017-12-01

    The general class, Λ, of Bell hidden variables is composed of two subclasses ΛR and ΛN such that ΛR ∪ ΛN = Λ and ΛR ∩ ΛN = {}. The class ΛN is very large and contains random variables whose domain is the continuum, the reals. There are uncountably many reals. Every instance of a real random variable is unique. The probability of two instances being equal is zero, exactly zero. ΛN induces sample independence. All correlations are context dependent but not in the usual sense. There is no "spooky action at a distance". Random variables belonging to ΛN are independent from one experiment to the next. The existence of the class ΛN makes it impossible to derive any of the standard Bell inequalities used to define quantum entanglement.

  18. Non-manipulation quantitative designs.

    PubMed

    Rumrill, Phillip D

    2004-01-01

    The article describes non-manipulation quantitative designs of two types, correlational and causal comparative studies. Both of these designs are characterized by the absence of random assignment of research participants to conditions or groups and non-manipulation of the independent variable. Without random selection or manipulation of the independent variable, no attempt is made to draw causal inferences regarding relationships between independent and dependent variables. Nonetheless, non-manipulation studies play an important role in rehabilitation research, as described in this article. Examples from the contemporary rehabilitation literature are included. Copyright 2004 IOS Press

  19. The relationships of 'ecstasy' (MDMA) and cannabis use to impaired executive inhibition and access to semantic long-term memory.

    PubMed

    Murphy, Philip N; Erwin, Philip G; Maciver, Linda; Fisk, John E; Larkin, Derek; Wareing, Michelle; Montgomery, Catharine; Hilton, Joanne; Tames, Frank J; Bradley, Belinda; Yanulevitch, Kate; Ralley, Richard

    2011-10-01

    This study aimed to examine the relationship between the consumption of ecstasy (3,4-methylenedioxymethamphetamine (MDMA)) and cannabis, and performance on the random letter generation task which generates dependent variables drawing upon executive inhibition and access to semantic long-term memory (LTM). The participant group was a between-participant independent variable with users of both ecstasy and cannabis (E/C group, n = 15), users of cannabis but not ecstasy (CA group, n = 13) and controls with no exposure to these drugs (CO group, n = 12). Dependent variables measured violations of randomness: number of repeat sequences, number of alphabetical sequences (both drawing upon inhibition) and redundancy (drawing upon access to semantic LTM). E/C participants showed significantly higher redundancy than CO participants but did not differ from CA participants. There were no significant effects for the other dependent variables. A regression model comprising intelligence measures and estimates of ecstasy and cannabis consumption predicted redundancy scores, but only cannabis consumption contributed significantly to this prediction. Impaired access to semantic LTM may be related to cannabis consumption, although the involvement of ecstasy and other stimulant drugs cannot be excluded here. Executive inhibitory functioning, as measured by the random letter generation task, is unrelated to ecstasy and cannabis consumption. Copyright © 2011 John Wiley & Sons, Ltd.

  20. Least Principal Components Analysis (LPCA): An Alternative to Regression Analysis.

    ERIC Educational Resources Information Center

    Olson, Jeffery E.

    Often, all of the variables in a model are latent, random, or subject to measurement error, or there is not an obvious dependent variable. When any of these conditions exist, an appropriate method for estimating the linear relationships among the variables is Least Principal Components Analysis. Least Principal Components are robust, consistent,…

  1. Estimating overall exposure effects for the clustered and censored outcome using random effect Tobit regression models.

    PubMed

    Wang, Wei; Griswold, Michael E

    2016-11-30

    The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. Marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for the clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects and then use the calculated difference to assess the overall exposure effect. The maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration of the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.
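
    A sketch of the quadrature step only, under assumed parameter values: the marginal mean of a left-censored-at-zero outcome in a random-intercept Tobit model is obtained by Gauss-Hermite integration over the random effect, and the overall exposure effect is the difference of two such means. This is not the authors' full MREM estimation.

    ```python
    import numpy as np
    from numpy.polynomial.hermite import hermgauss
    from scipy.stats import norm

    # y* = beta0 + beta1*exposure + b + e,  b ~ N(0, sb^2),  e ~ N(0, se^2),
    # observed y = max(y*, 0); parameters below are invented
    beta0, beta1, sb, se = -0.5, 1.0, 0.8, 1.0
    nodes, weights = hermgauss(30)

    def marginal_mean(exposure):
        b = np.sqrt(2.0) * sb * nodes            # change of variables for N(0, sb^2)
        mu = beta0 + beta1 * exposure + b
        # E[max(y*, 0) | b] for a normal y*: mu*Phi(mu/se) + se*phi(mu/se)
        cond = mu * norm.cdf(mu / se) + se * norm.pdf(mu / se)
        return (weights * cond).sum() / np.sqrt(np.pi)

    # overall exposure effect on the original outcome scale
    print(marginal_mean(1.0) - marginal_mean(0.0))
    ```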

  2. ADAPTIVE MATCHING IN RANDOMIZED TRIALS AND OBSERVATIONAL STUDIES

    PubMed Central

    van der Laan, Mark J.; Balzer, Laura B.; Petersen, Maya L.

    2014-01-01

    In many randomized and observational studies the allocation of treatment among a sample of n independent and identically distributed units is a function of the covariates of all sampled units. As a result, the treatment labels among the units are possibly dependent, complicating estimation and posing challenges for statistical inference. For example, cluster randomized trials frequently sample communities from some target population, construct matched pairs of communities from those included in the sample based on some metric of similarity in baseline community characteristics, and then randomly allocate a treatment and a control intervention within each matched pair. In this case, the observed data can neither be represented as the realization of n independent random variables, nor, contrary to current practice, as the realization of n/2 independent random variables (treating the matched pair as the independent sampling unit). In this paper we study estimation of the average causal effect of a treatment under experimental designs in which treatment allocation potentially depends on the pre-intervention covariates of all units included in the sample. We define efficient targeted minimum loss based estimators for this general design, present a theorem that establishes the desired asymptotic normality of these estimators and allows for asymptotically valid statistical inference, and discuss implementation of these estimators. We further investigate the relative asymptotic efficiency of this design compared with a design in which unit-specific treatment assignment depends only on the units’ covariates. Our findings have practical implications for the optimal design and analysis of pair matched cluster randomized trials, as well as for observational studies in which treatment decisions may depend on characteristics of the entire sample. PMID:25097298

  3. [Study on correction of data bias caused by different missing mechanisms in survey of medical expenditure among students enrolling in Urban Resident Basic Medical Insurance].

    PubMed

    Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong

    2015-05-01

    A survey of medical expenditure and its influencing factors among students enrolled in Urban Resident Basic Medical Insurance (URBMI) in Taiyuan indicated that non-response bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies that focused on a single missing mechanism, this study proposes a two-stage method that deals with both missing mechanisms simultaneously by combining multiple imputation with a sample selection model. A total of 1 190 questionnaires were returned by the students (or their parents) selected in child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. In the returned questionnaires, 2.52% of the dependent variable values were not missing at random (NMAR) and 7.14% were missing at random (MAR). First, multiple imputation was conducted for the MAR values using the completed data; then a sample selection model was used to correct the NMAR values in the multiple imputation, and a multi-factor analysis model was established. Based on 1 000 resamplings, the best scheme for filling the randomly missing values was the predictive mean matching (PMM) method at the observed missing proportion. With this optimal scheme, a two-stage survey was conducted. Finally, it was found that the influencing factors on annual medical expenditure among the students enrolled in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in hospital, seeking medical care in community health center or private clinic, hospitalization, hospitalization canceled due to certain reason, self medication and acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can deal effectively with non-response bias and selection bias in the dependent variable of survey data.

  4. Baseline-dependent effect of noise-enhanced insoles on gait variability in healthy elderly walkers.

    PubMed

    Stephen, Damian G; Wilcox, Bethany J; Niemi, James B; Franz, Jason R; Kerrigan, D Casey; D'Andrea, Susan E

    2012-07-01

    The purpose of this study was to determine whether providing subsensory stochastic-resonance mechanical vibration to the foot soles of elderly walkers could decrease gait variability. In a randomized double-blind controlled trial, 29 subjects engaged in treadmill walking while wearing sandals customized with three actuators capable of producing stochastic-resonance mechanical vibration embedded in each sole. For each subject, we determined a subsensory level of vibration stimulation. After a 5-min acclimation period of walking with the footwear, subjects were asked to walk on the treadmill for six trials, each 30s long. Trials were pair-wise random: in three trials, actuators provided subsensory vibration; in the other trials, they did not. Subjects wore reflective markers to track body motion. Stochastic-resonance mechanical stimulation exhibited baseline-dependent effects on spatial stride-to-stride variability in gait, slightly increasing variability in subjects with least baseline variability and providing greater reductions in variability for subjects with greater baseline variability (p<.001). Thus, applying stochastic-resonance mechanical vibrations on the plantar surface of the foot reduces gait variability for subjects with more variable gait. Stochastic-resonance mechanical vibrations may provide an effective intervention for preventing falls in healthy elderly walkers. Published by Elsevier B.V.

  5. Random Effects: Variance Is the Spice of Life.

    PubMed

    Jupiter, Daniel C

    Covariates in regression analyses allow us to understand how independent variables of interest impact our dependent outcome variable. Often, we consider fixed effects covariates (e.g., gender or diabetes status) for which we examine subjects at each value of the covariate. We examine both men and women and, within each gender, examine both diabetic and nondiabetic patients. Occasionally, however, we consider random effects covariates for which we do not examine subjects at every value. For example, we examine patients from only a sample of hospitals and, within each hospital, examine both diabetic and nondiabetic patients. The random sampling of hospitals is in contrast to the complete coverage of all genders. In this column I explore the differences in meaning and analysis when thinking about fixed and random effects variables. Copyright © 2016 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  6. Computer-Assisted Dieting: Effects of a Randomized Nutrition Intervention

    ERIC Educational Resources Information Center

    Schroder, Kerstin E. E.

    2011-01-01

    Objectives: To compare the effects of a computer-assisted dieting intervention (CAD) with and without self-management training on dieting among 55 overweight and obese adults. Methods: Random assignment to a single-session nutrition intervention (CAD-only) or a combined CAD plus self-management group intervention (CADG). Dependent variables were…

  7. Random walk in nonhomogeneous environments: A possible approach to human and animal mobility

    NASA Astrophysics Data System (ADS)

    Srokowski, Tomasz

    2017-03-01

    The random walk process in a nonhomogeneous medium, characterized by a Lévy stable distribution of jump length, is discussed. The width depends on position: either the position before the jump or the position after it. In the latter case, the density slope is affected by the variable width and the variance may be finite; then all kinds of anomalous diffusion are predicted. In the former case, only the time characteristics are sensitive to the variable width. The corresponding Langevin equation with different interpretations of the multiplicative noise is discussed. The dependence of the distribution width on the position after the jump is interpreted in terms of cognitive abilities and related to such problems as migration in a human population and foraging habits of animals.
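
    A toy realisation of one of the two cases discussed (width evaluated at the position before the jump), with a hypothetical width function; scipy's levy_stable supplies the stable jump lengths.

    ```python
    import numpy as np
    from scipy.stats import levy_stable

    rng = np.random.default_rng(15)
    steps = 2_000
    jumps = levy_stable.rvs(1.5, 0.0, size=steps, random_state=rng)  # stable index 1.5

    width = lambda x: 1.0 / (1.0 + abs(x))   # hypothetical position-dependent width

    x = np.zeros(steps + 1)
    for t in range(steps):
        # width taken at the position *before* the jump (the simpler case)
        x[t + 1] = x[t] + width(x[t]) * jumps[t]

    print(f"range visited: [{x.min():.1f}, {x.max():.1f}]")
    ```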

  8. Dynamic Quantum Allocation and Swap-Time Variability in Time-Sharing Operating Systems.

    ERIC Educational Resources Information Center

    Bhat, U. Narayan; Nance, Richard E.

    The effects of dynamic quantum allocation and swap-time variability on central processing unit (CPU) behavior are investigated using a model that allows both quantum length and swap-time to be state-dependent random variables. Effective CPU utilization is defined to be the proportion of a CPU busy period that is devoted to program processing, i.e.…

  9. Optimal allocation of testing resources for statistical simulations

    NASA Astrophysics Data System (ADS)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The proposed methodology determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses the multivariate t-distribution and the Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
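
    A sketch of the resampling idea the abstract describes, under one common distributional choice (the paper's exact setup may differ): given n observations, plausible population covariances are drawn from a Wishart distribution and plausible population means from a multivariate t.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    data = rng.normal(size=(15, 2)) @ [[1.0, 0.3], [0.3, 1.0]]   # 15 experiments, 2 inputs
    n, d = data.shape
    xbar, s = data.mean(axis=0), np.cov(data, rowvar=False)

    # plausible population parameters given only n data points
    cov_draws = stats.wishart(df=n - 1, scale=s / (n - 1)).rvs(1000, random_state=rng)
    mean_draws = stats.multivariate_t(loc=xbar, shape=s / n, df=n - d).rvs(1000, random_state=rng)

    # the spread of these draws shrinks as n grows -- the value of extra experiments
    print(mean_draws.std(axis=0), cov_draws[:, 0, 0].std())
    ```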

  10. Relevance of anisotropy and spatial variability of gas diffusivity for soil-gas transport

    NASA Astrophysics Data System (ADS)

    Schack-Kirchner, Helmer; Kühne, Anke; Lang, Friederike

    2017-04-01

    Models of soil gas transport generally consider neither direction dependence of gas diffusivity nor its small-scale variability. However, in a recent study, we could provide evidence for anisotropy favouring vertical gas diffusion in natural soils. We hypothesize that gas transport models based on gas diffusion data measured with soil rings are strongly influenced by both anisotropy and spatial variability, and that the use of averaged diffusivities could be misleading. To test this we used a 2-dimensional model of soil gas transport under compacted wheel tracks to model the soil-air oxygen distribution in the soil. The model was parametrized with data obtained from soil-ring measurements, using their central tendency and variability. The model includes vertical parameter variability as well as variation perpendicular to the elongated wheel track. Three parametrization types were tested: (i) averaged values for wheel track and undisturbed soil; (ii) random distribution of soil cells with normally distributed variability within the strata; (iii) random distribution of soil cells with uniformly distributed variability within the strata. Each type of small-scale variability was tested for (j) isotropic gas diffusivity and (jj) horizontally reduced gas diffusivity (by a constant factor), yielding six models in total. As expected, the different parametrizations had an important influence on the aeration state under wheel tracks, with the strongest oxygen depletion in the case of uniformly distributed variability and anisotropy towards higher vertical diffusivity. This simple simulation approach clearly showed the relevance of anisotropy and spatial variability for identical central-tendency measures of gas diffusivity. However, it did not yet consider spatial dependency of the variability, which could aggravate the effects further. To account for anisotropy and spatial variability in gas transport models, we recommend (a) measuring soil-gas transport parameters spatially explicitly, including different directions, and (b) using random-field stochastic models to assess the possible effects on gas-exchange models.

  11. Stochastic and Statistical Analysis of Utility Revenues and Weather Data Analysis for Consumer Demand Estimation in Smart Grids

    PubMed Central

    Ali, S. M.; Mehmood, C. A; Khan, B.; Jawad, M.; Farid, U; Jadoon, J. K.; Ali, M.; Tareen, N. K.; Usman, S.; Majid, M.; Anwar, S. M.

    2016-01-01

    In the smart grid paradigm, consumer demands are random and time-dependent, following stochastic probabilities. The stochastically varying consumer demands have put the policy makers and supplying agencies in a demanding position for optimal generation management. The utility revenue functions are highly dependent on the consumer deterministic stochastic demand models. Sudden drifts in weather parameters affect the living standards of the consumers, which in turn influence the power demands. Considering the above, we analyzed stochastically and statistically the effect of random consumer demands on the fixed and variable revenues of the electrical utilities. Our work presents a Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of the utility revenues with time-dependent consumer random demands. Moreover, the Gaussian probabilities outcome of the utility revenues is based on the varying consumer demands data pattern. Furthermore, Standard Monte Carlo (SMC) simulations are performed that validated the accuracy of the aforesaid probabilistic demand-revenue model. We critically analyzed the effect of weather data parameters on consumer demands using correlation and multi-linear regression schemes. The statistical analysis of consumer demands provided a relationship between the dependent variable (demand) and independent variables (weather data) for utility load management, generation control, and network expansion. PMID:27314229

  12. Stochastic and Statistical Analysis of Utility Revenues and Weather Data Analysis for Consumer Demand Estimation in Smart Grids.

    PubMed

    Ali, S M; Mehmood, C A; Khan, B; Jawad, M; Farid, U; Jadoon, J K; Ali, M; Tareen, N K; Usman, S; Majid, M; Anwar, S M

    2016-01-01

    In the smart grid paradigm, consumer demands are random and time-dependent, following stochastic probabilities. The stochastically varying consumer demands have put the policy makers and supplying agencies in a demanding position for optimal generation management. The utility revenue functions are highly dependent on the consumer deterministic stochastic demand models. Sudden drifts in weather parameters affect the living standards of the consumers, which in turn influence the power demands. Considering the above, we analyzed stochastically and statistically the effect of random consumer demands on the fixed and variable revenues of the electrical utilities. Our work presents a Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of the utility revenues with time-dependent consumer random demands. Moreover, the Gaussian probabilities outcome of the utility revenues is based on the varying consumer demands data pattern. Furthermore, Standard Monte Carlo (SMC) simulations are performed that validated the accuracy of the aforesaid probabilistic demand-revenue model. We critically analyzed the effect of weather data parameters on consumer demands using correlation and multi-linear regression schemes. The statistical analysis of consumer demands provided a relationship between the dependent variable (demand) and independent variables (weather data) for utility load management, generation control, and network expansion.
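
    Not the paper's model, but a minimal Standard Monte Carlo illustration of the ingredients named in both records above: correlated Gaussian consumer demands feeding a fixed-plus-variable revenue function. All numbers are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # correlated Gaussian demands for three periods (stand-in for the paper's
    # multivariate Gaussian consumer demand model)
    mu = np.array([30.0, 45.0, 60.0])                 # MWh
    cov = np.array([[9.0, 4.0, 2.0],
                    [4.0, 16.0, 6.0],
                    [2.0, 6.0, 25.0]])

    fixed_charge, price = 1_000.0, 80.0               # $ and $/MWh (illustrative)
    demand = rng.multivariate_normal(mu, cov, size=100_000)
    revenue = fixed_charge + price * demand.sum(axis=1)

    # Standard Monte Carlo summary of the utility's revenue distribution
    print(f"mean ${revenue.mean():,.0f}, std ${revenue.std():,.0f}")
    ```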

  13. A Dynamic Bayesian Network Model for the Production and Inventory Control

    NASA Astrophysics Data System (ADS)

    Shin, Ji-Sun; Takazaki, Noriyuki; Lee, Tae-Hong; Kim, Jin-Il; Lee, Hee-Hyol

    In general, production quantities and delivered goods change randomly, and the total stock therefore also changes randomly. This paper deals with production and inventory control using a Dynamic Bayesian Network. A Bayesian network is a probabilistic model which represents the qualitative dependence between two or more random variables by the graph structure, and indicates the quantitative relations between individual variables by the conditional probability. The probabilistic distribution of the total stock is calculated through the propagation of the probability on the network. Moreover, an adjustment rule for the production quantities that maintains the probabilities of the total stock reaching a lower limit and a ceiling at certain values is shown.

  14. Sensitivity Analysis for Multivalued Treatment Effects: An Example of a Cross-Country Study of Teacher Participation and Job Satisfaction

    ERIC Educational Resources Information Center

    Chang, Chi

    2015-01-01

    It is known that interventions are hard to assign randomly to subjects in social psychological studies, because randomized control is difficult to implement strictly and precisely. Thus, in nonexperimental studies and observational studies, controlling the impact of covariates on the dependent variables and addressing the robustness of the…

  15. Enhancing Multimedia Imbalanced Concept Detection Using VIMP in Random Forests.

    PubMed

    Sadiq, Saad; Yan, Yilin; Shyu, Mei-Ling; Chen, Shu-Ching; Ishwaran, Hemant

    2016-07-01

    Recent developments in social media and cloud storage have led to an exponential growth in the amount of multimedia data, which increases the complexity of managing, storing, indexing, and retrieving information from such big data. Many current content-based concept detection approaches fall short of successfully bridging the semantic gap. To solve this problem, a multi-stage random forest framework is proposed to generate predictor variables based on multivariate regressions using variable importance (VIMP). By fine tuning the forests and significantly reducing the predictor variables, the concept detection scores are evaluated when the concept of interest is rare and imbalanced, i.e., having little collaboration with other high-level concepts. Using classical multivariate statistics, estimating the value of one coordinate using other coordinates standardizes the covariates, and it depends upon the variance of the correlations instead of the mean. Thus, conditional dependence on the data being normally distributed is eliminated. Experimental results demonstrate that the proposed framework outperforms the approaches in the comparison in terms of Mean Average Precision (MAP) values.

  16. [Gene method for inconsistent hydrological frequency calculation. I: Inheritance, variability and evolution principles of hydrological genes].

    PubMed

    Xie, Ping; Wu, Zi Yi; Zhao, Jiang Yan; Sang, Yan Fang; Chen, Jie

    2018-04-01

    A stochastic hydrological process is influenced by both stochastic and deterministic factors. A hydrological time series contains not only pure random components reflecting its inheritance characteristics, but also deterministic components reflecting variability characteristics, such as jump, trend, period, and stochastic dependence. As a result, the stochastic hydrological process presents complicated evolution phenomena and rules. To better understand these complicated phenomena and rules, this study described the inheritance and variability characteristics of an inconsistent hydrological series from two aspects: stochastic process simulation and time series analysis. In addition, several frequency analysis approaches for inconsistent time series were compared to reveal the main problems in inconsistency study. Then, we proposed a new concept of hydrological genes, originating from biological genes, to describe the inconsistent hydrological processes. The hydrological genes were constructed using moment methods, such as general moments, weight function moments, probability weighted moments and L-moments. Meanwhile, the five components, including jump, trend, periodic, dependence and pure random components, of a stochastic hydrological process were defined as five hydrological bases. With this method, the inheritance and variability of inconsistent hydrological time series were synthetically considered and the inheritance, variability and evolution principles were fully described. Our study would contribute to revealing the inheritance, variability and evolution principles in the probability distribution of hydrological elements.

  17. Resistance controllability and variability improvement in a TaO{sub x}-based resistive memory for multilevel storage application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prakash, A., E-mail: amitknp@postech.ac.kr; Song, J.; Hwang, H., E-mail: hwanghs@postech.ac.kr

    In order to obtain reliable multilevel cell (MLC) characteristics, resistance controllability between the different resistance levels is required, especially in resistive random access memory (RRAM), which is prone to resistance variability mainly due to the intrinsic random nature of defect generation and filament formation. In this study, we have thoroughly investigated the multilevel resistance variability in a TaO{sub x}-based nanoscale (<30 nm) RRAM operated in MLC mode. It is found that the resistance variability not only depends on the conductive filament size but also is a strong function of the oxygen vacancy concentration in it. Based on the insights gained through experimental observations and simulation, it is suggested that forming a thinner but denser conductive filament may greatly improve the temporal resistance variability even at low operation current, despite the inherent stochastic nature of the resistance switching process.

  18. Reducing seed dependent variability of non-uniformly sampled multidimensional NMR data

    NASA Astrophysics Data System (ADS)

    Mobli, Mehdi

    2015-07-01

    The application of NMR spectroscopy to study the structure, dynamics and function of macromolecules requires the acquisition of several multidimensional spectra. The one-dimensional NMR time-response from the spectrometer is extended to additional dimensions by introducing incremented delays in the experiment that cause oscillation of the signal along "indirect" dimensions. For a given dimension the delay is incremented at twice the rate of the maximum frequency (Nyquist rate). Achieving high resolution requires the acquisition of long data records sampled at the Nyquist rate. This is typically a prohibitive step due to time constraints, resulting in sub-optimal data records to the detriment of subsequent analyses. The multidimensional NMR spectrum itself is typically sparse, and it has been shown that in such cases it is possible to use non-Fourier methods to reconstruct a high-resolution multidimensional spectrum from a random subset of non-uniformly sampled (NUS) data. For a given acquisition time, NUS has the potential to improve the sensitivity and resolution of a multidimensional spectrum, compared to traditional uniform sampling. The improvements in sensitivity and/or resolution achieved by NUS are heavily dependent on the distribution of points in the random subset acquired. Typically, random points are selected from a probability density function (PDF) weighted according to the NMR signal envelope. In extreme cases as little as 1% of the data is subsampled. The heavy under-sampling can result in poor reproducibility, i.e. different results when two experiments are carried out in which the same number of random samples is selected from the same PDF but using different random seeds. Here, a jittered sampling approach is introduced that is shown to improve random seed dependent reproducibility of multidimensional spectra generated from NUS data, compared to commonly applied NUS methods. It is shown that this is achieved due to the low variability of the inherent sensitivity of the random subset chosen from a given PDF. Finally, it is demonstrated that metrics used to find optimal NUS distributions are heavily dependent on the inherent sensitivity of the random subset, and such optimisation is therefore less critical when using the proposed sampling scheme.
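
    A hedged sketch contrasting plain weighted NUS with one possible jittered scheme (the paper's exact scheme may differ): the cumulative sampling density is split into equal strata and one grid point is drawn per stratum, so every seed covers the decaying envelope evenly. Grid size, subset size, and decay constant are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    n_grid, k = 256, 64                      # Nyquist grid points, points to sample
    t = np.arange(n_grid)
    pdf = np.exp(-t / 80.0)                  # density matched to the signal envelope
    pdf /= pdf.sum()

    # plain weighted NUS: high seed-to-seed variability
    plain = np.sort(rng.choice(n_grid, size=k, replace=False, p=pdf))

    # jittered NUS: one draw inside each of k equal strata of the cumulative density
    cdf = np.cumsum(pdf)
    edges = (np.arange(k) + rng.random(k)) / k       # one jittered target per stratum
    jittered = np.unique(np.searchsorted(cdf, edges))

    print(plain[:10])
    print(jittered[:10])
    ```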

  19. Comparison of Random Forest and Parametric Imputation Models for Imputing Missing Data Using MICE: A CALIBER Study

    PubMed Central

    Shah, Anoop D.; Bartlett, Jonathan W.; Carpenter, James; Nicholas, Owen; Hemingway, Harry

    2014-01-01

    Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The “true” imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001–2010) with complete data on all covariates. Variables were artificially made “missing at random,” and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data. PMID:24589914

  20. Comparison of random forest and parametric imputation models for imputing missing data using MICE: a CALIBER study.

    PubMed

    Shah, Anoop D; Bartlett, Jonathan W; Carpenter, James; Nicholas, Owen; Hemingway, Harry

    2014-03-15

    Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The "true" imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001-2010) with complete data on all covariates. Variables were artificially made "missing at random," and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data.
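
    A single-imputation analogue of the random-forest MICE idea in both records above, using scikit-learn's IterativeImputer (full MICE would add proper between-imputation draws and multiple completed data sets); the data-generating step mimics the second study's nonlinear dependence.

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(11)
    n = 500
    x1 = rng.normal(size=n)
    x2 = x1**2 + 0.3 * rng.normal(size=n)      # nonlinear dependence on x1
    X = np.column_stack([x1, x2])
    X[rng.random(n) < 0.3, 1] = np.nan         # ~30% of x2 made missing (MCAR here)

    imputer = IterativeImputer(
        estimator=RandomForestRegressor(n_estimators=50, random_state=0),
        max_iter=5, random_state=0,
    )
    X_completed = imputer.fit_transform(X)
    print(X_completed[:5])
    ```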

  1. Testing the effectiveness of in-home behavioral economics strategies to increase vegetable intake, liking, and variety among children residing in households that receive food assistance.

    PubMed

    Leak, Tashara M; Swenson, Alison; Vickers, Zata; Mann, Traci; Mykerezi, Elton; Redden, Joseph P; Rendahl, Aaron; Reicks, Marla

    2015-01-01

    To test the effectiveness of behavioral economics strategies for increasing vegetable intake, variety, and liking among children residing in homes receiving food assistance. A randomized controlled trial with data collected at baseline, once weekly for 6 weeks, and at study conclusion. Family homes. Families with a child (9-12 years) will be recruited through community organizations and randomly assigned to an intervention (n = 36) or control (n = 10) group. The intervention group will incorporate a new behavioral economics strategy during home dinner meal occasions each week for 6 weeks. Strategies are simple and low-cost. The primary dependent variable will be the child's dinner meal vegetable consumption based on weekly reports by caregivers. Fixed independent variables will include the strategy and week of strategy implementation. Secondary dependent variables will include vegetable liking and variety of vegetables consumed based on data collected at baseline and study conclusion. Mean vegetable intake for each strategy across families will be compared using a mixed-model analysis of variance with a random effect for child. In addition, overall mean changes in vegetable consumption, variety, and liking will be compared between intervention and control groups. Copyright © 2015 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  2. Inference for binomial probability based on dependent Bernoulli random variables with applications to meta‐analysis and group level studies

    PubMed Central

    Bakbergenuly, Ilyas; Morgenthaler, Stephan

    2016-01-01

    We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose a bias-correction for the arcsine transformation. Our simulations demonstrate that this bias-correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. PMID:27192062
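
    A quick simulation of the bias being described, assuming a beta-binomial model for the overdispersed counts with intracluster correlation ρ (the paper's setup may differ in detail): the mean of the transformed estimates is compared with the transform of the true p.

    ```python
    import numpy as np

    rng = np.random.default_rng(12)
    p, rho, n, reps = 0.2, 0.05, 50, 200_000

    # overdispersed binomial via beta-binomial with intracluster correlation rho
    a, b = p * (1 - rho) / rho, (1 - p) * (1 - rho) / rho
    phat = rng.binomial(n, rng.beta(a, b, size=reps)) / n
    phat = np.clip(phat, 0.5 / n, 1 - 0.5 / n)        # avoid 0/1 in the log-odds

    logit = np.log(phat / (1 - phat))
    arcsine = np.arcsin(np.sqrt(phat))
    print(f"log-odds bias: {logit.mean() - np.log(p / (1 - p)):+.4f}")
    print(f"arcsine  bias: {arcsine.mean() - np.arcsin(np.sqrt(p)):+.4f}")
    ```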

  3. Random Predictor Models for Rigorous Uncertainty Quantification: Part 2

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.

  4. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. In contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.
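
    A flavor of the optimization-based construction can be given with a simpler relative of RPMs, the interval predictor model: polynomial upper and lower bounds of minimal average width that contain all observations, posed as a linear program. This is a sketch of the general idea only, on synthetic data; the papers' actual formulations prescribe means, variances, and ranges of the model parameters, which this toy problem does not.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(1)

        # Synthetic input-output data with input-dependent scatter.
        x = np.sort(rng.uniform(-1, 1, 80))
        y = x**2 + 0.2 * (1 + x) * rng.standard_normal(80)

        d = 3                                # polynomial degree of the bounds
        V = np.vander(x, d + 1)              # design matrix for both bounds

        # Variables: coefficients u (upper bound) and l (lower bound).
        # Minimize the average band width subject to V @ l <= y <= V @ u.
        c = np.concatenate([V.mean(axis=0), -V.mean(axis=0)])
        A_ub = np.block([[-V, np.zeros_like(V)],   # enforce V @ u >= y
                         [np.zeros_like(V), V]])   # enforce V @ l <= y
        b_ub = np.concatenate([-y, y])

        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * (2 * (d + 1)))
        u, l = res.x[:d + 1], res.x[d + 1:]
        print("upper-bound coefficients:", np.round(u, 3))
        print("lower-bound coefficients:", np.round(l, 3))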

  5. Estimating degradation in real time and accelerated stability tests with random lot-to-lot variation: a simulation study.

    PubMed

    Magari, Robert T

    2002-03-01

    The effect of different lot-to-lot variability levels on the prediction of stability is studied based on two statistical models for estimating degradation in real-time and accelerated stability tests. Lot-to-lot variability is considered random in both models and is attributed to two sources: variability at time zero and variability of the degradation rate. Real-time stability tests are modeled as a function of time, while accelerated stability tests are modeled as a function of time and temperature. Several data sets were simulated, and a maximum likelihood approach was used for estimation. The 95% confidence intervals for the degradation rate depend on the amount of lot-to-lot variability. When lot-to-lot degradation rate variability is relatively large (CV ≥ 8%), the estimated confidence intervals do not represent the trend for individual lots. In such cases it is recommended to analyze each lot individually. Copyright 2002 Wiley-Liss, Inc. and the American Pharmaceutical Association. J Pharm Sci 91: 893-899, 2002.

  6. Human choice among five alternatives when reinforcers decay.

    PubMed

    Rothstein, Jacob B; Jensen, Greg; Neuringer, Allen

    2008-06-01

    Human participants played a computer game in which choices among five alternatives were concurrently reinforced according to dependent random-ratio schedules. "Dependent" indicates that choices to any of the wedges activated the random-number generators governing reinforcers on all five alternatives. Two conditions were compared. In the hold condition, once scheduled, a reinforcer - worth a constant five points - remained available until it was collected. In the decay condition, point values decreased with intervening responses, i.e., rapid collection was differentially reinforced. Slopes of matching functions were higher in the decay condition than in the hold condition. However, inter-subject variability was high in both conditions.

  7. Measuring monotony in two-dimensional samples

    NASA Astrophysics Data System (ADS)

    Kachapova, Farida; Kachapov, Ilias

    2010-04-01

    This note introduces a monotony coefficient as a new measure of the monotone dependence in a two-dimensional sample. Some properties of this measure are derived. In particular, it is shown that the absolute value of the monotony coefficient for a two-dimensional sample lies between |r| and 1, where r is the Pearson correlation coefficient for the sample, and that the monotony coefficient equals 1 for any monotone increasing sample and -1 for any monotone decreasing sample. This article contains a few examples demonstrating that the monotony coefficient is a more accurate measure of the degree of monotone dependence for a non-linear relationship than the Pearson, Spearman, and Kendall correlation coefficients. The monotony coefficient is a tool that can be applied to samples in order to find dependencies between random variables; it is especially useful for finding pairs of dependent variables in a big dataset of many variables. Undergraduate students in mathematics and science would benefit from learning and applying this measure of monotone dependence.
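
    The monotony coefficient itself is defined in the article, but the gap it is meant to close is easy to see with scipy on synthetic data: for a strictly monotone, strongly non-linear sample, Pearson's r falls well below 1 while the rank-based measures saturate at 1.

        import numpy as np
        from scipy.stats import pearsonr, spearmanr, kendalltau

        rng = np.random.default_rng(2)

        # A strictly monotone but strongly non-linear relationship.
        x = rng.uniform(0, 1, 500)
        y = np.exp(6 * x)

        print("Pearson :", round(pearsonr(x, y)[0], 3))   # well below 1
        print("Spearman:", round(spearmanr(x, y)[0], 3))  # 1 (rank-based)
        print("Kendall :", round(kendalltau(x, y)[0], 3)) # 1 (rank-based)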

  8. Fatigue reliability of steel highway bridge details.

    DOT National Transportation Integrated Search

    2001-08-01

    The expected life of a steel highway bridge subjected to random, variable-amplitude traffic cycles is highly dependent on damage accumulation caused by various fatigue mechanisms. This study addressed some of the issues associated with developing pro...

  9. Encoding dependence in Bayesian causal networks

    USDA-ARS?s Scientific Manuscript database

    Bayesian networks (BNs) represent complex, uncertain spatio-temporal dynamics by propagation of conditional probabilities between identifiable states with a testable causal interaction model. Typically, they assume random variables are discrete in time and space with a static network structure that ...

  10. Selective Influence through Conditional Independence.

    ERIC Educational Resources Information Center

    Dzhafarov, Ehtibar N.

    2003-01-01

    Presents a generalization and improvement for the definition proposed by E. Dzhafarov (2001) for selectiveness in the dependence of several random variables on several (sets of) external factors. This generalization links the notion of selective influence with that of conditional independence. (SLD)

  11. Estimating the signal-to-noise ratio of AVIRIS data

    NASA Technical Reports Server (NTRS)

    Curran, Paul J.; Dungan, Jennifer L.

    1988-01-01

    To make the best use of narrowband airborne visible/infrared imaging spectrometer (AVIRIS) data, an investigator needs to know the ratio of signal to random variability or noise (signal-to-noise ratio or SNR). The signal is land cover dependent and varies with both wavelength and atmospheric absorption; random noise comprises sensor noise and intrapixel variability (i.e., variability within a pixel). The three existing methods for estimating the SNR are inadequate, since typical laboratory methods inflate while dark current and image methods deflate the SNR. A new procedure is proposed called the geostatistical method. It is based on the removal of periodic noise by notch filtering in the frequency domain and the isolation of sensor noise and intrapixel variability using the semi-variogram. This procedure was applied easily and successfully to five sets of AVIRIS data from the 1987 flying season and could be applied to remotely sensed data from broadband sensors.
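
    A minimal version of the geostatistical idea, on synthetic one-dimensional data: the semivariogram of signal-plus-noise has a nugget (its intercept at zero lag) equal to the noise variance, so extrapolating the first few lags back to lag zero isolates the sensor noise. Everything here (transect model, noise level, lag range) is an illustrative assumption, not the AVIRIS processing chain.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic transect: spatially correlated signal plus white noise.
        n = 2000
        signal = np.cumsum(rng.standard_normal(n)) * 0.05
        z = signal + 0.3 * rng.standard_normal(n)   # true noise s.d. = 0.3

        # Empirical semivariogram gamma(h) = 0.5 * E[(z(x+h) - z(x))^2].
        lags = np.arange(1, 11)
        gamma = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])

        # Extrapolate the first lags to h = 0; the intercept (nugget)
        # estimates the noise variance, separating it from the signal.
        slope, nugget = np.polyfit(lags[:4], gamma[:4], 1)
        print(f"estimated noise s.d.: {np.sqrt(nugget):.3f} (true 0.3)")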

  12. Continuous-Time Random Walk with multi-step memory: an application to market dynamics

    NASA Astrophysics Data System (ADS)

    Gubiec, Tomasz; Kutner, Ryszard

    2017-11-01

    An extended version of the Continuous-Time Random Walk (CTRW) model with memory is developed herein. This memory involves the dependence between an arbitrary number of successive jumps of the process, while the waiting times between jumps are considered i.i.d. random variables. The dependence was established by analyzing empirical histograms for the stochastic process of a single share price on a market at the high-frequency time scale. It was then justified theoretically by considering the bid-ask bounce mechanism, which contains a delay characteristic of any double-auction market. Our model turns out to be exactly analytically solvable. It therefore enables a direct comparison of its predictions with their empirical counterparts, for instance with the empirical velocity autocorrelation function. Thus, the present research significantly extends the capabilities of the CTRW formalism. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.

  13. Uncertainty Quantification in Scale-Dependent Models of Flow in Porous Media: SCALE-DEPENDENT UQ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, A. M.; Panzeri, M.; Tartakovsky, G. D.

    Equations governing flow and transport in heterogeneous porous media are scale-dependent. We demonstrate that it is possible to identify a support scale $\eta^*$ such that the typically employed approximate formulations of Moment Equations (ME) yield accurate (statistical) moments of a target environmental state variable. Under these circumstances, the ME approach can be used as an alternative to the Monte Carlo (MC) method for Uncertainty Quantification in diverse fields of Earth and environmental sciences. MEs are directly satisfied by the leading moments of the quantities of interest and are defined on the same support scale as the governing stochastic partial differential equations (PDEs). Computable approximations of the otherwise exact MEs can be obtained through perturbation expansion of the moments of the state variables in orders of the standard deviation of the random model parameters. As such, their convergence is guaranteed only for standard deviations smaller than one. We demonstrate our approach in the context of steady-state groundwater flow in a porous medium with a spatially random hydraulic conductivity.

  14. Compensation for Lithography Induced Process Variations during Physical Design

    NASA Astrophysics Data System (ADS)

    Chin, Eric Yiow-Bing

    This dissertation addresses the challenge of designing robust integrated circuits in the deep sub-micron regime in the presence of lithography process variability. By extending and combining existing process and circuit analysis techniques, flexible software frameworks are developed to provide detailed studies of circuit performance in the presence of lithography variations such as focus and exposure. Applications of these software frameworks to select circuits demonstrate the electrical impact of these variations and provide insight into variability-aware compact models that capture the process-dependent circuit behavior. These variability-aware timing models abstract lithography variability from the process level to the circuit level and are used to estimate path-level circuit performance with high accuracy and very little overhead in runtime.
    The Interconnect Variability Characterization (IVC) framework maps lithography-induced geometrical variations at the interconnect level to electrical delay variations. This framework is applied to one-dimensional repeater circuits patterned with both 90nm single patterning and 32nm double patterning technologies, under the presence of focus, exposure, and overlay variability. Studies indicate that single and double patterning layouts generally exhibit small variations in delay (between 1-3%) due to self-compensating RC effects associated with dense layouts, and overlay errors for layouts without self-compensating RC effects. The delay response of each double-patterned interconnect structure is fit with a second-order polynomial model in the focus, exposure, and misalignment parameters, with 12 coefficients and residuals of less than 0.1ps. The IVC framework is also applied to a repeater circuit with cascaded interconnect structures to emulate more complex layout scenarios, and it is observed that the variations on each segment average out to reduce the overall delay variation.
    The Standard Cell Variability Characterization (SCVC) framework advances existing layout-level lithography-aware circuit analysis by extending it to cell-level applications, utilizing a physically accurate approach that integrates process simulation, compact transistor models, and circuit simulation to characterize electrical cell behavior. This framework is applied to combinational and sequential cells in the Nangate 45nm Open Cell Library, and the timing response of these cells to lithography focus and exposure variations demonstrates Bossung-like behavior. This behavior permits the process-parameter-dependent response to be captured in a nine-term variability-aware compact model based on Bossung fitting equations. For a two-input NAND gate, the variability-aware compact model captures the simulated response to an accuracy of 0.3%. The SCVC framework is also applied to investigate advanced process effects, including misalignment and layout proximity. The abstraction of process variability from the layout level to the cell level opens up an entire new realm of circuit analysis and optimization and provides a foundation for path-level variability analysis without the computationally expensive costs associated with joint process and circuit simulation. The SCVC framework is used, with slight modification, to illustrate the speedup and accuracy tradeoffs of using compact models. With variability-aware compact models, the process-dependent performance of a three-stage logic circuit can be estimated to an accuracy of 0.7% with a speedup of over 50,000.
    Path-level variability analysis also provides an accurate estimate (within 1%) of ring oscillator period in well under a second. Another significant advantage of variability-aware compact models is that they can be easily incorporated into existing design methodologies for design optimization. This is demonstrated by applying cell swapping on a logic circuit to reduce the overall delay variability along a circuit path. By including these variability-aware compact models in cell characterization libraries, design metrics such as circuit timing, power, area, and delay variability can be quickly assessed to optimize for the correct balance of all design metrics, including delay variability. Deterministic lithography variations can be easily captured using the variability-aware compact models described in this dissertation. However, another prominent source of variability is random dopant fluctuations, which affect transistor threshold voltage and, in turn, circuit performance. The SCVC framework is utilized to investigate the interactions between deterministic lithography variations and random dopant fluctuations. Monte Carlo studies show that the output delay distribution in the presence of random dopant fluctuations is dependent on lithography focus and exposure conditions, with a 3.6 ps change in standard deviation across the focus-exposure process window. This indicates that the electrical impact of random variations depends on systematic lithography variations, and this dependency should be included for precise analysis.

  15. Lossy Information Exchange and Instantaneous Communications

    DTIC Science & Technology

    2015-09-17

    Renyi maximal correlation. In... 1959, Renyi proposed a measure of the dependence between two random variables, by finding a pair of... one variable corresponds to the user, the other to the service. Finding the Renyi correlation in...

  16. Simulation of Crack Propagation in Engine Rotating Components under Variable Amplitude Loading

    NASA Technical Reports Server (NTRS)

    Bonacuse, P. J.; Ghosn, L. J.; Telesman, J.; Calomino, A. M.; Kantzos, P.

    1998-01-01

    The crack propagation life of tested specimens has been repeatedly shown to strongly depend on the loading history. Overloads and extended stress holds at temperature can either retard or accelerate the crack growth rate. Therefore, to accurately predict the crack propagation life of an actual component, it is essential to approximate the true loading history. In military rotorcraft engine applications, the loading profile (stress amplitudes, temperature, and number of excursions) can vary significantly depending on the type of mission flown. To accurately assess the durability of a fleet of engines, the crack propagation life distribution of a specific component should account for the variability in the missions performed (proportion of missions flown and sequence). In this report, analytical and experimental studies are described that calibrate/validate the crack propagation prediction capability for a disk alloy under variable amplitude loading. A crack closure based model was adopted to analytically predict the load interaction effects. Furthermore, a methodology has been developed to realistically simulate the actual mission mix loading on a fleet of engines over their lifetime. A sequence of missions is randomly selected and the number of repeats of each mission in the sequence is determined assuming a Poisson distributed random variable with a given mean occurrence rate. Multiple realizations of random mission histories are generated in this manner and are used to produce stress, temperature, and time points for fracture mechanics calculations. The result is a cumulative distribution of crack propagation lives for a given, life limiting, component location. This information can be used to determine a safe retirement life or inspection interval for the given location.
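
    The mission-mix simulation lends itself to a compact sketch: draw a random sequence of mission types and a Poisson number of repeats for each, yielding one realization of a loading history. The mission names, proportions, and mean occurrence rate below are invented placeholders, not values from the report.

        import numpy as np

        rng = np.random.default_rng(4)

        # Invented mission types, usage proportions, and mean repeat rate.
        missions = np.array(["training", "transport", "combat"])
        proportions = [0.5, 0.3, 0.2]
        mean_repeats = 8

        def simulate_history(n_blocks=25):
            """One realization of a mission-mix loading history: a random
            mission sequence with Poisson-distributed repeats per mission."""
            history = []
            for _ in range(n_blocks):
                mission = rng.choice(missions, p=proportions)
                history.extend([mission] * rng.poisson(mean_repeats))
            return history

        # Each realization feeds fracture-mechanics calculations; many of
        # them yield a distribution of crack-propagation lives.
        hist = simulate_history()
        print(len(hist), hist[:8])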

  17. Inference for binomial probability based on dependent Bernoulli random variables with applications to meta-analysis and group level studies.

    PubMed

    Bakbergenuly, Ilyas; Kulinskaya, Elena; Morgenthaler, Stephan

    2016-07-01

    We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose bias-correction for the arcsine transformation. Our simulations demonstrate that this bias-correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. © 2016 The Authors. Biometrical Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  18. Solution of the finite Milne problem in stochastic media with RVT Technique

    NASA Astrophysics Data System (ADS)

    Slama, Howida; El-Bedwhey, Nabila A.; El-Depsy, Alia; Selim, Mustafa M.

    2017-12-01

    This paper presents the solution of the Milne problem in the steady state with an isotropic scattering phase function. The properties of the medium are considered stochastic, with Gaussian or exponential distributions, and hence the problem is treated as a stochastic integro-differential equation. To obtain explicit forms for the radiant energy density, the linear extrapolation distance, the reflectivity, and the transmissivity in the deterministic case, the problem is solved using the Pomraning-Eddington method. The obtained solution is found to depend on the optical space variable and the thickness of the medium, which are considered random variables. The random variable transformation (RVT) technique is used to find the first probability density function (1-PDF) of the solution process. Then the stochastic linear extrapolation distance, reflectivity, and transmissivity are calculated. For illustration, numerical results with conclusions are provided.

  19. Rupture Propagation for Stochastic Fault Models

    NASA Astrophysics Data System (ADS)

    Favreau, P.; Lavallee, D.; Archuleta, R.

    2003-12-01

    The inversion of strong-motion data from large earthquakes gives the spatial distribution of pre-stress on the ruptured faults, and it can be partially reproduced by stochastic models, but a fundamental question remains: how does rupture propagate in the presence of spatial heterogeneity? For this purpose we investigate how the underlying random variables that control the pre-stress spatial variability condition the propagation of the rupture. Two stochastic models of pre-stress distributions are considered, based on Cauchy and Gaussian random variables, respectively. The parameters of the two stochastic models have values corresponding to the slip distribution of the 1979 Imperial Valley earthquake. We use a finite difference code to simulate the spontaneous propagation of shear rupture on a flat fault in a 3D elastic continuum. The friction law is a slip-dependent friction law. The simulations show that the propagation of the rupture front is more complex, incoherent, or snake-like for a pre-stress distribution based on Cauchy random variables. This may be related to the presence of a higher number of asperities in this case. These simulations suggest that directivity is stronger in the Cauchy scenario, compared with the smoother rupture of the Gauss scenario.

  20. A comparison of three random effects approaches to analyze repeated bounded outcome scores with an application in a stroke revalidation study.

    PubMed

    Molas, Marek; Lesaffre, Emmanuel

    2008-12-30

    Discrete bounded outcome scores (BOS), i.e. discrete measurements that are restricted to a finite interval, often occur in practice. Examples are compliance measures, quality-of-life measures, etc. In this paper we examine three related random effects approaches to analyzing longitudinal studies with a BOS as response: (1) a linear mixed effects (LM) model applied to a logistic-transformed modified BOS; (2) a model assuming that the discrete BOS is a coarsened version of a latent random variable which, after a logistic-normal transformation, satisfies an LM model; and (3) a random effects probit model. We also consider the extension whereby the variability of the BOS is allowed to depend on covariates. The methods are contrasted using a simulation study and a longitudinal project which documents stroke rehabilitation in four European countries using measures of motor and functional recovery. Copyright 2008 John Wiley & Sons, Ltd.

  1. Effect of randomness in logistic maps

    NASA Astrophysics Data System (ADS)

    Khaleque, Abdul; Sen, Parongama

    2015-01-01

    We study a random logistic map x_{t+1} = a_t x_t [1 - x_t], where the a_t are bounded random variables (q_1 ≤ a_t ≤ q_2) independently drawn from a distribution. x_t does not show any regular behavior in time. We find that x_t shows fully ergodic behavior when the maximum allowed value of a_t is 4. However, the average of x_t over different realizations reaches a fixed point. For 1 ≤ a_t ≤ 4, the system shows nonchaotic behavior and the Lyapunov exponent is strongly dependent on the asymmetry of the distribution from which a_t is drawn. Chaotic behavior is seen to occur beyond a threshold value of q_1 (q_2) when q_2 (q_1) is varied. The most striking result is that the random map is chaotic even when q_2 is less than the threshold value 3.5699⋯ at which chaos occurs in the nonrandom map. We also employ a different method in which a different set of random variables is used for the evolution of two initially identical x values; here the chaotic regime exists for all q_1 ≠ q_2.
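
    The random map is simple to explore numerically. This sketch iterates the map with a_t drawn uniformly from [q_1, q_2] and estimates the Lyapunov exponent as the average of log|a_t (1 - 2 x_t)|; the uniform choice of distribution and the parameter values are illustrative assumptions, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(5)

        def lyapunov(q1, q2, steps=200_000, x0=0.3):
            """Iterate x_{t+1} = a_t x_t (1 - x_t), a_t ~ Uniform(q1, q2),
            and average log|a_t (1 - 2 x_t)| to estimate the Lyapunov
            exponent of the random map."""
            x, acc = x0, 0.0
            for _ in range(steps):
                a = rng.uniform(q1, q2)
                acc += np.log(abs(a * (1.0 - 2.0 * x)))
                x = a * x * (1.0 - x)
            return acc / steps

        # A positive exponent signals chaos; randomness can push the map
        # chaotic even with q2 below the nonrandom threshold 3.5699...
        for q1, q2 in [(2.0, 3.2), (1.0, 3.55)]:
            print(f"q1={q1}, q2={q2}: lambda = {lyapunov(q1, q2):+.3f}")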

  2. Measures and models for angular correlation and angular-linear correlation. [correlation of random variables

    NASA Technical Reports Server (NTRS)

    Johnson, R. A.; Wehrly, T.

    1976-01-01

    Population models for dependence between two angular measurements and for dependence between an angular and a linear observation are proposed. The method of canonical correlations first leads to new population and sample measures of dependence in this latter situation. An example relating wind direction to the level of a pollutant is given. Next, applied to pairs of angular measurements, the method yields previously proposed sample measures in some special cases and a new sample measure in general.

  3. Bounds for the price of discrete arithmetic Asian options

    NASA Astrophysics Data System (ADS)

    Vanmaele, M.; Deelstra, G.; Liinev, J.; Dhaene, J.; Goovaerts, M. J.

    2006-01-01

    In this paper the pricing of European-style discrete arithmetic Asian options with fixed and floating strike is studied by deriving analytical lower and upper bounds. In our approach we use a general technique for deriving upper (and lower) bounds for stop-loss premiums of sums of dependent random variables, as explained in Kaas et al. (Ins. Math. Econom. 27 (2000) 151-168), and additionally the ideas of Rogers and Shi (J. Appl. Probab. 32 (1995) 1077-1088) and of Nielsen and Sandmann (J. Financial Quant. Anal. 38(2) (2003) 449-473). Through these bounds we are able to create a unifying framework for European-style discrete arithmetic Asian options that generalizes several approaches in the literature and improves the existing results. We obtain analytical and easily computable bounds. The aim of the paper is to formulate advice on the appropriate choice of the bounds given the parameters, to investigate the effect of different conditioning variables, and to compare their efficiency numerically. Several sets of numerical results are included. We also discuss hedging using these bounds. Moreover, our methods are applicable to a wide range of (pricing) problems involving a sum of dependent random variables.

  4. CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.

    USGS Publications Warehouse

    Cooley, Richard L.; Vecchia, Aldo V.

    1987-01-01

    A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
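
    The Monte Carlo step can be sketched generically: sample parameters over their plausible extreme ranges, propagate them through the calibrated model, and take quantiles, adding random error in the dependent variable when a prediction interval rather than a confidence interval is wanted. The model function and all ranges below are hypothetical, not from the report (which also supports ordering constraints within parameter groups).

        import numpy as np

        rng = np.random.default_rng(10)

        def model(k, s, t=10.0):
            """Stand-in for a calibrated model output, e.g. predicted head."""
            return 100.0 * np.exp(-k * t) + s

        n = 50_000
        # Parameters sampled over their assumed extreme ranges (independent here).
        k = rng.uniform(0.01, 0.1, n)
        s = rng.uniform(0.0, 5.0, n)
        pred = model(k, s)

        # Confidence interval: parameter uncertainty only.
        ci = np.percentile(pred, [2.5, 97.5])
        # Prediction interval: add random error in the dependent variable,
        # which widens the interval considerably.
        pi = np.percentile(pred + rng.normal(0.0, 2.0, n), [2.5, 97.5])

        print("95% confidence interval:", np.round(ci, 2))
        print("95% prediction interval:", np.round(pi, 2))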

  5. Multivariate Longitudinal Analysis with Bivariate Correlation Test

    PubMed Central

    Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory

    2016-01-01

    In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions of the model’s parameter estimators. These estimators can be used in the framework of multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. Using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets, of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated. PMID:27537692

  6. Multivariate Longitudinal Analysis with Bivariate Correlation Test.

    PubMed

    Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory

    2016-01-01

    In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions of the model's parameter estimators. These estimators can be used in the framework of multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. Using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets, of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated.

  7. Missing data exploration: highlighting graphical presentation of missing pattern.

    PubMed

    Zhang, Zhongheng

    2015-12-01

    Functions shipped with R base can fulfill many tasks of missing data handling. However, because the data volume of electronic medical record (EMR) system is always very large, more sophisticated methods may be helpful in data management. The article focuses on missing data handling by using advanced techniques. There are three types of missing data, that is, missing completely at random (MCAR), missing at random (MAR) and not missing at random (NMAR). This classification system depends on how missing values are generated. Two packages, Multivariate Imputation by Chained Equations (MICE) and Visualization and Imputation of Missing Values (VIM), provide sophisticated functions to explore missing data pattern. In particular, the VIM package is especially helpful in visual inspection of missing data. Finally, correlation analysis provides information on the dependence of missing data on other variables. Such information is useful in subsequent imputations.

  8. Global solutions to random 3D vorticity equations for small initial data

    NASA Astrophysics Data System (ADS)

    Barbu, Viorel; Röckner, Michael

    2017-11-01

    One proves the existence and uniqueness in (L^p(R^3))^3, 3/2 < p < 2, of a global mild solution to random vorticity equations associated with stochastic 3D Navier-Stokes equations with linear multiplicative Gaussian noise of convolution type, for sufficiently small initial vorticity. This resembles some earlier deterministic results of T. Kato [16] and is obtained by treating the equation in vorticity form and reducing the latter to a random nonlinear parabolic equation. The solution has maximal regularity in the spatial variables and is weakly continuous in (L^3 ∩ L^{3p/(4p-6)})^3 with respect to the time variable. Furthermore, we obtain the pathwise continuous dependence of solutions with respect to the initial data. In particular, one gets a locally unique solution of the 3D stochastic Navier-Stokes equation in vorticity form up to some explosion stopping time τ adapted to the Brownian motion.

  9. Estimation of the lower and upper bounds on the probability of failure using subset simulation and random set theory

    NASA Astrophysics Data System (ADS)

    Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.

    2018-02-01

    Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.

  10. Integrated Logistics Support Analysis of the International Space Station Alpha, Background and Summary of Mathematical Modeling and Failure Density Distributions Pertaining to Maintenance Time Dependent Parameters

    NASA Technical Reports Server (NTRS)

    Sepehry-Fard, F.; Coulthard, Maurice H.

    1995-01-01

    The process of predicting the values of maintenance time-dependent variable parameters, such as mean time between failures (MTBF) over time, must be one that will not in turn introduce uncontrolled deviation into the results of the ILS analysis, such as life cycle costs, spares calculation, etc. A minor deviation in the values of maintenance time-dependent variable parameters such as MTBF over time will have a significant impact on logistics resource demands, International Space Station availability, and maintenance support costs. There are two types of parameters in the logistics and maintenance world: (a) fixed and (b) variable. Fixed parameters, such as cost per man hour, are relatively easy to predict and forecast. These parameters normally follow a linear path and they do not change randomly. However, the variable parameters subject to study in this report, such as MTBF, do not follow a linear path; they normally fall within the distribution curves discussed in this publication. The very challenging task then becomes the utilization of statistical techniques to accurately forecast future non-linear time-dependent variable arisings and events with a high confidence level. This, in turn, will translate into tremendous cost savings and improved availability all around.

  11. Bias, Confounding, and Interaction: Lions and Tigers, and Bears, Oh My!

    PubMed

    Vetter, Thomas R; Mascha, Edward J

    2017-09-01

    Epidemiologists seek to make a valid inference about the causal effect between an exposure and a disease in a specific population, using representative sample data from that population. Clinical researchers likewise seek to make a valid inference about the association between an intervention and outcome(s) in a specific population, based upon their randomly collected, representative sample data. Both do so by using the available data about the sample variable to make a valid estimate of its corresponding, but unknown, underlying population parameter. Random error in an experiment can be due to the natural, periodic fluctuation or variation in the accuracy or precision of virtually any data sampling technique or health measurement tool or scale. In a clinical research study, random error can be due not only to innate human variability but also to pure chance. Systematic error in an experiment arises from an innate flaw in the data sampling technique or measurement instrument. In the clinical research setting, systematic error is more commonly referred to as systematic bias. The most commonly encountered types of bias in anesthesia, perioperative, critical care, and pain medicine research include recall bias, observational bias (Hawthorne effect), attrition bias, misclassification or informational bias, and selection bias. A confounding variable (confounding factor or confounder) is a variable that correlates, positively or negatively, with both the exposure of interest and the outcome of interest. Confounding is typically not an issue in a randomized trial because the randomized groups are sufficiently balanced on all potential confounding variables, both observed and nonobserved. However, confounding can be a major problem with any observational (nonrandomized) study. Ignoring confounding in an observational study will often result in a "distorted" or incorrect estimate of the association or treatment effect. Interaction among variables, also known as effect modification, exists when the effect of one explanatory variable on the outcome depends on the particular level or value of another explanatory variable. Bias and confounding are common potential explanations for statistically significant associations between exposure and outcome when the true relationship is noncausal. Understanding interactions is vital to proper interpretation of treatment effects. These complex concepts should be consistently and appropriately considered whenever one is not only designing but also analyzing and interpreting data from a randomized trial or observational study.

  12. Multivariate non-normally distributed random variables in climate research - introduction to the copula approach

    NASA Astrophysics Data System (ADS)

    Schölzel, C.; Friederichs, P.

    2008-10-01

    Probability distributions of multivariate random variables are generally more complex than their univariate counterparts, owing to the possible nonlinear dependence between the random variables. One approach to this problem is the use of copulas, which have become popular over recent years, especially in fields like econometrics, finance, risk management, and insurance. Since this newly emerging field includes various practices, a controversial discussion, and a vast literature, it is difficult to get an overview. The aim of this paper is therefore to provide a brief overview of copulas for application in meteorology and climate research. We examine the advantages and disadvantages compared with alternative approaches such as mixture models, summarize the current problem of goodness-of-fit (GOF) tests for copulas, and discuss the connection with multivariate extremes. An application to station data shows the simplicity and the capabilities as well as the limitations of this approach. Observations of daily precipitation and temperature are fitted to a bivariate model and demonstrate that copulas are a valuable complement to the commonly used methods.
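
    A minimal copula construction in Python, assuming a Gaussian copula and illustrative marginals: correlate two standard normals, push them through the normal CDF to get coupled uniforms, then apply inverse CDFs of the desired marginals (a skewed gamma standing in for precipitation, a normal for temperature). All distribution parameters are invented for illustration.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)

        # Latent bivariate normal with correlation rho (the Gaussian copula).
        rho = -0.4
        z = rng.multivariate_normal([0.0, 0.0],
                                    [[1.0, rho], [rho, 1.0]], size=10_000)

        # Normal CDF gives coupled uniforms; inverse CDFs impose marginals.
        u = stats.norm.cdf(z)
        precip = stats.gamma.ppf(u[:, 0], a=2.0, scale=3.0)  # skewed
        temp = stats.norm.ppf(u[:, 1], loc=12.0, scale=4.0)  # symmetric

        # Rank correlation survives the marginal transformations.
        print("Spearman rho:", round(stats.spearmanr(precip, temp)[0], 3))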

  13. Estimation of gloss from rough surface parameters

    NASA Astrophysics Data System (ADS)

    Simonsen, Ingve; Larsen, Åge G.; Andreassen, Erik; Ommundsen, Espen; Nord-Varhaug, Katrin

    2005-12-01

    Gloss is a quantity used in the optical industry to quantify and categorize materials according to how well they scatter light specularly. With the aid of phase perturbation theory, we derive an approximate expression for this quantity for a one-dimensional randomly rough surface. It is demonstrated that gloss depends in an exponential way on two dimensionless quantities that are associated with the surface randomness: the root-mean-square roughness times the perpendicular momentum transfer for the specular direction, and a correlation-function-dependent factor times a lateral momentum variable associated with the collection angle. Rigorous Monte Carlo simulations are used to assess the quality of this approximation, and good agreement is observed over large regions of parameter space.

  14. Probabilistic structural analysis methods for improving Space Shuttle engine reliability

    NASA Technical Reports Server (NTRS)

    Boyce, L.

    1989-01-01

    Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.

  15. Scaling exponents for ordered maxima

    DOE PAGES

    Ben-Naim, E.; Krapivsky, P. L.; Lemons, N. W.

    2015-12-22

    We study extreme value statistics of multiple sequences of random variables. For each sequence with N variables, independently drawn from the same distribution, the running maximum is defined as the largest variable to date. We compare the running maxima of m independent sequences and investigate the probability S_N that the maxima are perfectly ordered, that is, the running maximum of the first sequence is always larger than that of the second sequence, which is always larger than the running maximum of the third sequence, and so on. The probability S_N is universal: it does not depend on the distribution from which the random variables are drawn. For two sequences, S_N ~ N^{-1/2}, and in general the decay is algebraic, S_N ~ N^{-σ_m}, for large N. We analytically obtain the exponent σ_3 ≅ 1.302931 as the root of a transcendental equation. Moreover, the exponents σ_m grow with m, and we show that σ_m ~ m for large m.
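
    Both the universality and the N^{-1/2} decay for two sequences can be checked with a quick Monte Carlo; normal variates are used below, but by the universality result any common distribution should give the same S_N. The sample sizes are arbitrary choices.

        import numpy as np

        rng = np.random.default_rng(7)

        def prob_ordered(N, m=2, trials=20_000):
            """Monte Carlo estimate of S_N: the probability that the running
            maxima of m i.i.d. sequences of length N stay strictly ordered
            at every step."""
            x = rng.standard_normal((trials, m, N))
            run_max = np.maximum.accumulate(x, axis=2)
            ordered = np.all(run_max[:, :-1, :] > run_max[:, 1:, :],
                             axis=(1, 2))
            return ordered.mean()

        # For m = 2, S_N ~ N^{-1/2}, so S_N * sqrt(N) should be roughly flat.
        for N in (10, 40, 160):
            s = prob_ordered(N)
            print(f"N={N:4d}  S_N={s:.4f}  S_N*sqrt(N)={s * np.sqrt(N):.3f}")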

  16. The intrinsic dependence structure of peak, volume, duration, and average intensity of hyetographs and hydrographs

    NASA Astrophysics Data System (ADS)

    Serinaldi, Francesco; Kilsby, Chris G.

    2013-06-01

    The information contained in hyetographs and hydrographs is often synthesized by using key properties such as the peak or maximum value X_p, volume V, duration D, and average intensity I. These variables play a fundamental role in hydrologic engineering as they are used, for instance, to define design hyetographs and hydrographs as well as to model and simulate the rainfall and streamflow processes. Given their inherent variability and the empirical evidence of the presence of a significant degree of association, such quantities have been studied as correlated random variables suitable to be modeled by multivariate joint distribution functions. The advent of copulas in geosciences simplified the inference procedures, allowing for splitting the analysis of the marginal distributions and the study of the so-called dependence structure or copula. However, the attention paid to the modeling task has overlooked a more thorough study of the true nature and origin of the relationships that link X_p, V, D, and I. In this study, we apply a set of ad hoc bootstrap algorithms to investigate these aspects by analyzing the hyetographs and hydrographs extracted from 282 daily rainfall series from central eastern Europe, three 5 min rainfall series from central Italy, 80 daily streamflow series from the continental United States, and two sets of 200 simulated universal multifractal time series. Our results show that all the pairwise dependence structures between X_p, V, D, and I exhibit some key properties that can be reproduced by simple bootstrap algorithms that rely on a standard univariate resampling without resort to multivariate techniques. Therefore, the strong similarities between the observed dependence structures and the agreement between the observed and bootstrap samples suggest the existence of a numerical generating mechanism based on the superposition of the effects of sampling data at finite time steps and the process of summing realizations of independent random variables over random durations. We also show that the pairwise dependence structures are weakly dependent on the internal patterns of the hyetographs and hydrographs, meaning that the temporal evolution of the rainfall and runoff events marginally influences the mutual relationships of X_p, V, D, and I. Finally, our findings point out that subtle and often overlooked deterministic relationships between the properties of the event hyetographs and hydrographs exist. Confusing these relationships with genuine stochastic relationships can lead to an incorrect application of multivariate distributions and copulas and to misleading results.

  17. Finite-Size Scaling Analysis of Binary Stochastic Processes and Universality Classes of Information Cascade Phase Transition

    NASA Astrophysics Data System (ADS)

    Mori, Shintaro; Hisakado, Masato

    2015-05-01

    We propose a finite-size scaling analysis method for binary stochastic processes X(t) ∈ {0,1} based on the second-moment correlation length ξ for the autocorrelation function C(t). The purpose is to clarify the critical properties and provide a new data analysis method for information cascades. As a simple model to represent the different behaviors of subjects in information cascade experiments, we assume that X(t) is a mixture of an independent random variable that takes 1 with probability q and a random variable that depends on the ratio z of the variables taking 1 among the most recent r variables. We consider two types of the probability f(z) that the latter takes 1: (i) analog [f(z) = z] and (ii) digital [f(z) = θ(z - 1/2)]. We study the universal scaling functions for ξ and the integrated correlation time τ. For finite r, C(t) decays exponentially as a function of t, and there is only one stable renormalization group (RG) fixed point. In the limit r → ∞, where X(t) depends on all the previous variables, C(t) in model (i) obeys a power law, and the system becomes scale invariant. In model (ii) with q ≠ 1/2, there are two stable RG fixed points, which correspond to the ordered and disordered phases of the information cascade phase transition, with critical exponents β = 1 and ν_∥ = 2.

  18. Attention Measures of Accuracy, Variability, and Fatigue Detect Early Response to Donepezil in Alzheimer's Disease: A Randomized, Double-blind, Placebo-Controlled Pilot Trial.

    PubMed

    Vila-Castelar, Clara; Ly, Jenny J; Kaplan, Lillian; Van Dyk, Kathleen; Berger, Jeffrey T; Macina, Lucy O; Stewart, Jennifer L; Foldi, Nancy S

    2018-04-09

    Donepezil is widely used to treat Alzheimer's disease (AD), but detecting early response remains challenging for clinicians. Acetylcholine is known to directly modulate attention, particularly under high cognitive conditions, but no studies to date test whether measures of attention under high load can detect early effects of donepezil. We hypothesized that load-dependent attention tasks are sensitive to short-term treatment effects of donepezil, while global and other domain-specific cognitive measures are not. This longitudinal, randomized, double-blind, placebo-controlled pilot trial (ClinicalTrials.gov Identifier: NCT03073876) evaluated 23 participants newly diagnosed with AD initiating de novo donepezil treatment (5 mg). After baseline assessment, participants were randomized into Drug (n = 12) or Placebo (n = 11) groups and retested after approximately 6 weeks. Cognitive assessment included: (a) attention tasks (Foreperiod Effect, Attentional Blink, and Covert Orienting tasks) measuring processing speed, top-down accuracy, orienting, intra-individual variability, and fatigue; (b) global measures (Alzheimer's Disease Assessment Scale-Cognitive Subscale, Mini-Mental Status Examination, Dementia Rating Scale); and (c) domain-specific measures (memory, language, visuospatial, and executive function). The Drug but not the Placebo group showed benefits of treatment on high-load measures by preserving top-down accuracy, improving intra-individual variability, and averting fatigue. In contrast, other global or cognitive domain-specific measures could not detect treatment effects over the same treatment interval. The pilot study suggests that attention measures targeting accuracy, variability, and fatigue under high-load conditions could be sensitive to short-term cholinergic treatment. Given the central role of acetylcholine in attentional function, load-dependent attentional measures may be valuable cognitive markers of early treatment response.

  19. Sample-to-sample fluctuations of power spectrum of a random motion in a periodic Sinai model.

    PubMed

    Dean, David S; Iorio, Antonio; Marinari, Enzo; Oshanin, Gleb

    2016-09-01

    The Sinai model of a tracer diffusing in a quenched Brownian potential is a much-studied problem exhibiting a logarithmically slow anomalous diffusion due to the growth of energy barriers with the system size. However, if the potential is random but periodic, the regime of anomalous diffusion crosses over to one of normal diffusion once a tracer has diffused over a few periods of the system. Here we consider a system in which the potential is given by a Brownian bridge on a finite interval (0,L) and then periodically repeated over the whole real line, and study the power spectrum S(f) of the diffusive process x(t) in such a potential. We show that for most realizations of x(t) in a given realization of the potential, the low-frequency behavior is S(f) ∼ A/f^2, i.e., the same as for standard Brownian motion, and the amplitude A is a disorder-dependent random variable with a finite support. Focusing on the statistical properties of this random variable, we determine the moments of A of arbitrary, negative, or positive order k and demonstrate that they exhibit a multifractal dependence on k and a rather unusual dependence on the temperature and on the periodicity L, which are supported by atypical realizations of the periodic disorder. We finally show that the distribution of A has a log-normal left tail and exhibits an essential singularity close to the right edge of the support, which is related to the Lifshitz singularity. Our findings are based both on analytic results and on extensive numerical simulations of the process x(t).

  20. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
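
    In the simple linear case, the correction described here reduces to dividing the observed slope by the reliability ratio λ = Var(true)/Var(observed), which a reliability study with a repeated measurement can estimate. A self-contained simulation with invented numbers, not the article's software tools:

        import numpy as np

        rng = np.random.default_rng(8)

        n = 1000
        x_true = rng.standard_normal(n)              # true risk factor
        y = 0.5 * x_true + 0.3 * rng.standard_normal(n)

        # The risk factor is observed with random measurement error; a
        # reliability study supplies a second, independent replicate.
        tau = 0.7                                    # measurement-error s.d.
        x_obs = x_true + tau * rng.standard_normal(n)
        x_rep = x_true + tau * rng.standard_normal(n)

        beta_obs = np.polyfit(x_obs, y, 1)[0]        # attenuated slope

        # Reliability ratio lambda = Var(true) / Var(observed), estimated
        # from the covariance of the two replicate measurements.
        lam = np.cov(x_obs, x_rep)[0, 1] / np.var(x_obs, ddof=1)
        beta_corr = beta_obs / lam

        print(f"observed {beta_obs:.3f}, corrected {beta_corr:.3f} (true 0.5)")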

  1. Empirically Driven Variable Selection for the Estimation of Causal Effects with Observational Data

    ERIC Educational Resources Information Center

    Keller, Bryan; Chen, Jianshen

    2016-01-01

    Observational studies are common in educational research, where subjects self-select or are otherwise non-randomly assigned to different interventions (e.g., educational programs, grade retention, special education). Unbiased estimation of a causal effect with observational data depends crucially on the assumption of ignorability, which specifies…

  2. Correlated Sources in Distributed Networks--Data Transmission, Common Information Characterization and Inferencing

    ERIC Educational Resources Information Center

    Liu, Wei

    2011-01-01

    Correlation is often present among observations in a distributed system. This thesis deals with various design issues when correlated data are observed at distributed terminals, including: communicating correlated sources over interference channels, characterizing the common information among dependent random variables, and testing the presence of…

  3. Homophobia in Registered Nurses: Impact on LGB Youth

    ERIC Educational Resources Information Center

    Blackwell, Christopher W.; Kiehl, Ermalynn M.

    2008-01-01

    This study examined registered nurses' overall attitudes and homophobia towards gays and lesbians in the workplace. Homophobia scores, represented by the Attitudes Toward Lesbians and Gay Men (ATLG) Scale, were the dependent variable. Overall homophobia scores were assessed among a randomized stratified sample of registered nurses licensed in the…

  4. Missing data exploration: highlighting graphical presentation of missing pattern

    PubMed Central

    2015-01-01

    Functions shipped with R base can fulfill many tasks of missing data handling. However, because the data volume of electronic medical record (EMR) system is always very large, more sophisticated methods may be helpful in data management. The article focuses on missing data handling by using advanced techniques. There are three types of missing data, that is, missing completely at random (MCAR), missing at random (MAR) and not missing at random (NMAR). This classification system depends on how missing values are generated. Two packages, Multivariate Imputation by Chained Equations (MICE) and Visualization and Imputation of Missing Values (VIM), provide sophisticated functions to explore missing data pattern. In particular, the VIM package is especially helpful in visual inspection of missing data. Finally, correlation analysis provides information on the dependence of missing data on other variables. Such information is useful in subsequent imputations. PMID:26807411

  5. What weather variables are important in predicting heat-related mortality? A new application of statistical learning methods

    PubMed Central

    Zhang, Kai; Li, Yun; Schwartz, Joel D.; O'Neill, Marie S.

    2014-01-01

    Hot weather increases the risk of mortality. Previous studies used different sets of weather variables to characterize heat stress, resulting in variation in heat-mortality associations depending on the metric used. We employed a statistical learning method – random forests – to examine which of various weather variables had the greatest impact on heat-related mortality. We compiled a summertime daily weather and mortality counts dataset from four U.S. cities (Chicago, IL; Detroit, MI; Philadelphia, PA; and Phoenix, AZ) from 1998 to 2006. A variety of weather variables were ranked in predicting deviation from typical daily all-cause and cause-specific death counts. Ranks of weather variables varied with city and health outcome. Apparent temperature appeared to be the most important predictor of all-cause heat-related mortality. Absolute humidity was, on average, most frequently selected as one of the top variables for all-cause mortality and seven cause-specific mortality categories. Our analysis affirms that apparent temperature is a reasonable variable for activating heat alerts and warnings, which are commonly based on predictions of total mortality in the next few days. Additionally, absolute humidity should be included in future heat-health studies. Finally, random forests can be used to guide the choice of weather variables in heat epidemiology studies. PMID:24834832
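
    A sketch of the ranking step with scikit-learn, assuming a hypothetical daily dataset whose file and column names (apparent_temp, abs_humidity, ..., mortality_deviation) merely stand in for the study's variables:

        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor

        # Hypothetical daily summer dataset: weather columns plus the
        # deviation of the death count from its seasonal expectation.
        df = pd.read_csv("city_summer_daily.csv")
        weather = ["apparent_temp", "air_temp", "dew_point",
                   "abs_humidity", "rel_humidity", "wind_speed"]

        rf = RandomForestRegressor(n_estimators=500, random_state=0)
        rf.fit(df[weather], df["mortality_deviation"])

        # Impurity-based importances rank which variables best predict
        # heat-related excess mortality.
        ranking = sorted(zip(weather, rf.feature_importances_),
                         key=lambda t: t[1], reverse=True)
        for name, imp in ranking:
            print(f"{name:15s} {imp:.3f}")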

  6. Survival curve estimation with dependent left truncated data using Cox's model.

    PubMed

    Mackenzie, Todd

    2012-10-19

    The Kaplan-Meier and closely related Lynden-Bell estimators are used to provide nonparametric estimation of the distribution of a left-truncated random variable. These estimators assume that the left-truncation variable is independent of the time-to-event. This paper proposes a semiparametric method for estimating the marginal distribution of the time-to-event that does not require independence. It models the conditional distribution of the time-to-event given the truncation variable using Cox's model for left truncated data, and uses inverse probability weighting. We report the results of simulations and illustrate the method using a survival study.
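
    A rough sketch of the idea, under stated assumptions: the lifelines library stands in for whatever software the paper used, the data are synthetic, and the weighting step is a simplified reading of the inverse-probability-weighting scheme, not the paper's exact estimator:

    ```python
    # Sketch: Cox model of T given the truncation time L, fitted to
    # left-truncated data, then inverse-probability weighting by the
    # estimated chance of being observed (T > L).
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(2)
    n = 400
    L = rng.exponential(1.0, n)                    # truncation (entry) times
    T = (1.0 + 0.5 * L) * rng.exponential(1.0, n)  # event times, dependent on L
    obs = T > L                                    # only these are observed
    df = pd.DataFrame({"T": T[obs], "L": L[obs], "E": 1})

    cph = CoxPHFitter().fit(df, duration_col="T", event_col="E",
                            entry_col="L", formula="L")

    # Weight each subject by 1 / P(T > L_i | L_i).
    w = np.empty(len(df))
    for i in range(len(df)):
        sf = cph.predict_survival_function(df.iloc[[i]], times=[df["L"].iloc[i]])
        w[i] = 1.0 / max(float(sf.iloc[0, 0]), 1e-6)

    # IPW estimate of the marginal survival curve of the time-to-event.
    grid = np.linspace(0.0, float(df["T"].max()), 50)
    S_hat = [(w * (df["T"].to_numpy() > t)).sum() / w.sum() for t in grid]
    print(S_hat[:5])
    ```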

  7. Retrocausation Or Extant Indefinite Reality?

    NASA Astrophysics Data System (ADS)

    Houtkooper, Joop M.

    2006-10-01

    The possibility of retrocausation has been considered as an explanation for anomalous phenomena in which the ostensible effects precede their causes. A scrutiny of both experimental methodology and experimental data is called for. A review of experimental data reveals the existence of such effects to be a serious possibility. The experimental methodology entails some conceptual difficulties, depending on the underlying assumptions about the effects. A major point is an ambiguity between anomalous acquisition of information and retrocausation in exerted influences. A unifying theory has been proposed, based upon the fundamental randomness of quantum mechanics. Quantum mechanical randomness may be regarded as a tenacious phenomenon that apparently is resolved only by the human observer of the random variable in question. This has led to the "observational theory" of anomalous phenomena, which is based upon the assumption that the preference of a motivated observer is able to interact with the extant indefinite random variable being observed. This observational theory has led to a novel prediction, which has been corroborated in experiments. Moreover, different classes of anomalous phenomena can be explained by the same basic mechanism. This forgoes retroactive causation but instead requires that macroscopic physical variables remain in a state of indefinite reality, and thus remain influenceable by mental efforts, until they are observed. More work is needed to discover the relevant psychological and neurophysiological variables involved in effective motivated observation. Beyond these practicalities, the fundamentals still have some interesting loose ends.

  8. The living Drake equation of the Tau Zero Foundation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-03-01

    The living Drake equation is our statistical generalization of the Drake equation that can take into account any number of factors. This new result opens up the possibility of enriching the equation with new factors as scientific knowledge increases. The adjective "living" refers to this continuous enrichment of the Drake equation, which is the goal of a new research project that the Tau Zero Foundation has entrusted to this author as the discoverer of the statistical Drake equation described hereafter. From a simple product of seven positive numbers, the Drake equation is turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that the new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the lognormal distribution. The mean value, standard deviation, mode, median and all the moments of this lognormal N can then be derived from the means and standard deviations of the seven input random variables. In fact, the seven factors in the ordinary Drake equation now become seven independent positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (neither of which assumes the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful. An application of our statistical Drake equation then follows: the (average) distance between any two neighbouring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. This distance thus becomes a new random variable. We derive the relevant probability density function, apparently previously unknown (dubbed the "Maccone distribution" by Paul Davies). Data Enrichment Principle: any positive number of random variables in the statistical Drake equation is compatible with the CLT, so our generalization allows for many more factors to be added in the future as more refined scientific knowledge about each factor becomes available. This capability to make room for more future factors we call the "Data Enrichment Principle", and we regard it as the key to more profound future results in astrobiology and SETI.
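
    A compact worked version of the lognormal argument sketched in the abstract (standard lognormal identities; the notation D_i for the seven factors is ours):

    ```latex
    % Let N = \prod_{i=1}^{7} D_i with independent positive factors D_i.
    % Taking logarithms turns the product into a sum,
    \ln N = \sum_{i=1}^{7} \ln D_i ,
    % and the Lyapunov/Lindeberg CLT gives, approximately,
    \ln N \sim \mathcal{N}\!\left(\mu,\ \sigma^{2}\right), \qquad
    \mu = \sum_{i=1}^{7} \mathbb{E}[\ln D_i], \qquad
    \sigma^{2} = \sum_{i=1}^{7} \operatorname{Var}[\ln D_i],
    % so N is lognormal, with
    \mathbb{E}[N] = e^{\mu + \sigma^{2}/2}, \qquad
    \operatorname{median}(N) = e^{\mu}, \qquad
    \operatorname{mode}(N) = e^{\mu - \sigma^{2}}.
    ```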

  9. Distributed Synchronization in Networks of Agent Systems With Nonlinearities and Random Switchings.

    PubMed

    Tang, Yang; Gao, Huijun; Zou, Wei; Kurths, Jürgen

    2013-02-01

    In this paper, the distributed synchronization problem of networks of agent systems with controllers and nonlinearities subject to Bernoulli switchings is investigated. Controllers and adaptive updating laws injected in each vertex of networks depend on the state information of its neighborhood. Three sets of Bernoulli stochastic variables are introduced to describe the occurrence probabilities of distributed adaptive controllers, updating laws and nonlinearities, respectively. By the Lyapunov functions method, we show that the distributed synchronization of networks composed of agent systems with multiple randomly occurring nonlinearities, multiple randomly occurring controllers, and multiple randomly occurring updating laws can be achieved in mean square under certain criteria. The conditions derived in this paper can be solved by semi-definite programming. Moreover, by mathematical analysis, we find that the coupling strength, the probabilities of the Bernoulli stochastic variables, and the form of nonlinearities have great impacts on the convergence speed and the terminal control strength. The synchronization criteria and the observed phenomena are demonstrated by several numerical simulation examples. In addition, the advantage of distributed adaptive controllers over conventional adaptive controllers is illustrated.

  10. Randomized controlled trial of a cognitive-behavioral therapy for at-risk Korean male adolescents.

    PubMed

    Hyun, Myung-Sun; Nam, Kyoung A; Kim, Myung-Ah

    2010-06-01

    This study examined the effects of cognitive behavioral therapy (CBT) aimed at enhancing the resilience of high-risk adolescents with alcohol-dependent parents in Suwon, South Korea. The study used a randomized pretest-posttest control-group design. The experimental group participated in 10 sessions of CBT, and its resilience scores increased significantly after the intervention, whereas self-concept and depression scores did not change. In the control group, none of the outcome variables changed significantly over the intervention period. The results indicate that the developed CBT program might be effective for improving the resilience of adolescents with alcohol-dependent parents. Crown Copyright 2010. Published by Elsevier Inc. All rights reserved.

  11. Generating variable and random schedules of reinforcement using Microsoft Excel macros.

    PubMed

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values.
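
    The article's implementation is in Excel VBA macros; as a hedged illustration of the same schedule-generation logic, here is a Python sketch (function names and parameter choices are ours):

    ```python
    # Sketch: generate variable-ratio and random-interval schedule values.
    import numpy as np

    rng = np.random.default_rng(3)

    def variable_ratio(mean_ratio, n):
        """n response requirements averaging mean_ratio (uniform spread)."""
        return rng.integers(1, 2 * mean_ratio, size=n)

    def random_interval(mean_interval_s, n, tick_s=1.0):
        """Random-interval values: at each tick, reinforcement is set up with
        constant probability p = tick_s / mean_interval_s (geometric waits)."""
        p = tick_s / mean_interval_s
        return rng.geometric(p, size=n) * tick_s

    print(variable_ratio(10, 5))       # e.g. five VR-10 requirements
    print(random_interval(30.0, 5))    # e.g. five RI-30 s intervals
    ```

    The geometric waiting times are what make the interval schedule "random": the set-up probability stays constant from tick to tick, which is the defining property the abstract describes.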

  12. A Meta-Analysis of Massage Therapy Research

    ERIC Educational Resources Information Center

    Moyer, Christopher A.; Rounds, James; Hannum, James W.

    2004-01-01

    Massage therapy (MT) is an ancient form of treatment that is now gaining popularity as part of the complementary and alternative medical therapy movement. A meta-analysis was conducted of studies that used random assignment to test the effectiveness of MT. Mean effect sizes were calculated from 37 studies for 9 dependent variables. Single…

  13. Statistical optics

    NASA Astrophysics Data System (ADS)

    Goodman, J. W.

    This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.

  14. Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros

    PubMed Central

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values. PMID:18595286

  15. The Statistical Drake Equation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2010-12-01

    We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that the new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. As a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (neither of which assumes the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. In our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed the "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE: ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT, so our generalization allows for many more factors to be added in the future as more refined scientific knowledge about each factor becomes available. This capability to make room for more future factors in the statistical Drake equation we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case where each of the seven random variables is uniformly distributed around its own mean value with a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billion with a standard deviation of (say) 1 billion. The resulting lognormal distribution of N is then computed numerically by means of a MathCad file written by the author. This shows that the mean value of the lognormal random variable N is of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
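
    A numerical sketch of the construction (illustrative factor values, not the author's MathCad file): simulate the product of seven independent uniform factors and compare it with the implied lognormal:

    ```python
    # Sketch: Monte Carlo check that a product of seven independent uniform
    # factors is approximately lognormal, per the CLT argument above.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    means = np.array([3.5e11, 0.5, 0.3, 0.5, 0.2, 0.1, 1e-7])  # hypothetical
    half_widths = 0.5 * means                                   # uniform spread
    samples = np.prod(
        [rng.uniform(m - h, m + h, 100_000) for m, h in zip(means, half_widths)],
        axis=0)

    mu = np.log(samples).mean()
    sigma = np.log(samples).std()
    print("lognormal fit: mu=%.2f sigma=%.2f" % (mu, sigma))
    print("mean of N: %.3g (lognormal formula: %.3g)"
          % (samples.mean(), np.exp(mu + sigma**2 / 2)))
    # Kolmogorov-Smirnov check of log N against the fitted normal:
    print(stats.kstest((np.log(samples) - mu) / sigma, "norm"))
    ```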

  16. Intervention to improve social and family support for caregivers of dependent patients: ICIAS study protocol.

    PubMed

    Rosell-Murphy, Magdalena; Bonet-Simó, Josep M; Baena, Esther; Prieto, Gemma; Bellerino, Eva; Solé, Francesc; Rubio, Montserrat; Krier, Ilona; Torres, Pascuala; Mimoso, Sonia

    2014-03-25

    Despite the existence of formal professional support services, informal support (mainly family members) continues to be the main source of eldercare, especially for those who are dependent or disabled. Primary health care professionals are ideally placed to educate informal caregivers, provide psychological support, and help mobilize the social resources available to them. Controversy remains concerning the efficiency of multiple interventions that take a holistic approach to both the patient and the caregiver and make optimum use of the available community resources. For this reason, our goal is to assess whether an intervention designed to improve social support for caregivers effectively decreases caregivers' burden and improves their quality of life. Design: Controlled, multicentre, community intervention trial, with patients and their caregivers randomized to the intervention or control group according to their assigned Primary Health Care Team (PHCT). Study area: Primary Health Care network (9 PHCTs). Study participants: Primary informal caregivers of patients receiving home health care from participating PHCTs. Sample: The required sample size is 282 caregivers (141 from PHCTs randomized to the intervention group and 141 from PHCTs randomized to the control group). Intervention: (a) PHCT professionals: standardized training to implement the caregiver intervention. (b) Caregivers: 1 individualized counselling session, 1 family session, and 4 educational group sessions conducted by participating PHCT professionals, in addition to usual home health care visits, periodic telephone follow-up contact, and unlimited telephone support. Control: Caregivers and dependent patients receive usual home health care, consisting of bimonthly scheduled visits, follow-up as needed, and additional attention upon request. Data analysis: Dependent variables: caregiver burden (short-form Zarit test), caregivers' social support (Medical Outcomes Study), and caregivers' reported quality of life (SF-12). Independent variables: (a) Caregiver: sociodemographic data, Goldberg Scale, Apgar family questionnaire, Holmes and Rahe Psychosocial Stress Scale, number of chronic diseases. (b) Dependent patient: sociodemographic data, level of dependency (Barthel Index), cognitive impairment (Pfeiffer test). If the intervention intended to improve social and family support is effective in reducing the burden on primary informal caregivers of dependent patients, this model can be readily applied throughout usual PHCT clinical practice. Clinical trials registrar: NCT02065427.

  17. Intervention to improve social and family support for caregivers of dependent patients: ICIAS study protocol

    PubMed Central

    2014-01-01

    Background: Despite the existence of formal professional support services, informal support (mainly family members) continues to be the main source of eldercare, especially for those who are dependent or disabled. Primary health care professionals are ideally placed to educate informal caregivers, provide psychological support, and help mobilize the social resources available to them. Controversy remains concerning the efficiency of multiple interventions that take a holistic approach to both the patient and the caregiver and make optimum use of the available community resources. For this reason, our goal is to assess whether an intervention designed to improve social support for caregivers effectively decreases caregivers' burden and improves their quality of life. Methods/design: Design: Controlled, multicentre, community intervention trial, with patients and their caregivers randomized to the intervention or control group according to their assigned Primary Health Care Team (PHCT). Study area: Primary Health Care network (9 PHCTs). Study participants: Primary informal caregivers of patients receiving home health care from participating PHCTs. Sample: The required sample size is 282 caregivers (141 from PHCTs randomized to the intervention group and 141 from PHCTs randomized to the control group). Intervention: (a) PHCT professionals: standardized training to implement the caregiver intervention. (b) Caregivers: 1 individualized counselling session, 1 family session, and 4 educational group sessions conducted by participating PHCT professionals, in addition to usual home health care visits, periodic telephone follow-up contact, and unlimited telephone support. Control: Caregivers and dependent patients receive usual home health care, consisting of bimonthly scheduled visits, follow-up as needed, and additional attention upon request. Data analysis: Dependent variables: caregiver burden (short-form Zarit test), caregivers' social support (Medical Outcomes Study), and caregivers' reported quality of life (SF-12). Independent variables: (a) Caregiver: sociodemographic data, Goldberg Scale, Apgar family questionnaire, Holmes and Rahe Psychosocial Stress Scale, number of chronic diseases. (b) Dependent patient: sociodemographic data, level of dependency (Barthel Index), cognitive impairment (Pfeiffer test). Discussion: If the intervention intended to improve social and family support is effective in reducing the burden on primary informal caregivers of dependent patients, this model can be readily applied throughout usual PHCT clinical practice. Trial registration: Clinical trials registrar: NCT02065427 PMID:24666438

  18. Hierarchical Bayesian spatial models for predicting multiple forest variables using waveform LiDAR, hyperspectral imagery, and large inventory datasets

    USGS Publications Warehouse

    Finley, Andrew O.; Banerjee, Sudipto; Cook, Bruce D.; Bradford, John B.

    2013-01-01

    In this paper we detail a multivariate spatial regression model that couples LiDAR, hyperspectral and forest inventory data to predict forest outcome variables at a high spatial resolution. The proposed model is used to analyze forest inventory data collected on the US Forest Service Penobscot Experimental Forest (PEF), ME, USA. In addition to helping meet the regression model's assumptions, results from the PEF analysis suggest that the addition of multivariate spatial random effects improves model fit and predictive ability, compared with two commonly applied modeling approaches. This improvement results from explicitly modeling the covariation among forest outcome variables and spatial dependence among observations through the random effects. Direct application of such multivariate models to even moderately large datasets is often computationally infeasible because of cubic order matrix algorithms involved in estimation. We apply a spatial dimension reduction technique to help overcome this computational hurdle without sacrificing richness in modeling.

  19. Average inactivity time model, associated orderings and reliability properties

    NASA Astrophysics Data System (ADS)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This model is specifically applicable for handling heterogeneity in the failure time of a system in which some inactive items exist. We provide bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain aging properties. Based on the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.

  20. Solvency supervision based on a total balance sheet approach

    NASA Astrophysics Data System (ADS)

    Pitselis, Georgios

    2009-11-01

    In this paper we investigate the adequacy of the own funds a company requires in order to remain healthy and avoid insolvency. Two methods are applied: the quantile regression method and the method of mixed-effects models. Quantile regression is capable of providing a more complete statistical analysis of the stochastic relationship among random variables than least squares estimation. The estimated mixed-effects line can be considered an internal industry equation (norm), which describes a systematic relation between a dependent variable (such as own funds) and independent variables (e.g. financial characteristics such as assets, provisions, etc.). Both methods are implemented on two data sets.
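
    A minimal sketch of the two estimation methods named above, using statsmodels on synthetic data (the variable names, the company grouping, and the data-generating process are assumptions for illustration):

    ```python
    # Sketch: quantile regression vs. a random-intercept mixed-effects model.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n = 400
    df = pd.DataFrame({
        "assets": rng.lognormal(3, 1, n),
        "company": rng.integers(0, 20, n),          # grouping factor
    })
    df["own_funds"] = 0.3 * df["assets"] * rng.lognormal(0, 0.3, n)

    # Quantile regression: conditional 10th/50th/90th percentiles.
    for q in (0.1, 0.5, 0.9):
        res = smf.quantreg("own_funds ~ assets", df).fit(q=q)
        print(q, res.params["assets"])

    # Mixed-effects "industry norm": random intercept per company.
    me = smf.mixedlm("own_funds ~ assets", df, groups=df["company"]).fit()
    print(me.params)
    ```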

  1. Stochastic stability of parametrically excited random systems

    NASA Astrophysics Data System (ADS)

    Labou, M.

    2004-01-01

    Multidegree-of-freedom dynamic systems subjected to parametric excitation are analyzed for stochastic stability. The variation of excitation intensity with time is described by the sum of a harmonic function and a stationary random process. The stability boundaries are determined by the stochastic averaging method. The effect of random parametric excitation on the stability of trivial solutions of the systems of differential equations for the moments of the phase variables is studied. It is assumed that the frequency of the harmonic component falls within the region of combination resonances. Stability conditions for the first and second moments are obtained. It turns out that additional parametric excitation may have a stabilizing or destabilizing effect, depending on the values of certain parameters of the random excitation. As an example, the stability of a beam in plane bending is analyzed.

  2. A hybrid CS-SA intelligent approach to solve uncertain dynamic facility layout problems considering dependency of demands

    NASA Astrophysics Data System (ADS)

    Moslemipour, Ghorbanali

    2018-07-01

    This paper proposes a quadratic assignment-based mathematical model for the stochastic dynamic facility layout problem. In this problem, product demands are assumed to be dependent, normally distributed random variables with known probability density functions and covariances that change from period to period at random. To solve the proposed model, a novel hybrid intelligent algorithm is proposed, combining the simulated annealing and clonal selection algorithms. The proposed model and the hybrid algorithm are verified and validated using design-of-experiments and benchmark methods. The results show that the hybrid algorithm performs outstandingly in terms of both solution quality and computational time. Moreover, the proposed model can be used in both stochastic and deterministic situations.

  3. Multivariate dynamic Tobit models with lagged observed dependent variables: An effectiveness analysis of highway safety laws.

    PubMed

    Dong, Chunjiao; Xie, Kun; Zeng, Jin; Li, Xia

    2018-04-01

    Highway safety laws aim to influence driver behavior so as to reduce the frequency and severity of crashes and their outcomes. A given highway safety law can have different effects on crashes of different severities; understanding these effects can help policy makers upgrade current laws and hence improve traffic safety. To investigate the effects of highway safety laws on crashes across severities, multivariate models are needed to account for the interdependency of crash counts across severities. Based on the characteristics of the dependent variables, multivariate dynamic Tobit (MVDT) models are proposed to analyze crash counts aggregated at the state level. Lagged observed dependent variables are incorporated into the MVDT models to account for potential temporal correlation in the crash data. State highway safety law factors are used as the explanatory variables, and socio-demographic and traffic factors are used as control variables. Three models are developed and compared: a MVDT model with lagged observed dependent variables, a MVDT model with unobserved random variables, and a multivariate static Tobit (MVST) model. The results show that among the investigated models, the MVDT models with lagged observed dependent variables have the best goodness-of-fit. The findings indicate that, compared to the MVST, the MVDT models have better explanatory power and prediction accuracy: the MVDT model with lagged observed variables better handles the stochasticity and dependency in the temporal evolution of the crash counts, and its estimated values are closer to the observed values. The results show that more lives could be saved if law enforcement agencies make a sustained effort to educate the public about the importance of motorcyclists wearing helmets. Motor vehicle crash-related deaths, injuries, and property damage could be reduced if states enact laws with stricter text messaging rules, higher speeding fines, older licensing ages, and stronger graduated licensing provisions. Injury and PDO crashes would be significantly reduced by stricter laws prohibiting the use of hand-held communication devices and higher fines for drunk driving. Copyright © 2018 Elsevier Ltd. All rights reserved.
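
    As a hedged illustration of the core mechanics (a univariate simplification, not the paper's multivariate specification), a dynamic Tobit with a lagged observed dependent variable can be estimated by maximum likelihood:

    ```python
    # Sketch: univariate dynamic Tobit, left-censored at zero, with the
    # lagged *observed* dependent variable as a regressor.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(6)
    T = 200
    x = rng.normal(size=T)
    y = np.zeros(T)
    for t in range(1, T):   # latent y* = b0 + b1*x + rho*y[t-1] + e
        y_star = 0.5 + 1.0 * x[t] + 0.4 * y[t - 1] + rng.normal(0, 1)
        y[t] = max(0.0, y_star)          # observed value censored at 0

    def negloglik(theta):
        b0, b1, rho, log_s = theta
        s = np.exp(log_s)
        mu = b0 + b1 * x[1:] + rho * y[:-1]   # lagged observed y enters here
        obs, cen = y[1:] > 0, y[1:] == 0
        ll = norm.logpdf(y[1:][obs], mu[obs], s).sum()
        ll += norm.logcdf(-mu[cen] / s).sum() # contribution of P(y* <= 0)
        return -ll

    fit = minimize(negloglik, x0=np.zeros(4), method="BFGS")
    print(fit.x)   # estimates of b0, b1, rho, log sigma
    ```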

  4. Growth Modeling with Nonignorable Dropout: Alternative Analyses of the STAR*D Antidepressant Trial

    ERIC Educational Resources Information Center

    Muthen, Bengt; Asparouhov, Tihomir; Hunter, Aimee M.; Leuchter, Andrew F.

    2011-01-01

    This article uses a general latent variable framework to study a series of models for nonignorable missingness due to dropout. Nonignorable missing data modeling acknowledges that missingness may depend not only on covariates and observed outcomes at previous time points as with the standard missing at random assumption, but also on latent…

  5. Efficient Inference for Trees and Alignments: Modeling Monolingual and Bilingual Syntax with Hard and Soft Constraints and Latent Variables

    ERIC Educational Resources Information Center

    Smith, David Arthur

    2010-01-01

    Much recent work in natural language processing treats linguistic analysis as an inference problem over graphs. This development opens up useful connections between machine learning, graph theory, and linguistics. The first part of this dissertation formulates syntactic dependency parsing as a dynamic Markov random field with the novel…

  6. Using Design-Based Latent Growth Curve Modeling with Cluster-Level Predictor to Address Dependency

    ERIC Educational Resources Information Center

    Wu, Jiun-Yu; Kwok, Oi-Man; Willson, Victor L.

    2014-01-01

    The authors compared the effects of using the true Multilevel Latent Growth Curve Model (MLGCM) with single-level regular and design-based Latent Growth Curve Models (LGCM) with or without the higher-level predictor on various criterion variables for multilevel longitudinal data. They found that random effect estimates were biased when the…

  7. Dissociable effects of practice variability on learning motor and timing skills.

    PubMed

    Caramiaux, Baptiste; Bevilacqua, Frédéric; Wanderley, Marcelo M; Palmer, Caroline

    2018-01-01

    Motor skill acquisition inherently depends on the way one practices the motor task. The amount of motor task variability during practice has been shown to foster transfer of the learned skill to other similar motor tasks. In addition, variability in a learning schedule, in which a task and its variations are interweaved during practice, has been shown to help the transfer of learning in motor skill acquisition. However, there is little evidence on how motor task variations and variability schedules during practice act on the acquisition of complex motor skills such as music performance, in which a performer learns both the right movements (motor skill) and the right time to perform them (timing skill). This study investigated the impact of rate (tempo) variability and the schedule of tempo change during practice on timing and motor skill acquisition. Complete novices, with no musical training, practiced a simple musical sequence on a piano keyboard at different rates. Each novice was assigned to one of four learning conditions designed to manipulate the amount of tempo variability across trials (large or small tempo set) and the schedule of tempo change (randomized or non-randomized order) during practice. At test, the novices performed the same musical sequence at a familiar tempo and at novel tempi (testing tempo transfer), as well as two novel (but related) sequences at a familiar tempo (testing spatial transfer). We found that practice conditions had little effect on learning and transfer performance of timing skill. Interestingly, practice conditions influenced motor skill learning (reduction of movement variability): lower temporal variability during practice facilitated transfer to new tempi and new sequences; non-randomized learning schedule improved transfer to new tempi and new sequences. Tempo (rate) and the sequence difficulty (spatial manipulation) affected performance variability in both timing and movement. These findings suggest that there is a dissociable effect of practice variability on learning complex skills that involve both motor and timing constraints.

  8. Estimating the price elasticity of beer: meta-analysis of data with heterogeneity, dependence, and publication bias.

    PubMed

    Nelson, Jon P

    2014-01-01

    Precise estimates of price elasticities are important for alcohol tax policy. Using meta-analysis, this paper corrects average beer elasticities for heterogeneity, dependence, and publication selection bias. A sample of 191 estimates is obtained from 114 primary studies. Simple and weighted means are reported. Dependence is addressed by restricting the number of estimates per study, author-restricted samples, and author-specific variables. Publication bias is addressed using a funnel graph, trim-and-fill, and Egger's intercept model. Heterogeneity and selection bias are examined jointly in meta-regressions containing moderator variables for econometric methodology, primary data, and precision of estimates. Results for fixed- and random-effects regressions are reported. Country-specific effects and sample time periods are unimportant, but several methodology variables help explain the dispersion of estimates. In models that correct for selection bias and heterogeneity, the average beer price elasticity is about -0.20, roughly 50% less elastic than the values commonly used in alcohol tax policy simulations. Copyright © 2013 Elsevier B.V. All rights reserved.
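
    A sketch of the Egger-style meta-regression step on synthetic effect sizes (the simulated bias and elasticity values are assumptions, not the paper's data):

    ```python
    # Sketch: Egger intercept test. Regress the standardized effect on
    # precision; a nonzero intercept signals small-study/publication bias,
    # and the slope is a bias-adjusted estimate of the underlying effect.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    se = rng.uniform(0.05, 0.5, 191)               # standard errors
    effect = -0.20 + rng.normal(0, se) + 0.3 * se  # bias term grows with se
    res = sm.OLS(effect / se, sm.add_constant(1.0 / se)).fit()
    print("Egger intercept = %.3f (p = %.3f)" % (res.params[0], res.pvalues[0]))
    print("bias-adjusted elasticity (slope) = %.3f" % res.params[1])
    ```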

  9. Spatio-temporal modelling of wind speed variations and extremes in the Caribbean and the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Rychlik, Igor; Mao, Wengang

    2018-02-01

    The wind speed variability in the North Atlantic has been successfully modelled using a spatio-temporal transformed Gaussian field. However, this type of model does not correctly describe the extreme wind speeds attributed to tropical storms and hurricanes. In this study, the transformed Gaussian model is further developed to include the occurrence of severe storms. In this new model, random components are added to the transformed Gaussian field to model rare events with extreme wind speeds. The resulting random field is locally stationary and homogeneous. The localized dependence structure is described by time- and space-dependent parameters. The parameters have a natural physical interpretation. To exemplify its application, the model is fitted to the ECMWF ERA-Interim reanalysis data set. The model is applied to compute long-term wind speed distributions and return values, e.g., 100- or 1000-year extreme wind speeds, and to simulate random wind speed time series at a fixed location or spatio-temporal wind fields around that location.

  10. Estimating the efficacy of Alcoholics Anonymous without self-selection bias: An instrumental variables re-analysis of randomized clinical trials

    PubMed Central

    Humphreys, Keith; Blodgett, Janet C.; Wagner, Todd H.

    2014-01-01

    Background: Observational studies of Alcoholics Anonymous' (AA) effectiveness are vulnerable to self-selection bias because individuals choose whether or not to attend AA. The present study therefore employed an innovative statistical technique to derive a selection bias-free estimate of AA's impact. Methods: Six datasets from 5 National Institutes of Health-funded randomized trials (one with two independent parallel arms) of AA facilitation interventions were analyzed using instrumental variables models. Alcohol dependent individuals in one of the datasets (n = 774) were analyzed separately from the rest of the sample (n = 1582 individuals pooled from 5 datasets) because of heterogeneity in sample parameters. Randomization itself was used as the instrumental variable. Results: Randomization was a good instrument in both samples, effectively predicting increased AA attendance that could not be attributed to self-selection. In five of the six data sets, which were pooled for analysis, increased AA attendance that was attributable to randomization (i.e., free of self-selection bias) was effective at increasing days of abstinence at 3-month (B = 0.38, p = 0.001) and 15-month (B = 0.42, p = 0.04) follow-up. However, in the remaining dataset, in which pre-existing AA attendance was much higher, further increases in AA involvement caused by the randomly assigned facilitation intervention did not affect drinking outcome. Conclusions: For most individuals seeking help for alcohol problems, increasing AA attendance leads to short and long term decreases in alcohol consumption that cannot be attributed to self-selection. However, for populations with high pre-existing AA involvement, further increases in AA attendance may have little impact. PMID:25421504
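
    A sketch of the instrumental-variables logic on synthetic data, with randomization instrumenting for AA attendance; two-stage least squares is written out by hand, and all numbers except the quoted 0.38 effect size are invented:

    ```python
    # Sketch: randomization Z instruments for attendance A, removing the
    # self-selection confound that biases naive OLS.
    import numpy as np

    rng = np.random.default_rng(8)
    n = 1582
    Z = rng.integers(0, 2, n)                  # random assignment
    motivation = rng.normal(size=n)            # unobserved confounder
    A = 5 + 3 * Z + 2 * motivation + rng.normal(0, 1, n)       # attendance
    days_abstinent = 10 + 0.38 * A + 4 * motivation + rng.normal(0, 2, n)

    X = np.column_stack([np.ones(n), A])
    W = np.column_stack([np.ones(n), Z])       # instrument matrix
    # Stage 1: project A on Z; Stage 2: regress outcome on fitted A.
    A_hat = W @ np.linalg.lstsq(W, A, rcond=None)[0]
    X_hat = np.column_stack([np.ones(n), A_hat])
    beta = np.linalg.lstsq(X_hat, days_abstinent, rcond=None)[0]
    print("IV estimate of AA effect: %.3f (true 0.38)" % beta[1])

    ols = np.linalg.lstsq(X, days_abstinent, rcond=None)[0]
    print("naive OLS estimate (confounded): %.3f" % ols[1])
    ```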

  11. Estimating the efficacy of Alcoholics Anonymous without self-selection bias: an instrumental variables re-analysis of randomized clinical trials.

    PubMed

    Humphreys, Keith; Blodgett, Janet C; Wagner, Todd H

    2014-11-01

    Observational studies of Alcoholics Anonymous' (AA) effectiveness are vulnerable to self-selection bias because individuals choose whether or not to attend AA. The present study, therefore, employed an innovative statistical technique to derive a selection bias-free estimate of AA's impact. Six data sets from 5 National Institutes of Health-funded randomized trials (1 with 2 independent parallel arms) of AA facilitation interventions were analyzed using instrumental variables models. Alcohol-dependent individuals in one of the data sets (n = 774) were analyzed separately from the rest of sample (n = 1,582 individuals pooled from 5 data sets) because of heterogeneity in sample parameters. Randomization itself was used as the instrumental variable. Randomization was a good instrument in both samples, effectively predicting increased AA attendance that could not be attributed to self-selection. In 5 of the 6 data sets, which were pooled for analysis, increased AA attendance that was attributable to randomization (i.e., free of self-selection bias) was effective at increasing days of abstinence at 3-month (B = 0.38, p = 0.001) and 15-month (B = 0.42, p = 0.04) follow-up. However, in the remaining data set, in which preexisting AA attendance was much higher, further increases in AA involvement caused by the randomly assigned facilitation intervention did not affect drinking outcome. For most individuals seeking help for alcohol problems, increasing AA attendance leads to short- and long-term decreases in alcohol consumption that cannot be attributed to self-selection. However, for populations with high preexisting AA involvement, further increases in AA attendance may have little impact. Copyright © 2014 by the Research Society on Alcoholism.

  12. Alternate methods for FAAT S-curve generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaufman, A.M.

    The FAAT (Foreign Asset Assessment Team) assessment methodology attempts to derive a probability of effect as a function of incident field strength. The probability of effect is the likelihood that the stress put on a system exceeds its strength. In the FAAT methodology, both the stress and strength are random variables whose statistical properties are estimated by experts. Each random variable has two components of uncertainty: systematic and random. The systematic uncertainty drives the confidence bounds in the FAAT assessment. Its variance can be reduced by improved information. The variance of the random uncertainty is not reducible. The FAAT methodology uses an assessment code called ARES to generate probability of effect curves (S-curves) at various confidence levels. ARES assumes log normal distributions for all random variables. The S-curves themselves are log normal cumulants associated with the random portion of the uncertainty. The placement of the S-curves depends on confidence bounds. The systematic uncertainty in both stress and strength is usually described by a mode and an upper and lower variance. Such a description is not consistent with the log normal assumption of ARES, and an unsatisfactory work-around is used to obtain the required placement of the S-curves at each confidence level. We have looked into this situation and have found that significant errors are introduced by this work-around. These errors are at least several dB-W/cm² at all confidence levels, but they are especially bad in the estimate of the median. In this paper, we suggest two alternate solutions for the placement of S-curves. To compare these calculational methods, we have tabulated the common combinations of upper and lower variances and generated the relevant S-curve offsets from the mode difference of stress and strength.
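
    Under the log-normal assumption described above, the probability of effect has a closed form; a small sketch (parameter values are illustrative, not FAAT data):

    ```python
    # Sketch: stress-strength reliability behind an S-curve. With
    # ln(stress) ~ N(m_s, s_s^2) and ln(strength) ~ N(m_r, s_r^2)
    # independent, P(effect) = P(stress > strength) has a closed form.
    import numpy as np
    from scipy.stats import norm

    def p_effect(m_s, s_s, m_r, s_r):
        """P(stress > strength) for independent log-normal stress/strength."""
        return norm.cdf((m_s - m_r) / np.hypot(s_s, s_r))

    # S-curve: sweep the median incident field strength on the log scale.
    for m_s in np.linspace(-2, 2, 5):
        print(f"m_s = {m_s:+.1f}  P(effect) = {p_effect(m_s, 0.5, 0.0, 0.7):.3f}")
    ```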

  13. Effect of multiplicative noise on stationary stochastic process

    NASA Astrophysics Data System (ADS)

    Kargovsky, A. V.; Chikishev, A. Yu.; Chichigina, O. A.

    2018-03-01

    An open system that can be analyzed using the Langevin equation with multiplicative noise is considered. The stationary state of the system results from a balance of deterministic damping and random pumping simulated as noise with controlled periodicity. The dependence of statistical moments of the variable that characterizes the system on parameters of the problem is studied. A nontrivial decrease in the mean value of the main variable with an increase in noise stochasticity is revealed. Applications of the results in several physical, chemical, biological, and technical problems of natural and humanitarian sciences are discussed.

  14. A Dasymetric-Based Monte Carlo Simulation Approach to the Probabilistic Analysis of Spatial Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morton, April M; Piburn, Jesse O; McManamay, Ryan A

    2017-01-01

    Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.
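
    A hedged sketch of the framework's core idea (hypothetical weights and totals): disaggregate a known regional population across cells with a dasymetric weighting, then let Monte Carlo draws carry the allocation uncertainty into the simulation:

    ```python
    # Sketch: dasymetric disaggregation feeding a Monte Carlo simulation.
    import numpy as np

    rng = np.random.default_rng(9)
    region_pop = 10_000                  # known total, unknown cell counts
    land_cover_weight = np.array([0.05, 0.15, 0.40, 0.30, 0.10])  # ancillary

    # Each draw is one plausible allocation of people to the 5 grid cells.
    draws = rng.multinomial(region_pop, land_cover_weight, size=5_000)

    # Feed each allocation into the downstream model (a toy per-capita one
    # here) and summarize the distribution of the output variable.
    output = draws[:, 2] * 1.3
    print("cell-3 output: mean=%.0f, 95%% CI=(%.0f, %.0f)"
          % (output.mean(), *np.percentile(output, [2.5, 97.5])))
    ```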

  15. Mum, why do you keep on growing? Impacts of environmental variability on optimal growth and reproduction allocation strategies of annual plants.

    PubMed

    De Lara, Michel

    2006-05-01

    In their 1990 paper "Optimal reproductive efforts and the timing of reproduction of annual plants in randomly varying environments," Amir and Cohen considered stochastic environments consisting of i.i.d. sequences in an optimal allocation discrete-time model. We suppose here that the sequence of environmental factors is more generally described by a Markov chain. Moreover, we discuss the connection between the time interval of the discrete-time dynamic model and the ability of the plant to rebuild its vegetative body completely (from reserves). We formulate a stochastic optimization problem covering the so-called linear and logarithmic fitness (corresponding to variation within and between years), which yields optimal strategies. For "linear maximizers," we analyse how optimal strategies depend upon the type of environmental variability: constant, random stationary, random i.i.d., random monotonous. We provide general patterns in terms of targets and thresholds, including both determinate and indeterminate growth. We also provide a partial result on the comparison between "linear maximizers" and "log maximizers." Numerical simulations are provided, giving a hint at the effect of the different mathematical assumptions.

  16. Non-random mating and convergence over time for alcohol consumption, smoking, and exercise: the Nord-Trøndelag Health Study.

    PubMed

    Ask, Helga; Rognmo, Kamilla; Torvik, Fartein Ask; Røysamb, Espen; Tambs, Kristian

    2012-05-01

    Spouses tend to have similar lifestyles. We explored the degree to which spouse similarity in alcohol use, smoking, and physical exercise is caused by non-random mating or convergence. We used data collected for the Nord-Trøndelag Health Study from 1984 to 1986 and prospective registry information about when and with whom people entered marriage/cohabitation between 1970 and 2000. Our sample included 19,599 married/cohabitating couples and 1,551 future couples that were to marry/cohabitate in the 14-16 years following data collection. All couples were grouped according to the duration between data collection and entering into marriage/cohabitation. Age-adjusted polychoric spouse correlations were used as the dependent variables in non-linear segmented regression analysis; the independent variable was time. The results indicate that spouse concordance in lifestyle is due to both non-random mating and convergence. Non-random mating appeared to be strongest for smoking. Convergence in alcohol use and smoking was evident during the period prior to marriage/cohabitation, whereas convergence in exercise was evident throughout life. Reduced spouse similarity in smoking with relationship duration may reflect secular trends.

  17. Pharmacodynamic effects of the fetal estrogen estetrol in postmenopausal women: results from a multiple-rising-dose study.

    PubMed

    Coelingh Bennink, Herjan J T; Verhoeven, Carole; Zimmerman, Yvette; Visser, Monique; Foidart, Jean-Michel; Gemzell-Danielsson, Kristina

    2017-06-01

    Estetrol (E4) is an estrogen produced exclusively by the human fetal liver during pregnancy. In this study the pharmacodynamic effects of escalating doses of E4 in postmenopausal women were investigated. This was a partly randomized, open-label, multiple-rising-dose study in 49 postmenopausal women. Participants were randomized to receive either 2 mg E4 or 2 mg estradiol-valerate (E2 V) for 28 days. Subsequent dose-escalation groups were (non-randomized): 10, 20 and 40 mg E4. Blood samples were collected regularly for measuring endocrine and hemostasis variables, lipids and lipoproteins, fasting glucose and bone turnover markers. Estetrol treatment resulted in a decrease of follicle-stimulating hormone and luteinizing hormone and an increase of sex-hormone binding globulin. Changes in hemostasis variables were small. A lowering effect on low-density lipoprotein cholesterol was accompanied with an increase in high-density lipoprotein cholesterol and no or minimal changes in triglycerides. The considerable decrease in osteocalcin levels in the three highest E4 dose groups and the small decrease in C-telopeptide levels were comparable to the E2 V control group and suggest a preventive effect on bone loss. All changes observed were dose-dependent. In this study, estetrol treatment showed dose-dependent estrogenic effects on endocrine parameters, bone turnover markers, and lipids and lipoproteins. The effect on triglycerides was small as were the effects on hemostatic variables. These results support the further investigation of estetrol as a candidate for hormone therapy. Quantitatively, the effects of 10 mg estetrol were similar to the study comparator 2 mg estradiol valerate.

  18. Population and prehistory III: food-dependent demography in variable environments.

    PubMed

    Lee, Charlotte T; Puleston, Cedric O; Tuljapurkar, Shripad

    2009-11-01

    The population dynamics of preindustrial societies depend intimately on their surroundings, and food is a primary means through which environment influences population size and individual well-being. Food production requires labor; thus, dependence of survival and fertility on food involves dependence of a population's future on its current state. We use a perturbation approach to analyze the effects of random environmental variation on this nonlinear, age-structured system. We show that in expanding populations, direct environmental effects dominate induced population fluctuations, so environmental variability has little effect on mean hunger levels, although it does decrease population growth. The growth rate determines the time until population is limited by space. This limitation introduces a tradeoff between population density and well-being, so population effects become more important than the direct effects of the environment: environmental fluctuation increases mortality, releasing density dependence and raising average well-being for survivors. We discuss the social implications of these findings for the long-term fate of populations as they transition from expansion into limitation, given that conditions leading to high well-being during growth depress well-being during limitation.

  19. Just entertainment: effects of TV series about intrigue on young adults

    PubMed Central

    Wang, Fei; Lin, Shengdong; Ke, Xue

    2015-01-01

    The potential harmful effects of media violence have been studied systematically and extensively. However, very little attention has been devoted to the intrigue and struggles between people depicted in the mass media. A longitudinal, randomized, experimental group-control group, pretest–posttest design was used to examine the potential effects of this type of TV series on young adults. A typical and popular TV series was selected as the stimulus. By scrutinizing the outline of this TV series, and inspired by studies of the effects of media violence, one behavioral observation and five scales were adopted as dependent measures. The study did not find any effect of the intrigue TV series on any of the six dependent variables. Finally, possible interfering variables and moderators are discussed. PMID:26029127

  20. Optimal positions and parameters of translational and rotational mass dampers in beams subjected to random excitation

    NASA Astrophysics Data System (ADS)

    Łatas, Waldemar

    2018-01-01

    This paper considers vibrations of a beam with an attached system of translational and rotational dynamic mass dampers, subjected to random excitations with peaked power spectral densities. The Euler-Bernoulli beam model is applied; the equation of motion is solved using the Galerkin method and the Laplace transform in time. The obtained transfer functions allow determination of the power spectral densities of the beam deflection and other dependent variables. Numerical examples present simple optimization problems for the mass damper parameters, with local and global objective functions.

  1. Which measures of cigarette dependence are predictors of smoking cessation during pregnancy? Analysis of data from a randomized controlled trial.

    PubMed

    Riaz, Muhammad; Lewis, Sarah; Coleman, Tim; Aveyard, Paul; West, Robert; Naughton, Felix; Ussher, Michael

    2016-09-01

    To examine the ability of different common measures of cigarette dependence to predict smoking cessation during pregnancy. Secondary analysis of data from a parallel-group randomized controlled trial of physical activity for smoking cessation. The outcomes were biochemically validated smoking abstinence at 4 weeks post-quit and end-of-pregnancy. Women identified as smokers in antenatal clinics in 13 hospital trusts predominantly in southern England, who were recruited to a smoking cessation trial. Of 789 pregnant smokers recruited, 784 were included in the analysis. Using random-effect logistic regression models, we analysed the effects of baseline measures of cigarette dependence, including numbers of cigarettes smoked daily, Fagerström Test of Cigarette Dependence (FTCD) score, the two FTCD subscales of Heaviness of Smoking Index (HSI) and non-Heaviness of Smoking Index (non-HSI), expired carbon monoxide (CO) level and urges to smoke (strength and frequency) on smoking cessation. Associations were adjusted for significant socio-demographic/health behaviour predictors and trial variables, and area under the receiver operating characteristic (ROC) curve was used to determine the predictive ability of the model for each measure of dependence. All the dependence variables predicted abstinence at 4 weeks and end-of-pregnancy. At 4 weeks, the adjusted odds ratio (OR) (95% confidence interval) for a unit standard deviation increase in FTCD was 0.59 (0.47-0.74), expired CO = 0.54 (0.41-0.71), number of cigarettes smoked per day 0.65 (0.51-0.84) and frequency of urges to smoke 0.79 (0.63-0.98); at end-of-pregnancy they were: 0.60 (0.45-0.81), 0.55 (0.37-0.80), 0.70 (0.49-0.98) and 0.69 (0.51-0.94), respectively. HSI and non-HSI exhibited similar results to the full FTCD. Four common measures of dependence, including number of cigarettes smoked per day, scores for Fagerström Test of Cigarette Dependence and frequency of urges and level of expired CO, all predicted smoking abstinence in the short term during pregnancy and at end-of-pregnancy with very similar predictive validity. © 2016 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.

  2. Some Correlates of Risky Sexual Behavior among Secondary School Adolescents in Ogun State, Nigeria

    ERIC Educational Resources Information Center

    Adeyemo, D. A.; Williams, T. M.

    2009-01-01

    The purpose of the study is to examine factors associated with risky sexual behaviors among secondary school adolescents in Ogun State, Nigeria. Two hundred and fifty adolescents randomly selected from three schools participated in the study. The ages of the participants ranged from 13 to 18 years. Both the independent and dependent variables were…

  3. Some Tours Are More Equal than Others: The Convex-Hull Model Revisited with Lessons for Testing Models of the Traveling Salesperson Problem

    ERIC Educational Resources Information Center

    Tak, Susanne; Plaisier, Marco; van Rooij, Iris

    2008-01-01

    To explain human performance on the "Traveling Salesperson" problem (TSP), MacGregor, Ormerod, and Chronicle (2000) proposed that humans construct solutions according to the steps described by their convex-hull algorithm. Focusing on tour length as the dependent variable, and using only random or semirandom point sets, the authors…

  4. High resolution satellite remote sensing used in a stratified random sampling scheme to quantify the constituent land cover components of the shifting cultivation mosaic of the Democratic Republic of Congo

    NASA Astrophysics Data System (ADS)

    Molinario, G.; Hansen, M.; Potapov, P.

    2016-12-01

    High resolution satellite imagery obtained from the National Geospatial-Intelligence Agency through NASA was used to photo-interpret sample areas within the DRC. The sampled area is a stratification of forest cover loss from circa 2014 that occurred either completely within the previously mapped homogeneous area of the Rural Complex, at its interface with primary forest, or in isolated forest perforations. Previous research resulted in a map of these areas that contextualizes forest loss depending on where it occurs and with what spatial density, leading to a better understanding of the real impact of livelihood shifting cultivation on forest degradation. The stratified random sampling of these areas allows characterization of the constituent land cover types within them, and of their variability throughout the DRC. Shifting cultivation has a variable forest degradation footprint in the DRC depending on the many factors that drive it, but its role in forest degradation and deforestation has been disputed, leading us to investigate and quantify clearing and reuse rates within the strata throughout the country.

  5. Parameter identification using a creeping-random-search algorithm

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.

    1971-01-01

    A creeping-random-search algorithm is applied to different types of problems in the field of parameter identification. The studies are intended to demonstrate that a random-search algorithm can be applied successfully to these various problems, which often cannot be handled by conventional deterministic methods, and, also, to introduce methods that speed convergence to an extremal of the problem under investigation. Six two-parameter identification problems with analytic solutions are solved, and two application problems are discussed in some detail. Results of the study show that a modified version of the basic creeping-random-search algorithm chosen does speed convergence in comparison with the unmodified version. The results also show that the algorithm can successfully solve problems that contain limits on state or control variables, inequality constraints (both independent and dependent, and linear and nonlinear), or stochastic models.
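
    A sketch of a creeping-random-search step with a convergence-speeding modification (this is a generic reconstruction, not NASA's original code; the shrink/expand factors are assumptions):

    ```python
    # Sketch: creeping random search. Keep a best point, propose Gaussian
    # perturbations, accept improvements, and adapt the step size.
    import numpy as np

    def creeping_random_search(f, x0, sigma=1.0, iters=2000, seed=0):
        rng = np.random.default_rng(seed)
        x, fx = np.asarray(x0, float), f(x0)
        for _ in range(iters):
            cand = x + rng.normal(0, sigma, size=x.shape)
            fc = f(cand)
            if fc < fx:                       # creep toward any improvement
                x, fx = cand, fc
                sigma *= 1.2                  # expand after a success
            else:
                sigma = max(sigma * 0.98, 1e-6)  # tighten after a failure
        return x, fx

    # Example: identify two parameters by minimizing a fit criterion.
    target = np.array([1.5, -0.7])
    loss = lambda p: np.sum((p - target) ** 2)
    print(creeping_random_search(loss, x0=[0.0, 0.0]))
    ```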

  6. Stochastic species abundance models involving special copulas

    NASA Astrophysics Data System (ADS)

    Huillet, Thierry E.

    2018-01-01

    Copulas offer a very general tool to describe the dependence structure of random variables supported by the hypercube. Inspired by problems of species abundances in Biology, we study three distinct toy models where copulas play a key role. In a first one, a Marshall-Olkin copula arises in a species extinction model with catastrophe. In a second one, a quasi-copula problem arises in a flagged species abundance model. In a third model, we study completely random species abundance models in the hypercube as those, not of product type, with uniform margins and singular. These can be understood from a singular copula supported by an inflated simplex. An exchangeable singular Dirichlet copula is also introduced, together with its induced completely random species abundance vector.

  7. A stochastic hybrid systems based framework for modeling dependent failure processes

    PubMed Central

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods. PMID:28231313
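
The FOSM step in particular is simple enough to illustrate. A minimal sketch (all numbers invented, not from the paper) of first-order second-moment reliability for a linear limit state g(X) = R - S, the same kind of estimate the framework builds from the conditional moments it derives:

```python
import numpy as np
from scipy.stats import norm

# FOSM reliability for a linear limit state g(X) = a @ X (failure when
# g < 0), using only the first two moments of X = (R, S).
mu = np.array([100.0, 70.0])            # means of resistance R and stress S
cov = np.array([[25.0, 5.0],
                [5.0, 16.0]])           # covariance of (R, S)
a = np.array([1.0, -1.0])               # g = R - S

mu_g = a @ mu                           # mean of the limit state
sigma_g = np.sqrt(a @ cov @ a)          # std. dev. of the limit state
beta = mu_g / sigma_g                   # reliability index
print("reliability ~", norm.cdf(beta))  # first-order estimate of P(g > 0)
```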

  8. A stochastic hybrid systems based framework for modeling dependent failure processes.

    PubMed

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods.

  9. Influence of hydroxypropyl methylcellulose on drug release pattern of a gastroretentive floating drug delivery system using a $3^2$ full factorial design.

    PubMed

    Swain, Kalpana; Pattnaik, Satyanarayan; Mallick, Subrata; Chowdary, Korla Appana

    2009-01-01

In the present investigation, a controlled release gastroretentive floating drug delivery system of theophylline was developed employing response surface methodology. A $3^2$ randomized full factorial design was used to study the effect of formulation variables, namely the viscosity grade and content of hydroxypropyl methylcellulose (HPMC), and their interactions on the response variables. The floating lag time for all nine experimental trial batches was less than 2 min, and the floatation time was more than 12 h. Theophylline release from the polymeric matrix system followed non-Fickian anomalous transport. Multiple regression analysis revealed that both the viscosity and the content of HPMC had a statistically significant influence on all dependent variables, but the effect of these variables was found to be nonlinear above certain threshold values.

  10. Determining Scale-dependent Patterns in Spatial and Temporal Datasets

    NASA Astrophysics Data System (ADS)

    Roy, A.; Perfect, E.; Mukerji, T.; Sylvester, L.

    2016-12-01

Spatial and temporal datasets of interest to Earth scientists often contain plots of one variable against another, e.g., rainfall magnitude vs. time or fracture aperture vs. spacing. Such data, comprised of distributions of events along a transect / timeline along with their magnitudes, can display persistent or antipersistent trends, as well as random behavior, that may contain signatures of underlying physical processes. Lacunarity is a technique that was originally developed for multiscale analysis of data. In a recent study we showed that lacunarity can be used for revealing changes in scale-dependent patterns in fracture spacing data. Here we present a further improvement of our technique, with lacunarity applied to various non-binary datasets comprised of event spacings and magnitudes. We test our technique on a set of four synthetic datasets, three of which are based on an autoregressive model and have magnitudes at every point along the "timeline", thus representing antipersistent, persistent, and random trends. The fourth dataset is made up of five clusters of events, each containing a set of random magnitudes. The concept of the lacunarity ratio, LR, is introduced; this is the lacunarity of a given dataset normalized to the lacunarity of its random counterpart. It is demonstrated that LR can successfully delineate scale-dependent changes in terms of antipersistence and persistence in the synthetic datasets. This technique is then applied to three different types of data: a hundred-year rainfall record from Knoxville, TN, USA, a set of varved sediments from the Marca Shale, and a set of fracture aperture and spacing data from NE Mexico. While the rainfall data and varved sediments both appear to be persistent at small scales, at larger scales they both become random. On the other hand, the fracture data show antipersistence at small scales (within clusters) and random behavior at large scales. Such differences in scale-dependent behavior, whether from antipersistence to random, from persistence to random, or otherwise, may be related to differences in the physicochemical properties and processes contributing to multiscale datasets.
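
The LR statistic is easy to state concretely. A minimal sketch (gliding-box lacunarity for a 1-D magnitude series, with invented box sizes and synthetic data, not the authors' code):

```python
import numpy as np

def lacunarity(series, box_size):
    """Gliding-box lacunarity: second moment of the box masses divided
    by the squared first moment."""
    s = np.convolve(series, np.ones(box_size), mode="valid")  # box masses
    return np.mean(s ** 2) / np.mean(s) ** 2

def lacunarity_ratio(series, box_size, n_shuffles=200, seed=0):
    """LR: lacunarity of the data normalized by the mean lacunarity of
    shuffled (random-counterpart) versions of the same magnitudes."""
    rng = np.random.default_rng(seed)
    lam_rand = np.mean([lacunarity(rng.permutation(series), box_size)
                        for _ in range(n_shuffles)])
    return lacunarity(series, box_size) / lam_rand

# Persistent synthetic magnitudes: LR departs from 1 across box sizes.
rng = np.random.default_rng(1)
persistent = np.cumsum(rng.normal(size=1000)) ** 2
print([round(lacunarity_ratio(persistent, r), 3) for r in (4, 16, 64)])
```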

  11. Benchmarking dairy herd health status using routinely recorded herd summary data.

    PubMed

    Parker Gaddis, K L; Cole, J B; Clay, J S; Maltecca, C

    2016-02-01

    Genetic improvement of dairy cattle health through the use of producer-recorded data has been determined to be feasible. Low estimated heritabilities indicate that genetic progress will be slow. Variation observed in lowly heritable traits can largely be attributed to nongenetic factors, such as the environment. More rapid improvement of dairy cattle health may be attainable if herd health programs incorporate environmental and managerial aspects. More than 1,100 herd characteristics are regularly recorded on farm test-days. We combined these data with producer-recorded health event data, and parametric and nonparametric models were used to benchmark herd and cow health status. Health events were grouped into 3 categories for analyses: mastitis, reproductive, and metabolic. Both herd incidence and individual incidence were used as dependent variables. Models implemented included stepwise logistic regression, support vector machines, and random forests. At both the herd and individual levels, random forest models attained the highest accuracy for predicting health status in all health event categories when evaluated with 10-fold cross-validation. Accuracy (SD) ranged from 0.61 (0.04) to 0.63 (0.04) when using random forest models at the herd level. Accuracy of prediction (SD) at the individual cow level ranged from 0.87 (0.06) to 0.93 (0.001) with random forest models. Highly significant variables and key words from logistic regression and random forest models were also investigated. All models identified several of the same key factors for each health event category, including movement out of the herd, size of the herd, and weather-related variables. We concluded that benchmarking health status using routinely collected herd data is feasible. Nonparametric models were better suited to handle this complex data with numerous variables. These data mining techniques were able to perform prediction of health status and could add evidence to personal experience in herd management. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
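
The modeling step translates directly into standard tooling. A hedged sketch with synthetic stand-in data (the study's 1,100+ herd characteristics and outcome definitions are not reproduced here), using a random forest with 10-fold cross-validation as described:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: rows are herds, columns are routinely recorded
# herd characteristics; y flags high incidence for one event category.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))
y = (X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + rng.normal(size=500) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
acc = cross_val_score(rf, X, y, cv=10, scoring="accuracy")   # 10-fold CV
print(f"accuracy: {acc.mean():.2f} ({acc.std():.2f})")

# Feature importances play the role of the 'key factors' reported above.
rf.fit(X, y)
print(np.argsort(rf.feature_importances_)[::-1][:5])
```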

  12. Effectiveness Trial of Community-Based I Choose Life-Africa Human Immunodeficiency Virus Prevention Program in Kenya

    PubMed Central

    Adam, Mary B.

    2014-01-01

We measured the effectiveness of a human immunodeficiency virus (HIV) prevention program developed in Kenya and carried out among university students. A total of 182 student volunteers were randomized into an intervention group who received a 32-hour training course as HIV prevention peer educators and a control group who received no training. Repeated measures assessed HIV-related attitudes, intentions, knowledge, and behaviors four times over six months. Data were analyzed by using linear mixed models to compare the rate of change on 13 dependent variables that examined sexual risk behavior. Based on multi-level models, the slope coefficients for four variables showed reliable change in the hoped-for direction: abstinence from oral, vaginal, or anal sex in the last two months, condom attitudes, HIV testing, and refusal skill. The intervention demonstrated evidence of non-zero slope coefficients in the hoped-for direction on 12 of 13 dependent variables. The intervention reduced sexual risk behavior. PMID:24957544

  13. Effectiveness trial of community-based I Choose Life-Africa human immunodeficiency virus prevention program in Kenya.

    PubMed

    Adam, Mary B

    2014-09-01

We measured the effectiveness of a human immunodeficiency virus (HIV) prevention program developed in Kenya and carried out among university students. A total of 182 student volunteers were randomized into an intervention group who received a 32-hour training course as HIV prevention peer educators and a control group who received no training. Repeated measures assessed HIV-related attitudes, intentions, knowledge, and behaviors four times over six months. Data were analyzed by using linear mixed models to compare the rate of change on 13 dependent variables that examined sexual risk behavior. Based on multi-level models, the slope coefficients for four variables showed reliable change in the hoped-for direction: abstinence from oral, vaginal, or anal sex in the last two months, condom attitudes, HIV testing, and refusal skill. The intervention demonstrated evidence of non-zero slope coefficients in the hoped-for direction on 12 of 13 dependent variables. The intervention reduced sexual risk behavior. © The American Society of Tropical Medicine and Hygiene.

  14. Two-key concurrent responding: response-reinforcement dependencies and blackouts

    PubMed Central

    Herbert, Emily W.

    1970-01-01

    Two-key concurrent responding was maintained for three pigeons by a single variable-interval 1-minute schedule of reinforcement in conjunction with a random number generator that assigned feeder operations between keys with equal probability. The duration of blackouts was varied between keys when each response initiated a blackout, and grain arranged by the variable-interval schedule was automatically presented after a blackout (Exp. I). In Exp. II every key peck, except for those that produced grain, initiated a blackout, and grain was dependent upon a response following a blackout. For each pigeon in Exp. I and for one pigeon in Exp. II, the relative frequency of responding on a key approximated, i.e., matched, the relative reciprocal of the duration of the blackout interval on that key. In a third experiment, blackouts scheduled on a variable-interval were of equal duration on the two keys. For one key, grain automatically followed each blackout; for the other key, grain was dependent upon a response and never followed a blackout. The relative frequency of responding on the former key, i.e., the delay key, better approximated the negative exponential function obtained by Chung (1965) than the matching function predicted by Chung and Herrnstein (1967). PMID:16811458

  15. Correlates of blood pressure in young insulin-dependent diabetics and their families.

    PubMed

    Tarn, A C; Thomas, J M; Drury, P L

    1990-09-01

    We compared the correlates of blood pressure in 163 young patients with insulin-dependent diabetes and in 232 of their non-diabetic siblings. A single observer recorded blood pressure in all subjects, plus all their available parents, using a standardized technique. Other variables recorded included age, weight, height, presence of diabetes and urinary albumin. The major factors accounting for over 50% of the variance of systolic blood pressure (SBP) in both groups were age, weight, paternal SBP and sex. In addition, in the diabetic group the logarithm of the random urinary albumin concentration was a significant explanatory variable. For diastolic blood pressure (DBP) approximately 16% of the variance was explained by age, weight and maternal DBP. Parental blood pressure was an important determinant of blood pressure in both the diabetic and non-diabetic sibling groups. The similarity of the correlates of blood pressure in the two groups suggests that the determinants of blood pressure in young insulin-dependent diabetic patients and in the general population are similar.

  16. ANCOVA Versus CHANGE From Baseline in Nonrandomized Studies: The Difference.

    PubMed

    van Breukelen, Gerard J P

    2013-11-01

    The pretest-posttest control group design can be analyzed with the posttest as dependent variable and the pretest as covariate (ANCOVA) or with the difference between posttest and pretest as dependent variable (CHANGE). These 2 methods can give contradictory results if groups differ at pretest, a phenomenon that is known as Lord's paradox. Literature claims that ANCOVA is preferable if treatment assignment is based on randomization or on the pretest and questionable for preexisting groups. Some literature suggests that Lord's paradox has to do with measurement error in the pretest. This article shows two new things: First, the claims are confirmed by proving the mathematical equivalence of ANCOVA to a repeated measures model without group effect at pretest. Second, correction for measurement error in the pretest is shown to lead back to ANCOVA or to CHANGE, depending on the assumed absence or presence of a true group difference at pretest. These two new theoretical results are illustrated with multilevel (mixed) regression and structural equation modeling of data from two studies.
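
The two competing analyses juxtapose neatly in code. A sketch with simulated nonrandomized groups (all numbers invented) that exhibits Lord's paradox:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# ANCOVA (posttest on group, pretest as covariate) vs. CHANGE
# (post - pre on group) on the same pretest-posttest data.
rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, n)                 # preexisting groups that
pre = rng.normal(50 + 5 * group, 10, n)       # already differ at pretest
post = 0.6 * pre + 3 * group + rng.normal(0, 8, n)
df = pd.DataFrame({"group": group, "pre": pre, "post": post,
                   "change": post - pre})

ancova = smf.ols("post ~ pre + group", data=df).fit()
change = smf.ols("change ~ group", data=df).fit()
# With a pretest group difference the two estimates disagree.
print(ancova.params["group"], change.params["group"])
```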

  17. Evaluation of family intervention through unobtrusive audio recordings: experiences in "bugging" children.

    PubMed

    Johnson, S M; Christensen, A; Bellamy, G T

    1976-01-01

    Five children referred to a child-family intervention program wore a radio transmitter in the home during pre-intervention and termination assessments. The transmitter broadcast to a receiver-recording apparatus in the home (either activated by an interval timer at predetermined "random" times or by parents at predetermined "picked" times). "Picked" times were parent-selected situations during which problems typically occurred (e.g., bedtime). Parents activated the recorder regularly whether or not problems occurred. Child-deviant, parent-negative, and parent-commanding behaviors were significantly higher at the picked times during pretest than at random times. At posttest, behaviors in all three classes were substantially reduced at picked times, but not at random times. For individual subject data, reductions occurred in at least two of the three dependent variables for three of the five cases during random time assessments. In general, the behavioral outcome data corresponded to parent-attitude reports and parent-collected observation data.

  18. A Model for Pharmacological Research-Treatment of Cocaine Dependence

    PubMed Central

    Montoya, Ivan D.; Hess, Judith M.; Preston, Kenzie L.; Gorelick, David A.

    2008-01-01

    Major problems for research on pharmacological treatments for cocaine dependence are lack of comparability of results from different treatment research programs and poor validity and/or reliability of results. Double-blind, placebo-controlled, random assignment, experimental designs, using standard intake and assessment procedures help to reduce these problems. Cessation or reduction of drug use and/or craving, retention in treatment, and medical and psychosocial improvement are some of the outcome variables collected in treatment research programs. A model to be followed across different outpatient clinical trials for pharmacological treatment of cocaine dependence is presented here. This model represents an effort to standardize data collection to make results more valid and comparable. PMID:8749725

  19. Contextuality in canonical systems of random variables

    NASA Astrophysics Data System (ADS)

    Dzhafarov, Ehtibar N.; Cervantes, Víctor H.; Kujala, Janne V.

    2017-10-01

    Random variables representing measurements, broadly understood to include any responses to any inputs, form a system in which each of them is uniquely identified by its content (that which it measures) and its context (the conditions under which it is recorded). Two random variables are jointly distributed if and only if they share a context. In a canonical representation of a system, all random variables are binary, and every content-sharing pair of random variables has a unique maximal coupling (the joint distribution imposed on them so that they coincide with maximal possible probability). The system is contextual if these maximal couplings are incompatible with the joint distributions of the context-sharing random variables. We propose to represent any system of measurements in a canonical form and to consider the system contextual if and only if its canonical representation is contextual. As an illustration, we establish a criterion for contextuality of the canonical system consisting of all dichotomizations of a single pair of content-sharing categorical random variables. This article is part of the themed issue `Second quantum revolution: foundational questions'.
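
The maximal coupling at the heart of the criterion has a simple closed form for binary (or any discrete) variables: the maximal probability of coincidence is the sum of the pointwise minima of the two pmfs. A tiny sketch (numbers invented):

```python
import numpy as np

def max_coincidence(p, q):
    """Probability that two variables with pmfs p and q coincide under
    their maximal coupling: sum_i min(p_i, q_i) = 1 - TV(p, q)."""
    return float(np.minimum(p, q).sum())

# Two binary content-sharing measurements from different contexts.
p = np.array([0.7, 0.3])   # P(V = 0), P(V = 1) in context 1
q = np.array([0.5, 0.5])   # P(V = 0), P(V = 1) in context 2
print(max_coincidence(p, q))   # 0.8
```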

  20. A Fast Numerical Method for Max-Convolution and the Application to Efficient Max-Product Inference in Bayesian Networks.

    PubMed

    Serang, Oliver

    2015-08-01

Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from $nk^2$ to $nk \log(k)$, and has potential application to the all-pairs shortest paths problem.
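
As I read the abstract, the key idea is to replace each maximum by a p-norm, which turns max-convolution into an ordinary convolution of elementwise powers. A hedged sketch (this uses np.convolve, which is direct O(k^2), for clarity; an FFT-based convolution yields the stated O(k log k)):

```python
import numpy as np

def max_convolve_brute(a, b):
    """Exact max-convolution: m[k] = max over i + j = k of a[i] * b[j]."""
    m = np.zeros(len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        m[i:i + len(b)] = np.maximum(m[i:i + len(b)], ai * b)
    return m

def max_convolve_pnorm(a, b, p=64):
    """Numerical max-convolution: approximate each max by a p-norm, so
    m[k] ~ (sum_{i+j=k} (a[i] * b[j])**p)**(1/p); the inner sums are an
    ordinary convolution of a**p and b**p, hence FFT-able."""
    return np.convolve(a ** p, b ** p) ** (1.0 / p)

a = np.random.default_rng(0).random(100)
b = np.random.default_rng(1).random(100)
err = np.max(np.abs(max_convolve_pnorm(a, b) - max_convolve_brute(a, b)))
print(err)   # small, and shrinking as p grows
```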

  1. Demonstration of the Application of Composite Load Spectra (CLS) and Probabilistic Structural Analysis (PSAM) Codes to SSME Heat Exchanger Turnaround Vane

    NASA Technical Reports Server (NTRS)

    Rajagopal, Kadambi R.; DebChaudhury, Amitabha; Orient, George

    2000-01-01

This report describes a probabilistic structural analysis performed to determine the probabilistic structural response under fluctuating random pressure loads for the Space Shuttle Main Engine (SSME) turnaround vane. It uses a newly developed frequency- and distance-dependent correlation model that has features to model the decay phenomena along the flow and across the flow, with the capability to introduce a phase delay. The analytical results are compared using two computer codes, SAFER (Spectral Analysis of Finite Element Responses) and NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), and with experimentally observed strain gage data. The computer code NESSUS, with an interface to a subset of the Composite Load Spectra (CLS) code, is used for the probabilistic analysis. A fatigue code was used to calculate fatigue damage due to the random pressure excitation. The random variables modeled include engine system primitive variables that influence the operating conditions, convection velocity coefficient, stress concentration factor, structural damping, and thickness of the inner and outer vanes. The need for an appropriate correlation model in addition to the magnitude of the PSD is emphasized. The study demonstrates that correlation characteristics, even under random pressure loads, are capable of causing resonance-like effects for some modes. The study identifies the important variables that contribute to the structural alternate stress response and drive the fatigue damage for the new design. Since the alternate stress for the new redesign is less than the endurance limit for the material, the damage due to high-cycle fatigue is negligible.

  2. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape

    PubMed Central

    Coupé, Christophe

    2018-01-01

    As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for ‘difficult’ variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we assess a range of candidate distributions, including the Sichel, Delaporte, Box-Cox Green and Cole, and Box-Cox t distributions. We find that the Box-Cox t distribution, with appropriate modeling of its parameters, best fits the conditional distribution of phonemic inventory size. We finally discuss the specificities of phoneme counts, weak effects, and how GAMLSS should be considered for other linguistic variables. PMID:29713298
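
One step of such an analysis has a simple Python stand-in (hedged: the article itself relies on R's gamlss, and GAMLSS additionally models scale and shape, which a plain GLM does not). A negative binomial GLM for overdispersed counts, with invented data:

```python
import numpy as np
import statsmodels.api as sm

# Overdispersed counts (inventory-size-like) regressed on one predictor.
rng = np.random.default_rng(0)
n = 1500
log_speakers = rng.normal(10, 2, n)                  # log population size
mu = np.exp(2.5 + 0.08 * log_speakers)               # true conditional mean
counts = rng.negative_binomial(n=5, p=5 / (5 + mu))  # overdispersed outcome

X = sm.add_constant(log_speakers)
nb = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.2)).fit()
print(nb.params)   # slope estimate near the true 0.08
```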

  3. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape.

    PubMed

    Coupé, Christophe

    2018-01-01

    As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we assess a range of candidate distributions, including the Sichel, Delaporte, Box-Cox Green and Cole, and Box-Cox t distributions. We find that the Box-Cox t distribution, with appropriate modeling of its parameters, best fits the conditional distribution of phonemic inventory size. We finally discuss the specificities of phoneme counts, weak effects, and how GAMLSS should be considered for other linguistic variables.

  4. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    NASA Astrophysics Data System (ADS)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.

  5. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given $L_p$ norm (i.e., a given p-th absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the $L_p$ norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight-line relationship between the maximum differential entropy and the logarithm of the $L_p$ norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed-form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
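
The unconstrained continuous case can be reconstructed from standard maximum-entropy arguments (a sketch under my reading of the abstract; the constant should be checked against the report). The maximizer is a generalized Gaussian, and the maximum differential entropy is linear in the logarithm of the $L_p$ norm:

```latex
\[
  f^{*}(x) \;=\; \frac{e^{-(|x|/\alpha)^{p}}}{2\,\alpha\,\Gamma(1+1/p)},
  \qquad
  \mathbb{E}\lvert X\rvert^{p} = \frac{\alpha^{p}}{p},
  \qquad
  \lVert X\rVert_{p} = \alpha\, p^{-1/p},
\]
\[
  h^{*} \;=\; \ln \lVert X\rVert_{p} \;+\; \frac{1+\ln p}{p}
  \;+\; \ln\!\bigl(2\,\Gamma(1+1/p)\bigr).
\]
```

For $p = 2$ this recovers the Gaussian result $h^{*} = \tfrac{1}{2}\ln(2\pi e\,\sigma^{2})$, consistent with the straight-line relationship described above.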

  6. Cognitive impairment and dependence of patients with diabetes older than 65 years old in an urban area (DERIVA study).

    PubMed

    Rodríguez-Sánchez, Emiliano; Mora-Simón, Sara; Patino-Alonso, María C; Pérez-Arechaederra, Diana; Recio-Rodríguez, José I; Gómez-Marcos, Manuel A; Valero-Juan, Luis F; García-Ortiz, Luis

    2016-02-01

We analyzed the associations between diabetes and cognitive impairment (CI) and dependence in a population of patients 65 years or older. Cross-sectional study. We randomly selected 311 participants over the age of 65 living in an urban area of Spain. The mean age of the cohort was 75.89 ± 7.12 years, and 69 of the individuals (22.2%) had diabetes. Two questionnaires were used to assess cognitive performance (MMSE and Seven Minute Screen Test), and two assessments were used to evaluate patient dependence (Barthel Index and Lawton-Brody Index). Clinical information and sociodemographic data were also gathered. Nearly one quarter of the patients with diabetes (21.7%) lived alone. Diabetic patients were more sedentary (p = .033) than non-diabetic patients. Roughly one sixth (15.3%) of the diabetics and 10.1% of the non-diabetics were depressed (p = .332). CI was present in 26.1% of the diabetics and 14.5% of non-diabetics (p = .029). Diabetic patients had a MMSE score that was significantly worse than non-diabetics (24.88 ± 4.74 vs 26.05 ± 4.03; p < .05), but no differences were found in the Seven Minute Screen Test. Logistic regressions revealed that the presence of diabetes was independently associated with CI, adjusted for age, gender, years of education, sedentary lifestyle, body mass index, diastolic blood pressure, cholesterol, and depression (OR = 2.940, p = .013). Patients with diabetes showed greater dependence, as measured by the Barthel Index (p = .03) and the Lawton-Brody Index (p < .01). Nevertheless, when dependence (dependent or not dependent on each questionnaire) was used as the dependent variable in the logistic regression analyses, no significant association with diabetes was found after adjusting for confounding variables. Diabetic patients over the age of 65 are more likely to present CI but not dependence. These findings support the need to include both a functional and cognitive assessment as necessary components in a standard evaluation in both clinical guides and randomized trials of therapeutic interventions in patients with diabetes.

  7. Indirect costs of teaching in Canadian hospitals.

    PubMed Central

    MacKenzie, T A; Willan, A R; Cox, M A; Green, A

    1991-01-01

    We sought to determine whether there are indirect costs of teaching in Canadian hospitals. To examine cost differences between teaching and nonteaching hospitals we estimated two cost functions: cost per case and cost per patient-day (dependent variables). The independent variables were number of beds, occupancy rate, teaching ratio (number of residents and interns per 100 beds), province, urbanicity (the population density of the county in which the hospital was situated) and wage index. Within each hospital we categorized a random sample of patient discharges according to case mix and severity of illness using age and standard diagnosis and procedure codes. Teaching ratio and case severity were each highly correlated positively with the dependent variables. The other variables that led to higher costs in teaching hospitals were wage rates and number of beds. Our regression model could serve as the basis of a reimbursement system, adjusted for severity and teaching status, particularly in provinces moving toward introducing case-weighting mechanisms into their payment model. Even if teaching hospitals were paid more than nonteaching hospitals because of the difference in the severity of illness there should be an additional allowance to cover the indirect costs of teaching. PMID:1898870

  8. Centre-related variability in hospital admissions of patients with spondyloarthritis.

    PubMed

    Andrés, Mariano; Sivera, Francisca; Pérez-Vicente, Sabina; Carmona, Loreto; Vela, Paloma

    2016-09-01

    The aim of this study was to explore the variability in hospital admissions of patients with spondyloarthritis (SpA) in Spain, and the centre factors that may influence that variability. Descriptive cross-sectional study, part of the emAR II study, performed in Spain (2009-2010). Health records of patients with a diagnosis of SpA and at least one visit to the rheumatology units within the previous 2 years were reviewed. Variables related to hospital admissions, to the SpA, and to the patient and centre were collected. A multilevel logistic regression analysis of random intercept with non-random slopes was performed to assess variability between centres. From 45 centres, 1168 patients' health records were reviewed. Main SpA forms were ankylosing spondylitis (55.2 %) and psoriatic arthritis (22.2 %). A total of 248 admissions were registered for 196 patients (19.2 %, n = 1020). An adjusted variability of 17.6 % in hospitalizations between centres was noted. The following hospital-related factors showed a significant association with admissions: the total number of admissions of the centre, the existence of electronic admission, and the availability of ultrasound in rheumatology. However, these factors only explained 42.9 % of the inter-centre variability. The risk of a patient with SpA of being admitted could double (median OR 2.09), depending on the hospital where the patient was being managed. Hospital admissions of patients with SpA varied between hospitals due to centre characteristics. Further studies are needed to ascertain which specific factors may be causing the variation, as studied variables explained less than half of the variability.

  9. Order priors for Bayesian network discovery with an application to malware phylogeny

    DOE PAGES

    Oyen, Diane; Anderson, Blake; Sentz, Kari; ...

    2017-09-15

Here, Bayesian networks have been used extensively to model and discover dependency relationships among sets of random variables. We learn Bayesian network structure with a combination of human knowledge about the partial ordering of variables and statistical inference of conditional dependencies from observed data. Our approach leverages complementary information from human knowledge and inference from observed data to produce networks that reflect human beliefs about the system as well as to fit the observed data. Applying prior beliefs about partial orderings of variables is an approach distinctly different from existing methods that incorporate prior beliefs about direct dependencies (or edges) in a Bayesian network. We provide an efficient implementation of the partial-order prior in a Bayesian structure discovery learning algorithm, as well as an edge prior, showing that both priors meet the local modularity requirement necessary for an efficient Bayesian discovery algorithm. In benchmark studies, the partial-order prior improves the accuracy of Bayesian network structure learning as well as the edge prior, even though order priors are more general. Our primary motivation is in characterizing the evolution of families of malware to aid cyber security analysts. For the problem of malware phylogeny discovery, we find that our algorithm, compared to existing malware phylogeny algorithms, more accurately discovers true dependencies that are missed by other algorithms.

  10. Order priors for Bayesian network discovery with an application to malware phylogeny

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oyen, Diane; Anderson, Blake; Sentz, Kari

Here, Bayesian networks have been used extensively to model and discover dependency relationships among sets of random variables. We learn Bayesian network structure with a combination of human knowledge about the partial ordering of variables and statistical inference of conditional dependencies from observed data. Our approach leverages complementary information from human knowledge and inference from observed data to produce networks that reflect human beliefs about the system as well as to fit the observed data. Applying prior beliefs about partial orderings of variables is an approach distinctly different from existing methods that incorporate prior beliefs about direct dependencies (or edges) in a Bayesian network. We provide an efficient implementation of the partial-order prior in a Bayesian structure discovery learning algorithm, as well as an edge prior, showing that both priors meet the local modularity requirement necessary for an efficient Bayesian discovery algorithm. In benchmark studies, the partial-order prior improves the accuracy of Bayesian network structure learning as well as the edge prior, even though order priors are more general. Our primary motivation is in characterizing the evolution of families of malware to aid cyber security analysts. For the problem of malware phylogeny discovery, we find that our algorithm, compared to existing malware phylogeny algorithms, more accurately discovers true dependencies that are missed by other algorithms.

  11. MIMICKING COUNTERFACTUAL OUTCOMES TO ESTIMATE CAUSAL EFFECTS.

    PubMed

    Lok, Judith J

    2017-04-01

    In observational studies, treatment may be adapted to covariates at several times without a fixed protocol, in continuous time. Treatment influences covariates, which influence treatment, which influences covariates, and so on. Then even time-dependent Cox-models cannot be used to estimate the net treatment effect. Structural nested models have been applied in this setting. Structural nested models are based on counterfactuals: the outcome a person would have had had treatment been withheld after a certain time. Previous work on continuous-time structural nested models assumes that counterfactuals depend deterministically on observed data, while conjecturing that this assumption can be relaxed. This article proves that one can mimic counterfactuals by constructing random variables, solutions to a differential equation, that have the same distribution as the counterfactuals, even given past observed data. These "mimicking" variables can be used to estimate the parameters of structural nested models without assuming the treatment effect to be deterministic.

  12. Azimuthal Dependence of the Ground Motion Variability from Scenario Modeling of the 2014 Mw6.0 South Napa, California, Earthquake Using an Advanced Kinematic Source Model

    NASA Astrophysics Data System (ADS)

    Gallovič, F.

    2017-09-01

Strong ground motion simulations require a physically plausible earthquake source model. Here, I present the application of such a kinematic model introduced originally by Ruiz et al. (Geophys J Int 186:226-244, 2011). The model is constructed to inherently provide synthetics with the desired omega-squared spectral decay in the full frequency range. The source is composed of randomly distributed overlapping subsources with a fractal number-size distribution. The position of the subsources can be constrained by prior knowledge of major asperities (stemming, e.g., from slip inversions), or can be completely random. From the earthquake physics point of view, the model includes a positive correlation between slip and rise time as found in dynamic source simulations. Rupture velocity and rise time follow the local S-wave velocity profile, so that the rupture slows down and rise times increase close to the surface, avoiding unrealistically strong ground motions. Rupture velocity can also have random variations, which result in an irregular rupture front while satisfying the causality principle. This advanced kinematic broadband source model is freely available and can be easily incorporated into any numerical wave propagation code, as the source is described by spatially distributed slip rate functions, not requiring any stochastic Green's functions. The source model has been previously validated against observed data from the very shallow unilateral 2014 Mw6 South Napa, California, earthquake; the model reproduces the observed data well, including the near-fault directivity (Seism Res Lett 87:2-14, 2016). The performance of the source model is demonstrated here through scenario simulations for the same event. In particular, synthetics are compared with existing ground motion prediction equations (GMPEs), emphasizing the azimuthal dependence of the between-event ground motion variability. I propose a simple model reproducing the azimuthal variations of the between-event ground motion variability, providing an insight into possible refinement of GMPEs' functional forms.

  13. Fluoridated elastomers: effect on the microbiology of plaque.

    PubMed

    Benson, Philip E; Douglas, C W Ian; Martin, Michael V

    2004-09-01

The objective of this study was to investigate the effect of fluoridated elastomeric ligatures on the microbiology of local dental plaque in vivo. This randomized, prospective, longitudinal, clinical trial had a split-mouth crossover design. The subjects were 30 patients at the beginning of their treatment with fixed orthodontic appliances in the orthodontic departments of the Liverpool and the Sheffield dental hospitals in the United Kingdom. The study consisted of 2 experimental periods of 6 weeks with a washout period between. Fluoridated elastomers were randomly allocated at the first visit to be placed around brackets on tooth numbers 12, 11, 33 or 22, 21, 43. Nonfluoridated elastomers were placed on the contralateral teeth. Standard nonantibacterial fluoridated toothpaste and mouthwash were supplied. After 6 weeks (visit 2), the elastomers were removed, placed in transport media, and plated on agar within 2 hours. Nonfluoridated elastomers were placed on all brackets for 1 visit to allow for a washout period. At visit 3, fluoridated elastomers were placed on the teeth contralateral to those that received them at visit 1. At visit 4, the procedures at visit 2 were repeated. Samples were collected on visits 2 and 4. A logistic regression was performed, with the presence or absence of streptococcal or anaerobic growth as the dependent variable. A mixed-effects analysis of variance was carried out with the percentage of streptococcal or anaerobic bacterial count as the dependent variable. The only significant independent variables were the subject variable (P < .001) for the percentage of streptococcal and anaerobic bacterial count and the visit variable for the percentage of streptococcal count (P < .001). The use of fluoridated or nonfluoridated elastomers was not significant for the percentage of either streptococcal (P = .288) or anaerobic count (P = .230). Fluoridated elastomers are not effective at reducing local streptococcal or anaerobic bacterial growth after a clinically relevant time in the mouth.

  14. On the minimum of independent geometrically distributed random variables

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David

    1994-01-01

The expectations $E(X_{(1)})$, $E(Z_{(1)})$, and $E(Y_{(1)})$ of the minimum of n independent geometric, modified geometric, or exponential random variables with matching expectations differ. We show how this is accounted for by stochastic variability and how $E(X_{(1)})/E(Y_{(1)})$ equals the expected number of ties at the minimum for the geometric random variables. We then introduce the 'shifted geometric distribution' and show that there is a unique value of the shift for which the individual shifted geometric and exponential random variables match expectations both individually and in the minimum.
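
The ties identity is easy to check by simulation. A minimal sketch (parameters invented): for i.i.d. geometric variables and exponential counterparts with matching means, the ratio of the expected minima should equal the expected number of ties at the geometric minimum.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 5, 0.3, 200_000

X = rng.geometric(p, size=(reps, n))           # support {1, 2, ...}, mean 1/p
Y = rng.exponential(1.0 / p, size=(reps, n))   # matching mean 1/p

ratio = X.min(axis=1).mean() / Y.min(axis=1).mean()
ties = (X == X.min(axis=1, keepdims=True)).sum(axis=1).mean()
print(ratio, ties)   # both ~ n * p / (1 - (1 - p)**n) ~ 1.80 here
```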

  15. Students' Misconceptions about Random Variables

    ERIC Educational Resources Information Center

    Kachapova, Farida; Kachapov, Ilias

    2012-01-01

    This article describes some misconceptions about random variables and related counter-examples, and makes suggestions about teaching initial topics on random variables in general form instead of doing it separately for discrete and continuous cases. The focus is on post-calculus probability courses. (Contains 2 figures.)

  16. Canonical correlation analysis of infant's size at birth and maternal factors: a study in rural northwest Bangladesh.

    PubMed

    Kabir, Alamgir; Merrill, Rebecca D; Shamim, Abu Ahmed; Klemn, Rolf D W; Labrique, Alain B; Christian, Parul; West, Keith P; Nasser, Mohammed

    2014-01-01

    This analysis was conducted to explore the association between 5 birth size measurements (weight, length and head, chest and mid-upper arm [MUAC] circumferences) as dependent variables and 10 maternal factors as independent variables using canonical correlation analysis (CCA). CCA considers simultaneously sets of dependent and independent variables and, thus, generates a substantially reduced type 1 error. Data were from women delivering a singleton live birth (n = 14,506) while participating in a double-masked, cluster-randomized, placebo-controlled maternal vitamin A or β-carotene supplementation trial in rural Bangladesh. The first canonical correlation was 0.42 (P<0.001), demonstrating a moderate positive correlation mainly between the 5 birth size measurements and 5 maternal factors (preterm delivery, early pregnancy MUAC, infant sex, age and parity). A significant interaction between infant sex and preterm delivery on birth size was also revealed from the score plot. Thirteen percent of birth size variability was explained by the composite score of the maternal factors (Redundancy, RY/X = 0.131). Given an ability to accommodate numerous relationships and reduce complexities of multiple comparisons, CCA identified the 5 maternal variables able to predict birth size in this rural Bangladesh setting. CCA may offer an efficient, practical and inclusive approach to assessing the association between two sets of variables, addressing the innate complexity of interactions.
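
The CCA step itself is routine to reproduce. A hedged sketch with synthetic stand-in data (the real maternal and birth-size variables are not modeled here), including a Stewart-Love-style redundancy computation:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 1000
latent = rng.normal(size=n)   # shared 'maternal condition' driving both sets
X = np.column_stack([latent + rng.normal(size=n) for _ in range(10)])       # maternal factors
Y = np.column_stack([0.5 * latent + rng.normal(size=n) for _ in range(5)])  # birth size measures

cca = CCA(n_components=1)
Xc, Yc = cca.fit_transform(X, Y)
r1 = np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1]     # first canonical correlation
print(round(r1, 2))

# Redundancy: average squared loading of Y on its canonical variate,
# scaled by the squared canonical correlation.
loadings = np.corrcoef(np.column_stack([Yc, Y]), rowvar=False)[0, 1:]
print(round(float(np.mean(loadings ** 2) * r1 ** 2), 3))
```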

  17. Reducing Dropout in Treatment for Depression: Translating Dropout Predictors Into Individualized Treatment Recommendations.

    PubMed

    Zilcha-Mano, Sigal; Keefe, John R; Chui, Harold; Rubin, Avinadav; Barrett, Marna S; Barber, Jacques P

    2016-12-01

    Premature discontinuation of therapy is a widespread problem that hampers the delivery of mental health treatment. A high degree of variability has been found among rates of premature treatment discontinuation, suggesting that rates may differ depending on potential moderators. In the current study, our aim was to identify demographic and interpersonal variables that moderate the association between treatment assignment and dropout. Data from a randomized controlled trial conducted from November 2001 through June 2007 (N = 156) comparing supportive-expressive therapy, antidepressant medication, and placebo for the treatment of depression (based on DSM-IV criteria) were used. Twenty prerandomization variables were chosen based on previous literature. These variables were subjected to exploratory bootstrapped variable selection and included in the logistic regression models if they passed variable selection. Three variables were found to moderate the association between treatment assignment and dropout: age, pretreatment therapeutic alliance expectations, and the presence of vindictive tendencies in interpersonal relationships. When patients were divided into those randomly assigned to their optimal treatment and those assigned to their least optimal treatment, dropout rates in the optimal treatment group (24.4%) were significantly lower than those in the least optimal treatment group (47.4%; P = .03). Present findings suggest that a patient's age and pretreatment interpersonal characteristics predict the association between common depression treatments and dropout rate. If validated by further studies, these characteristics can assist in reducing dropout through targeted treatment assignment. Secondary analysis of data from ClinicalTrials.gov identifier: NCT00043550. © Copyright 2016 Physicians Postgraduate Press, Inc.

  18. The quotient of normal random variables and application to asset price fat tails

    NASA Astrophysics Data System (ADS)

    Caginalp, Carey; Caginalp, Gunduz

    2018-06-01

The quotient of random variables with normal distributions is examined and proven to have power law decay, with density $f(x) \simeq f_0 x^{-2}$, with the coefficient depending on the means and variances of the numerator and denominator and their correlation. We also obtain the conditional probability densities for each of the four quadrants given by the signs of the numerator and denominator for arbitrary correlation $\rho \in [-1, 1)$. For $\rho = -1$ we obtain a particularly simple closed-form solution for all $x \in \mathbb{R}$. The results are applied to a basic issue in economics and finance, namely the density of relative price changes. Classical finance stipulates a normal distribution of relative price changes, though empirical studies suggest a power law at the tail end. By considering the supply and demand in a basic price change model, we prove that the relative price change has density that decays with an $x^{-2}$ power law. Various parameter limits are established.
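
The $x^{-2}$ tail is easy to see numerically. A Monte Carlo sketch (means, variances, and sample size invented): if the density decays like $f_0 x^{-2}$, the survival function decays like $f_0 / x$, so $x \cdot P(|Z| > x)$ should flatten out.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
num = rng.normal(1.0, 1.0, n)      # numerator ~ N(1, 1)
den = rng.normal(0.5, 2.0, n)      # denominator ~ N(0.5, 4)
z = np.abs(num / den)

for x in (10.0, 100.0, 1000.0):
    print(x, x * np.mean(z > x))   # approaches a constant as x grows
```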

  19. The Lambert Way to Gaussianize Heavy-Tailed Data with the Inverse of Tukey's h Transformation as a Special Case

    PubMed Central

    Goerg, Georg M.

    2015-01-01

I present a parametric, bijective transformation to generate heavy-tail versions of arbitrary random variables. The tail behavior of this heavy-tail Lambert $W \times F_X$ random variable depends on a tail parameter $\delta \ge 0$: for $\delta = 0$, $Y \equiv X$; for $\delta > 0$, $Y$ has heavier tails than $X$. For $X$ Gaussian it reduces to Tukey's h distribution. The Lambert W function provides an explicit inverse transformation, which can thus remove heavy tails from observed data. It also provides closed-form expressions for the cumulative distribution function (cdf) and probability density function (pdf). As a special case, these yield analytic expressions for Tukey's h pdf and cdf. Parameters can be estimated by maximum likelihood, and applications to S&P 500 log-returns demonstrate the usefulness of the presented methodology. The R package LambertW implements most of the introduced methodology and is publicly available on CRAN. PMID:26380372
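
The transformation itself is compact. A hedged sketch for a standardized input (location and scale are handled separately in the paper; scipy.special.lambertw supplies the inverse):

```python
import numpy as np
from scipy.special import lambertw

def heavy_tail(u, delta):
    """Forward transform: y = u * exp(delta * u**2 / 2) fattens the tails."""
    return u * np.exp(delta * u ** 2 / 2.0)

def gaussianize(y, delta):
    """Inverse via Lambert W: u = sign(y) * sqrt(W(delta * y**2) / delta)."""
    return np.sign(y) * np.sqrt(lambertw(delta * y ** 2).real / delta)

rng = np.random.default_rng(0)
u = rng.normal(size=100_000)
y = heavy_tail(u, 0.2)                          # heavy-tailed version
print(np.max(np.abs(gaussianize(y, 0.2) - u)))  # ~0: exact round trip
print(np.mean(u ** 4), np.mean(y ** 4))         # tail inflation is visible
```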

  20. Generated effect modifiers (GEM’s) in randomized clinical trials

    PubMed Central

    Petkova, Eva; Tarpey, Thaddeus; Su, Zhe; Ogden, R. Todd

    2017-01-01

    In a randomized clinical trial (RCT), it is often of interest not only to estimate the effect of various treatments on the outcome, but also to determine whether any patient characteristic has a different relationship with the outcome, depending on treatment. In regression models for the outcome, if there is a non-zero interaction between treatment and a predictor, that predictor is called an “effect modifier”. Identification of such effect modifiers is crucial as we move towards precision medicine, that is, optimizing individual treatment assignment based on patient measurements assessed when presenting for treatment. In most settings, there will be several baseline predictor variables that could potentially modify the treatment effects. This article proposes optimal methods of constructing a composite variable (defined as a linear combination of pre-treatment patient characteristics) in order to generate an effect modifier in an RCT setting. Several criteria are considered for generating effect modifiers and their performance is studied via simulations. An example from a RCT is provided for illustration. PMID:27465235

  1. Prediction of hourly PM2.5 using a space-time support vector regression model

    NASA Astrophysics Data System (ADS)

    Yang, Wentao; Deng, Min; Xu, Feng; Wang, Hang

    2018-05-01

    Real-time air quality prediction has been an active field of research in atmospheric environmental science. The existing methods of machine learning are widely used to predict pollutant concentrations because of their enhanced ability to handle complex non-linear relationships. However, because pollutant concentration data, as typical geospatial data, also exhibit spatial heterogeneity and spatial dependence, they may violate the assumptions of independent and identically distributed random variables in most of the machine learning methods. As a result, a space-time support vector regression model is proposed to predict hourly PM2.5 concentrations. First, to address spatial heterogeneity, spatial clustering is executed to divide the study area into several homogeneous or quasi-homogeneous subareas. To handle spatial dependence, a Gauss vector weight function is then developed to determine spatial autocorrelation variables as part of the input features. Finally, a local support vector regression model with spatial autocorrelation variables is established for each subarea. Experimental data on PM2.5 concentrations in Beijing are used to verify whether the results of the proposed model are superior to those of other methods.
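
The spatial-autocorrelation feature is the distinctive step and is easy to sketch. A hedged illustration (station layout, the 10 km weight scale, and the PM2.5 series are all invented; the paper's clustering into subareas is skipped):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_stations, T = 20, 500
coords = rng.uniform(0, 50, size=(n_stations, 2))    # station locations (km)

# Synthetic PM2.5 anomalies with temporal persistence and a shared driver.
pm = np.zeros((T, n_stations))
for t in range(1, T):
    pm[t] = 0.8 * pm[t - 1] + rng.normal() + 0.5 * rng.normal(size=n_stations)

# Gauss vector weights: nearby stations contribute more.
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
w = np.exp(-(d / 10.0) ** 2)
np.fill_diagonal(w, 0.0)
w /= w.sum(axis=1, keepdims=True)
neighbor_avg = pm @ w.T                              # spatially lagged PM2.5

# Predict next-hour PM2.5 at station 0 from its own value plus the
# spatial-autocorrelation feature.
X = np.column_stack([pm[:-1, 0], neighbor_avg[:-1, 0]])
y = pm[1:, 0]
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:400], y[:400])
print(model.score(X[400:], y[400:]))                 # held-out R^2
```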

  2. A Multivariate Randomization Test of Association Applied to Cognitive Test Results

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert; Beard, Bettina

    2009-01-01

    Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
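
The procedure can be stated in a few lines. A minimal sketch (synthetic scores and permutation count invented) of the test as described: re-order k-1 of the variables independently and compare the largest eigenvalue of the correlation matrix against its permutation distribution.

```python
import numpy as np

def largest_eig_pvalue(data, n_perms=2000, seed=0):
    """Randomization test of association among k variables using the
    largest eigenvalue of the correlation matrix as the criterion."""
    rng = np.random.default_rng(seed)
    stat = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[-1]
    exceed = 0
    for _ in range(n_perms):
        null = data.copy()
        for j in range(1, data.shape[1]):      # re-order k-1 variables
            null[:, j] = rng.permutation(null[:, j])
        exceed += np.linalg.eigvalsh(np.corrcoef(null, rowvar=False))[-1] >= stat
    return (exceed + 1) / (n_perms + 1)

# Correlated cognitive-test scores should yield a small p-value.
rng = np.random.default_rng(1)
shared = rng.normal(size=200)
scores = np.column_stack([shared + rng.normal(size=200) for _ in range(4)])
print(largest_eig_pvalue(scores))
```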

  3. Computational procedure of optimal inventory model involving controllable backorder rate and variable lead time with defective units

    NASA Astrophysics Data System (ADS)

    Lee, Wen-Chuan; Wu, Jong-Wuu; Tsou, Hsin-Hui; Lei, Chia-Ling

    2012-10-01

    This article considers that the number of defective units in an arrival order is a binomial random variable. We derive a modified mixture inventory model with backorders and lost sales, in which the order quantity and lead time are decision variables. In our studies, we also assume that the backorder rate is dependent on the length of lead time through the amount of shortages, and let the backorder rate be a control variable. In addition, we assume that the lead time demand follows a mixture of normal distributions; we then relax the assumption about the form of the mixture of distribution functions of the lead time demand and apply the minimax distribution-free procedure to solve the problem. Furthermore, we develop an algorithmic procedure to obtain the optimal ordering strategy for each case. Finally, three numerical examples are given to illustrate the results.

  4. Geometry of the q-exponential distribution with dependent competing risks and accelerated life testing

    NASA Astrophysics Data System (ADS)

    Zhang, Fode; Shi, Yimin; Wang, Ruibing

    2017-02-01

    In the information geometry suggested by Amari (1985) and Amari et al. (1987), a parametric statistical model can be regarded as a differentiable manifold with the parameter space as a coordinate system. Noting that the q-exponential distribution plays an important role in Tsallis statistics (see Tsallis, 2009), this paper investigates the geometry of the q-exponential distribution with dependent competing risks and accelerated life testing (ALT). A copula function based on the q-exponential function, which can be considered a generalized Gumbel copula, is discussed to illustrate the dependence structure of the random variables. Employing two iterative algorithms, simulation results are given to compare the performance of the estimations and the levels of association under different hybrid progressive censoring schemes (HPCSs).
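
    For readers unfamiliar with the q-exponential, a minimal numerical sketch follows. It implements the standard Tsallis q-exponential, e_q(x) = [1 + (1−q)x]_+^{1/(1−q)}, and the survival function e_q(−λt) that a q-exponential lifetime model would use; the concrete parametrization in the paper may differ:

    ```python
    import numpy as np

    def q_exp(x, q):
        """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
        if np.isclose(q, 1.0):
            return np.exp(x)
        base = 1.0 + (1.0 - q) * x
        return np.where(base > 0, base ** (1.0 / (1.0 - q)), 0.0)

    def q_exp_survival(t, q, lam):
        """Illustrative survival function of a q-exponential lifetime."""
        return q_exp(-lam * t, q)

    print(q_exp_survival(np.array([0.5, 1.0, 2.0]), q=1.2, lam=1.0))
    ```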

  5. Recharge characteristics of an unconfined aquifer from the rainfall-water table relationship

    NASA Astrophysics Data System (ADS)

    Viswanathan, M. N.

    1984-02-01

    The determination of recharge levels of unconfined aquifers, recharged entirely by rainfall, is done by developing a model for the aquifer that estimates the water-table levels from the history of rainfall observations and past water-table levels. In the present analysis, the model parameters that influence the recharge were not only assumed to be time dependent but also to have varying rates of dependence across parameters. Such a model is solved by the use of a recursive least-squares method. The variable-rate parameter variation is incorporated using a random walk model. From the field tests conducted at Tomago Sandbeds, Newcastle, Australia, it was observed that the assumption of variable rates of time dependency of the recharge parameters produced better estimates of water-table levels than constant recharge parameters did. It was observed that considerable recharge due to rainfall occurred on the very day of rainfall; the increase in water-table level was insignificant on subsequent days. The level of recharge depends very much upon the intensity and history of rainfall. Isolated rainfalls, even of the order of 25 mm day⁻¹, had no significant effect on the water-table levels.
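
    A minimal sketch of the estimation machinery described above (recursive least squares with random-walk parameter drift, with a different drift rate per parameter) is given below on simulated data; the model form, names, and noise levels are illustrative assumptions, not the Tomago Sandbeds model:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Illustrative model: water-table level h[t] = a*h[t-1] + b*rain[t] + noise,
    # with time-varying parameters theta = (a, b) following a random walk.
    T = 200
    rain = rng.gamma(0.5, 10.0, size=T)
    h = np.zeros(T)
    for t in range(1, T):
        h[t] = 0.9 * h[t - 1] + (0.05 + 0.02 * np.sin(t / 30)) * rain[t] \
               + rng.normal(0, 0.1)

    theta = np.zeros(2)            # current estimate of (a, b)
    P = np.eye(2) * 1e3            # estimate covariance
    Q = np.diag([1e-6, 1e-4])      # random-walk drift: a different rate per parameter
    R = 0.1 ** 2                   # observation noise variance

    for t in range(1, T):
        x = np.array([h[t - 1], rain[t]])        # regressor vector
        P = P + Q                                # parameters drift (random walk)
        k = P @ x / (x @ P @ x + R)              # gain
        theta = theta + k * (h[t] - x @ theta)   # recursive least-squares update
        P = P - np.outer(k, x) @ P

    print("final (a, b) estimate:", theta)
    ```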

  6. Natural Resource Dependency and Decentralized Conservation Within Kanchenjunga Conservation Area Project, Nepal

    NASA Astrophysics Data System (ADS)

    Parker, Pete; Thapa, Brijesh

    2012-02-01

    Kanchenjunga Conservation Area Project (KCAP) in Nepal is among the first protected areas in the world to institute a completely decentralized system of conservation and development. Proponents of decentralized conservation claim that it increases management efficiency, enhances the responsiveness to local needs, and promotes greater equity among local residents. This study assessed local equity by evaluating the levels of dependencies on natural resources among households and the factors affecting that dependency. Data were collected via detailed surveys among 205 randomly selected households within the KCAP. Natural resource dependency was evaluated by comparing the ratio of total household income to income derived from access to natural resources. Economic, social, and access-related variables were employed to determine potential significant predictors of dependency. Overall, households were heavily dependent on natural resources for their income, especially households at higher elevations and those with more adult members. The households that received remittances were most able to supplement their income and, therefore, drastically reduced their reliance on the access to natural resources. Socio-economic variables, such as land holdings, education, caste, and ethnicity, failed to predict dependency. Household participation in KCAP-sponsored training programs also failed to affect household dependency; however, fewer than 20% of the households had any form of direct contact with KCAP personnel within the past year. The success of the KCAP as a decentralized conservation program is contingent on project capacity-building via social mobilization, training programs, and participatory inclusion in decision making to help alleviate the dependency on natural resources.

  7. Natural resource dependency and decentralized conservation within Kanchenjunga Conservation Area Project, Nepal.

    PubMed

    Parker, Pete; Thapa, Brijesh

    2012-02-01

    Kanchenjunga Conservation Area Project (KCAP) in Nepal is among the first protected areas in the world to institute a completely decentralized system of conservation and development. Proponents of decentralized conservation claim that it increases management efficiency, enhances the responsiveness to local needs, and promotes greater equity among local residents. This study assessed local equity by evaluating the levels of dependencies on natural resources among households and the factors affecting that dependency. Data were collected via detailed surveys among 205 randomly selected households within the KCAP. Natural resource dependency was evaluated by comparing the ratio of total household income to income derived from access to natural resources. Economic, social, and access-related variables were employed to determine potential significant predictors of dependency. Overall, households were heavily dependent on natural resources for their income, especially households at higher elevations and those with more adult members. The households that received remittances were most able to supplement their income and, therefore, drastically reduced their reliance on the access to natural resources. Socio-economic variables, such as land holdings, education, caste, and ethnicity, failed to predict dependency. Household participation in KCAP-sponsored training programs also failed to affect household dependency; however, fewer than 20% of the households had any form of direct contact with KCAP personnel within the past year. The success of the KCAP as a decentralized conservation program is contingent on project capacity-building via social mobilization, training programs, and participatory inclusion in decision making to help alleviate the dependency on natural resources.

  8. The coalescent of a sample from a binary branching process.

    PubMed

    Lambert, Amaury

    2018-04-25

    At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (a Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables: first the random sampling probability Y, and then the k-1 node depths, which are iid conditional on Y=y. Copyright © 2018. Published by Elsevier Inc.

  9. Security of BB84 with weak randomness and imperfect qubit encoding

    NASA Astrophysics Data System (ADS)

    Zhao, Liang-Yuan; Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Fang, Xi; Han, Zheng-Fu; Huang, Wei

    2018-03-01

    The main threats to practical Bennett-Brassard 1984 (BB84) quantum key distribution (QKD) systems are that their encoding is inaccurate and that their measurement devices may be vulnerable to particular attacks. Thus, a general physical model or security proof to tackle these loopholes simultaneously and quantitatively is highly desired. Here we give a framework for the security of BB84 when imperfect qubit encoding and vulnerability of the measurement device are both considered. In our analysis, the potential attacks on the measurement device are generalized by the recently proposed weak randomness model, which assumes the input random numbers are partially biased depending on a hidden variable planted by an eavesdropper; the inevitable encoding inaccuracy is also introduced. From a fundamental view, our work reveals the potential information leakage due to encoding inaccuracy and weak randomness input. For applications, our result can be viewed as a useful tool to quantitatively evaluate the security of a practical QKD system.

  10. SETI and SEH (Statistical Equation for Habitables)

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-01-01

    The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book "Habitable planets for man" (1964). In this paper, we first provide the statistical generalization of the original and by now too simplistic Dole equation. In other words, a product of ten positive numbers is now turned into the product of ten positive random variables. This we call the SEH, an acronym standing for "Statistical Equation for Habitables". The mathematical structure of the SEH is then derived. The proof is based on the central limit theorem (CLT) of statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that the new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the lognormal distribution. By construction, the mean value of this lognormal distribution is the total number of habitable planets as given by the statistical Dole equation. But now we also derive the standard deviation, the mode, the median and all the moments of this new lognormal NHab random variable. The ten (or more) astrobiological factors are now positive random variables, and the probability distribution of each may be arbitrary: the CLT in the Lyapunov or Lindeberg forms (neither of which assumes the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. An application of our SEH then follows. The (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cubic root of NHab. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed the "Maccone distribution" by Paul Davies in 2008. Data Enrichment Principle. It should be noticed that ANY positive number of random variables in the SEH is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as more refined scientific knowledge about each factor becomes available. This capability to make room for more future factors in the SEH we call the "Data Enrichment Principle", and we regard it as the key to more profound future results in the fields of astrobiology and SETI. A practical example is then given of how our SEH works numerically. We work out in detail the case where each of the ten random variables is uniformly distributed around its own mean value as given by Dole back in 1964 and has an assumed standard deviation of 10%. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ± 200 million, and the average distance between any couple of nearby habitable planets should be about 88 light years ± 40 light years. Finally, we match our SEH results against the results of the Statistical Drake Equation that we introduced in our 2008 IAC presentation. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). And the average distance between any two nearby habitable planets turns out to be much smaller than the average distance between any two neighboring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an ET average distance about 20 times larger than the average distance between any couple of adjacent habitable planets.
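
    The lognormal claim is easy to check numerically: take the product of ten independent positive factors and inspect the logarithm, which by the CLT is approximately Gaussian. A hypothetical Monte Carlo sketch (the means below are placeholders, not Dole's actual values):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Ten positive factors, each uniform around an assumed mean with 10% spread.
    means = np.array([0.9, 0.8, 0.7, 0.9, 0.5, 0.6, 0.8, 0.7, 0.9, 0.6])
    sd = 0.10 * means
    factors = rng.uniform(means - sd * np.sqrt(3), means + sd * np.sqrt(3),
                          size=(100_000, 10))

    product = factors.prod(axis=1)

    # CLT on the logs: log(product) is a sum of independent terms -> ~ Gaussian,
    # hence the product itself is approximately lognormal.
    logs = np.log(product)
    print("skewness of log-product (should be ~0):",
          ((logs - logs.mean()) ** 3).mean() / logs.std() ** 3)
    ```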

  11. Effect of Position- and Velocity-Dependent Forces on Reaching Movements at Different Speeds

    PubMed Central

    Summa, Susanna; Casadio, Maura; Sanguineti, Vittorio

    2016-01-01

    The speed of voluntary movements is determined by the conflicting needs of maximizing accuracy and minimizing mechanical effort. Dynamic perturbations, e.g., force fields, may be used to manipulate movements in order to investigate these mechanisms. Here, we focus on how the presence of position- and velocity-dependent force fields affects the relation between speed and accuracy during hand reaching movements. Participants were instructed to perform reaching movements under visual control in two directions, corresponding to either low or high arm inertia. The subjects were required to maintain four different movement durations (very slow, slow, fast, very fast). The experimental protocol included three phases: (i) familiarization—the robot generated no force; (ii) force field—the robot generated a force; and (iii) after-effect—again, no force. Participants were randomly assigned to four groups, depending on the type of force that was applied during the “force field” phase. The robot was programmed to generate position-dependent forces—with positive (K+) or negative stiffness (K−)—or velocity-dependent forces, with either positive (B+) or negative viscosity (B−). We focused on path curvature, smoothness, and endpoint error; in the latter we distinguished between bias and variability components. Movements in the high-inertia direction are smoother and less curved; smoothness also increases with movement speed. Endpoint bias and variability are greater in, respectively, the high and low inertia directions. A robust dependence on movement speed was only observed in the longitudinal components of both bias and variability. The strongest and more consistent effects of perturbation were observed with negative viscosity (B−), which resulted in increased variability during force field adaptation and in a reduction of the endpoint bias, which was retained in the subsequent after-effect phase. These findings confirm that training with negative viscosity produces lasting effects in movement accuracy at all speeds. PMID:27965559

  12. Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes

    NASA Astrophysics Data System (ADS)

    Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.

    2016-12-01

    The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While many efforts have been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has so far been made to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both the spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolutional characteristics of the small-scale variability. The estimation procedure is computationally feasible due to the use of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model characterizes significantly more of the energy associated with small-scale ionospheric electric field variability than Gaussian models do. By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling approach will provide a viable remedy to the current numerical models' systematic biases resulting from the underestimation of high-latitude energy and momentum sources.

  13. Stochastic effects in EUV lithography: random, local CD variability, and printing failures

    NASA Astrophysics Data System (ADS)

    De Bisschop, Peter

    2017-10-01

    Stochastic effects in lithography are usually quantified through local CD variability metrics, such as line-width roughness or local CD uniformity (LCDU), and these quantities have been measured and studied intensively, both in EUV and optical lithography. Next to the CD-variability, stochastic effects can also give rise to local, random printing failures, such as missing contacts or microbridges in spaces. When these occur, there often is no (reliable) CD to be measured locally, and then such failures cannot be quantified with the usual CD-measuring techniques. We have developed algorithms to detect such stochastic printing failures in regular line/space (L/S) or contact- or dot-arrays from SEM images, leading to a stochastic failure metric that we call NOK (not OK), which we consider a complementary metric to the CD-variability metrics. This paper will show how both types of metrics can be used to experimentally quantify dependencies of stochastic effects on, e.g., CD, pitch, resist, and exposure dose. As it is also important to be able to predict upfront (in the OPC verification stage of a production-mask tape-out) whether certain structures in the layout are likely to have a high sensitivity to stochastic effects, we look into the feasibility of constructing simple predictors, for both stochastic CD-variability and printing failure, that can be calibrated for the process and exposure conditions used and integrated into the standard OPC verification flow. Finally, we briefly discuss the options to reduce stochastic variability and failure, considering the entire patterning ecosystem.

  14. DNA fingerprinting of Brassica juncea cultivars using microsatellite probes.

    PubMed

    Bhatia, S; Das, S; Jain, A; Lakshmikumaran, M

    1995-09-01

    The genetic variability in the Brassica juncea cultivars was detected by employing in-gel hybridization of restricted DNA to simple repetitive sequences such as (GATA)4, (GACA)4 and (CAC)5. The most informative probe/enzyme combination was (GATA)4/EcoRI, yielding highly polymorphic fingerprint patterns for the B. juncea cultivars. This technique was found to be dependable for establishing variety-specific patterns for most of the cultivars studied, a prerequisite for germplasm preservation. The results of the present study were compared with those reported in our earlier study, in which random amplification of polymorphic DNA (RAPD) was used for assessing the genetic variability in the B. juncea cultivars.

  15. Hierarchical Hopping through Localized States in a Random Potential

    NASA Astrophysics Data System (ADS)

    Rajan, Harihar; Srivastava, Vipin

    2003-03-01

    Generalisation of Mott's idea of (low-temperature, large-time) variable-range hopping is considered to include hopping at somewhat higher temperatures (that do not kill localization). These transitions complement variable-range hopping in that they do not conserve energy and occur at relatively lower time scales. The hopper picks the next state in a hierarchical fashion in accordance with certain conditions. The results are found to tie up nicely with an interesting property pertaining to the energy dependence of localized states. Acknowledgements: One of us (VS) would like to thank the Association of Commonwealth Universities and the Leverhulme Trust for financial help, and Sir Sam Edwards for hospitality at the Cavendish Laboratory, Cambridge CB3 0HE.

  16. Alcohol-related incident guardianship and undergraduate college parties: enhancing the social norms marketing approach.

    PubMed

    Gilbertson, Troy A

    2006-01-01

    This randomized experiment examines the effects of contextual information on undergraduate college students' levels of alcohol-related incident guardianship at college parties. The research is conceptualized using routine activities theory and the theory of planned behavior. The experiment examines attitudinal variations about heavy drinking differentiated by sex, athletic status, and location of the drinking event. The sex and athletic status variables produce statistically significant effects on the dependent variables, while location of the drinking event is not significant. The article concludes by discussing the importance of context as it pertains to the social norms marketing strategy utilized in much college alcohol programming, and suggests a more directed marketing approach.

  17. DM-BLD: differential methylation detection using a hierarchical Bayesian model exploiting local dependency.

    PubMed

    Wang, Xiao; Gu, Jinghua; Hilakivi-Clarke, Leena; Clarke, Robert; Xuan, Jianhua

    2017-01-15

    The advent of high-throughput DNA methylation profiling techniques has enabled the possibility of accurate identification of differentially methylated genes for cancer research. The large number of measured loci facilitates whole genome methylation study, yet posing great challenges for differential methylation detection due to the high variability in tumor samples. We have developed a novel probabilistic approach, Differential Methylation detection using a hierarchical Bayesian model exploiting Local Dependency (DM-BLD), to detect differentially methylated genes based on a Bayesian framework. The DM-BLD approach features a joint model to capture both the local dependency of measured loci and the dependency of methylation change in samples. Specifically, the local dependency is modeled by Leroux conditional autoregressive structure; the dependency of methylation changes is modeled by a discrete Markov random field. A hierarchical Bayesian model is developed to fully take into account the local dependency for differential analysis, in which differential states are embedded as hidden variables. Simulation studies demonstrate that DM-BLD outperforms existing methods for differential methylation detection, particularly when the methylation change is moderate and the variability of methylation in samples is high. DM-BLD has been applied to breast cancer data to identify important methylated genes (such as polycomb target genes and genes involved in transcription factor activity) associated with breast cancer recurrence. A Matlab package of DM-BLD is available at http://www.cbil.ece.vt.edu/software.htm. Contact: Xuan@vt.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. Mechanical properties of 3D printed warped membranes

    NASA Astrophysics Data System (ADS)

    Kosmrlj, Andrej; Xiao, Kechao; Weaver, James C.; Vlassak, Joost J.; Nelson, David R.

    2015-03-01

    We explore how a frozen background metric affects the mechanical properties of solid planar membranes. Our focus is a special class of ``warped membranes'' with a preferred random height profile characterized by random Gaussian variables h(q) in Fourier space with zero mean and variance ⟨|h(q)|²⟩ ∝ q⁻ᵐ. It has been shown theoretically that in the linear response regime, this quenched random disorder increases the effective bending rigidity, while the Young's and shear moduli are reduced. Compared to flat plates of the same thickness t, the bending rigidity of warped membranes is increased by a factor h_v/t while the in-plane elastic moduli are reduced by t/h_v, where h_v = √⟨|h(x)|²⟩ describes the frozen height fluctuations. Interestingly, h_v is system-size dependent for warped membranes characterized by m > 2. We present experimental tests of these predictions, using warped membranes prepared via high-resolution 3D printing.

  19. Nonholonomic relativistic diffusion and exact solutions for stochastic Einstein spaces

    NASA Astrophysics Data System (ADS)

    Vacaru, S. I.

    2012-03-01

    We develop an approach to the theory of nonholonomic relativistic stochastic processes in curved spaces. The Itô and Stratonovich calculus are formulated for spaces with conventional horizontal (holonomic) and vertical (nonholonomic) splitting defined by nonlinear connection structures. Geometric models of the relativistic diffusion theory are elaborated for nonholonomic (pseudo) Riemannian manifolds and phase velocity spaces. Applying the anholonomic deformation method, the field equations in Einstein's gravity and various modifications are formally integrated in general forms, with generic off-diagonal metrics depending on some classes of generating and integration functions. Choosing random generating functions we can construct various classes of stochastic Einstein manifolds. We show how stochastic gravitational interactions with mixed holonomic/nonholonomic and random variables can be modelled in explicit form and study their main geometric and stochastic properties. Finally, the conditions when non-random classical gravitational processes transform into stochastic ones and inversely are analyzed.

  20. Basic Properties of Strong Mixing Conditions.

    DTIC Science & Technology

    1985-06-01

    H. Dehling and W. Philipp. Almost sure invariance principles for weakly dependent vector-valued random variables. Ann. Probab. 10 (1982) 689-701. ... (Harris chains will not be discussed here.) It is well known that every stationary Harris chain has a well-defined "period" p ∈ {1, 2, 3, ...} ... the chain is absolutely regular. (ii) More generally, for any strictly stationary real Harris chain, lim_n β(n) = 1 − 1/p, where p is the period.

  1. Secondary outcome analysis for data from an outcome-dependent sampling design.

    PubMed

    Pan, Yinghao; Cai, Jianwen; Longnecker, Matthew P; Zhou, Haibo

    2018-04-22

    An outcome-dependent sampling (ODS) scheme is a cost-effective way to conduct a study. For a study with a continuous primary outcome, an ODS scheme can be implemented where the expensive exposure is only measured on a simple random sample plus supplemental samples selected from the two tails of the primary outcome variable. With the tremendous cost invested in collecting the primary exposure information, investigators often would like to use the available data to study the relationship between a secondary outcome and the obtained exposure variable. This is referred to as a secondary analysis. Secondary analysis in ODS designs can be tricky, as the ODS sample is not a random sample from the general population. In this article, we use inverse probability weighted and augmented inverse probability weighted estimating equations to analyze the secondary outcome for data obtained from the ODS design. We do not make any parametric assumptions on the primary and secondary outcomes and only specify the form of the regression mean models, thus allowing an arbitrary error distribution. Our approach is robust to second- and higher-order moment misspecification. It also leads to more precise estimates of the parameters by effectively using all the available participants. Through simulation studies, we show that the proposed estimator is consistent and asymptotically normal. Data from the Collaborative Perinatal Project are analyzed to illustrate our method. Copyright © 2018 John Wiley & Sons, Ltd.
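
    A minimal sketch of the inverse-probability-weighting idea for a secondary analysis under an ODS design is shown below on simulated data. The selection probabilities, models, and effect sizes are illustrative; the authors' augmented estimator adds a further correction term not shown here:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical ODS design: exposure X measured on a simple random sample
    # plus supplemental samples from the two tails of the primary outcome Y1.
    N = 5000
    X = rng.normal(size=N)
    Y1 = 1.0 * X + rng.normal(size=N)            # primary outcome
    Y2 = 0.5 * X + rng.normal(size=N)            # secondary outcome

    cut_lo, cut_hi = np.quantile(Y1, [0.1, 0.9])
    tail = (Y1 < cut_lo) | (Y1 > cut_hi)
    pi = np.where(tail, 0.8, 0.1)                # known selection probabilities
    sel = rng.uniform(size=N) < pi               # who gets X measured

    # Inverse probability weighted least squares for the secondary model Y2 ~ X:
    # weighting each selected subject by 1/pi undoes the outcome-dependent sampling.
    w = 1.0 / pi[sel]
    A = np.c_[np.ones(sel.sum()), X[sel]]
    beta = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * Y2[sel]))
    print("IPW estimate of secondary slope (true 0.5):", beta[1])
    ```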

  2. An overview of in-home care for older people in Portugal: an empirical study about the customers.

    PubMed

    Martin, José Ignácio Guinaldo; de Oliveira, Laura Maria Alves; Duarte, Natália Sofia Correia

    2013-01-01

    The Portuguese in-home care services have never been adequately studied or identified. This is because of the lack of classification of variables related to the care receiver and to the demographic and organizational context in which it is inserted. The 126 organizations in the central region of Portugal were categorized into four groups depending on whether they were located in a rural or urban environment and on whether they were large or small organizations. To obtain information, the In-Home Care Protocol (ProSAD), Elderly Assessment System (EASYcare), and the Center for Epidemiologic Studies-Depression (CES-D) scale were applied to 48 customers (6 randomly chosen customers of in-home care services of each of the 8 randomly selected organizations, 2 per group of variables). The rural context denoted a lack of diversity of services and the number of organizations available is reduced which implies less time spent with the customers. The more dependent customers at the time of registration (Kruskal-Wallis test [KW] = 12.79; p < .05) in large organizations (Mann-Whitney [U] = 190.5; p < .05) benefit more from the services. In-home care services are underused and are oriented to treat those that have a family caregiver. Overall, in-home care in Portugal still has much to achieve when compared with other European countries.

  3. Within-day variability on short and long walking tests in persons with multiple sclerosis.

    PubMed

    Feys, Peter; Bibby, Bo; Romberg, Anders; Santoyo, Carme; Gebara, Benoit; de Noordhout, Benoit Maertens; Knuts, Kathy; Bethoux, Francois; Skjerbæk, Anders; Jensen, Ellen; Baert, Ilse; Vaney, Claude; de Groot, Vincent; Dalgas, Ulrik

    2014-03-15

    To compare within-day variability of short (10 m walking test at usual and fastest speed; 10MWT) and long (2- and 6-minute walking tests; 2MWT/6MWT) tests in persons with multiple sclerosis. Observational study. MS rehabilitation and research centers in Europe and the US within RIMS (European network for best practice and research in MS rehabilitation). Ambulatory persons with MS (Expanded Disability Status Scale 0-6.5). Subjects at the different centers performed walking tests at 3 time points during a single day. 10MWT, 2MWT and 6MWT at fastest speed and 10MWT at usual speed. Ninety-five percent limits of agreement were computed using a random effects model with individual pwMS as the random effect. Following this model, retest scores are with 95% certainty within these limits of the baseline scores. In 102 subjects, within-day variability was constant in absolute units for the 10MWT, 2MWT and 6MWT at fastest speed (±0.26, 0.16 and 0.15 m/s respectively, corresponding to ±19.2 m and ±54 m for the 2MWT and 6MWT), independent of the severity of ambulatory dysfunction. This implies a greater relative variability with increasing disability level, often above 20% depending on the applied test. The relative within-day variability of the 10MWT at usual speed was ±31%, independent of ambulatory function. Absolute values of within-day variability on walking tests at fastest speed were independent of disability level and greater for short compared to long walking tests. Relative within-day variability remained overall constant when measured at usual speed. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.

  4. Alcohol-Adapted Anger Management Treatment: A Randomized Controlled Trial of an Innovative Therapy for Alcohol Dependence.

    PubMed

    Walitzer, Kimberly S; Deffenbacher, Jerry L; Shyhalla, Kathleen

    2015-12-01

    A randomized controlled trial for an innovative alcohol-adapted anger management treatment (AM) for outpatient alcohol dependent individuals scoring moderate or above on anger is described. AM treatment outcomes were compared to those of an empirically-supported intervention, Alcoholics Anonymous Facilitation treatment (AAF). Clients in AM, relative to clients in AAF, were hypothesized to have greater improvement in anger and anger-related cognitions and lesser AA involvement during the 6-month follow-up. Anger-related variables were hypothesized to be stronger predictors of improved alcohol outcomes in the AM treatment condition and AA involvement was hypothesized to be a stronger predictor of alcohol outcomes in the AAF treatment group. Seventy-six alcohol dependent men and women were randomly assigned to treatment condition and followed for 6 months after treatment end. Both AM and AAF treatments were followed by significant reductions in heavy drinking days, alcohol consequences, anger, and maladaptive anger-related thoughts and increases in abstinence and self-confidence regarding not drinking to anger-related triggers. Treatment with AAF was associated with greater AA involvement relative to treatment with AM. Changes in anger and AA involvement were predictive of posttreatment alcohol outcomes for both treatments. Change in trait anger was a stronger predictor of posttreatment alcohol consequences for AM than for AAF clients; during-treatment AA meeting attendance was a stronger predictor of posttreatment heavy drinking and alcohol consequences for AAF than for AM clients. Anger-related constructs and drinking triggers should be foci in treatment of alcohol dependence for anger-involved clients. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Alcohol-adapted Anger Management Treatment: A Randomized Controlled Trial of an Innovative Therapy for Alcohol Dependence

    PubMed Central

    Walitzer, Kimberly S.; Deffenbacher, Jerry L.; Shyhalla, Kathleen

    2015-01-01

    A randomized controlled trial for an innovative alcohol-adapted anger management treatment (AM) for outpatient alcohol dependent individuals scoring moderate or above on anger is described. AM treatment outcomes were compared to those of an empirically-supported intervention, Alcoholics Anonymous Facilitation treatment (AAF). Clients in AM, relative to clients in AAF, were hypothesized to have greater improvement in anger and anger-related cognitions and lesser AA involvement during the six-month follow-up. Anger-related variables were hypothesized to be stronger predictors of improved alcohol outcomes in the AM treatment condition and AA involvement was hypothesized to be a stronger predictor of alcohol outcomes in the AAF treatment group. Seventy-six alcohol dependent men and women were randomly assigned to treatment condition and followed for six months after treatment end. Both AM and AAF treatments were followed by significant reductions in heavy drinking days, alcohol consequences, anger, and maladaptive anger-related thoughts and increases in abstinence and self-confidence regarding not drinking to anger-related triggers. Treatment with AAF was associated with greater AA involvement relative to treatment with AM. Changes in anger and AA involvement were predictive of posttreatment alcohol outcomes for both treatments. Change in trait anger was a stronger predictor of posttreatment alcohol consequences for AM than for AAF clients; during-treatment AA meeting attendance was a stronger predictor of posttreatment heavy drinking and alcohol consequences for AAF than for AM clients. Anger-related constructs and drinking triggers should be foci in treatment of alcohol dependence for anger-involved clients. PMID:26387049

  6. Psychophysiological effects of massage-myofascial release after exercise: a randomized sham-control study.

    PubMed

    Arroyo-Morales, Manuel; Olea, Nicolas; Martínez, Marin Manuel; Hidalgo-Lozano, Amparo; Ruiz-Rodríguez, Concepción; Díaz-Rodríguez, Lourdes

    2008-12-01

    The aim of this study was to evaluate the effect of massage on neuromuscular recruitment, mood state, and mechanical nociceptive threshold (MNT) after high-intensity exercise. This was a prospective randomized clinical trial using between-groups design. The study was conducted at a university-based sports medicine clinic. Sixty-two (62) healthy active students age 18-26 participated. Participants, randomized into two groups, performed three 30-second Wingate tests and immediately received whole-body massage-myofascial induction or placebo (sham ultrasound/magnetotherapy) treatment. The duration (40 minutes), position, and therapist were the same for both treatments. Dependent variables were surface electromyography (sEMG) of quadriceps, profile of mood states (POMS) and mechanical nociceptive threshold (MNT) of trapezius and masseter muscles. These data were assessed at baseline and after exercise and recovery periods. Generalized estimating equations models were performed on dependent variables to assess differences between groups. Significant differences were found in effects of treatment on sEMG of Vastus Medialis (VM) (p = 0.02) and vigor subscale (p = 0.04). After the recovery period, there was a significant decrease in electromyographic (EMG) activity of VM (p = 0.02) in the myofascial-release group versus a nonsignificant increase in the placebo group (p = 0.32), and a decrease in vigor (p < 0.01) in the massage group versus no change in the placebo group (p = 0.86). Massage reduces EMG amplitude and vigor when applied as a passive recovery technique after a high-intensity exercise protocol. Massage may induce a transient loss of muscle strength or a change in the muscle fiber tension-length relationship, influenced by alterations of muscle function and a psychological state of relaxation.

  7. Influence of dental care systems on dental status. A comparison between two countries with different systems but similar living standards.

    PubMed

    Palmqvist, S; Söderfeldt, B; Vigild, M

    2001-03-01

    To evaluate the influence of two different dental care systems on dental status, taking into account relevant socio-economic factors. Questionnaire studies on randomly sampled subjects in Denmark and Sweden using questionnaire forms as identical as possible with regard to the different languages. The studies were performed late in 1998 in both countries. Questionnaires were sent to 1,175 subjects aged 45-69 years in Denmark (response rate 73%) and to 1,001 subjects aged 55-79 years in Sweden (response rate 67%). Questions about dental status and about socioeconomic factors and attitudes toward dental care were included. In logistic regression models, various dichotomies of dental conditions were used as dependent variables. State (Denmark vs. Sweden) was used as an independent variable together with socioeconomic factors and attitudes. There were great differences between the countries in dental status. In the regression model with 'wearing removable denture(s)' as the dependent variable, state was the strongest predictor, with an OR above 4.1 for Denmark compared to Sweden, much stronger than variables such as age, income, education and residence. The results indicate that the Swedish dental care system has been superior to the Danish one regarding dental status in middle-aged and older populations in these two countries.

  8. Variability of multilevel switching in scaled hybrid RS/CMOS nanoelectronic circuits: theory

    NASA Astrophysics Data System (ADS)

    Heittmann, Arne; Noll, Tobias G.

    2013-07-01

    A theory is presented which describes the variability of multilevel switching in scaled hybrid resistive-switching/CMOS nanoelectronic circuits. Variability is quantified in terms of conductance variation using the first two moments derived from the probability density function (PDF) of the RS conductance. For RS which are based on the electrochemical metallization effect (ECM), this variability is - to some extent - caused by discrete events such as electrochemical reactions, which occur on the atomic scale and at random. The theory shows that the conductance variation depends on the joint interaction between the programming circuit and the resistive switch (RS), and explicitly quantifies the impact of RS device parameters and parameters of the programming circuit on the conductance variance. Using a current mirror as an exemplary programming circuit, an upper limit of 2-4 bits (dependent on the filament surface area) is estimated as the storage capacity exploiting the multilevel capabilities of an ECM cell. The theoretical results were verified by Monte Carlo circuit simulations in a standard circuit simulation environment using an ECM device model which models the filament growth by a Poisson process. Contribution to the Topical Issue “International Semiconductor Conference Dresden-Grenoble - ISCDG 2012”, Edited by Gérard Ghibaudo, Francis Balestra and Simon Deleonibus.
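
    The flavor of the result can be conveyed with a toy Monte Carlo (not the paper's circuit-level theory): if programming proceeds by a Poisson number of discrete events, the relative conductance spread shrinks as 1/√λ, which is what ultimately bounds how many conductance levels can be separated.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Illustrative only: filament growth as a Poisson number of discrete
    # electrochemical events, each adding a conductance step g0.
    g0 = 1e-6                  # per-event conductance step (arbitrary units)
    levels = [50, 200, 800]    # mean event counts for three programming targets

    for lam in levels:
        n_events = rng.poisson(lam, size=100_000)
        G = g0 * n_events
        # For a Poisson count, std/mean = 1/sqrt(lam): higher levels are
        # relatively tighter, which limits the number of distinguishable levels.
        print(f"lam={lam:4d}  mean G={G.mean():.2e}  rel. sigma={G.std()/G.mean():.3f}")
    ```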

  9. MANCOVA for one way classification with homogeneity of regression coefficient vectors

    NASA Astrophysics Data System (ADS)

    Mokesh Rayalu, G.; Ravisankar, J.; Mythili, G. Y.

    2017-11-01

    The MANOVA and MANCOVA are the extensions of the univariate ANOVA and ANCOVA techniques to multidimensional, or vector-valued, observations. The assumption of a Gaussian distribution is replaced with a multivariate Gaussian distribution for the vector-valued data and the residual terms in the statistical models of these techniques. The objective of MANCOVA is to determine whether there are statistically reliable mean differences between groups after adjusting for the covariates. When randomized assignment of samples or subjects to groups is not possible, multivariate analysis of covariance (MANCOVA) provides statistical matching of groups by adjusting dependent variables as if all subjects scored the same on the covariates. In this research article, the MANCOVA technique is extended to a larger number of covariates, and the homogeneity of the regression coefficient vectors is also tested.
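
    In practice, a one-way MANCOVA of this kind can be run with statsmodels' MANOVA class by including the covariate in the formula; adding a group:covariate interaction term would test the homogeneity-of-regression-coefficients assumption. A hypothetical sketch on simulated data:

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    rng = np.random.default_rng(7)

    # Hypothetical data: two outcome variables, one grouping factor, one covariate.
    n = 90
    df = pd.DataFrame({
        "group": np.repeat(["A", "B", "C"], n // 3),
        "cov": rng.normal(size=n),
    })
    df["y1"] = 0.8 * df["cov"] + (df["group"] == "B") * 0.5 + rng.normal(size=n)
    df["y2"] = 0.4 * df["cov"] + (df["group"] == "C") * 0.7 + rng.normal(size=n)

    # Including the covariate in the formula adjusts the group comparison for it,
    # i.e., a one-way MANCOVA; mv_test() reports Wilks' lambda etc. per term.
    fit = MANOVA.from_formula("y1 + y2 ~ group + cov", data=df)
    print(fit.mv_test())
    ```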

  10. Biologically-variable rhythmic auditory cues are superior to isochronous cues in fostering natural gait variability in Parkinson's disease.

    PubMed

    Dotov, D G; Bayard, S; Cochen de Cock, V; Geny, C; Driss, V; Garrigue, G; Bardy, B; Dalla Bella, S

    2017-01-01

    Rhythmic auditory cueing improves certain gait symptoms of Parkinson's disease (PD). Cues are typically stimuli or beats with a fixed inter-beat interval. We show that isochronous cueing has an unwanted side-effect in that it exacerbates one of the motor symptoms characteristic of advanced PD. Whereas the parameters of the stride cycle of healthy walkers and early patients possess a persistent correlation in time, or long-range correlation (LRC), isochronous cueing renders stride-to-stride variability random. Random stride cycle variability is also associated with reduced gait stability and lack of flexibility. To investigate how to prevent patients from acquiring a random stride cycle pattern, we tested rhythmic cueing which mimics the properties of variability found in healthy gait (biological variability). PD patients (n=19) and age-matched healthy participants (n=19) walked with three rhythmic cueing stimuli: isochronous, with random variability, and with biological variability (LRC). Synchronization was not instructed. The persistent correlation in gait was preserved only with stimuli with biological variability, equally for patients and controls (p's<0.05). In contrast, cueing with isochronous or randomly varying inter-stimulus/beat intervals removed the LRC in the stride cycle. Notably, the individual's tendency to synchronize steps with beats determined the amount of negative effects of isochronous and random cues (p's<0.05) but not the positive effect of biological variability. Stimulus variability and patients' propensity to synchronize play a critical role in fostering healthier gait dynamics during cueing. The beneficial effects of biological variability provide useful guidelines for improving existing cueing treatments. Copyright © 2016 Elsevier B.V. All rights reserved.
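
    A sketch of how the three cue types could be generated is given below; the long-range-correlated ("biological") series is synthesized by standard 1/f spectral filtering, though the authors' exact stimulus construction may differ, and the mean inter-beat interval and spread are placeholder values:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def lrc_intervals(n, mean_ibi=0.55, sd=0.02, alpha=1.0):
        """Inter-beat intervals with a 1/f^alpha spectrum (long-range correlated)."""
        freqs = np.fft.rfftfreq(n, d=1.0)
        amp = np.zeros_like(freqs)
        amp[1:] = freqs[1:] ** (-alpha / 2.0)            # power ~ 1/f^alpha
        phases = rng.uniform(0, 2 * np.pi, size=freqs.size)
        series = np.fft.irfft(amp * np.exp(1j * phases), n)
        series = (series - series.mean()) / series.std() # zero mean, unit variance
        return mean_ibi + sd * series

    iso = np.full(512, 0.55)                        # isochronous cues
    rnd = 0.55 + 0.02 * rng.normal(size=512)        # random (white) variability
    bio = lrc_intervals(512)                        # "biological" (LRC) variability
    print(bio[:5])
    ```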

  11. An instrumental variable random-coefficients model for binary outcomes

    PubMed Central

    Chesher, Andrew; Rosen, Adam M

    2014-01-01

    In this paper, we study a random-coefficients model for a binary outcome. We allow for the possibility that some or even all of the explanatory variables are arbitrarily correlated with the random coefficients, thus permitting endogeneity. We assume the existence of observed instrumental variables Z that are jointly independent with the random coefficients, although we place no structure on the joint determination of the endogenous variable X and instruments Z, as would be required for a control function approach. The model fits within the spectrum of generalized instrumental variable models, and we thus apply identification results from our previous studies of such models to the present context, demonstrating their use. Specifically, we characterize the identified set for the distribution of random coefficients in the binary response model with endogeneity via a collection of conditional moment inequalities, and we investigate the structure of these sets by way of numerical illustration. PMID:25798048

  12. Computer simulation of random variables and vectors with arbitrary probability distribution laws

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.

    1981-01-01

    Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n-dimensional random variables if their joint probability distribution is known.
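
    The construction is the conditional-quantile (Rosenblatt-type) transform: invert the first marginal, then each conditional distribution in turn. A two-dimensional sketch with a concrete, illustrative target distribution:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Draw x1 from the marginal F1 by inverting it, then x2 from the
    # conditional F2(. | x1). Illustrative target:
    # x1 ~ Exp(1), and x2 | x1 ~ Exp(rate = 1 + x1).
    def f1_inv(u):
        return -np.log1p(-u)                 # inverse CDF of Exp(1)

    def f2_inv_given(u, x1):
        return -np.log1p(-u) / (1.0 + x1)    # inverse conditional CDF

    u1, u2 = rng.uniform(size=100_000), rng.uniform(size=100_000)
    x1 = f1_inv(u1)                          # x1 = f1(U1)
    x2 = f2_inv_given(u2, x1)                # x2 = f2(U1, U2), via x1

    # Sanity check: E[x2 | x1] should be 1/(1 + x1); compare on a slice.
    band = (x1 > 0.9) & (x1 < 1.1)
    print(x2[band].mean(), "~", 0.5)
    ```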

  13. Effects of practice schedule and task specificity on the adaptive process of motor learning.

    PubMed

    Barros, João Augusto de Camargo; Tani, Go; Corrêa, Umberto Cesar

    2017-10-01

    This study investigated the effects of practice schedule and task specificity based on the perspective of adaptive process of motor learning. For this purpose, tasks with temporal and force control learning requirements were manipulated in experiments 1 and 2, respectively. Specifically, the task consisted of touching with the dominant hand the three sequential targets with specific movement time or force for each touch. Participants were children (N=120), both boys and girls, with an average age of 11.2years (SD=1.0). The design in both experiments involved four practice groups (constant, random, constant-random, and random-constant) and two phases (stabilisation and adaptation). The dependent variables included measures related to the task goal (accuracy and variability of error of the overall movement and force patterns) and movement pattern (macro- and microstructures). Results revealed a similar error of the overall patterns for all groups in both experiments and that they adapted themselves differently in terms of the macro- and microstructures of movement patterns. The study concludes that the effects of practice schedules on the adaptive process of motor learning were both general and specific to the task. That is, they were general to the task goal performance and specific regarding the movement pattern. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Drop Spreading with Random Viscosity

    NASA Astrophysics Data System (ADS)

    Xu, Feng; Jensen, Oliver

    2016-11-01

    Airway mucus acts as a barrier to protect the lung. However, as a biological material, its physical properties are known imperfectly and can be spatially heterogeneous. In this study we assess the impact of these uncertainties on the rate of spreading of a drop (representing an inhaled aerosol) over a mucus film. We model the film as Newtonian, having a viscosity that depends linearly on the concentration of a passive solute (a crude proxy for mucin proteins). Given an initial random solute (and hence viscosity) distribution, described as a Gaussian random field with a given correlation structure, we seek to quantify the uncertainties in outcomes as the drop spreads. Using lubrication theory, we describe the spreading of the drop in terms of a system of coupled nonlinear PDEs governing the evolution of film height and the vertically-averaged solute concentration. We perform Monte Carlo simulations to predict the variability in the drop centre location and width (1D) or area (2D). We show how simulation results are well described (at much lower computational cost) by a low-order model using a weak disorder expansion. Our results show, for example, how variability in the drop location is a non-monotonic function of the solute correlation length. Funding: Engineering and Physical Sciences Research Council.
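
    A minimal sketch of the random-viscosity setup: synthesize a 1D Gaussian random field with a prescribed correlation length by spectral filtering, then map it linearly to viscosity. Everything here (field model, parameter values) is an illustrative assumption, not the paper's code:

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    def gaussian_random_field(n, dx, corr_len, sigma=0.2):
        """1D stationary Gaussian field with a squared-exponential correlation."""
        k = 2 * np.pi * np.fft.rfftfreq(n, d=dx)
        # Spectral density of a squared-exponential covariance (up to scaling).
        spec = np.exp(-((k * corr_len) ** 2) / 4.0)
        noise = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
        field = np.fft.irfft(np.sqrt(spec) * noise, n)
        return sigma * field / field.std()

    # Viscosity depends linearly on the random solute concentration field:
    x = np.linspace(0, 10, 1024)
    for ell in (0.1, 1.0, 3.0):                 # vary the correlation length
        c = gaussian_random_field(1024, x[1] - x[0], ell)
        mu = 1.0 * (1.0 + c)                    # viscosity linear in solute
        print(f"corr_len={ell}: viscosity range [{mu.min():.2f}, {mu.max():.2f}]")
    ```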

  15. Measures of Residual Risk with Connections to Regression, Risk Tracking, Surrogate Models, and Ambiguity

    DTIC Science & Technology

    2015-01-07

    Measures of residual risk view a random variable of interest in concert with a related auxiliary random vector that helps to manage, predict, and mitigate the risk in the original variable. Residual risk can be exemplified as a quantification of the improved...

  16. Vilazodone for Cannabis Dependence: A Randomized, Controlled Pilot Trial

    PubMed Central

    McRae-Clark, Aimee L.; Baker, Nathaniel L.; Gray, Kevin M.; Killeen, Therese; Hartwell, Karen J.; Simonian, Susan J.

    2016-01-01

    Background and Objectives: The purpose of this study was to evaluate the efficacy of vilazodone, a selective serotonin reuptake inhibitor and partial 5-HT1A agonist, for treatment of cannabis dependence. Methods: Seventy-six cannabis-dependent adults were randomized to receive either up to 40 mg/day of vilazodone (n=41) or placebo (n=35) for eight weeks, combined with a brief motivational enhancement therapy intervention and contingency management to encourage study retention. Cannabis use outcomes were assessed via weekly urine cannabinoid tests; secondary outcomes included cannabis use self-report and cannabis craving. Results: Participants in both groups reported reduced cannabis use over the course of the study; however, vilazodone provided no advantage over placebo in reducing cannabis use. Men had significantly lower creatinine-adjusted cannabinoid levels and a trend for more negative urine cannabinoid tests than women. Discussion and Conclusions: Vilazodone was not more efficacious than placebo in reducing cannabis use. Important gender differences were noted, with women having worse cannabis use outcomes than men. Scientific Significance: Further medication development efforts for cannabis use disorders are needed, and gender should be considered as an important variable in future trials. PMID:26685701

  17. Raw and Central Moments of Binomial Random Variables via Stirling Numbers

    ERIC Educational Resources Information Center

    Griffiths, Martin

    2013-01-01

    We consider here the problem of calculating the moments of binomial random variables. It is shown how formulae for both the raw and the central moments of such random variables may be obtained in a recursive manner utilizing Stirling numbers of the first kind. Suggestions are also provided as to how students might be encouraged to explore this…
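
    The paper's recursive scheme is not reproduced here, but a closely related standard identity expresses the raw moments of a binomial random variable through Stirling numbers of the second kind: E[X^r] = Σ_{k=0}^{r} S(r,k) (n)_k p^k, where (n)_k is the falling factorial. A short sketch:

    ```python
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def stirling2(r, k):
        """Stirling numbers of the second kind via the usual recurrence."""
        if r == k == 0:
            return 1
        if r == 0 or k == 0:
            return 0
        return k * stirling2(r - 1, k) + stirling2(r - 1, k - 1)

    def falling(n, k):
        out = 1
        for i in range(k):
            out *= n - i
        return out

    def binom_raw_moment(r, n, p):
        """E[X^r] for X ~ Binomial(n, p) via the factorial-moment identity."""
        return sum(stirling2(r, k) * falling(n, k) * p ** k for k in range(r + 1))

    n, p = 10, 0.3
    print(binom_raw_moment(1, n, p))   # 3.0  (= np)
    print(binom_raw_moment(2, n, p))   # 11.1 (= np(1-p) + (np)^2)
    ```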

  18. Copula-based model for rainfall and El-Niño in Banyuwangi Indonesia

    NASA Astrophysics Data System (ADS)

    Caraka, R. E.; Supari; Tahmid, M.

    2018-04-01

    Modelling, describing, and measuring the dependence structure between different random events is at the very heart of statistics, and a broad variety of dependence concepts has been developed in the past. Most often, practitioners rely only on linear correlation to describe the degree of dependence between two or more variables, an approach that can lead to quite misleading conclusions, as this measure is only capable of capturing linear relationships. Copulas go beyond simple dependence measures and provide a sound framework for general dependence modelling. This paper introduces an application of copulas to estimate, understand, and interpret the dependence structure in a set of El Niño and rainfall data from Banyuwangi, Indonesia. In a nutshell, we demonstrate the flexibility of Archimedean copulas in modelling rainfall and capturing El Niño phenomena in Banyuwangi, East Java, Indonesia. We also find that the sea surface temperatures of the Niño 3, Niño 4, and Niño 3.4 regions are the most appropriate ENSO indicators for identifying the relationship between El Niño and rainfall.
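
    As a concrete taste of Archimedean-copula dependence modelling (the paper's q-exponential and ENSO specifics are not reproduced), the sketch below samples a Clayton copula, one of the standard Archimedean families, by the conditional-distribution method and checks Kendall's tau against its closed form τ = θ/(θ+2):

    ```python
    import numpy as np
    from scipy.stats import kendalltau

    rng = np.random.default_rng(11)

    def sample_clayton(n, theta):
        """Sample the Clayton copula by the conditional-distribution method."""
        u = rng.uniform(size=n)
        w = rng.uniform(size=n)
        # Invert the conditional CDF C(v | u) = w for v.
        v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) \
            ** (-1.0 / theta)
        return u, v

    theta = 2.0
    u, v = sample_clayton(20_000, theta)

    # For Clayton, Kendall's tau = theta / (theta + 2); check empirically.
    tau, _ = kendalltau(u, v)
    print(f"empirical tau = {tau:.3f}, theoretical = {theta / (theta + 2):.3f}")
    ```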

  19. Local dependence in random graph models: characterization, properties and statistical inference

    PubMed Central

    Schweinberger, Michael; Handcock, Mark S.

    2015-01-01

    Summary Dependent phenomena, such as relational, spatial and temporal phenomena, tend to be characterized by local dependence in the sense that units which are close in a well-defined sense are dependent. In contrast with spatial and temporal phenomena, though, relational phenomena tend to lack a natural neighbourhood structure in the sense that it is unknown which units are close and thus dependent. Owing to the challenge of characterizing local dependence and constructing random graph models with local dependence, many conventional exponential family random graph models induce strong dependence and are not amenable to statistical inference. We take first steps to characterize local dependence in random graph models, inspired by the notion of finite neighbourhoods in spatial statistics and M-dependence in time series, and we show that local dependence endows random graph models with desirable properties which make them amenable to statistical inference. We show that random graph models with local dependence satisfy a natural domain consistency condition which every model should satisfy, but conventional exponential family random graph models do not satisfy. In addition, we establish a central limit theorem for random graph models with local dependence, which suggests that random graph models with local dependence are amenable to statistical inference. We discuss how random graph models with local dependence can be constructed by exploiting either observed or unobserved neighbourhood structure. In the absence of observed neighbourhood structure, we take a Bayesian view and express the uncertainty about the neighbourhood structure by specifying a prior on a set of suitable neighbourhood structures. We present simulation results and applications to two real world networks with ‘ground truth’. PMID:26560142

  20. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
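
    A minimal sketch of a Sobol' analysis in the same spirit, using the SALib package (our choice, not necessarily the authors') and a toy stand-in for the snow model; variable names and bounds are illustrative only.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["temp_bias", "precip_bias", "wind_noise"],  # hypothetical forcing errors
            "bounds": [[-2.0, 2.0], [-0.5, 0.5], [0.0, 1.0]],
        }
        X = saltelli.sample(problem, 1024)   # Saltelli sampling for Sobol' indices

        def toy_snow_model(x):
            t, p, w = x
            return -3.0 * t + 10.0 * p - 0.5 * w**2  # stand-in for a SWE response

        Y = np.apply_along_axis(toy_snow_model, 1, X)
        Si = sobol.analyze(problem, Y)
        print(Si["S1"], Si["ST"])  # first-order and total-order sensitivity indices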

  1. Predictive factors for progression through the difficulty levels of Pilates exercises in patients with low back pain: a secondary analysis of a randomized controlled trial.

    PubMed

    Franco, Katherinne Ferro Moura; Franco, Yuri Rafael Dos Santos; Oliveira, Naiane Teixeira Bastos de; Padula, Rosimeire Simprini; Cabral, Cristina Maria Nunes

    2018-04-17

    The progression through the difficulty levels of Pilates exercises is a subjective criterion that depends on the therapist's experience and ability to identify the best moment to progress to the next level. The aim of this study was to identify the factors that interfere in the progression through the difficulty levels of the Pilates exercises in patients with chronic nonspecific low back pain. Data from 139 patients with chronic nonspecific low back pain from a randomized controlled trial were used for statistical analysis using binary logistic regression. The dependent variable was the progression through the difficulty levels, and the independent variables were age, gender, educational level, low back pain duration, pain intensity, general disability, kinesiophobia, previous physical activity, and number of absences. The factors that interfered in the progression through the difficulty levels were previous physical inactivity (odds ratio [OR] = 5.14, 95% confidence interval [CI]: 1.53-17.31), low educational level (OR = 2.62, 95% CI: 1.12-6.10), more advanced age (OR = 0.95, 95% CI: 0.92-0.98) and more absences (OR = 0.63, 95% CI: 0.50-0.79). These variables explain 41% of the non-progression through the difficulty levels of the exercises. Physical inactivity, low educational level, more advanced age and a greater number of absences can be interfering factors in the progression through the difficulty levels of the Pilates exercises in patients with chronic nonspecific low back pain. Copyright © 2018 Associação Brasileira de Pesquisa e Pós-Graduação em Fisioterapia. Publicado por Elsevier Editora Ltda. All rights reserved.
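
    For readers who want the mechanics: odds ratios like those above come from binary logistic regression, which can be reproduced in outline as below (statsmodels is our choice of tool; the data and coefficients are simulated, not the trial's).

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 139
        age = rng.normal(60, 10, n)
        absences = rng.poisson(3, n)
        # hypothetical binary outcome: progressed to the next difficulty level
        lin = 0.05 * (60 - age) - 0.4 * (absences - 3)
        y = rng.binomial(1, 1 / (1 + np.exp(-lin)))

        X = sm.add_constant(np.column_stack([age, absences]))
        fit = sm.Logit(y, X).fit(disp=0)
        odds_ratios = np.exp(fit.params)   # OR < 1: lower odds of progression
        ci95 = np.exp(fit.conf_int())      # CIs exponentiated to the OR scale
        print(odds_ratios, ci95)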

  2. Factors affecting the diffusion of online end user literature searching.

    PubMed

    Ash, J S

    1999-01-01

    The aim of this study was to identify factors that affect diffusion of usage of online end user literature searching. Fifteen factors clustered into three attribute sets (innovation attributes, organizational attributes, and marketing attributes) were measured to study their effect on the diffusion of online searching within institutions. A random sample of sixty-seven academic health sciences centers was selected, and 1,335 library and informatics staff members at those institutions were surveyed by mail with electronic mail follow-up. Multiple regression analysis was performed. The survey yielded a 41% response rate, with electronic mail follow-up being particularly effective. Two dependent variables, internal diffusion (spread of diffusion) and infusion (depth of diffusion), were measured. There was little correlation between them, indicating they measured different things. The innovation attributes set was significant for both internal diffusion and infusion. Significant individual variables were visibility for internal diffusion, and image enhancement effects (negative relation) as well as visibility for infusion. Organizational attributes were also significant predictors for both dependent variables. No individual variables were significant for internal diffusion; communication, management support (negative relation), rewards, and existence of champions were significant for infusion. Marketing attributes were not significant predictors. Successful diffusion of online end user literature searching depends on the visibility of the systems; on communication among possible users; on rewards to users; and on champions, peers of possible users who promote use. Personal image enhancement effects have a negative relation to infusion, possibly because the use of intermediaries is still seen as the more luxurious way to have searches done. Management support also has a negative relation to infusion, perhaps indicating that depth of diffusion can increase despite top-level management actions.

  3. Experimental Investigations of Non-Stationary Properties In Radiometer Receivers Using Measurements of Multiple Calibration References

    NASA Technical Reports Server (NTRS)

    Racette, Paul; Lang, Roger; Zhang, Zhao-Nan; Zacharias, David; Krebs, Carolyn A. (Technical Monitor)

    2002-01-01

    Radiometers must be periodically calibrated because the receiver response fluctuates. Many techniques exist to correct for the time-varying response of a radiometer receiver. An analytical technique has been developed that uses generalized least squares regression (LSR) to predict the performance of a wide variety of calibration algorithms. The total measurement uncertainty, including the uncertainty of the calibration, can be computed using LSR. The uncertainties of the calibration samples used in the regression are based upon treating the receiver fluctuations as non-stationary processes. Signals originating from the different sources of emission are treated as simultaneously existing random processes. Thus, the radiometer output is a series of samples obtained from these random processes. The samples are treated as random variables, but because the underlying processes are non-stationary, the statistics of the samples are treated as non-stationary. The statistics of the calibration samples depend upon the time for which the samples are to be applied. The statistics of the random variables are equated to the mean statistics of the non-stationary processes over the interval defined by the time of the calibration sample and the time when it is applied. This analysis opens the opportunity for experimental investigation into the underlying properties of receiver non-stationarity through the use of multiple calibration references. In this presentation we will discuss the application of LSR to the analysis of various calibration algorithms, requirements for experimental verification of the theory, and preliminary results from analyzing experimental measurements.
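
    The generalized least squares step the abstract relies on reduces to a single matrix formula. A minimal numpy sketch, with an assumed error covariance Omega standing in for the non-stationary calibration-sample statistics:

        import numpy as np

        def gls_fit(X, y, Omega):
            # beta = (X' Omega^-1 X)^-1 X' Omega^-1 y; cov(beta) = (X' Omega^-1 X)^-1
            Oinv = np.linalg.inv(Omega)
            XtO = X.T @ Oinv
            A = XtO @ X
            beta = np.linalg.solve(A, XtO @ y)
            cov_beta = np.linalg.inv(A)   # calibration uncertainty propagated to the fit
            return beta, cov_beta

    Off-diagonal terms of Omega carry the correlation between calibration samples implied by treating the receiver fluctuations as non-stationary.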

  4. Effect of short-term estrogen therapy on endothelial function: a double-blinded, randomized, controlled trial.

    PubMed

    Hurtado, R; Celani, M; Geber, S

    2016-10-01

    To evaluate the effect of short-term hormone replacement therapy with 0.625 mg conjugated estrogens daily on the endothelial function of healthy postmenopausal women, using flow-mediated dilation (FMD) of the brachial artery. We performed a double-blinded, randomized, controlled trial over 3 years. Randomization was performed using computer-generated sorting. All participants were blinded to the use of conjugated equine estrogens (CEE) or placebo, and FMD was assessed by a blinded examiner, before and after 28 days of medication. A total of 64 healthy postmenopausal women were selected and randomly assigned to two treatment groups: 0.625 mg of CEE or placebo. FMD values were statistically different between the groups (p = 0.025): the group receiving CEE showed an FMD value of 0.011 compared to the placebo group (FMD = -0.082). The two groups were additionally evaluated for homogeneity through the Shapiro-Wilk test with respect to variables that could interfere with endothelial function, such as age (p = 0.729), body mass index (p = 0.891), and time since menopause (p = 0.724). Other variables, such as chronic vascular conditions, smoking, and sedentary lifestyle, were excluded during selection of the participants. Our results demonstrate that the administration of 0.625 mg CEE for 28 days is effective in improving vascular nitric oxide-dependent dilation assessed by FMD of the brachial artery in postmenopausal women. NCT01482416.

  5. A Random Variable Related to the Inversion Vector of a Partial Random Permutation

    ERIC Educational Resources Information Center

    Laghate, Kavita; Deshpande, M. N.

    2005-01-01

    In this article, we define the inversion vector of a permutation of the integers 1, 2,..., n. We set up a particular kind of permutation, called a partial random permutation. The sum of the elements of the inversion vector of such a permutation is a random variable of interest.
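
    Under one common convention, the inversion vector and the random variable of interest can be computed directly, as in the short sketch below (the article may use a different but equivalent convention).

        def inversion_vector(perm):
            # b[i] counts entries left of position i that exceed perm[i]
            return [sum(perm[j] > perm[i] for j in range(i)) for i in range(len(perm))]

        perm = [3, 1, 4, 5, 2]
        b = inversion_vector(perm)    # [0, 1, 0, 0, 3]
        total = sum(b)                # 4 = number of inversions of perm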

  6. [Carotid Stenting in France after the EVA 3S and SPACE publications].

    PubMed

    Beyssen, B; Rousseau, H; Bracard, S; Sapoval, M; Gaux, J-C

    2007-01-01

    Angioplasty of stenoses of the carotid bifurcation is a revascularization procedure that is used successfully in many patients. With more than 10 years of experience now, the feasibility of carotid stenting has been demonstrated. Its uptake varies widely between countries, with a mean penetration rate in Europe of 15% of carotid revascularizations. However, the complication rate is highly variable from one series to another and depends on the type of patient treated and the operator's learning curve. The results of the first two randomized studies comparing endarterectomy and carotid stenting, EVA 3S in France and SPACE in Germany, have just been published. The conclusions of these studies relate only to symptomatic patients, who make up a small proportion of revascularized patients. At 30 days, the French study concluded that surgery was better, and the German study showed no advantage to stenting. Analysis of these results alongside other publications should make it possible to better define the current indications for carotid stenting.

  7. A Geometrical Framework for Covariance Matrices of Continuous and Categorical Variables

    ERIC Educational Resources Information Center

    Vernizzi, Graziano; Nakai, Miki

    2015-01-01

    It is well known that a categorical random variable can be represented geometrically by a simplex. Accordingly, several measures of association between categorical variables have been proposed and discussed in the literature. Moreover, the standard definitions of covariance and correlation coefficient for continuous random variables have been…

  8. Revised radiometric calibration technique for LANDSAT-4 Thematic Mapper data

    NASA Technical Reports Server (NTRS)

    Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.

    1984-01-01

    Depending on detector number, there are random fluctuations in the background level for spectral band 1 of magnitudes ranging from 2 to 3.5 digital numbers (DN). Similar variability is observed in all the other reflective bands, but with smaller magnitude, in the range 0.5 to 2.5 DN. Observations of background reference levels show that line-dependent variations in raw TM image data and in the associated calibration data can be measured and corrected within an operational environment by applying simple offset corrections on a line-by-line basis. The radiometric calibration procedure defined by the Canada Centre for Remote Sensing was revised accordingly in order to prevent striping in the output product.

  9. Dimensional Reduction for the General Markov Model on Phylogenetic Trees.

    PubMed

    Sumner, Jeremy G

    2017-03-01

    We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa, to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.

  10. Extreme value analysis in biometrics.

    PubMed

    Hüsler, Jürg

    2009-04-01

    We review some approaches of extreme value analysis in the context of biometrical applications. Classical extreme value analysis is based on iid random variables. Two different general methods are applied, which are discussed together with biometrical examples. Different estimation, testing, and goodness-of-fit procedures for applications are discussed. Furthermore, some non-classical situations are considered in which the data are possibly dependent, a non-stationary behavior is observed in the data, or the observations are not univariate. A few open problems are also stated.
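
    As a concrete instance of the classical iid approach, block maxima can be fitted with a generalized extreme value distribution; the sketch below uses scipy (an illustrative choice, not necessarily the author's) on simulated data.

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(2)
        annual_maxima = rng.gumbel(loc=10.0, scale=2.0, size=50)  # hypothetical block maxima

        shape, loc, scale = genextreme.fit(annual_maxima)
        # level exceeded on average once per 100 blocks
        return_level_100 = genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
        print(shape, return_level_100)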

  11. Variable step random walks, self-similar distributions, and pricing of options (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Gunaratne, Gemunu H.; McCauley, Joseph L.

    2005-05-01

    A new theory for pricing of options is presented. It is based on the assumption that successive movements depend on the value of the return. The solution to the Fokker-Planck equation is shown to be an asymmetric exponential distribution, similar to those observed in intra-day currency markets. The "volatility smile", used by traders to correct the Black-Scholes pricing is shown to be a heuristic mechanism to implement options pricing formulae derived from our theory.

  12. Characterizing Ship Navigation Patterns Using Automatic Identification System (AIS) Data in the Baltic Sea

    DTIC Science & Technology

    in the Saint Petersburg area. We use three random forest models that differ in their use of past information to predict a vessel's next port of visit...network where past information is used to more accurately predict the future state. The transitional probabilities change when predictor variables are...added that reach deeper into the past. Our findings suggest that successful prediction of the movement of a vessel depends on having accurate information on its recent history.

  13. Efficient Estimation of Mutual Information for Strongly Dependent Variables

    DTIC Science & Technology

    2015-05-11

    the two possibilities: for a fixed dimension d and nearest-neighbor parameter k, we find a constant α_{k,d}, such that if V̄(i)/V(i) < α_{k,d}, then...also compare the results to several baseline estimators: KSG (Kraskov et al., 2004), generalized nearest neighbor graph (GNN) (Pál et al., 2010)...Amaury Lendasse, and Francesco Corona. A boundary corrected expansion of the moments of nearest neighbor distributions. Random Struct. Algorithms
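
    For orientation, a KSG-family k-nearest-neighbour mutual information estimate can be obtained from scikit-learn as sketched below; this is the style of baseline estimator the report compares against, not the corrected estimator it proposes, and the data are simulated.

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        rng = np.random.default_rng(3)
        x = rng.normal(size=5000)
        y = x + 0.05 * rng.normal(size=5000)   # strongly dependent pair

        # kNN-based MI estimate (nats); the small noise scale puts us in the
        # strong-dependence regime where naive kNN estimators struggle
        mi = mutual_info_regression(x.reshape(-1, 1), y, n_neighbors=3)[0]
        print(mi)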

  14. Spike Pattern Structure Influences Synaptic Efficacy Variability under STDP and Synaptic Homeostasis. I: Spike Generating Models on Converging Motifs

    PubMed Central

    Bi, Zedong; Zhou, Changsong

    2016-01-01

    In neural systems, synaptic plasticity is usually driven by spike trains. Due to the inherent noise of neurons and synapses as well as the randomness of connection details, spike trains typically exhibit variability such as spatial randomness and temporal stochasticity, resulting in variability of synaptic changes under plasticity, which we call efficacy variability. How the variability of spike trains influences the efficacy variability of synapses remains unclear. In this paper, we try to understand this influence under pair-wise additive spike-timing-dependent plasticity (STDP) when the mean strength of plastic synapses into a neuron is bounded (synaptic homeostasis). Specifically, we systematically study, analytically and numerically, how four aspects of statistical features, i.e., synchronous firing, burstiness/regularity, heterogeneity of rates and heterogeneity of cross-correlations, as well as their interactions, influence the efficacy variability in converging motifs (simple networks in which one neuron receives input from many other neurons). Neurons (including the post-synaptic neuron) in a converging motif generate spikes according to statistical models with tunable parameters. In this way, we can explicitly control the statistics of the spike patterns and investigate their influence on the efficacy variability, without worrying about feedback from synaptic changes onto the dynamics of the post-synaptic neuron. We separate efficacy variability into two parts: the drift part (DriftV) induced by the heterogeneity of change rates of different synapses, and the diffusion part (DiffV) induced by weight diffusion caused by the stochasticity of spike trains. Our main findings are: (1) synchronous firing and burstiness tend to increase DiffV, (2) heterogeneity of rates induces DriftV when potentiation and depression in STDP are not balanced, and (3) heterogeneity of cross-correlations induces DriftV together with heterogeneity of rates. We anticipate that our work will be important for understanding functional processes of neuronal networks (such as memory) and neural development. PMID:26941634

  15. Tigers on trails: occupancy modeling for cluster sampling.

    PubMed

    Hines, J E; Nichols, J D; Royle, J A; MacKenzie, D I; Gopalaswamy, A M; Kumar, N Samba; Karanth, K U

    2010-07-01

    Occupancy modeling focuses on inference about the distribution of organisms over space, using temporal or spatial replication to allow inference about the detection process. Inference based on spatial replication strictly requires that replicates be selected randomly and with replacement, but the importance of these design requirements is not well understood. This paper focuses on an increasingly popular sampling design based on spatial replicates that are not selected randomly and that are expected to exhibit Markovian dependence. We develop two new occupancy models for data collected under this sort of design, one based on an underlying Markov model for spatial dependence and the other based on a trap response model with Markovian detections. We then simulated data under the model for Markovian spatial dependence and fit the data to standard occupancy models and to the two new models. Bias of occupancy estimates was substantial for the standard models, smaller for the new trap response model, and negligible for the new spatial process model. We also fit these models to data from a large-scale tiger occupancy survey recently conducted in Karnataka State, southwestern India. In addition to providing evidence of a positive relationship between tiger occupancy and habitat, model selection statistics and estimates strongly supported the use of the model with Markovian spatial dependence. This new model provides another tool for the decomposition of the detection process, which is sometimes needed for proper estimation and which may also permit interesting biological inferences. In addition to designs employing spatial replication, we note the likely existence of temporal Markovian dependence in many designs using temporal replication. The models developed here will be useful either directly, or with minor extensions, for these designs as well. We believe that these new models represent important additions to the suite of modeling tools now available for occupancy estimation in conservation monitoring. More generally, this work represents a contribution to the topic of cluster sampling for situations in which there is a need for specific modeling (e.g., reflecting dependence) for the distribution of the variable(s) of interest among subunits.

  16. Generated effect modifiers (GEM's) in randomized clinical trials.

    PubMed

    Petkova, Eva; Tarpey, Thaddeus; Su, Zhe; Ogden, R Todd

    2017-01-01

    In a randomized clinical trial (RCT), it is often of interest not only to estimate the effect of various treatments on the outcome, but also to determine whether any patient characteristic has a different relationship with the outcome, depending on treatment. In regression models for the outcome, if there is a non-zero interaction between treatment and a predictor, that predictor is called an "effect modifier". Identification of such effect modifiers is crucial as we move towards precision medicine, that is, optimizing individual treatment assignment based on patient measurements assessed when presenting for treatment. In most settings, there will be several baseline predictor variables that could potentially modify the treatment effects. This article proposes optimal methods of constructing a composite variable (defined as a linear combination of pre-treatment patient characteristics) in order to generate an effect modifier in an RCT setting. Several criteria are considered for generating effect modifiers, and their performance is studied via simulations. An example from an RCT is provided for illustration. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  17. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and a lack of prediction capability. Therefore, the multiplicative error model is the better choice.
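
    The distinction between the two error models is easy to see in simulation: a multiplicative error becomes additive, and hence cleanly separable into systematic and random parts, in log space. A minimal sketch with assumed error parameters:

        import numpy as np

        rng = np.random.default_rng(4)
        truth = rng.gamma(shape=2.0, scale=5.0, size=1000)       # "true" daily precipitation
        measured = truth * np.exp(rng.normal(-0.1, 0.3, 1000))   # multiplicative error model

        log_ratio = np.log(measured / truth)
        systematic = log_ratio.mean()   # recovers the bias term (about -0.1)
        random_part = log_ratio.std()   # recovers the random spread (about 0.3)
        print(systematic, random_part)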

  18. Multilevel Model Prediction

    ERIC Educational Resources Information Center

    Frees, Edward W.; Kim, Jee-Seon

    2006-01-01

    Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…

  19. A comparison of multiple imputation methods for handling missing values in longitudinal data in the presence of a time-varying covariate with a non-linear association with time: a simulation study.

    PubMed

    De Silva, Anurika Priyanjali; Moreno-Betancur, Margarita; De Livera, Alysha Madhu; Lee, Katherine Jane; Simpson, Julie Anne

    2017-07-25

    Missing data is a common problem in epidemiological studies, and is particularly prominent in longitudinal data, which involve multiple waves of data collection. Traditional multiple imputation (MI) methods (fully conditional specification (FCS) and multivariate normal imputation (MVNI)) treat repeated measurements of the same time-dependent variable as just another 'distinct' variable for imputation and therefore do not make the most of the longitudinal structure of the data. Only a few studies have explored extensions to the standard approaches to account for the temporal structure of longitudinal data. One suggestion is the two-fold fully conditional specification (two-fold FCS) algorithm, which restricts the imputation of a time-dependent variable to time blocks where the imputation model includes measurements taken at the specified and adjacent times. To date, no study has investigated the performance of two-fold FCS and standard MI methods for handling missing data in a time-varying covariate with a non-linear trajectory over time, a commonly encountered scenario in epidemiological studies. We simulated 1000 datasets of 5000 individuals based on the Longitudinal Study of Australian Children (LSAC). Three missing data mechanisms were used to impose missingness on body mass index (BMI)-for-age z-scores, a continuous time-varying exposure variable with a non-linear trajectory over time: missing completely at random (MCAR), and weak and strong missing at random (MAR) scenarios. We evaluated the performance of FCS, MVNI, and two-fold FCS for handling up to 50% missing data when assessing the association between childhood obesity and sleep problems. The standard two-fold FCS produced slightly more biased and less precise estimates than FCS and MVNI. We observed slight improvements in bias and precision when using a time window width of two for the two-fold FCS algorithm compared to the standard width of one. We recommend the use of FCS or MVNI in a similar longitudinal setting, and, when encountering convergence issues due to a large number of time points or variables with missing values, the two-fold FCS with exploration of a suitable time window.
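
    A standard (not two-fold) FCS-style imputation can be sketched with scikit-learn's chained-equations imputer. The longitudinal structure below is simulated, the tool choice is ours, and the two-fold time-window restriction described in the abstract is not implemented here.

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer

        rng = np.random.default_rng(5)
        # 500 children x 6 waves of a smoothly drifting z-score
        bmi_z = np.cumsum(rng.normal(0.0, 0.2, size=(500, 6)), axis=1)
        bmi_z[rng.random(bmi_z.shape) < 0.3] = np.nan   # 30% missing completely at random

        imputed = IterativeImputer(max_iter=10, random_state=0).fit_transform(bmi_z)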

  20. A Randomized Clinical Trial of Methadone Maintenance for Prisoners: Prediction of Treatment Entry and Completion in Prison

    PubMed Central

    GORDON, MICHAEL S.; KINLOCK, TIMOTHY W.; COUVILLION, KATHRYN A.; SCHWARTZ, ROBERT P.; O’GRADY, KEVIN

    2014-01-01

    The present report is an intent-to-treat analysis involving secondary data drawn from the first randomized clinical trial of prison-initiated methadone in the United States. This study examined predictors of treatment entry and completion in prison. A sample of 211 adult male prerelease inmates with preincarceration heroin dependence were randomly assigned to one of three treatment conditions: counseling only (counseling in prison; n = 70); counseling plus transfer (counseling in prison with transfer to methadone maintenance treatment upon release; n = 70); and counseling plus methadone (methadone maintenance in prison, continued in a community-based methadone maintenance program upon release; n = 71). Entry into prison treatment (p < .01) and completion of prison treatment (p < .001) were significantly predicted by the set of 10 explanatory variables, and both outcomes favored the treatment conditions receiving methadone. The present results indicate that individuals who are older and have longer prison sentences may have better outcomes than younger individuals with shorter sentences, meaning they are more likely to enter and complete prison-based treatment. Furthermore, the implications for the treatment of prisoners with prior heroin dependence and for conducting clinical trials point to the importance of examining individual characteristics and patient preference. PMID:25392605

  1. A Randomized Clinical Trial of Methadone Maintenance for Prisoners: Prediction of Treatment Entry and Completion in Prison.

    PubMed

    Gordon, Michael S; Kinlock, Timothy W; Couvillion, Kathryn A; Schwartz, Robert P; O'Grady, Kevin

    2012-05-01

    The present report is an intent-to-treat analysis involving secondary data drawn from the first randomized clinical trial of prison-initiated methadone in the United States. This study examined predictors of treatment entry and completion in prison. A sample of 211 adult male prerelease inmates with preincarceration heroin dependence were randomly assigned to one of three treatment conditions: counseling only (counseling in prison; n = 70); counseling plus transfer (counseling in prison with transfer to methadone maintenance treatment upon release; n = 70); and counseling plus methadone (methadone maintenance in prison, continued in a community-based methadone maintenance program upon release; n = 71). Entry into prison treatment (p < .01) and completion of prison treatment (p < .001) were significantly predicted by the set of 10 explanatory variables, and both outcomes favored the treatment conditions receiving methadone. The present results indicate that individuals who are older and have longer prison sentences may have better outcomes than younger individuals with shorter sentences, meaning they are more likely to enter and complete prison-based treatment. Furthermore, the implications for the treatment of prisoners with prior heroin dependence and for conducting clinical trials point to the importance of examining individual characteristics and patient preference.

  2. Cotunneling and polaronic effect in granular systems

    NASA Astrophysics Data System (ADS)

    Ioselevich, A. S.; Sivak, V. V.

    2017-06-01

    We theoretically study the conductivity in arrays of metallic grains due to the variable-range multiple cotunneling of electrons with short-range (screened) Coulomb interaction. The system is supposed to be coupled to random stray charges in the dielectric matrix that are only loosely bound to their spatial positions by elastic forces. The flexibility of the stray charges gives rise to a polaronic effect, which leads to the onset of Arrhenius-type conductivity behavior at low temperatures, replacing conventional Mott variable-range hopping. The effective activation energy depends logarithmically on temperature due to fluctuations of the polaron barrier heights. We present a unified theory that covers both the weak and strong polaron effect regimes of hopping in granular metals and describes the crossover from elastic to inelastic cotunneling.

  3. Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel.

    PubMed

    Dansirikul, Chantaratsamon; Choi, Malcolm; Duffull, Stephen B

    2005-06-01

    This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment-model-dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel spreadsheet was implemented with the use of Solver and Visual Basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained to those of a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated by the BA method were similar to those of the NONMEM estimation.
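
    The direction of the conversion can be illustrated for the simplest case: a one-compartment IV-bolus back-calculation from non-compartmental summaries (the numerical values below are assumed examples, not the study's).

        import numpy as np

        dose = 500.0    # mg, IV bolus
        auc = 50.0      # mg*h/L, area under the concentration-time curve
        t_half = 4.0    # h, terminal half-life

        k = np.log(2) / t_half   # elimination rate constant (1/h)
        cl = dose / auc          # clearance (L/h)
        v = cl / k               # volume of distribution (L)
        print(k, cl, v)

    For two-compartment models the mapping is no longer closed-form, which is why the authors implement the conversion with Excel's Solver.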

  4. Do religion and religiosity have anything to do with alcohol consumption patterns? Evidence from two fish landing sites on Lake Victoria Uganda.

    PubMed

    Tumwesigye, Nazarius M; Atuyambe, Lynn; Kibira, Simon P S; Wabwire-Mangen, Fred; Tushemerirwe, Florence; Wagner, Glenn J

    2013-09-01

    Fish landing sites have high levels of harmful use of alcohol. This paper examines the role of religion and religiosity in alcohol consumption at two fish landing sites on Lake Victoria in Uganda. Questionnaires were administered to randomly selected people at the sites. The dependent variables included alcohol consumption during the previous 30 days, whereas the key independent variables were religion and religiosity. Bivariate and multivariate analysis techniques were applied. People reporting low religiosity were five times more likely to have consumed alcohol (95% confidence interval: 2.45-10.04) than those reporting average or high religiosity. Religion and religiosity are potential channels for controlling alcohol use.

  5. ARIMA representation for daily solar irradiance and surface air temperature time series

    NASA Astrophysics Data System (ADS)

    Kärner, Olavi

    2009-06-01

    Autoregressive integrated moving average (ARIMA) models are used to compare the long-range temporal variability of the total solar irradiance (TSI) at the top of the atmosphere (TOA) and surface air temperature series. The comparison shows that one and the same type of model is applicable to represent both the TSI and the air temperature series. In terms of model type, the surface air temperature series closely imitates that of the TSI. This may mean that currently no other forcing of the climate system is capable of changing the random-walk-type variability established by the varying activity of the rotating Sun. The result should inspire a more detailed examination of the dependence of various climate series on short-range fluctuations of TSI.
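
    A minimal statsmodels sketch of the kind of model comparison described, on simulated random-walk-type data (the order and the data are illustrative, not the paper's fitted model):

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(6)
        series = np.cumsum(rng.normal(size=2000))   # random-walk-type variability

        fit = ARIMA(series, order=(0, 1, 1)).fit()
        print(fit.aic)   # compare AIC across candidate (p, d, q) orders for both series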

  6. Mitochondria and the non-genetic origins of cell-to-cell variability: More is different.

    PubMed

    Guantes, Raúl; Díaz-Colunga, Juan; Iborra, Francisco J

    2016-01-01

    Gene expression activity is heterogeneous in a population of isogenic cells. Identifying the molecular basis of this variability will improve our understanding of phenomena like tumor resistance to drugs, virus infection, or cell fate choice. The complexity of the molecular steps and machines involved in transcription and translation could introduce sources of randomness at many levels, but a common constraint on most of these processes is their energy dependence. In eukaryotic cells, most of this energy is provided by mitochondria. A clonal population of cells may show a large variability in the number and functionality of mitochondria. Here, we discuss how differences in the mitochondrial content of each cell contribute to heterogeneity in gene products. Changes in the amount of mitochondria can also entail drastic alterations of a cell's gene expression program, which ultimately leads to phenotypic diversity. © 2015 WILEY Periodicals, Inc.

  7. Spatial vs. individual variability with inheritance in a stochastic Lotka-Volterra system

    NASA Astrophysics Data System (ADS)

    Dobramysl, Ulrich; Tauber, Uwe C.

    2012-02-01

    We investigate a stochastic spatial Lotka-Volterra predator-prey model with randomized interaction rates that are either affixed to the lattice sites and quenched, or specific to individuals in either population, or both. In the latter situation, we include rate inheritance with mutations from the particles' progenitors. We thus arrive at a simple model for competitive evolution with environmental variability and selection pressure. We employ Monte Carlo simulations in zero and two dimensions to study the time evolution of both species' densities and their interaction rate distributions. The predator and prey concentrations in the ensuing steady states depend crucially on the environmental variability, whereas the temporal evolution of the individualized rate distributions leads to largely neutral optimization. Contrary to, e.g., linear gene expression models, this system does not experience fixation at extreme values. An approximate description of the resulting data is achieved by means of an effective master equation approach for the interaction rate distribution.

  8. A Tutorial in Bayesian Potential Outcomes Mediation Analysis.

    PubMed

    Miočević, Milica; Gonzalez, Oscar; Valente, Matthew J; MacKinnon, David P

    2018-01-01

    Statistical mediation analysis is used to investigate intermediate variables in the relation between independent and dependent variables. Causal interpretation of mediation analyses is challenging because randomization of subjects to levels of the independent variable does not rule out the possibility of unmeasured confounders of the mediator-to-outcome relation. Furthermore, commonly used frequentist methods for mediation analysis compute the probability of the data given the null hypothesis, which is not the probability of a hypothesis given the data, as in Bayesian analysis. Under certain assumptions, applying the potential outcomes framework to mediation analysis allows for the computation of causal effects, and statistical mediation in the Bayesian framework gives indirect effects probabilistic interpretations. This tutorial combines causal inference and Bayesian methods for mediation analysis so that the indirect and direct effects have both causal and probabilistic interpretations. Steps in Bayesian causal mediation analysis are shown in an application to an empirical example.
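
    For contrast with the Bayesian treatment, the familiar frequentist product-of-coefficients estimate of the indirect effect can be sketched in a few lines (simulated data; the causal reading of the mediator path still rests on the no-unmeasured-confounding assumption discussed above).

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n = 300
        x = rng.binomial(1, 0.5, n)                  # randomized independent variable
        m = 0.5 * x + rng.normal(size=n)             # mediator
        y = 0.7 * m + 0.2 * x + rng.normal(size=n)   # outcome

        a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
        bfit = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()
        direct, b = bfit.params[1], bfit.params[2]
        indirect = a * b   # mediated (indirect) effect estimate
        print(indirect, direct)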

  9. Spatial generalised linear mixed models based on distances.

    PubMed

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.

  10. Coexisting with dependence and well-being: the results of a pilot study intervention on 75-99-year-old individuals.

    PubMed

    Rodríguez-Díaz, M Teresa; Pérez-Marfil, M Nieves; Cruz-Quintana, Francisco

    2016-12-01

    The objective of this study was to design and implement an intervention program centered on preventing functional dependence. A pre/post quasi-experimental (typical case) design study with a control group was conducted on a group of 75-90-year-old individuals with functional dependence (n = 59) at three nursing homes in Madrid (Spain). The intervention program consists of two types of activities developed simultaneously: some focused on emotional well-being (nine 90-minute sessions, once per week), whereas others focused on improving participants' physical condition (two 30-minute sessions per week). The randomized sample comprised 59 elderly individuals (intervention group = 30, control group = 29; mean age 86.80, SD = 5.19). Fifty-nine participants were analyzed. The results indicate that the program is effective in improving mood, lowering anxiety levels (d = 0.81), and increasing both self-esteem (d = 0.65) and the perception of self-efficacy (d = 1.04). There are improvements in systolic pressure, and functional dependence levels are maintained. Simple linear regression (with the pre-intervention Barthel score as the independent variable) shows that the pre-intervention dependence level can predict self-esteem after the intervention. We have demonstrated that the program is innovative with regard to bio-psychosocial care in elderly individuals, is based on actual practice, and is effective in increasing both self-esteem and self-efficacy. These variables positively affect functional capabilities and delay functional dependence.

  11. Multiple Scattering in Random Mechanical Systems and Diffusion Approximation

    NASA Astrophysics Data System (ADS)

    Feres, Renato; Ng, Jasmine; Zhang, Hong-Kun

    2013-10-01

    This paper is concerned with stochastic processes that model multiple (or iterated) scattering in classical mechanical systems of billiard type, defined below. From a given (deterministic) system of billiard type, a random process with transition probabilities operator P is introduced by assuming that some of the dynamical variables are random with prescribed probability distributions. Of particular interest are systems with weak scattering, which are associated to parametric families of operators P_h, depending on a geometric or mechanical parameter h, such that P_h approaches the identity as h goes to 0. It is shown that (P_h - I)/h converges for small h to a second-order elliptic differential operator L on compactly supported functions, and that the Markov chain process associated to P_h converges to a diffusion with infinitesimal generator L. Both P_h and L are self-adjoint and (densely) defined on the space of square-integrable functions over the (lower) half-space, with respect to a stationary measure η. This measure's density is either the (post-collision) Maxwell-Boltzmann distribution or the Knudsen cosine law, and the random processes with infinitesimal generator L respectively correspond to what we call MB diffusion and (generalized) Legendre diffusion. Concrete examples of simple mechanical systems are given and illustrated by numerically simulating the random processes.

  12. Evaluation of a Class of Simple and Effective Uncertainty Methods for Sparse Samples of Random Variables and Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Vicente; Bonney, Matthew; Schroeder, Benjamin

    When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus the number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of the response, and the 10^-4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.

  13. Memory consolidation and contextual interference effects with computer games.

    PubMed

    Shewokis, Patricia A

    2003-10-01

    Some investigators of the contextual interference effect contend that there is a direct relation between the amount of practice and the contextual interference effect based on the prediction that the improvement in learning tasks in a random practice schedule, compared to a blocked practice schedule, increases in magnitude as the amount of practice during acquisition on the tasks increases. Research using computer games in contextual interference studies has yielded a large effect (f = .50) with a random practice schedule advantage during transfer. These investigations had a total of 36 and 72 acquisition trials, respectively. The present study tested this prediction by having 72 college students, who were randomly assigned to a blocked or random practice schedule, practice 102 trials of three computer-game tasks across three days. After a 24-hr. interval, 6 retention and 5 transfer trials were performed. Dependent variables were time to complete an event in seconds and number of errors. No significant differences were found for retention and transfer. These results are discussed in terms of how the amount of practice, task-related factors, and memory consolidation mediate the contextual interference effect.

  14. On the relationship between ecosystem-scale hyperspectral reflectance and CO2 exchange in European mountain grasslands

    NASA Astrophysics Data System (ADS)

    Balzarolo, M.; Vescovo, L.; Hammerle, A.; Gianelle, D.; Papale, D.; Tomelleri, E.; Wohlfahrt, G.

    2015-05-01

    In this paper we explore the skill of hyperspectral reflectance measurements and vegetation indices (VIs) derived from these in estimating carbon dioxide (CO2) fluxes of grasslands. Hyperspectral reflectance data, CO2 fluxes and biophysical parameters were measured at three grassland sites located in European mountain regions using standardized protocols. The relationships between CO2 fluxes, ecophysiological variables, traditional VIs and VIs derived using all two-band combinations of wavelengths available from the whole hyperspectral data space were analysed. We found that VIs derived from hyperspectral data generally explained a large fraction of the variability in the investigated dependent variables but differed in their ability to estimate midday and daily average CO2 fluxes and various derived ecophysiological parameters. Relationships between VIs and CO2 fluxes and ecophysiological parameters were site-specific, likely due to differences in soils, vegetation parameters and environmental conditions. Chlorophyll and water-content-related VIs explained the largest fraction of variability in most of the dependent variables. Band selection based on a combination of a genetic algorithm with random forests (GA-rF) confirmed that it is difficult to select a universal band region suitable across the investigated ecosystems. Our findings have major implications for upscaling terrestrial CO2 fluxes to larger regions and for remote- and proximal-sensing sampling and analysis strategies and call for more cross-site synthesis studies linking ground-based spectral reflectance with ecosystem-scale CO2 fluxes.
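
    The exhaustive two-band search described above is mechanically simple; a sketch with random stand-in data follows (real use would substitute measured reflectance and fluxes, and should guard against the multiple-comparison risk inherent in so many band pairs).

        import numpy as np

        rng = np.random.default_rng(8)
        refl = rng.random((60, 200))   # hypothetical spectra: 60 observations x 200 bands
        flux = rng.normal(size=60)     # hypothetical midday CO2 flux

        best_r, best_pair = 0.0, None
        for i in range(refl.shape[1]):
            for j in range(i + 1, refl.shape[1]):
                # normalized difference index for the band pair (i, j)
                ndi = (refl[:, i] - refl[:, j]) / (refl[:, i] + refl[:, j])
                r = abs(np.corrcoef(ndi, flux)[0, 1])
                if r > best_r:
                    best_r, best_pair = r, (i, j)
        print(best_pair, best_r)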

  15. Estimations of natural variability between satellite measurements of trace species concentrations

    NASA Astrophysics Data System (ADS)

    Sheese, P.; Walker, K. A.; Boone, C. D.; Degenstein, D. A.; Kolonjari, F.; Plummer, D. A.; von Clarmann, T.

    2017-12-01

    In order to validate satellite measurements of atmospheric states, it is necessary to understand the range of random and systematic errors inherent in the measurements. On occasions where the measurements do not agree within those errors, a common "go-to" explanation is that the unexplained difference can be chalked up to "natural variability". However, the expected natural variability is often left ambiguous and rarely quantified. This study will look to quantify the expected natural variability of both O3 and NO2 between two satellite instruments: ACE-FTS (Atmospheric Chemistry Experiment - Fourier Transform Spectrometer) and OSIRIS (Optical Spectrograph and Infrared Imaging System). By sampling the CMAM30 (30-year specified dynamics simulation of the Canadian Middle Atmosphere Model) climate chemistry model throughout the upper troposphere and stratosphere at times and geolocations of coincident ACE-FTS and OSIRIS measurements at varying coincidence criteria, height-dependent expected values of O3 and NO2 variability will be estimated and reported on. The results could also be used to better optimize the coincidence criteria used in satellite measurement validation studies.

  16. Analytic methods for questions pertaining to a randomized pretest, posttest, follow-up design.

    PubMed

    Rausch, Joseph R; Maxwell, Scott E; Kelley, Ken

    2003-09-01

    Delineates 5 questions regarding group differences that are likely to be of interest to researchers within the framework of a randomized pretest, posttest, follow-up (PPF) design. These 5 questions are examined from a methodological perspective by comparing and discussing analysis of variance (ANOVA) and analysis of covariance (ANCOVA) methods and briefly discussing hierarchical linear modeling (HLM) for these questions. This article demonstrates that the pretest should be utilized as a covariate in the model rather than as a level of the time factor or as part of the dependent variable within the analysis of group differences. It is also demonstrated that how the posttest and the follow-up are utilized in the analysis of group differences is determined by the specific question asked by the researcher.
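
    The recommended use of the pretest as a covariate is one line in a regression framework. A statsmodels sketch with simulated PPF data (posttest analysis shown; the follow-up would be handled analogously):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(9)
        n = 120
        pre = rng.normal(50, 10, n)
        group = rng.binomial(1, 0.5, n)                      # randomized assignment
        post = 0.8 * pre + 5.0 * group + rng.normal(0, 5, n)

        df = pd.DataFrame({"pre": pre, "group": group, "post": post})
        fit = smf.ols("post ~ group + pre", data=df).fit()   # ANCOVA: pretest as covariate
        print(fit.params["group"])   # adjusted group difference at posttest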

  17. Evaluation of Gas Phase Dispersion in Flotation under Predetermined Hydrodynamic Conditions

    NASA Astrophysics Data System (ADS)

    Młynarczykowska, Anna; Oleksik, Konrad; Tupek-Murowany, Klaudia

    2018-03-01

    Results of various investigations show a relationship between flotation parameters and gas distribution in a flotation cell. The size of gas bubbles is a random variable with a specific distribution, and the analysis of this distribution is useful for building a mathematical description of the flotation process. The flotation process depends on many variable factors, mainly events such as the collision of a single particle with a gas bubble, the adhesion of the particle to the bubble surface, and the detachment process. These factors are characterized by randomness, so it is only possible to speak of the probability that one of these events occurs, which directly affects the speed of the process and thus the flotation rate constant. The probability of a bubble-particle collision in a flotation chamber with mechanical pulp agitation depends on the surface tension of the solution, air consumption, degree of pulp aeration, energy dissipation, and average feed particle size. Appropriate identification and description of the parameters of gas-bubble dispersion help to complete the analysis of the flotation process under specific physicochemical and hydrodynamic conditions for any raw material. The article presents the results of measurements and analysis of gas-phase dispersion, via the size distribution of air bubbles in a flotation chamber under fixed hydrodynamic conditions. The tests were carried out in the Laboratory of Instrumental Methods in the Department of Environmental Engineering and Mineral Processing, Faculty of Mining and Geoengineering, AGH University of Science and Technology in Krakow.

  18. Qualitatively Assessing Randomness in SVD Results

    NASA Astrophysics Data System (ADS)

    Lamb, K. W.; Miller, W. P.; Kalra, A.; Anderson, S.; Rodriguez, A.

    2012-12-01

    Singular Value Decomposition (SVD) is a powerful tool for identifying regions of significant co-variability between two spatially distributed datasets. SVD has been widely used in atmospheric research to define relationships between sea surface temperatures, geopotential height, wind, precipitation and streamflow data for myriad regions across the globe. A typical application for SVD is to identify leading climate drivers (as observed in the wind or pressure data) for a particular hydrologic response variable such as precipitation, streamflow, or soil moisture. One can also investigate the lagged relationship between a climate variable and the hydrologic response variable using SVD. When performing these studies it is important to limit the spatial bounds of the climate variable to reduce the chance of random co-variance relationships being identified. On the other hand, a climate region that is too small may ignore climate signals which have more than a statistical relationship to a hydrologic response variable. The proposed research seeks a qualitative method for identifying random co-variability relationships between two datasets. The research takes heterogeneous correlation maps from several past results and compares them with correlation maps produced using purely random and quasi-random climate data. The comparison yields a methodology for determining whether a particular region on a correlation map can be explained by a physical mechanism or is simply statistical chance.

  19. Combining Fourier and lagged k-nearest neighbor imputation for biomedical time series data.

    PubMed

    Rahman, Shah Atiqur; Huang, Yuxiao; Claassen, Jan; Heintzman, Nathaniel; Kleinberg, Samantha

    2015-12-01

    Most clinical and biomedical data contain missing values. A patient's record may be split across multiple institutions, devices may fail, and sensors may not be worn at all times. While these missing values are often ignored, this can lead to bias and error when the data are mined. Further, the data are not simply missing at random. Instead, the measurement of a variable such as blood glucose may depend on its prior values as well as those of other variables. These dependencies exist across time as well, but current methods have yet to incorporate these temporal relationships along with multiple types of missingness. To address this, we propose an imputation method (FLk-NN) that incorporates time-lagged correlations both within and across variables by combining two imputation methods, based on an extension to k-NN and the Fourier transform. This enables imputation of missing values even when all data at a time point are missing and when there are different types of missingness both within and across variables. In comparison to other approaches on three biological datasets (simulated and actual Type 1 diabetes datasets, and multi-modality neurological ICU monitoring), the proposed method has the highest imputation accuracy. This was true for up to half the data being missing and when consecutive missing values constitute a significant fraction of the overall time series length. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. A master equation and moment approach for biochemical systems with creation-time-dependent bimolecular rate functions

    PubMed Central

    Chevalier, Michael W.; El-Samad, Hana

    2014-01-01

    Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions, where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-to-cell variability even in clonal populations. Stochastic biochemical networks have been traditionally modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). In diffusion reaction systems on membranes, the Markov formalism, which assumes constant reaction propensities, is not directly appropriate. This is because the instantaneous propensity for a diffusion reaction to occur depends on the creation times of the molecules involved. In this work, we develop a chemical master equation for systems of this type. While this new CME is computationally intractable, we make rational dimensional reductions to form an approximate equation, whose moments are also derived and are shown to yield efficient, accurate results. This new framework forms a more general approach than the Markov CME and expands upon the realm of possible stochastic biochemical systems that can be efficiently modeled. PMID:25481130

  1. Coverage dependent molecular assembly of anthraquinone on Au(111)

    NASA Astrophysics Data System (ADS)

    DeLoach, Andrew S.; Conrad, Brad R.; Einstein, T. L.; Dougherty, Daniel B.

    2017-11-01

    A scanning tunneling microscopy study of anthraquinone (AQ) on the Au(111) surface shows that the molecules self-assemble into several structures depending on the local surface coverage. At high coverages, a close-packed saturated monolayer is observed, while at low coverages, mobile surface molecules coexist with stable chiral hexamer clusters. At intermediate coverages, a disordered 2D porous network interlinking close-packed islands is observed in contrast to the giant honeycomb networks observed for the same molecule on Cu(111). This difference verifies the predicted extreme sensitivity [J. Wyrick et al., Nano Lett. 11, 2944 (2011)] of the pore network to small changes in the surface electronic structure. Quantitative analysis of the 2D pore network reveals that the areas of the vacancy islands are distributed log-normally. Log-normal distributions are typically associated with the product of random variables (multiplicative noise), and we propose that the distribution of pore sizes for AQ on Au(111) originates from random linear rate constants for molecules to either desorb from the surface or detach from the region of a nucleated pore.
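
    The log-normal claim is straightforward to check on any set of measured island areas; a sketch with scipy (the area values below are synthetic stand-ins, not the paper's data):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Stand-in for measured vacancy-island areas (nm^2); hypothetical numbers.
    areas = rng.lognormal(mean=3.0, sigma=0.6, size=400)

    # Fit a log-normal with the location pinned at zero (areas are positive).
    shape, loc, scale = stats.lognorm.fit(areas, floc=0)
    print(f"sigma = {shape:.2f}, median area = {scale:.1f} nm^2")

    # Goodness of fit; note the KS p-value is optimistic when parameters
    # are estimated from the same sample.
    ks = stats.kstest(areas, "lognorm", args=(shape, loc, scale))
    print(f"KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.2f}")

    # Multiplicative-noise intuition: a product of many independent positive
    # factors is log-normal, since its log is a sum (central limit theorem).
    factors = rng.uniform(0.8, 1.25, size=(400, 200))
    product = factors.prod(axis=1)
    print("log-product skewness ~ 0:", stats.skew(np.log(product)).round(2))
    ```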

  2. Coverage dependent molecular assembly of anthraquinone on Au(111).

    PubMed

    DeLoach, Andrew S; Conrad, Brad R; Einstein, T L; Dougherty, Daniel B

    2017-11-14

    A scanning tunneling microscopy study of anthraquinone (AQ) on the Au(111) surface shows that the molecules self-assemble into several structures depending on the local surface coverage. At high coverages, a close-packed saturated monolayer is observed, while at low coverages, mobile surface molecules coexist with stable chiral hexamer clusters. At intermediate coverages, a disordered 2D porous network interlinking close-packed islands is observed in contrast to the giant honeycomb networks observed for the same molecule on Cu(111). This difference verifies the predicted extreme sensitivity [J. Wyrick et al., Nano Lett. 11, 2944 (2011)] of the pore network to small changes in the surface electronic structure. Quantitative analysis of the 2D pore network reveals that the areas of the vacancy islands are distributed log-normally. Log-normal distributions are typically associated with the product of random variables (multiplicative noise), and we propose that the distribution of pore sizes for AQ on Au(111) originates from random linear rate constants for molecules to either desorb from the surface or detach from the region of a nucleated pore.

  3. A master equation and moment approach for biochemical systems with creation-time-dependent bimolecular rate functions

    NASA Astrophysics Data System (ADS)

    Chevalier, Michael W.; El-Samad, Hana

    2014-12-01

    Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions, where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-to-cell variability even in clonal populations. Stochastic biochemical networks have been traditionally modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). In diffusion-reaction systems on membranes, the Markov formalism, which assumes constant reaction propensities, is not directly appropriate. This is because the instantaneous propensity for a diffusion reaction to occur depends on the creation times of the molecules involved. In this work, we develop a chemical master equation for systems of this type. While this new CME is computationally intractable, we make rational dimensional reductions to form an approximate equation, whose moments are also derived and are shown to yield efficient, accurate results. This new framework forms a more general approach than the Markov CME and expands the realm of stochastic biochemical systems that can be efficiently modeled.

  4. Comparison of structured and unstructured physical activity training on predicted VO2max and heart rate variability in adolescents - a randomized control trial.

    PubMed

    Sharma, Vivek Kumar; Subramanian, Senthil Kumar; Radhakrishnan, Krishnakumar; Rajendran, Rajathi; Ravindran, Balasubramanian Sulur; Arunachalam, Vinayathan

    2017-05-01

    Physical inactivity contributes to many health issues. The WHO-recommended physical activity for adolescents encompasses aerobic, resistance, and bone-strengthening exercises aimed at achieving health-related physical fitness. Heart rate variability (HRV) and maximal aerobic capacity (VO2max) are considered noninvasive measures of cardiovascular health. The objective of this study was to compare the effect of structured and unstructured physical training on maximal aerobic capacity and HRV among adolescents. We designed a single-blinded, parallel, randomized active-controlled trial (Registration No. CTRI/2013/08/003897) to compare the physiological effects of 6 months of globally recommended structured physical activity (SPA) with those of unstructured physical activity (USPA) in healthy school-going adolescents. We recruited 439 healthy student volunteers (boys: 250, girls: 189) aged 12-17 years. Randomization across the groups was done using an age- and gender-stratified randomization method, and the participants were divided into two groups: SPA (n=219, boys: 117, girls: 102) and USPA (n=220, boys: 119, girls: 101). Depending on their training status and gender, the participants in both the SPA and USPA groups were further subdivided into four sub-groups: SPA athlete boys (n=22) and girls (n=17), SPA nonathlete boys (n=95) and girls (n=85), USPA athlete boys (n=23) and girls (n=17), and USPA nonathlete boys (n=96) and girls (n=84). We recorded HRV, body fat%, and VO2max (estimated with the Rockport Walk Fitness test) before and after the intervention. Maximal aerobic capacity and HRV increased significantly, while heart rate, systolic blood pressure, diastolic blood pressure, and body fat percentage decreased significantly, after both the SPA and USPA interventions. However, the improvements were larger with SPA than with USPA. SPA is thus more beneficial than USPA for improving cardiorespiratory fitness and HRV and for reducing body fat percentage in adolescents, irrespective of gender and sports participation.

  5. Uncertainty analysis of geothermal energy economics

    NASA Astrophysics Data System (ADS)

    Sener, Adil Caner

    This dissertation research endeavors to explore geothermal energy economics by assessing and quantifying the uncertainties associated with the nature of geothermal energy and energy investments overall. The study introduces a stochastic geothermal cost model and a valuation approach for different geothermal power plant development scenarios. The Monte Carlo simulation technique is employed to obtain probability distributions of geothermal energy development costs and project net present values. In the study, a stochastic cost model with an incorporated dependence structure is defined and compared with a model in which the random variables are treated as independent inputs. One of the goals of the study is to shed light on the long-standing problem of modeling dependence between random input variables; here, this dependence is modeled by employing the method of copulas. The study focuses on four main types of geothermal power generation technologies and introduces a stochastic levelized cost model for each technology. Moreover, we also compare the levelized costs of natural gas combined cycle and coal-fired power plants with those of geothermal power plants. The input data used in the model rely on cost data recently reported by government agencies and non-profit organizations, such as the Department of Energy, the National Laboratories, the California Energy Commission, and the Geothermal Energy Association. The second part of the study introduces the stochastic discounted cash flow valuation model for the geothermal technologies analyzed in the first phase. In this phase of the study, the Integrated Planning Model (IPM) software was used to forecast the revenue streams of geothermal assets under different price and regulation scenarios. These results are then combined to create a stochastic revenue forecast for the power plants. The uncertainties in gas prices and environmental regulations are modeled and their potential impacts captured in the valuation model. Finally, the study compares the probability distributions of development cost and project value and discusses the market penetration potential of geothermal power generation. There is recent worldwide interest in geothermal utilization projects, for several reasons: the increasing volatility of fossil fuel prices, the need for domestic energy sources, approaching carbon emission limitations and state renewable energy standards, the increasing need for baseload units, and new technology that makes geothermal energy more attractive for power generation. It is our hope that this study will contribute to the recent progress of geothermal energy by shedding light on the uncertainty of geothermal energy project costs.
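
    A minimal sketch of the copula idea for generating dependent Monte Carlo inputs (the marginal distributions, parameter values, and correlation below are hypothetical, not the dissertation's calibrated data):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Assumed correlation between drilling cost and plant capacity.
    rho = np.array([[1.0, -0.5],
                    [-0.5, 1.0]])
    L = np.linalg.cholesky(rho)

    n = 100_000
    z = rng.standard_normal((n, 2)) @ L.T     # correlated standard normals
    u = stats.norm.cdf(z)                     # Gaussian copula: uniform marginals

    # Transform to the chosen marginals (hypothetical parameter values).
    drill_cost = stats.lognorm.ppf(u[:, 0], s=0.4, scale=30e6)      # $ per project
    capacity_mw = stats.triang.ppf(u[:, 1], c=0.5, loc=20, scale=30)  # MW

    cost_per_mw = drill_cost / capacity_mw
    print("mean $/MW:", round(cost_per_mw.mean()))
    print("P5-P95 $/MW:", np.percentile(cost_per_mw, [5, 95]).round())
    ```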

  6. Influence of clinical baseline findings on the survival of 2 post systems: a randomized clinical trial.

    PubMed

    Schmitter, Marc; Rammelsberg, Peter; Gabbert, Olaf; Ohlmann, Brigitte

    2007-01-01

    The aim of this prospective randomized controlled trial was to evaluate the influence of clinical baseline characteristics on the survival of 2 post systems. One hundred patients needing a post were included. Half the patients received a glass fiber-reinforced post (FRP), and the other half received metal screw posts (MSP). The posts were assigned randomly. In addition to demographic data, the following parameters were recorded: type of tooth (incisor/canine versus molar/premolar), length of the post in relation to root length (percentage), extent of coronal tooth destruction (percentage), ferrule height (in millimeters), type of restoration (fixed or removable partial denture), and presence of antagonistic contacts (yes/no). After at least 1 year (mean: 13.84 months), the patients were recalled. Statistical analysis was performed using the log-rank test and Cox regression analysis. The survival rate of FRPs was 93.5%. In the MSP group, the survival rate was significantly lower (75.6%; log-rank test, P = .049). Additionally, the metal posts were associated with more unfavorable complications, for example, root fracture. The type of the tooth and the degree of coronal tooth destruction influenced the survival of MSPs, whereas no influence of these variables could be seen for FRPs. FRPs are superior to MSPs with respect to short-term clinical performance. Especially for MSPs, clinical survival depends on several variables.

  7. Testing statistical self-similarity in the topology of river networks

    USGS Publications Warehouse

    Troutman, Brent M.; Mantilla, Ricardo; Gupta, Vijay K.

    2010-01-01

    Recent work has demonstrated that the topological properties of real river networks deviate significantly from predictions of Shreve's random model. At the same time the property of mean self-similarity postulated by Tokunaga's model is well supported by data. Recently, a new class of network model called random self-similar networks (RSN) that combines self-similarity and randomness has been introduced to replicate important topological features observed in real river networks. We investigate if the hypothesis of statistical self-similarity in the RSN model is supported by data on a set of 30 basins located across the continental United States that encompass a wide range of hydroclimatic variability. We demonstrate that the generators of the RSN model obey a geometric distribution, and self-similarity holds in a statistical sense in 26 of these 30 basins. The parameters describing the distribution of interior and exterior generators are tested to be statistically different and the difference is shown to produce the well-known Hack's law. The inter-basin variability of RSN parameters is found to be statistically significant. We also test generator dependence on two climatic indices, mean annual precipitation and radiative index of dryness. Some indication of climatic influence on the generators is detected, but this influence is not statistically significant with the sample size available. Finally, two key applications of the RSN model to hydrology and geomorphology are briefly discussed.
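
    The geometric-distribution test for RSN generators can be sketched as a chi-square goodness of fit with pooled tail bins (the generator counts below are simulated placeholders, not the basin data):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    # Stand-in for observed generator values per edge (hypothetical data),
    # on the support 0, 1, 2, ... as in the RSN model.
    gen = rng.geometric(p=0.45, size=300) - 1

    p_hat = 1.0 / (gen.mean() + 1)            # MLE for pmf p * (1 - p)**k

    # Chi-square goodness of fit; the tail is pooled so expected counts stay >= 5.
    kmax = 4
    obs = np.array([(gen == k).sum() for k in range(kmax)] + [(gen >= kmax).sum()])
    pmf = p_hat * (1 - p_hat) ** np.arange(kmax)
    exp = len(gen) * np.append(pmf, (1 - p_hat) ** kmax)
    chi2 = ((obs - exp) ** 2 / exp).sum()
    dof = len(obs) - 1 - 1                    # bins - 1 - fitted parameters
    print(f"p_hat={p_hat:.2f}  chi2={chi2:.2f}  p={stats.chi2.sf(chi2, dof):.2f}")
    ```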

  8. Model's sparse representation based on reduced mixed GMsFE basis methods

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of the parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
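
    One of the two sampling strategies, proper orthogonal decomposition, amounts to an SVD of a snapshot matrix; a generic sketch follows (toy 1-D snapshots, not the mixed GMsFEM implementation):

    ```python
    import numpy as np

    def pod_basis(snapshots, energy=0.999):
        """Proper orthogonal decomposition of a snapshot matrix.

        snapshots: (n_dof, n_samples) solutions computed at training parameters.
        Returns the smallest orthonormal basis capturing `energy` of the variance.
        """
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        frac = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(frac, energy)) + 1
        return U[:, :r]

    # Toy snapshot set: a parameterized field sampled at random parameters.
    rng = np.random.default_rng(5)
    x = np.linspace(0, 1, 500)
    params = rng.uniform(1, 5, size=60)
    S = np.stack([np.sin(np.pi * a * x) / a for a in params], axis=1)

    V = pod_basis(S)
    print("reduced dimension:", V.shape[1], "of", S.shape[0])
    # Online stage: a new solution u is approximated in span(V):
    # coefficients = V.T @ u, reconstruction = V @ (V.T @ u).
    ```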

  9. Model's sparse representation based on reduced mixed GMsFE basis methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of the parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.

  10. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    PubMed

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling , or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

  11. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS

    PubMed Central

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2015-01-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM’s expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses. PMID:26166910

  12. Time dependent variation of carrying capacity of prestressed precast beam

    NASA Astrophysics Data System (ADS)

    Le, Tuan D.; Konečný, Petr; Matečková, Pavlína

    2018-04-01

    The article evaluates the time-dependent carrying capacity of a precast concrete element. Variation of the resistance is an inherent property of laboratory as well as in-situ members. Specifying the highest plausible resistance of a laboratory sample is therefore important when evaluating laboratory experiments against the loading capacity of the test machine. The ultimate capacity is evaluated through the bending moment resistance of a simply supported prestressed concrete beam. A probabilistic assessment is applied, considering scatter in the random variables of concrete compressive strength and effective height of the cross section. The Monte Carlo simulation technique is used to investigate the performance of the beam cross section under changes in tendon position and concrete compressive strength.
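
    A minimal sketch of such a Monte Carlo assessment (the section geometry, distributions, and acting moment are hypothetical values for illustration, not the paper's beam):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 1_000_000

    # Random inputs (hypothetical distributions).
    f_c = rng.normal(45e6, 4.5e6, n)        # concrete strength [Pa]
    d   = rng.normal(0.55, 0.01, n)         # effective depth of tendons [m]
    A_p, f_p, b = 1.2e-3, 1.6e9, 0.4        # tendon area [m^2], stress [Pa], width [m]

    # Rectangular stress block: M_R = A_p*f_p*(d - a/2), a = A_p*f_p / (0.85*f_c*b).
    a = A_p * f_p / (0.85 * f_c * b)
    M_R = A_p * f_p * (d - a / 2)

    M_load = 0.88e6                          # acting moment [N m] (assumed)
    print(f"mean M_R = {M_R.mean()/1e6:.2f} MN m")
    print(f"P(M_R < M_load) = {np.mean(M_R < M_load):.2e}")
    ```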

  13. Risk of dependence associated with health, social support, and lifestyle

    PubMed Central

    Alcañiz, Manuela; Brugulat, Pilar; Guillén, Montserrat; Medina-Bustos, Antonia; Mompart-Penina, Anna; Solé-Auró, Aïda

    2015-01-01

    OBJECTIVE To analyze the prevalence of individuals at risk of dependence and its associated factors. METHODS The study was based on data from the Catalan Health Survey, Spain conducted in 2010 and 2011. Logistic regression models from a random sample of 3,842 individuals aged ≥ 15 years were used to classify individuals according to the state of their personal autonomy. Predictive models were proposed to identify indicators that helped distinguish dependent individuals from those at risk of dependence. Variables on health status, social support, and lifestyles were considered. RESULTS We found that 18.6% of the population presented a risk of dependence, especially after age 65. Compared with this group, individuals who reported dependence (11.0%) had difficulties performing activities of daily living and had to receive support to perform them. Habits such as smoking, excessive alcohol consumption, and being sedentary were associated with a higher probability of dependence, particularly for women. CONCLUSIONS Difficulties in carrying out activities of daily living precede the onset of dependence. Preserving personal autonomy and function without receiving support appear to be a preventive factor. Adopting an active and healthy lifestyle helps reduce the risk of dependence. PMID:26018786

  14. Risk of dependence associated with health, social support, and lifestyle.

    PubMed

    Alcañiz, Manuela; Brugulat, Pilar; Guillén, Montserrat; Medina-Bustos, Antonia; Mompart-Penina, Anna; Solé-Auró, Aïda

    2015-01-01

    OBJECTIVE To analyze the prevalence of individuals at risk of dependence and its associated factors. METHODS The study was based on data from the Catalan Health Survey, Spain conducted in 2010 and 2011. Logistic regression models from a random sample of 3,842 individuals aged ≥ 15 years were used to classify individuals according to the state of their personal autonomy. Predictive models were proposed to identify indicators that helped distinguish dependent individuals from those at risk of dependence. Variables on health status, social support, and lifestyles were considered. RESULTS We found that 18.6% of the population presented a risk of dependence, especially after age 65. Compared with this group, individuals who reported dependence (11.0%) had difficulties performing activities of daily living and had to receive support to perform them. Habits such as smoking, excessive alcohol consumption, and being sedentary were associated with a higher probability of dependence, particularly for women. CONCLUSIONS Difficulties in carrying out activities of daily living precede the onset of dependence. Preserving personal autonomy and function without receiving support appear to be a preventive factor. Adopting an active and healthy lifestyle helps reduce the risk of dependence.

  15. Exercise training improves heart rate variability after methamphetamine dependency.

    PubMed

    Dolezal, Brett Andrew; Chudzynski, Joy; Dickerson, Daniel; Mooney, Larissa; Rawson, Richard A; Garfinkel, Alan; Cooper, Christopher B

    2014-06-01

    Heart rate variability (HRV) reflects a healthy autonomic nervous system and is increased with physical training. Methamphetamine dependence (MD) causes autonomic dysfunction and diminished HRV. We compared recently abstinent methamphetamine-dependent participants with age-matched, drug-free controls (DF) and also investigated whether HRV can be improved with exercise training in the methamphetamine-dependent participants. In 50 participants (MD = 28; DF = 22), resting heart rate (HR; R-R intervals) was recorded over 5 min while seated using a monitor affixed to a chest strap. Previously reported time-domain (SDNN, RMSSD, pNN50) and frequency-domain (LFnu, HFnu, LF/HF) parameters of HRV were calculated with customized software. MD participants were randomized to thrice-weekly exercise training (ME = 14) or equal attention without training (MC = 14) over 8 wk. Groups were compared using paired and unpaired t-tests. Statistical significance was set at P ≤ 0.05. Participant characteristics were matched between groups (mean ± SD): age = 33 ± 6 yr; body mass = 82.7 ± 12 kg; body mass index = 26.8 ± 4.1 kg·m^-2. Compared with DF, the MD group had significantly higher resting HR (P < 0.05), LFnu, and LF/HF (P < 0.001) as well as lower SDNN, RMSSD, pNN50, and HFnu (all P < 0.001). At randomization, HRV indices were similar between the ME and MC groups. However, after training, the ME group significantly (all P < 0.001) increased SDNN (+14.7 ± 2.0 ms, +34%), RMSSD (+19.6 ± 4.2 ms, +63%), pNN50 (+22.6% ± 2.7%, +173%), and HFnu (+14.2 ± 1.9, +60%), and decreased HR (-5.2 ± 1.1 bpm, -7%), LFnu (-9.6 ± 1.5, -16%), and LF/HF (-0.7 ± 0.3, -19%). These measures did not change from baseline in the MC group. HRV, based on several conventional indices, was diminished in recently abstinent, methamphetamine-dependent individuals. Moreover, physical training yielded a marked increase in HRV, representing increased vagal modulation or improved autonomic balance.
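
    The time-domain indices reported above are straightforward to compute from an R-R series; a sketch (the synthetic recording is a stand-in for real chest-strap data):

    ```python
    import numpy as np

    def hrv_time_domain(rr_ms):
        """Standard time-domain HRV indices from an R-R interval series (ms)."""
        rr = np.asarray(rr_ms, dtype=float)
        diff = np.diff(rr)
        return {
            "SDNN":  rr.std(ddof=1),                    # overall variability
            "RMSSD": np.sqrt(np.mean(diff**2)),         # beat-to-beat variability
            "pNN50": 100 * np.mean(np.abs(diff) > 50),  # % successive diffs > 50 ms
            "HR":    60_000 / rr.mean(),                # mean heart rate (bpm)
        }

    # Synthetic ~5-minute recording: ~70 bpm with mild drift and jitter.
    rng = np.random.default_rng(7)
    rr = 857 + np.cumsum(rng.normal(0, 8, 350)) * 0.1 + rng.normal(0, 25, 350)
    print({k: round(v, 1) for k, v in hrv_time_domain(rr).items()})
    ```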

  16. Compliance-Effect Correlation Bias in Instrumental Variables Estimators

    ERIC Educational Resources Information Center

    Reardon, Sean F.

    2010-01-01

    Instrumental variable estimators hold the promise of enabling researchers to estimate the effects of educational treatments that are not (or cannot be) randomly assigned but that may be affected by randomly assigned interventions. Examples of the use of instrumental variables in such cases are increasingly common in educational and social science…

  17. Identifying and Analyzing Uncertainty Structures in the TRMM Microwave Imager Precipitation Product over Tropical Ocean Basins

    NASA Technical Reports Server (NTRS)

    Liu, Jianbo; Kummerow, Christian D.; Elsaesser, Gregory S.

    2016-01-01

    Despite continuous improvements in microwave sensors and retrieval algorithms, our understanding of precipitation uncertainty is quite limited, due primarily to inconsistent findings in studies that compare satellite estimates to in situ observations over different parts of the world. This study seeks to characterize the temporal and spatial properties of uncertainty in the Tropical Rainfall Measuring Mission Microwave Imager surface rainfall product over tropical ocean basins. Two uncertainty analysis frameworks are introduced to qualitatively evaluate the properties of uncertainty under a hierarchy of spatiotemporal data resolutions. The first framework (i.e. 'climate method') demonstrates that, apart from random errors and regionally dependent biases, a large component of the overall precipitation uncertainty is manifested in cyclical patterns that are closely related to large-scale atmospheric modes of variability. By estimating the magnitudes of major uncertainty sources independently, the climate method is able to explain 45-88% of the monthly uncertainty variability. The percentage is largely resolution dependent (with the lowest percentage explained associated with a 1 deg x 1 deg spatial/1 month temporal resolution, and highest associated with a 3 deg x 3 deg spatial/3 month temporal resolution). The second framework (i.e. 'weather method') explains regional mean precipitation uncertainty as a summation of uncertainties associated with individual precipitation systems. By further assuming that self-similar recurring precipitation systems yield qualitatively comparable precipitation uncertainties, the weather method can consistently resolve about 50 % of the daily uncertainty variability, with only limited dependence on the regions of interest.

  18. The big picture: does colonoscopy work?

    PubMed

    Hewett, David G; Rex, Douglas K

    2015-04-01

    Colonoscopy for average-risk colorectal cancer screening has transformed the practice of gastrointestinal medicine in the United States. However, although the dominant screening strategy, its use is not supported by randomized controlled trials. Observational data do support a protective effect of colonoscopy and polypectomy on colorectal cancer incidence and mortality, but the level of protection in the proximal colon is variable and operator-dependent. Colonoscopy by high-level detectors remains highly effective, and ongoing quality improvement initiatives should consider regulatory factors that motivate changes in physician behavior. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls

    NASA Astrophysics Data System (ADS)

    Guha Ray, A.; Baidya, D. K.

    2012-09-01

    Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall focuses on the fact that high sensitivity of a particular variable for a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (R_f) for each random variable based on the combined effects of the failure probability (P_f) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables for these failure modes. P_f is calculated by Monte Carlo simulation, and the sensitivity analysis of each random variable is carried out by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that R_f for the friction angle of the backfill soil (φ_1) increases and that for the cohesion of the foundation soil (c_2) decreases with an increase in the variation of φ_1, while R_f for the unit weights of both soils (γ_1 and γ_2) and for the friction angle of the foundation soil (φ_2) remains almost constant under variation of the soil properties. The results compare well with some existing deterministic and probabilistic methods and are found to be cost-effective. If the variation of φ_1 remains within 5%, a significant reduction in cross-sectional area can be achieved; but if the variation exceeds 7-8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.

  20. Predictors of Start of Different Antidepressants in Patient Charts among Patients with Depression

    PubMed Central

    Kim, Hyungjin Myra; Zivin, Kara; Choe, Hae Mi; Stano, Clare M.; Ganoczy, Dara; Walters, Heather; Valenstein, Marcia

    2016-01-01

    Background In usual psychiatric care, antidepressant treatments are selected based on physician and patient preferences rather than being randomly allocated, resulting in spurious associations between these treatments and outcomes in observational studies. Objectives To identify factors recorded in electronic medical chart progress notes predictive of antidepressant selection among patients who had received a depression diagnosis. Methods This retrospective study sample consisted of 556 randomly selected Veterans Health Administration (VHA) patients diagnosed with depression from April 1, 1999 to September 30, 2004, stratified by antidepressant agent, geographic region, gender, and year of depression cohort entry. Predictors were obtained from administrative data, and additional variables were abstracted from electronic medical chart notes in the year prior to the start of the antidepressant in five categories: clinical symptoms and diagnoses, substance use, life stressors, behavioral/ideation measures (e.g., suicide attempts), and treatments received. Multinomial logistic regression analysis was used to assess the predictors associated with different antidepressant prescribing, and adjusted relative risk ratios (RRR) are reported. Results Of the administrative data-based variables, gender, age, illicit drug abuse or dependence, and number of psychiatric medications in the prior year were significantly associated with antidepressant selection. After adjusting for administrative data-based variables, sleep problems (RRR = 2.47) or marital issues (RRR = 2.64) identified in the charts were significantly associated with prescribing mirtazapine rather than sertraline; however, no other chart-based variables showed a significant or sizable association. Conclusion Some chart-based variables were predictive of antidepressant selection, but they were neither numerous nor highly predictive of antidepressant selection in patients treated for depression. PMID:25943003

  1. Quantum interference magnetoconductance of polycrystalline germanium films in the variable-range hopping regime

    NASA Astrophysics Data System (ADS)

    Li, Zhaoguo; Peng, Liping; Zhang, Jicheng; Li, Jia; Zeng, Yong; Zhan, Zhiqiang; Wu, Weidong

    2018-06-01

    Direct evidence of quantum interference magnetotransport in polycrystalline germanium films in the variable-range hopping (VRH) regime is reported. The temperature dependence of the conductivity of the germanium films follows the Mott VRH mechanism with the form of ? in the low-temperature regime (?). For the magnetotransport behaviour of our germanium films in the VRH regime, a crossover from negative magnetoconductance at low fields to positive magnetoconductance at high fields is observed when the zero-field conductivity is higher than the critical value (?). In the regime of ?, the magnetoconductance is positive and quadratic in the field for some germanium films. These features are in agreement with the VRH magnetotransport theory based on the quantum interference effect among random paths in the hopping process.

  2. Anderson localization for radial tree-like random quantum graphs

    NASA Astrophysics Data System (ADS)

    Hislop, Peter D.; Post, Olaf

    We prove that certain random models associated with radial, tree-like, rooted quantum graphs exhibit Anderson localization at all energies. The two main examples are the random length model (RLM) and the random Kirchhoff model (RKM). In the RLM, the lengths of each generation of edges form a family of independent, identically distributed (iid) random variables. For the RKM, the iid random variables are associated with each generation of vertices and moderate the current flow through the vertex. We consider extensions to various families of decorated graphs and prove stability of localization with respect to decoration. In particular, we prove Anderson localization for the random necklace model.

  3. Analysis of threats to research validity introduced by audio recording clinic visits: Selection bias, Hawthorne effect, both, or neither?

    PubMed Central

    Henry, Stephen G.; Jerant, Anthony; Iosif, Ana-Maria; Feldman, Mitchell D.; Cipri, Camille; Kravitz, Richard L.

    2015-01-01

    Objective To identify factors associated with participant consent to record visits, and to estimate effects of recording on patient-clinician interactions. Methods Secondary analysis of data from a randomized trial studying communication about depression; participants were asked for optional consent to audio-record study visits. Multiple logistic regression was used to model the likelihood of patient and clinician consent. Multivariable regression and propensity score analyses were used to estimate effects of audio recording on 6 dependent variables: discussion of depressive symptoms, preventive health, and depression diagnosis; depression treatment recommendations; visit length; and visit difficulty. Results Of 867 visits involving 135 primary care clinicians, 39% were recorded. For clinicians, only working in academic settings (P=0.003) and having worked longer at their current practice (P=0.02) were associated with increased likelihood of consent. For patients, white race (P=0.002) and diabetes (P=0.03) were associated with increased likelihood of consent. Neither multivariable regression nor propensity score analyses revealed any significant effects of recording on the variables examined. Conclusion Few clinician or patient characteristics were significantly associated with consent. Audio recording had no significant effect on any dependent variable. Practice Implications The benefits of recording clinic visits likely outweigh the risks of bias in this setting. PMID:25837372

  4. Older People's Perceptions of Pedestrian Friendliness and Traffic Safety: An Experiment Using Computer-Simulated Walking Environments.

    PubMed

    Kahlert, Daniela; Schlicht, Wolfgang

    2015-08-21

    Traffic safety and pedestrian friendliness are considered important conditions for older people's motivation to walk through their environment. This study uses an experimental design with computer-simulated living environments to investigate the effect of micro-scale environmental factors (parking spaces and green verges with trees) on older people's perceptions of both motivational antecedents (the dependent variables). Seventy-four consecutively recruited older people were randomly assigned to watch one of two scenarios (the independent variable) on a computer screen. The scenarios simulated a stroll on a sidewalk, as is 'typical' for a German city. In version 'A', the subjects take a fictive walk on a sidewalk on which a number of cars are partially parked. In version 'B', cars are in parking spaces separated from the sidewalk by grass verges and trees. Subjects assessed their impressions of both dependent variables. A multivariate analysis of covariance showed that subjects' ratings of perceived traffic safety and pedestrian friendliness were higher for version 'B' than for version 'A'. Cohen's d indicates medium (d = 0.73) and large (d = 1.23) effect sizes for traffic safety and pedestrian friendliness, respectively. The study suggests that elements of the built environment might affect motivational antecedents of older people's walking behavior.

  5. Unbiased split variable selection for random survival forests using maximally selected rank statistics.

    PubMed

    Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas

    2017-04-15

    The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives, if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.

  6. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases.

    PubMed

    Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M

    2006-04-21

    Genetic epidemiologists have taken on the challenge of identifying genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation between large numbers of genetic and environmental predictors and disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis; neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN); and several non-parametric methods, which include the set association approach, the combinatorial partitioning method (CPM), the restricted partitioning method (RPM), the multifactor dimensionality reduction (MDR) method, and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. They are therefore less useful than the non-parametric methods for association studies with large numbers of predictor variables. GPNN, on the other hand, may be a useful approach to select and model important predictors, but its ability to select the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and the random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset with an important contribution to disease. The combinatorial methods give more insight into combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses, we conclude that for genetic association studies using the case-control design, applying a combination of several methods, including the set association approach, MDR, and the random forests approach, will likely be a useful strategy for finding the important genes and interaction patterns involved in complex diseases.
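
    As a sketch of the random forests approach mentioned above for screening many SNPs (simulated genotypes and a hypothetical two-SNP interaction, using scikit-learn):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(8)
    n, p = 500, 200                          # subjects, SNPs (hypothetical scale)
    X = rng.integers(0, 3, size=(n, p))      # genotypes coded 0/1/2 minor alleles

    # Disease risk driven by an interaction of SNP 0 and SNP 1 plus noise.
    logit = 0.9 * (X[:, 0] * X[:, 1]) - 1.5
    y = rng.random(n) < 1 / (1 + np.exp(-logit))

    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
    top = np.argsort(rf.feature_importances_)[::-1][:5]
    print("top-ranked SNPs:", top)           # the causal pair should rank highly
    ```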

  7. Effectiveness of the Epley’s maneuver performed in primary care to treat posterior canal benign paroxysmal positional vertigo: study protocol for a randomized controlled trial

    PubMed Central

    2014-01-01

    Background Vertigo is a common medical condition with a broad spectrum of diagnoses which requires an integrated approach to patients through a structured clinical interview and physical examination. The main cause of vertigo in primary care is benign paroxysmal positional vertigo (BPPV), which should be confirmed by a positive D-H positional test and treated with repositioning maneuvers. The objective of this study is to evaluate the effectiveness of Epley’s maneuver performed by general practitioners (GPs) in the treatment of BPPV. Methods/Design This study is a randomized clinical trial conducted in the primary care setting. The study’s scope will include two urban primary care centers which provide care for approximately 49,400 patients. All patients attending these two primary care centers, who are newly diagnosed with benign paroxysmal positional vertigo, will be invited to participate in the study and will be randomly assigned either to the treatment group (Epley’s maneuver) or to the control group (a sham maneuver). Both groups will receive betahistine. Outcome variables will be: response to the D-H test, patients’ report on presence or absence of vertigo during the previous week (dichotomous variable: yes/no), intensity of vertigo symptoms on a Likert-type scale in the previous week, total score on the Dizziness Handicap Inventory (DHI) and quantity of betahistine taken. We will use descriptive statistics of all variables collected. Groups will be compared using the intent-to-treat approach and either parametric or nonparametric tests, depending on the nature and distribution of the variables. Chi-square test or Fisher’s exact test will be conducted to compare categorical measures and Student’s t-test or Mann–Whitney U-test will be used for intergroup comparison variables. Discussion Positive results from our study will highlight that treatment of benign paroxysmal positional vertigo can be performed by trained general practitioners (GPs) and, therefore, its widespread practice may contribute to improve the quality of life of BPPV patients. Trial registration ClinicalTrials.gov Identifier: NCT01969513. PMID:24886338

  8. Simulation of multivariate stationary stochastic processes using dimension-reduction representation methods

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo

    2018-03-01

    In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing a multivariate stationary stochastic process with a few elementary random variables, bypassing the challenge of the high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the fast Fourier transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. The numerical simulation reveals the usefulness of the dimension-reduction representation methods.
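
    For contrast with the dimension-reduction schemes, the classical scalar SRM draws one random phase per frequency; a compact sketch (the target spectrum and discretization below are hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def srm_sample(S, dw, n_t, dt=0.1):
        """Scalar spectral representation:
        x(t) = sum_k sqrt(2*S(w_k)*dw) * cos(w_k*t + phi_k), phi_k ~ U(0, 2*pi)."""
        w = (np.arange(len(S)) + 0.5) * dw
        phi = rng.uniform(0, 2 * np.pi, len(S))
        t = np.arange(n_t) * dt
        return (np.sqrt(2 * S * dw)[:, None]
                * np.cos(np.outer(w, t) + phi[:, None])).sum(axis=0)

    # Hypothetical one-sided wind-velocity spectrum (Kaimal-like shape).
    dw, n_w = 0.05, 400
    w = (np.arange(n_w) + 0.5) * dw
    S = 1.0 / (1.0 + 1.5 * w) ** (5 / 3)

    x = srm_sample(S, dw, n_t=8192)
    # The sample variance should approach the spectral integral sum(S*dw).
    print("sample variance:", x.var().round(3), "target:", (S * dw).sum().round(3))
    ```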

  9. Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros

    ERIC Educational Resources Information Center

    Bancroft, Stacie L.; Bourret, Jason C.

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time.…

  10. Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment

    NASA Astrophysics Data System (ADS)

    Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit

    2010-10-01

    The purpose of the paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness with a predefined maximum risk tolerance and minimum expected return. The security returns in the objectives and constraints are assumed to be fuzzy random variables; their vagueness is then transformed into fuzzy variables similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of the past data.

  11. Random Variables: Simulations and Surprising Connections.

    ERIC Educational Resources Information Center

    Quinn, Robert J.; Tomlinson, Stephen

    1999-01-01

    Features activities for advanced second-year algebra students in grades 11 and 12. Introduces three random variables and considers an empirical and theoretical probability for each. Uses coins, regular dice, decahedral dice, and calculators. (ASK)

  12. Binomial leap methods for simulating stochastic chemical kinetics.

    PubMed

    Tian, Tianhai; Burrage, Kevin

    2004-12-01

    This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables, whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when larger stepsizes are used. In this approach a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the number of molecules available to it. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels, so that samples of the total reaction number do not exceed the molecular number of this species. In addition, probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and significant improvement in efficiency over existing approaches. (c) 2004 American Institute of Physics.
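
    A minimal sketch of the binomial leap idea for a single channel (decay A -> 0; the per-molecule probability uses the simple c*tau form, so this is illustrative rather than Tian and Burrage's full method):

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    def binomial_tau_leap(n_a, c, tau, t_end):
        """Binomial tau-leap for A -> 0 with per-molecule rate c."""
        t, traj = 0.0, [(0.0, n_a)]
        while t < t_end and n_a > 0:
            # Per-molecule firing probability in [t, t+tau]; a finer choice
            # would be 1 - exp(-c*tau), which is exact for this channel.
            p = min(1.0, c * tau)
            k = rng.binomial(n_a, p)       # reaction count can never exceed n_a,
            n_a -= k                       # so the population cannot go negative
            t += tau
            traj.append((t, n_a))
        return np.array(traj)

    traj = binomial_tau_leap(n_a=1000, c=0.1, tau=0.5, t_end=30.0)
    n_at_10 = int(traj[np.isclose(traj[:, 0], 10.0), 1][0])
    print("N(t=10) =", n_at_10, "(CME mean: 1000*exp(-1) = 368)")
    ```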

  13. Outcome evaluation results of school-based cybersafety promotion and cyberbullying prevention intervention for middle school students.

    PubMed

    Roberto, Anthony J; Eden, Jen; Savage, Matthew W; Ramos-Salazar, Leslie; Deiss, Douglas M

    2014-01-01

    Guided largely by the Extended Parallel Process Model, the Arizona Attorney General's Social Networking Safety Promotion and Cyberbullying Prevention presentation attempts to shape, change, and reinforce middle school students' perceptions, attitudes, and intentions related to these important social issues. This study evaluated the short-term effects of this presentation in a field experiment using a posttest-only control-group design with random assignment to conditions. A total of 425 sixth, seventh, and eighth graders at a public middle school in a large Southwestern city participated in this study. Results reveal several interesting trends across grade levels regarding cyberbullying perpetration and victimization, and concerning access to various communication technologies. The intervention had the hypothesized main effect on eight of the dependent variables under investigation. Examination of condition by grade interaction effects offered further support for an additional four hypotheses (i.e., the intervention positively affected or reversed a negative trend on four dependent variables in at least one grade). Ideas and implications for future social networking safety promotion and cyberbullying prevention interventions are discussed.

  14. Correlations and path analysis among agronomic and technological traits of upland cotton.

    PubMed

    Farias, F J C; Carvalho, L P; Silva Filho, J L; Teodoro, P E

    2016-08-12

    To date, path analysis has been used with the aim of breeding different crops. However, for cotton, there have been few studies using this analysis, and all of these have used fiber productivity as the primary dependent variable. Therefore, the aim of the present study was to identify agronomic and technological traits that can be used as criteria for direct and indirect selection of cotton genotypes with better fibers. We evaluated 16 upland cotton genotypes in eight trials conducted during the 2008/2009 harvest in the State of Mato Grosso, using a randomized block design with four replicates. The evaluated traits were: plant height, average boll weight, percentage of fiber, cotton seed yield, fiber length, fiber uniformity, short fiber index, fiber strength, elongation, fiber maturity, micronaire, reflectance, and degree of yellowing. Phenotypic correlations between these traits and cotton fiber yield (the main dependent variable) were decomposed into direct and indirect effects through path analysis. Fiber strength, fiber uniformity, and reflectance were found to influence fiber length, and these traits are therefore recommended for both direct and indirect selection of cotton genotypes.

  15. Do bioclimate variables improve performance of climate envelope models?

    USGS Publications Warehouse

    Watling, James I.; Romañach, Stephanie S.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Pearlstine, Leonard G.; Mazzotti, Frank J.

    2012-01-01

    Climate envelope models are widely used to forecast potential effects of climate change on species distributions. A key issue in climate envelope modeling is the selection of predictor variables that most directly influence species. To determine whether model performance and spatial predictions were related to the selection of predictor variables, we compared models using bioclimate variables with models constructed from monthly climate data for twelve terrestrial vertebrate species in the southeastern USA using two different algorithms (random forests or generalized linear models), and two model selection techniques (using uncorrelated predictors or a subset of user-defined biologically relevant predictor variables). There were no differences in performance between models created with bioclimate or monthly variables, but one metric of model performance was significantly greater using the random forest algorithm compared with generalized linear models. Spatial predictions between maps using bioclimate and monthly variables were very consistent using the random forest algorithm with uncorrelated predictors, whereas we observed greater variability in predictions using generalized linear models.

  16. Investigating Factorial Invariance of Latent Variables Across Populations When Manifest Variables Are Missing Completely

    PubMed Central

    Widaman, Keith F.; Grimm, Kevin J.; Early, Dawnté R.; Robins, Richard W.; Conger, Rand D.

    2013-01-01

    Difficulties arise in multiple-group evaluations of factorial invariance if particular manifest variables are missing completely in certain groups. Ad hoc analytic alternatives can be used in such situations (e.g., deleting manifest variables), but some common approaches, such as multiple imputation, are not viable. At least 3 solutions to this problem are viable: analyzing differing sets of variables across groups, using pattern mixture approaches, and a new method using random number generation. The latter solution, proposed in this article, is to generate pseudo-random normal deviates for all observations for manifest variables that are missing completely in a given sample and then to specify multiple-group models in a way that respects the random nature of these values. An empirical example is presented in detail comparing the 3 approaches. The proposed solution can enable quantitative comparisons at the latent variable level between groups using programs that require the same number of manifest variables in each group. PMID:24019738

  17. Interreality for the management and training of psychological stress: study protocol for a randomized controlled trial

    PubMed Central

    2013-01-01

    Background Psychological stress occurs when an individual perceives that environmental demands tax or exceed his or her adaptive capacity. Its association with severe physical and emotional disorders points to the need for new, efficient strategies to treat it. Moreover, psychological stress is a very personal problem and requires training focused on the specific needs of individuals. To overcome these limitations, the INTERSTRESS project suggests the adoption of a new paradigm for e-health - Interreality - that integrates contextualized assessment and treatment within a hybrid environment, bridging the physical and the virtual worlds. On this premise, the aim of this study is to investigate the advantages of using advanced technologies, in combination with cognitive behavioral therapy (CBT), in a protocol for reducing psychological stress. Methods/Design The study is designed as a randomized controlled trial. It includes three groups of approximately 50 subjects each who suffer from psychological stress: (1) the experimental group, (2) the control group, and (3) the waiting-list group. Participants in the experimental group will receive a treatment based on cognitive behavioral techniques combined with virtual reality, biofeedback and a mobile phone, while the control group will receive traditional CBT-based stress management training, without the use of new technologies. The waiting-list group will be reassessed and compared with the two other groups five weeks after the initial evaluation. After the reassessment, the waiting-list patients will randomly receive one of the two other treatments. Psychometric and physiological outcomes will serve as quantitative dependent variables, while subjective reports of participants will be used as the qualitative dependent variable. Discussion What we aim to show with the present trial is that bridging virtual experiences, used to learn coping skills and emotional regulation, with real experiences using advanced technologies (virtual reality, advanced sensors and smartphones) is a feasible way to address the limitations of existing protocols for psychological stress. Trial registration http://clinicaltrials.gov/ct2/show/NCT01683617 PMID:23806013

  18. Evaluation of Visual Acuity Measurements after Autorefraction versus Manual Refraction in Eyes with and without Diabetic Macular Edema

    PubMed Central

    Sun, Jennifer K.; Qin, Haijing; Aiello, Lloyd Paul; Melia, Michele; Beck, Roy W.; Andreoli, Christopher M.; Edwards, Paul A.; Glassman, Adam R.; Pavlica, Michael R.

    2012-01-01

    Objective To compare visual acuity (VA) scores after autorefraction versus research protocol manual refraction in eyes of patients with diabetes and a wide range of VA. Methods Electronic Early Treatment Diabetic Retinopathy Study (E-ETDRS) VA Test© letter score (EVA) was measured after autorefraction (AR-EVA) and after Diabetic Retinopathy Clinical Research Network (DRCR.net) protocol manual refraction (MR-EVA). Testing order was randomized, study participants and VA examiners were masked to refraction source, and a second EVA utilizing an identical manual refraction (MR-EVAsupl) was performed to determine test-retest variability. Results In 878 eyes of 456 study participants, median MR-EVA was 74 (Snellen equivalent approximately 20/32). Spherical equivalent was often similar for manual and autorefraction (median difference: 0.00, 5th and 95th percentiles −1.75 to +1.13 Diopters). However, on average, MR-EVA results were slightly better than AR-EVA results across the entire VA range. Furthermore, variability between AR-EVA and MR-EVA was substantially greater than the test-retest variability of MR-EVA (P<0.001). Variability of differences was highly dependent on autorefractor model. Conclusions Across a wide range of VA at multiple sites using a variety of autorefractors, VA measurements tend to be worse with autorefraction than manual refraction. Differences between individual autorefractor models were identified. However, even among autorefractor models comparing most favorably to manual refraction, VA variability between autorefraction and manual refraction is higher than the test-retest variability of manual refraction. The results suggest that with current instruments, autorefraction is not an acceptable substitute for manual refraction for most clinical trials with primary outcomes dependent on best-corrected VA. PMID:22159173

  19. Nonverbal behavior correlated with the shaped verbal behavior of children

    PubMed Central

    Catania, A. Charles; Lowe, C. Fergus; Horne, Pauline

    1990-01-01

    Children under 6 years old pressed on response windows behind which stimuli appeared (star or tree). Presses occasionally lit lamps arranged in a column; a present was delivered when all lamps were lit. A random-ratio schedule in the presence of star alternated with a random-interval schedule in the presence of tree. These contingencies usually did not produce respective high and low response rates in the presence of star and tree, but the shaping of verbal behavior (e.g., “press a lot without stopping” or “press and wait”) was sometimes accompanied by corresponding changes in response rate. Verbal shaping was accomplished between schedule components during verbal interactions between the child and a hand-puppet, Garfield the Cat, and used social consequences such as enthusiastic reactions to what the child had said as well as concrete consequences such as delivery of extra presents. Variables that may constrain the shaping of verbal behavior in children seem to include the vocabulary available to the child and the functional properties of that vocabulary; the correlation between rates of pressing and what the child says about them may depend upon such variables. PMID:22477603
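
    The two schedules can be simulated to show why they reward different response patterns: a random-ratio schedule reinforces each press with a fixed probability, while a random-interval schedule reinforces the first press after an exponentially distributed interval. A toy sketch (all parameters invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_ratio(n_presses, p=0.1):
    """Each press is reinforced with probability p (an RR 1/p schedule)."""
    return int((rng.random(n_presses) < p).sum())

def random_interval(press_times, mean_interval=5.0):
    """First press after each exponentially scheduled setup is reinforced."""
    rewards = 0
    next_setup = rng.exponential(mean_interval)
    for t in press_times:
        if t >= next_setup:
            rewards += 1
            next_setup = t + rng.exponential(mean_interval)
    return rewards

# A fast responder presses 200 times, a slow one 50 times, in ~100 s each.
fast = np.cumsum(rng.exponential(0.5, 200))
slow = np.cumsum(rng.exponential(2.0, 50))
print("RR fast:", random_ratio(200), " RR slow:", random_ratio(50))
print("RI fast:", random_interval(fast), " RI slow:", random_interval(slow))
```

    Under RR, fast pressing multiplies the payoff; under RI, payoff is capped by elapsed time, which is why the contingencies favor high versus low response rates respectively.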

  20. Optimal production lot size and reorder point of a two-stage supply chain while random demand is sensitive with sales teams' initiatives

    NASA Astrophysics Data System (ADS)

    Sankar Sana, Shib

    2016-01-01

    The paper develops a production-inventory model of a two-stage supply chain, consisting of one manufacturer and one retailer, to study the production lot size/order quantity and reorder point when demand of the end customers depends simultaneously on a random variable and on the sales teams' initiatives. The manufacturer produces the retailer's order quantity in one lot, in which the procurement cost per unit quantity follows a realistic convex function of production lot size. In the chain, the cost of the sales teams' initiatives/promotion efforts and the wholesale price of the manufacturer are negotiated at points such that the optimum profits of both parties come close to their target profits. The study suggests how the management of firms can determine the optimal order quantity/production quantity, reorder point, and sales teams' initiatives/promotional effort in order to achieve maximum profits. An analytical method is applied to determine the optimal values of the decision variables. Finally, numerical examples with graphical presentation and a sensitivity analysis of the key parameters are presented to provide more insight into the model.
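
    A much-simplified, hedged sketch of the kind of trade-off the model studies: a newsvendor-style retailer whose mean demand rises concavely with promotional effort, optimized by grid search over order quantity and effort. All parameters and the demand law below are invented and are not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(7)

price, wholesale, effort_cost = 12.0, 6.0, 1.0   # hypothetical parameters

def expected_profit(Q, effort, n=20000):
    # Random demand whose mean rises concavely with sales-team effort.
    demand = rng.normal(100 + 15 * np.sqrt(effort), 20, size=n)
    sales = np.minimum(np.maximum(demand, 0), Q)
    return (price * sales - wholesale * Q - effort_cost * effort).mean()

# Grid search over order quantity and promotional effort.
grid = [(Q, e) for Q in range(80, 181, 10) for e in range(0, 26, 5)]
best = max(grid, key=lambda qe: expected_profit(*qe))
print("best (Q, effort):", best,
      " profit:", round(expected_profit(*best), 1))
```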

  1. The use of random forests in modelling short-term air pollution effects based on traffic and meteorological conditions: A case study in Wrocław.

    PubMed

    Kamińska, Joanna A

    2018-07-01

    Random forests, an advanced data mining method, are used here to model the regression relationships between concentrations of the pollutants NO2, NOx and PM2.5 and nine variables describing meteorological conditions, temporal conditions and traffic flow. The study was based on hourly values of wind speed, wind direction, temperature, air pressure and relative humidity, temporal variables, and traffic flow, for the two years 2015 and 2016. An air quality measurement station was selected on a main road, located a short distance (40 m) from a large intersection equipped with a traffic flow measurement system. Nine different time subsets were defined, based among other things on the climatic conditions in Wrocław. The fit of the models created for those subsets and the importance of the predictors were analyzed. Both the fit and the importance of particular predictors were found to depend on season. The best fit was obtained for models created for the six-month warm season (April-September) and for the summer season (June-August). The most important explanatory variable in the models of concentrations of nitrogen oxides was traffic flow, while for PM2.5 the most important were meteorological conditions, in particular temperature, wind speed and wind direction. Temporal variables (except for month in the case of PM2.5) were found to have no significant effect on the concentrations of the studied pollutants.
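
    A sketch of this modeling setup on synthetic data: a random forest regression of a pollutant concentration on several predictors, with permutation importance standing in for the predictor-importance analysis (variable names and effect sizes are invented):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
names = ["traffic", "temperature", "wind_speed", "humidity", "hour"]
# Synthetic stand-ins for hourly predictors and an NO2-like concentration.
X = rng.normal(size=(5000, len(names)))
y = 3 * X[:, 0] + 1.5 * X[:, 1] - 2 * X[:, 2] + rng.normal(size=5000)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=5, random_state=0)
for name, score in sorted(zip(names, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:12s} {score:.3f}")
```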

  2. A unified approach for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties

    NASA Astrophysics Data System (ADS)

    Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie

    2017-09-01

    Automotive brake systems are subject to various types of uncertainties, and two types of random-fuzzy uncertainties may exist in the brakes. In this paper, a unified approach is proposed for squeal instability analysis of disc brakes with these two types of random-fuzzy uncertainties. In the proposed approach, two uncertainty analysis models with mixed variables are introduced to model the random-fuzzy uncertainties. The first is the random and fuzzy model, in which random variables and fuzzy variables exist simultaneously and independently. The second is the fuzzy random model, in which all uncertain parameters are treated as random variables while their distribution parameters are expressed as fuzzy numbers. First, the fuzziness is discretized by using the α-cut technique, and the two uncertainty analysis models are simplified into random-interval models. Next, by temporarily neglecting interval uncertainties, the random-interval models are degraded into random models, in which the expectations, variances, reliability indexes and reliability probabilities of the system stability functions are calculated. Then, by reconsidering the interval uncertainties, the bounds of the expectations, variances, reliability indexes and reliability probabilities are computed based on Taylor series expansion. Finally, by recomposing the analysis results at each α-cut level, the fuzzy reliability indexes and probabilities can be obtained, by which brake squeal instability can be evaluated. The proposed approach gives a general framework for dealing with both types of random-fuzzy uncertainties that may exist in brakes, and its effectiveness is demonstrated by numerical examples. It is a valuable supplement to the systematic study of brake squeal under uncertainty.
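
    The α-cut step can be illustrated on a toy stability function with a triangular fuzzy distribution parameter: at each α level the fuzzy mean becomes an interval, and because the failure probability here is monotone in the mean, its bounds sit at the interval endpoints. A hedged sketch, not the paper's brake model:

```python
import numpy as np
from scipy.stats import norm

# Triangular fuzzy number for the distribution mean: (low, mode, high).
low, mode, high = 4.0, 5.0, 6.0
sigma, threshold = 1.0, 3.0   # hypothetical values

for alpha in (0.0, 0.5, 1.0):
    # Alpha-cut of the triangular fuzzy mean -> an interval.
    mu_lo = low + alpha * (mode - low)
    mu_hi = high - alpha * (high - mode)
    # P(X < threshold) is monotone in mu, so bounds sit at the endpoints.
    p_hi = norm.cdf(threshold, loc=mu_lo, scale=sigma)
    p_lo = norm.cdf(threshold, loc=mu_hi, scale=sigma)
    print(f"alpha={alpha:.1f}: P(fail) in [{p_lo:.4f}, {p_hi:.4f}]")
```

    Recomposing these interval bounds across α levels yields fuzzy reliability measures, which is the general structure the paper's framework follows.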

  3. Quantification of variability and uncertainty for air toxic emission inventories with censored emission factor data.

    PubMed

    Frey, H Christopher; Zhao, Yuchao

    2004-11-15

    Probabilistic emission inventories were developed for urban air toxic emissions of benzene, formaldehyde, chromium, and arsenic for the example of Houston. Variability and uncertainty in emission factors were quantified for 71-97% of total emissions, depending upon the pollutant and data availability. Parametric distributions for interunit variability were fit using maximum likelihood estimation (MLE), and uncertainty in mean emission factors was estimated using parametric bootstrap simulation. For data sets containing one or more nondetected values, empirical bootstrap simulation was used to randomly sample detection limits for nondetected values and observations for sample values, and parametric distributions for variability were fit using MLE estimators for censored data. The goodness-of-fit for censored data was evaluated by comparing cumulative distributions of bootstrap confidence intervals with the empirical data. The emission inventory 95% uncertainty ranges run from as small as −25% to +42% for chromium to as large as −75% to +224% for arsenic with correlated surrogates. Uncertainty was dominated by only a few source categories. Recommendations are made for future improvements to the analysis.
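
    The censored-data MLE works by letting detected values contribute the log-density and nondetects contribute the log-CDF at the detection limit. A hedged sketch with a lognormal variability distribution and synthetic data (a single detection limit is assumed for simplicity):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

rng = np.random.default_rng(3)
true = lognorm(s=0.8, scale=np.exp(1.0))   # "true" emission factor variability
data = true.rvs(60)
dl = 1.5                                   # single detection limit (toy)
detected = data[data >= dl]
n_censored = int((data < dl).sum())

def neg_loglik(theta):
    s, mu = theta
    if s <= 0:
        return np.inf
    dist = lognorm(s=s, scale=np.exp(mu))
    # Detects contribute log-density; nondetects contribute log CDF(DL).
    return -(np.sum(dist.logpdf(detected)) + n_censored * dist.logcdf(dl))

fit = minimize(neg_loglik, x0=[1.0, 0.0], method="Nelder-Mead")
print("MLE (sigma, mu):", np.round(fit.x, 3), " censored n =", n_censored)
```

    Bootstrapping this fit (resampling data, refitting, collecting the mean) is then a natural way to get the uncertainty ranges the abstract reports.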

  4. Turbulent Convection and Pulsation Stability of Stars

    NASA Astrophysics Data System (ADS)

    Xiong, Da-run

    2017-10-01

    The controversies about the excitation mechanism of low-temperature variables are reviewed: (1) Most people believe that γ Doradus variables are excited by the so-called convective blocking mechanism. Our research shows that the excitation of γ Doradus stars has no substantial difference from that of δ Scuti stars. They are two subgroups of a broader type of δ Scuti-γ Doradus stars: δ Scuti is the p-mode subgroup, while γ Doradus is the g-mode subgroup. (2) Most people believe that solar and stellar solar-like oscillations are damped by convection and driven by the so-called turbulent random excitation mechanism. Our research shows that convection is not solely a damping mechanism for stellar oscillations; otherwise it would be unable to explain the Mira and Mira-like variables. By using our non-local and time-dependent theory of convection, we can reproduce not only the pulsationally unstable strip of δ Scuti and γ Doradus variables, but also the solar-like oscillation features of low-luminosity red giants and the Mira-like oscillation features of high-luminosity red giants.

  5. Isolation of a pH-Sensitive IgNAR Variable Domain from a Yeast-Displayed, Histidine-Doped Master Library.

    PubMed

    Könning, Doreen; Zielonka, Stefan; Sellmann, Carolin; Schröter, Christian; Grzeschik, Julius; Becker, Stefan; Kolmar, Harald

    2016-04-01

    In recent years, engineering pH-sensitivity into antibodies as well as antibody-derived fragments has become more and more attractive for biomedical and biotechnological applications. Herein, we report the isolation of the first pH-sensitive IgNAR variable domain (vNAR), which was isolated from a yeast-displayed, semi-synthetic master library. This strategy enables the direct identification of pH-dependent binders from a histidine-enriched CDR3 library. Displayed vNAR variants contained two histidine substitutions on average, at random positions in their 12-residue CDR3 loop. After seven rounds of screening against the proof-of-concept target EpCAM (selecting for binding at pH 7.4 and decreased binding at pH 6.0), a single clone was obtained that showed specific and pH-dependent binding, as characterized by yeast surface display and biolayer interferometry. Potential applications for such pH-dependent vNAR domains include their use in tailored affinity chromatography, enabling mild elution protocols. Moreover, using a master library for the isolation of pH-sensitive vNAR variants may be a generic strategy to obtain binding entities with prescribed characteristics for applications in biotechnology, diagnostics, and therapy.

  6. Different methods to analyze stepped wedge trial designs revealed different aspects of intervention effects.

    PubMed

    Twisk, J W R; Hoogendijk, E O; Zwijsen, S A; de Boer, M R

    2016-04-01

    Within epidemiology, a stepped wedge trial design (i.e., a one-way crossover trial in which several arms start the intervention at different time points) is increasingly popular as an alternative to a classical cluster randomized controlled trial. Despite this increasing popularity, there is huge variation in the methods used to analyze data from a stepped wedge trial design. Four linear mixed models were used to analyze data from a stepped wedge trial design on two example data sets. The four methods were chosen because they have been (frequently) used in practice. Method 1 compares all the intervention measurements with the control measurements. Method 2 treats the intervention variable as a time-independent categorical variable, comparing the different arms with each other. In method 3, the intervention variable is a time-dependent categorical variable comparing groups with different numbers of intervention measurements, whereas in method 4, the changes in the outcome variable between subsequent measurements are analyzed. In the first example data set, methods 1 and 3 showed a strong positive intervention effect, which disappeared after adjusting for time. Method 2 showed an inverse intervention effect, whereas method 4 did not show a significant effect at all. In the second example data set, the results were the opposite: both methods 2 and 4 showed significant intervention effects, whereas the other two methods did not. For method 4, the intervention effect attenuated after adjustment for time. Different methods to analyze data from a stepped wedge trial design reveal different aspects of a possible intervention effect. The choice of a method partly depends on the type of intervention and its possible time-dependent effect. Furthermore, it is advisable to combine the results of the different methods to obtain an interpretable overall result.
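
    Method 1 can be sketched as a linear mixed model with a random cluster intercept, an intervention indicator, and a time adjustment. The sketch below simulates a small stepped wedge design and fits it with statsmodels; all names and effect sizes are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
clusters, periods = 6, 7
rows = []
for c in range(clusters):
    step = c + 1                      # cluster c crosses over at period c+1
    u = rng.normal(0, 0.5)            # random cluster intercept
    for t in range(periods):
        treated = int(t >= step)
        y = 1.0 + 0.2 * t + 0.8 * treated + u + rng.normal(0, 1)
        rows.append(dict(cluster=c, time=t, treated=treated, y=y))
df = pd.DataFrame(rows)

# Method 1: all intervention vs all control measurements, adjusted for
# time, with a random intercept per cluster.
model = smf.mixedlm("y ~ treated + time", df, groups=df["cluster"]).fit()
print(model.summary())
```

    Dropping the `time` term reproduces the unadjusted variant whose effect estimate, as the abstract notes, can change dramatically.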

  7. Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…

  8. The competing risks Cox model with auxiliary case covariates under weaker missing-at-random cause of failure.

    PubMed

    Nevo, Daniel; Nishihara, Reiko; Ogino, Shuji; Wang, Molin

    2017-08-04

    In the analysis of time-to-event data with multiple causes using a competing risks Cox model, often the cause of failure is unknown for some of the cases. The probability of a missing cause is typically assumed to be independent of the cause given the time of the event and covariates measured before the event occurred. In practice, however, the underlying missing-at-random assumption does not necessarily hold. Motivated by colorectal cancer molecular pathological epidemiology analysis, we develop a method to conduct valid analysis when additional auxiliary variables are available for cases only. We consider a weaker missing-at-random assumption, with missing pattern depending on the observed quantities, which include the auxiliary covariates. We use an informative likelihood approach that will yield consistent estimates even when the underlying model for missing cause of failure is misspecified. The superiority of our method over naive methods in finite samples is demonstrated by simulation study results. We illustrate the use of our method in an analysis of colorectal cancer data from the Nurses' Health Study cohort, where, apparently, the traditional missing-at-random assumption fails to hold.

  9. Impact of a board-game approach on current smokers: a randomized controlled trial

    PubMed Central

    2013-01-01

    Background The main objective of our study was to assess the impact of a board game on smoking status and smoking-related variables in current smokers. To accomplish this objective, we conducted a randomized controlled trial comparing the game group with a psychoeducation group and a waiting-list control group. Methods The following measures were taken at inclusion, as well as after a 2-week and a 3-month follow-up period: “Attitudes Towards Smoking Scale” (ATS-18), “Smoking Self-Efficacy Questionnaire” (SEQ-12), “Attitudes Towards Nicotine Replacement Therapy” scale (ANRT-12), number of cigarettes smoked per day, stages of change, quit attempts, and smoking status. Furthermore, participants were assessed for concurrent psychiatric disorders and for the severity of nicotine dependence with the Fagerström Test for Nicotine Dependence (FTND). Results A time × group effect was observed for subscales of the ANRT-12, ATS-18 and SEQ-12, as well as for the number of cigarettes smoked per day. At the 3-month follow-up, compared with participants allocated to the waiting-list group, those in the Pick-Klop game group were less likely to remain smokers. Outcomes at 3 months were not predicted by gender, age, FTND, stage of change, or psychiatric disorders at inclusion. Conclusions The board game seems to be a good option for smokers. The game led to improvements in variables known to predict quitting in smokers. Furthermore, it increased smoking-cessation rates at the 3-month follow-up. The game is also an interesting alternative for smokers in the precontemplation stage. PMID:23327643

  10. Sampling-Based Stochastic Sensitivity Analysis Using Score Functions for RBDO Problems with Correlated Random Variables

    DTIC Science & Technology

    2010-08-01

    This study presents a methodology for computing stochastic sensitivities with respect to the design variables for reliability-based design optimization (RBDO) problems with correlated random variables.
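
    The score-function approach named in the title estimates sensitivities of an expectation with respect to distribution parameters from the same Monte Carlo samples used for the expectation itself, since for a normal input d/dμ E[g(X)] = E[g(X)·(X − μ)/σ²]. A hedged sketch with a toy performance function:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 2.0, 0.5, 200000

x = rng.normal(mu, sigma, n)
g = x ** 2 + 1.0                       # toy performance function

# Score of the normal density w.r.t. its mean: d/dmu log f = (x - mu)/sigma^2
score = (x - mu) / sigma ** 2
est = np.mean(g * score)               # sampling-based sensitivity dE[g]/dmu

# Analytic check: E[g] = mu^2 + sigma^2 + 1, so dE[g]/dmu = 2*mu.
print("score-function estimate:", round(est, 3), " analytic:", 2 * mu)
```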

  11. Evaluating Nicotine Craving, Withdrawal, and Substance Use as Mediators of Smoking Cessation in Cocaine- and Methamphetamine-Dependent Patients

    PubMed Central

    Lewis, Daniel F.; Winhusen, Theresa

    2016-01-01

    Introduction: Smoking is highly prevalent in substance dependence, but smoking-cessation treatment (SCT) is more challenging in this population. To increase the success of smoking-cessation services, it is important to understand potential therapeutic targets like nicotine craving that have meaningful but highly variable relationships with smoking outcomes. This study characterized the presence, magnitude, and specificity of nicotine craving as a mediator of the relationship between SCT and smoking abstinence in the context of stimulant-dependence treatment. Methods: This study was a secondary analysis of a randomized, 10-week trial conducted at 12 outpatient SUD treatment programs. Adults with cocaine and/or methamphetamine dependence (N = 538) were randomized to SUD treatment as usual (TAU) or TAU+SCT. Participants reported nicotine craving, nicotine withdrawal symptoms, and substance use in the week following a uniform quit attempt of the TAU+SCT group, and self-reported smoking 7-day point prevalence abstinence (verified by carbon monoxide) at end-of-treatment. Results: Bootstrapped regression models indicated that, as expected, nicotine craving following a quit attempt mediated the relationship between SCT and end-of-treatment smoking point prevalence abstinence (mediation effect = 0.09, 95% CI = 0.04 to 0.14, P < .05, 14% of total effect). Nicotine withdrawal symptoms and substance use were not significant mediators (Ps > .05, <1% of total effect). This pattern held for separate examinations of cocaine and methamphetamine dependence. Conclusions: Nicotine craving accounts for a small but meaningful portion of the relationship between smoking-cessation treatment and smoking abstinence during SUD treatment. Nicotine craving following a quit attempt may be a useful therapeutic target for increasing the effectiveness of smoking-cessation treatment in substance dependence. PMID:26048168
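
    The bootstrapped mediation computation has the familiar indirect-effect (a × b) form: regress the mediator on treatment (a), the outcome on treatment and mediator (b), and bootstrap the product. A hedged sketch on synthetic data (a continuous outcome proxy is used for simplicity; the trial's abstinence outcome is binary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
treat = rng.integers(0, 2, n)                        # SCT vs TAU (0/1)
craving = -0.5 * treat + rng.normal(size=n)          # mediator
abstain = 0.3 * treat - 0.4 * craving + rng.normal(size=n)  # outcome proxy

def indirect(idx):
    t, m, y = treat[idx], craving[idx], abstain[idx]
    a = np.polyfit(t, m, 1)[0]                       # treat -> mediator
    X = np.column_stack([np.ones(len(idx)), t, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]      # mediator -> outcome | treat
    return a * b

boot = [indirect(rng.integers(0, n, n)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(np.arange(n)):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```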

  12. Reward and uncertainty in exploration programs

    NASA Technical Reports Server (NTRS)

    Kaufman, G. M.; Bradley, P. G.

    1971-01-01

    A set of variables which are crucial to the economic outcome of petroleum exploration are discussed. These are treated as random variables; the values they assume indicate the number of successes that occur in a drilling program and determine, for a particular discovery, the unit production cost and net economic return if that reservoir is developed. In specifying the joint probability law for those variables, extreme and probably unrealistic assumptions are made. In particular, the different random variables are assumed to be independently distributed. Using postulated probability functions and specified parameters, values are generated for selected random variables, such as reservoir size. From this set of values the economic magnitudes of interest, net return and unit production cost are computed. This constitutes a single trial, and the procedure is repeated many times. The resulting histograms approximate the probability density functions of the variables which describe the economic outcomes of an exploratory drilling program.
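
    The procedure described is a plain Monte Carlo simulation: draw the random inputs, compute the economic outcome, repeat many times, and read the distribution off the resulting histogram. A hedged sketch with entirely invented parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, wells, p_success = 10000, 10, 0.15    # hypothetical program
price, dev_cost, drill_cost = 20.0, 5.0, 1.0    # invented economics

returns = np.empty(n_trials)
for i in range(n_trials):
    hits = rng.binomial(wells, p_success)       # successes in the program
    # Reservoir sizes drawn independently from a lognormal field-size law.
    sizes = rng.lognormal(mean=1.0, sigma=1.2, size=hits)
    returns[i] = price * sizes.sum() - dev_cost * hits - drill_cost * wells

print("mean net return:", round(float(returns.mean()), 1))
print("P(loss):", round(float((returns < 0).mean()), 3))
```

    A histogram of `returns` approximates the probability density of net economic return, exactly in the spirit of the repeated-trial procedure the abstract outlines.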

  13. A single-loop optimization method for reliability analysis with second order uncertainty

    NASA Astrophysics Data System (ADS)

    Xie, Shaojun; Pan, Baisong; Du, Xiaoping

    2015-08-01

    Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.
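
    The two-loop structure can be seen on a toy linear limit state g = R − S with normal R and S, where the mean of R is known only to lie in an interval: β is monotone in that mean, so the extreme reliabilities occur at the interval endpoints. The article's contribution is to fold this interval search into the reliability loop via the KKT conditions, which this hedged sketch does not do:

```python
import numpy as np
from scipy.stats import norm

sigma_R, sigma_S, mu_S = 0.5, 0.4, 3.0   # hypothetical values
mu_R_interval = (4.0, 4.6)               # interval distribution parameter

betas = []
for mu_R in mu_R_interval:               # interval "loop": endpoints suffice
    # FORM is exact for the linear limit state g = R - S.
    beta = (mu_R - mu_S) / np.hypot(sigma_R, sigma_S)
    betas.append(beta)

print("beta range:", np.round(betas, 3))
print("reliability range:", np.round(norm.cdf(betas), 5))
```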

  14. Computation of restoration of ligand response in the random kinetics of a prostate cancer cell signaling pathway.

    PubMed

    Dana, Saswati; Nakakuki, Takashi; Hatakeyama, Mariko; Kimura, Shuhei; Raha, Soumyendu

    2011-01-01

    Mutation and/or dysfunction of signaling proteins in the mitogen-activated protein kinase (MAPK) signal transduction pathway are frequently observed in various kinds of human cancer. Consistent with this fact, in the present study we experimentally observe that the epidermal growth factor (EGF) induced activation profile of MAP kinase signaling is not straightforwardly dose-dependent in PC3 prostate cancer cells. To find out which parameters and reactions in the pathway are involved in this departure from normal dose-dependency, a model-based pathway analysis is performed. The pathway is mathematically modeled with 28 rate equations, yielding as many ordinary differential equations (ODEs) with kinetic rate constants that have been reported to take random values in the existing literature. This has led us to treat the ODE model of the pathway's kinetics as a random differential equation (RDE) system in which the parameters are random variables. We show that our RDE model captures the uncertainty in the kinetic rate constants as seen in the behavior of the experimental data and, more importantly, upon simulation exhibits the abnormal EGF dose-dependency of the activation profile of MAP kinase signaling in PC3 prostate cancer cells. The most likely set of values of the kinetic rate constants, obtained from fitting the RDE model to the experimental data, is then used in a direct-transcription-based dynamic optimization method for computing the changes needed in these kinetic rate constant values for the restoration of the normal EGF dose response. This last computation identifies the parameters, i.e., the kinetic rate constants in the RDE model, that are most sensitive to the change in the EGF dose response behavior of PC3 prostate cancer cells. The reactions in which these most sensitive parameters participate emerge as candidate drug targets on the signaling pathway.
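
    An RDE system of this kind is commonly simulated by sampling the random rate constants and solving the resulting deterministic ODEs; a hedged sketch with a toy two-state cascade standing in for the 28-equation MAPK model:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

def kinetics(t, y, k1, k2):
    # Toy two-step activation cascade (not the paper's MAPK model).
    a, b = y
    return [k1 * (1 - a) - 0.1 * a,
            k2 * a * (1 - b) - 0.1 * b]

# Treat the rate constants as random variables (lognormal spread).
finals = []
for _ in range(200):
    k1, k2 = rng.lognormal([-1.0, -0.5], 0.4)
    sol = solve_ivp(kinetics, (0, 50), [0, 0], args=(k1, k2))
    finals.append(sol.y[1, -1])

print("activated fraction: mean %.3f, sd %.3f"
      % (np.mean(finals), np.std(finals)))
```

    The spread of `finals` is the model's expression of parameter uncertainty; fitting then amounts to finding the parameter distribution whose simulated spread matches the experimental variability.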

  15. Forms of work organization and associations with shoulder disorders: Results from a French working population.

    PubMed

    Bodin, Julie; Garlantézec, Ronan; Costet, Nathalie; Descatha, Alexis; Fouquet, Natacha; Caroly, Sandrine; Roquelaure, Yves

    2017-03-01

    The aim of this study was to identify forms of work organization in a French region and to study their associations with the occurrence of symptomatic and clinically diagnosed shoulder disorders in workers. Workers were randomly included in this cross-sectional study from 2002 to 2005. Sixteen organizational variables were assessed by a self-administered questionnaire: shift work, job rotation, repetitiveness of tasks, paced work/automatic rate, work pace dependent on quantified targets, permanent controls or surveillance, colleagues' work and customer demand, and eight variables measuring decision latitude. Five forms of work organization were identified using hierarchical cluster analysis (HCA) of variables and HCA of workers: low decision latitude with pace constraints, medium decision latitude with pace constraints, low decision latitude with low pace constraints, high decision latitude with pace constraints, and high decision latitude with low pace constraints. There were significant associations between forms of work organization and symptomatic and clinically diagnosed shoulder disorders.

  16. Connecting the dots between math and reality: A study of critical thinking in high school physics

    NASA Astrophysics Data System (ADS)

    Loper, Timothy K.

    The purpose of this mixed-method study was to discover whether training in understanding relationships between variables would help students read and interpret equations for the purposes of problem solving in physics. Twenty students from two physics classes at a private Catholic high school participated in a one-group pretest-posttest design, with the conceptually based mathematical intervention being the independent variable and the test results being the dependent variable for the quantitative portion of the study. A random sample of students was interviewed pre- and post-intervention for the qualitative portion of the study, to determine both how their understanding of equations changed and how their approach to the problems changed. The paired-sample t test showed a significant improvement on the Physics Critical Thinking test at the p < .01 alpha level; furthermore, the interview data indicated that the students displayed a deeper understanding of equations and their purpose, as opposed to the superficial understanding they had before the intervention.

  17. Degradation modeling of mid-power white-light LEDs by using Wiener process.

    PubMed

    Huang, Jianlin; Golubović, Dušan S; Koh, Sau; Yang, Daoguo; Li, Xiupeng; Fan, Xuejun; Zhang, G Q

    2015-07-27

    The IES standard TM-21-11 provides a guideline for lifetime prediction of LED devices. As it uses average normalized lumen maintenance data and performs non-linear regression for lifetime modeling, it cannot capture the dynamic and random variation of the degradation process of LED devices. In addition, this method cannot capture the failure distribution, even though that is much more relevant in reliability analysis. Furthermore, TM-21-11 considers only lumen maintenance for lifetime prediction. Color shift, another important performance characteristic of LED devices, may also degrade significantly during service life, even when lumen maintenance has not reached its critical threshold. In this study, a modified Wiener process is employed to model the degradation of LED devices. With this method, dynamic and random variations, as well as the non-linear degradation behavior of LED devices, can easily be accounted for. Under a mild assumption, the parameter estimation accuracy is improved by including more information in the likelihood function while neglecting the dependency between the random variables. The resulting mean time to failure (MTTF) is comparable with IES TM-21-11 predictions, indicating the feasibility of the proposed method. Finally, the cumulative failure distribution is presented for different combinations of lumen maintenance and color shift. The results demonstrate that a joint failure distribution of LED devices can be modeled by simply considering lumen maintenance and color shift as two independent variables.
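
    A hedged sketch of the Wiener-process degradation idea: lumen loss and color shift evolve as drifted Wiener processes, and the device fails when either first crosses its threshold. All drifts, diffusions, and thresholds below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, steps, n = 10.0, 2000, 500            # 10 h steps, 20,000 h horizon

def first_passage(mu, sigma, threshold):
    """Vectorized first-passage times of a Wiener process mu*t + sigma*B(t)."""
    inc = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n, steps))
    path = inc.cumsum(axis=1)
    hit = path >= threshold
    idx = np.where(hit.any(axis=1), hit.argmax(axis=1) + 1, np.nan)
    return idx * dt

t_lumen = first_passage(2.5e-3, 0.02, 30.0)   # 30% lumen-loss criterion
t_color = first_passage(2.0e-4, 0.01, 7.0)    # color-shift criterion
life = np.fmin(t_lumen, t_color)              # fail at whichever comes first

print("failures within horizon:", int(np.isfinite(life).sum()), "/", n)
print("median life (h):", np.nanmedian(life))
```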

  18. Context-invariant quasi hidden variable (qHV) modelling of all joint von Neumann measurements for an arbitrary Hilbert space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loubenets, Elena R.

    We prove the existence, for each Hilbert space, of two new quasi hidden variable (qHV) models, statistically noncontextual and context-invariant, reproducing all the von Neumann joint probabilities via non-negative values of real-valued measures, and all the quantum product expectations via the qHV (classical-like) average of the product of the corresponding random variables. In a context-invariant model, a quantum observable X can be represented by a variety of random variables satisfying the functional condition required in quantum foundations, but each of these random variables equivalently models X under all joint von Neumann measurements, regardless of their contexts. The proved existence of this model negates the general opinion that, in terms of random variables, the Hilbert space description of all the joint von Neumann measurements for dim H ≥ 3 can be reproduced only contextually. The existence of a statistically noncontextual qHV model, in particular, implies that every N-partite quantum state admits a local quasi hidden variable model introduced in Loubenets [J. Math. Phys. 53, 022201 (2012)]. The new results of the present paper also point to the generality of the quasi-classical probability model proposed in Loubenets [J. Phys. A: Math. Theor. 45, 185306 (2012)].

  19. Normal aging reduces motor synergies in manual pointing.

    PubMed

    Verrel, Julius; Lövdén, Martin; Lindenberger, Ulman

    2012-01-01

    Depending upon its organization, movement variability may reflect poor or flexible control of a motor task. We studied adult age-related differences in the structure of postural variability in manual pointing using the uncontrolled manifold (UCM) method. Participants from 2 age groups (younger: 20-30 years; older: 70-80 years; 12 subjects per group) completed a total of 120 pointing trials to 2 different targets presented according to 3 schedules: blocked, alternating, and random. The age groups were similar with respect to basic kinematic variables, end point precision, as well as the accuracy of the biomechanical forward model of the arm. Following the uncontrolled manifold approach, goal-equivalent and nongoal-equivalent components of postural variability (goal-equivalent variability [GEV] and nongoal-equivalent variability [NGEV]) were determined for 5 time points of the movements (start, 10%, 50%, 90%, and end) and used to define a synergy index reflecting the flexibility/stability aspect of motor synergies. Toward the end of the movement, younger adults showed higher synergy indexes than older adults. Effects of target schedule were not reliable. We conclude that normal aging alters the organization of common multidegree-of-freedom movements, with older adults making less flexible use of motor abundance than younger adults.
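
    The UCM decomposition can be sketched with linear algebra: linearize the joint-to-endpoint map with a Jacobian J, split joint-configuration deviations into the nullspace of J (GEV, endpoint unchanged) and its orthogonal complement (NGEV), and compare per-dimension variances. A hedged sketch with an invented Jacobian; conventions for the synergy index vary across studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Jacobian of a 2-D fingertip position w.r.t. 4 joint angles.
J = rng.normal(size=(2, 4))

# Orthonormal bases for the task space and the nullspace (UCM) via SVD.
_, _, Vt = np.linalg.svd(J)
task_basis, ucm_basis = Vt[:2].T, Vt[2:].T

# Simulated trial-to-trial joint deviations with extra variance placed
# along the uncontrolled manifold (a "synergy").
dq = (rng.normal(0, 0.05, size=(120, 4))
      + rng.normal(0, 0.15, size=(120, 2)) @ ucm_basis.T)

gev = (dq @ ucm_basis).var(axis=0, ddof=1).sum() / ucm_basis.shape[1]
ngev = (dq @ task_basis).var(axis=0, ddof=1).sum() / task_basis.shape[1]

print(f"GEV = {gev:.4f}, NGEV = {ngev:.4f}")
print(f"synergy index = {(gev - ngev) / (gev + ngev):.3f}")
```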

  20. Variability and predictors of negative mood intensity in patients with borderline personality disorder and recurrent suicidal behavior: multilevel analyses applied to experience sampling methodology.

    PubMed

    Nisenbaum, Rosane; Links, Paul S; Eynan, Rahel; Heisel, Marnin J

    2010-05-01

    Variability in mood swings is a characteristic of borderline personality disorder (BPD) and is associated with suicidal behavior. This study investigated patterns of mood variability and whether such patterns could be predicted from demographic and suicide-related psychological risk factors. Eighty-two adults with BPD and histories of recurrent suicidal behavior were recruited from 3 outpatient psychiatric programs in Canada. Experience sampling methodology (ESM) was used to assess negative mood intensity ratings on a visual analogue scale, 6 random times daily, for 21 days. Three-level models estimated variability between times (52.8%), days (22.2%), and patients (25.1%) and supported a quadratic pattern of daily mood variability. Depression scores predicted variability between patients' initial rating of the day. Average daily mood patterns depended on levels of hopelessness, suicide ideation, and sexual abuse history. Patients reporting moderate to severe sexual abuse and elevated suicide ideation were characterized by worsening moods from early morning up through evening, with little or no relief; patients reporting mild sexual abuse and low suicide ideation reported improved mood throughout the day. These patterns, if replicated in larger ESM studies, may potentially assist the clinician in determining which patients require close monitoring.

  1. Hierarchical Bayesian Modeling of Fluid-Induced Seismicity

    NASA Astrophysics Data System (ADS)

    Broccardo, M.; Mignan, A.; Wiemer, S.; Stojadinovic, B.; Giardini, D.

    2017-11-01

    In this study, we present a Bayesian hierarchical framework to model fluid-induced seismicity. The framework is based on a nonhomogeneous Poisson process with a fluid-induced seismicity rate proportional to the rate of injected fluid. The fluid-induced seismicity rate model depends upon a set of physically meaningful parameters and has been validated for six fluid-induced case studies. In line with the vision of hierarchical Bayesian modeling, the rate parameters are considered as random variables. We develop both the Bayesian inference and updating rules, which are used to build a probabilistic forecasting model. We applied the framework to the Basel 2006 fluid-induced seismicity case study to show that the hierarchical Bayesian model offers a suitable framework to coherently encode both epistemic uncertainty and aleatory variability. Moreover, it provides a robust and consistent short-term seismic forecasting model suitable for online risk quantification and mitigation.
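
    Under the simplifying assumptions of a known injection history and a seismicity rate b·V(t), the Poisson likelihood with a gamma prior on b gives conjugate Bayesian updating, which is the flavor of inference such a framework formalizes. A hedged toy sketch, not the paper's full hierarchical model:

```python
import numpy as np

rng = np.random.default_rng(0)

volumes = np.array([50., 80., 120., 150., 90.])   # injected volume per day (toy)
true_b = 0.1                                      # events per unit volume
counts = rng.poisson(true_b * volumes)

# Gamma(alpha, rate beta) prior on b; the Poisson likelihood is conjugate.
alpha, beta = 1.0, 10.0
for v, n in zip(volumes, counts):
    alpha += n          # add observed events
    beta += v           # add exposure (injected volume)
    print(f"after V={v:5.0f}, n={n:2d}: posterior mean b = {alpha / beta:.4f}")

# One-day-ahead forecast for a planned injection of 100 units:
print("expected events tomorrow:", round(100 * alpha / beta, 2))
```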

  2. Effects of induced social roles on the High School Personality Questionnaire.

    PubMed

    Merydith, S P; Wallbrown, F H

    1995-08-01

    A one-way multivariate analysis of variance design was used, with a control group (standard directions) and three treatment groups using induced social roles (Faking Good, Teacher, and Ideal Teacher) as independent variables and the High School Personality Questionnaire primary scores as dependent variables. Subjects were 384 male high school students from Grades 9 through 12. Within each classroom, students were randomly assigned to the four groups noted above. A broad pattern of differences in scores on primary and secondary personality dimensions was obtained. Significant differences between the control group (standard directions) and the Faking Good, Teacher, and Ideal Teacher roles were obtained on three secondary and most of the primary personality dimensions. In several cases the ideal social role and the neutral social role showed distinct differences from the more pervasive favorable-impression role.

  3. The Expected Sample Variance of Uncorrelated Random Variables with a Common Mean and Some Applications in Unbalanced Random Effects Models

    ERIC Educational Resources Information Center

    Vardeman, Stephen B.; Wendelberger, Joanne R.

    2005-01-01

    There is a little-known but very simple generalization of the standard result that for uncorrelated random variables with common mean μ and variance σ², the expected value of the sample variance is σ². The generalization justifies the use of the usual standard error of the sample mean in possibly…
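
    One version of that generalization: for uncorrelated variables with a common mean but possibly unequal variances σᵢ², the expected sample variance equals the average of the variances, (1/n)Σσᵢ². A quick Monte Carlo check of that identity (a standard derivation, not quoted from the report):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmas = np.array([0.5, 1.0, 2.0, 3.0])   # unequal SDs, common mean 0

# Sample variance of one draw per variable, repeated many times.
s2 = np.array([rng.normal(0.0, sigmas).var(ddof=1) for _ in range(200000)])
print("mean sample variance:", round(float(s2.mean()), 4))
print("average of variances:", round(float(np.mean(sigmas ** 2)), 4))
```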

  4. Uncertain dynamic analysis for rigid-flexible mechanisms with random geometry and material properties

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing; Walker, Paul D.

    2017-02-01

    This paper proposes an uncertain modelling and computational method to analyze the dynamic responses of rigid-flexible multibody systems (or mechanisms) with random geometry and material properties. First, the deterministic model of the rigid-flexible multibody system is built with the absolute nodal coordinate formulation (ANCF), in which the flexible parts are modeled using ANCF elements, while the rigid parts are described by ANCF reference nodes (ANCF-RNs). Second, uncertainty in the geometry of the rigid parts is expressed through uniform random variables, while uncertainty in the material properties of the flexible parts is modeled as a continuous random field, which is further discretized into Gaussian random variables using a series expansion method. Finally, a non-intrusive numerical method is developed to solve the dynamic equations of systems involving both types of random variables, which systematically integrates the deterministic generalized-α solver with Latin Hypercube sampling (LHS) and Polynomial Chaos (PC) expansion. The benchmark slider-crank mechanism is used as a numerical example to demonstrate the characteristics of the proposed method.
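
    Non-intrusive PC of the kind described combines a Latin Hypercube design on the underlying standard normal germ with a least-squares fit of Hermite polynomial coefficients; the model below is a toy stand-in for the multibody simulation:

```python
import math
import numpy as np
from scipy.stats import norm, qmc
from numpy.polynomial.hermite_e import hermevander

def model(xi):
    # Toy response standing in for the multibody dynamic simulation.
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

# Latin Hypercube sample of the standard normal germ.
u = qmc.LatinHypercube(d=1, seed=0).random(n=200).ravel()
xi = norm.ppf(u)
y = model(xi)

# Least-squares fit of a degree-4 Hermite (He) polynomial chaos expansion.
V = hermevander(xi, 4)                    # columns He_0 .. He_4
c, *_ = np.linalg.lstsq(V, y, rcond=None)

# Under the standard normal, E[He_k^2] = k!, so Var = sum_k c_k^2 * k!.
facts = np.array([math.factorial(k) for k in range(5)])
pc_var = float((c[1:] ** 2 * facts[1:]).sum())
print("PC mean:", round(float(c[0]), 4), " PC var:", round(pc_var, 4))
print("MC mean:", round(float(y.mean()), 4), " MC var:", round(float(y.var()), 4))
```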

  5. Multivariate bias adjustment of high-dimensional climate simulations: the Rank Resampling for Distributions and Dependences (R2D2) bias correction

    NASA Astrophysics Data System (ADS)

    Vrac, Mathieu

    2018-06-01

    Climate simulations often suffer from statistical biases with respect to observations or reanalyses. It is therefore common to correct (or adjust) those simulations before using them as inputs into impact models. However, most bias correction (BC) methods are univariate and so do not account for the statistical dependences linking the different locations and/or physical variables of interest. In addition, they are often deterministic, while stochasticity is frequently needed to investigate climate uncertainty and to add constrained randomness to climate simulations that do not possess realistic variability. This study presents a multivariate method of rank resampling for distributions and dependences (R2D2) bias correction, allowing one to adjust not only the univariate distributions but also their inter-variable and inter-site dependence structures. Moreover, the proposed R2D2 method provides some stochasticity, since it can generate as many multivariate corrected outputs as the number of statistical dimensions (i.e., number of grid cells × number of climate variables) of the simulations to be corrected. It is based on an assumption of stability in time of the dependence structure - making it possible to deal with a high number of statistical dimensions - and lets the climate model drive the temporal properties and their changes in time. R2D2 is applied to temperature and precipitation reanalysis time series with respect to high-resolution reference data over the southeast of France (1,506 grid cells). Bivariate, 1506-dimensional and 3012-dimensional versions of R2D2 are tested over a historical period and compared to a univariate BC. How the different BC methods behave in a climate change context is also illustrated with an application to regional climate simulations over the 2071-2100 period. The results indicate that the 1d-BC basically reproduces the climate model's multivariate properties, 2d-R2D2 is only satisfying in the inter-variable context, 1506d-R2D2 strongly improves inter-site properties, and 3012d-R2D2 is able to account for both. Applications of the proposed R2D2 method to various climate datasets are relevant for many impact studies. The perspectives for improvement are numerous, such as introducing stochasticity in the dependence itself, questioning its stability assumption, and accounting for temporal-properties adjustment while including more physics in the adjustment procedures.
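
    The core rank-resampling idea (a Schaake-shuffle-like step) reorders each univariately corrected series so that its ranks follow the reference's ranks, restoring the dependence structure. A hedged two-variable sketch on synthetic data, not the R2D2 implementation itself:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 1000

# Reference series with inter-variable dependence (toy "observations").
ref_t = rng.normal(12, 4, n)
ref_p = np.exp(0.1 * ref_t + rng.normal(0, 0.4, n))

# Model series after univariate correction: right margins, wrong dependence.
bc_t = rng.normal(12, 4, n)
bc_p = np.exp(0.1 * rng.normal(12, 4, n) + rng.normal(0, 0.4, n))

def shuffle_to(ref, series):
    """Reorder `series` so its ranks match the ranks of `ref`."""
    ranks = np.argsort(np.argsort(ref))
    return np.sort(series)[ranks]

r2d2_t, r2d2_p = shuffle_to(ref_t, bc_t), shuffle_to(ref_p, bc_p)
print("reference corr: %.2f" % pearsonr(ref_t, ref_p)[0])
print("before shuffle: %.2f" % pearsonr(bc_t, bc_p)[0])
print("after shuffle:  %.2f" % pearsonr(r2d2_t, r2d2_p)[0])
```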

  6. Temperature dependent characteristics of the random telegraph noise on contact resistive random access memory

    NASA Astrophysics Data System (ADS)

    Chang, Liang-Shun; Lin, Chrong Jung; King, Ya-Chin

    2014-01-01

    The temperature-dependent characteristics of random telegraph noise (RTN) in contact resistive random access memory (CRRAM) are studied in this work. In addition to the bi-level switching, the occurrence of middle states in the RTN signal is investigated. Based on its unique temperature-dependent characteristics, a new temperature sensing scheme is proposed for applications in ultra-low-power sensor modules.
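
    Bi-level RTN is commonly idealized as a two-state Markov process with exponentially distributed dwell times whose means follow thermally activated (Arrhenius-like) laws; a hedged sketch with invented barrier energies:

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 0.0259                     # eV at ~300 K
Ec, Ee = 0.30, 0.35             # hypothetical capture/emission barriers (eV)
tau0 = 1e-9                     # attempt time (s)

tau_c = tau0 * np.exp(Ec / kT)  # mean dwell before capture
tau_e = tau0 * np.exp(Ee / kT)  # mean dwell before emission

t, state, switches = 0.0, 0, 0
while t < 1.0:                  # simulate 1 s of telegraph switching
    t += rng.exponential(tau_e if state else tau_c)
    state ^= 1
    switches += 1

print("switches in 1 s:", switches)
print("mean dwell (s): capture %.2e, emission %.2e" % (tau_c, tau_e))
```

    Because both dwell times depend exponentially on 1/kT, the switching statistics encode temperature, which is the basis of the sensing scheme the abstract proposes.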

  7. A Preliminary Investigation of a Randomized Dependent Group Contingency for Hallway Transitions

    ERIC Educational Resources Information Center

    Deshais, Meghan A.; Fisher, Alyssa B.; Kahng, SungWoo

    2018-01-01

    We conducted a preliminary investigation of a randomized dependent group contingency to decrease disruptive behavior during hallway transitions. Two first-graders, identified by their classroom teacher, participated in this study. A multiple baseline across transitions was used to evaluate the effects of the randomized dependent group contingency…

  8. Geometrical effects on the electron residence time in semiconductor nano-particles.

    PubMed

    Koochi, Hakimeh; Ebrahimi, Fatemeh

    2014-09-07

    We have used random walk (RW) numerical simulations to investigate the influence of geometry on the statistics of the electron residence time τ_r in a trap-limited diffusion process through semiconductor nano-particles. This is an important parameter in coarse-grained modeling of charge carrier transport in nano-structured semiconductor films. The traps are distributed randomly on the surface (r² model) or throughout the whole particle (r³ model) with a specified density. The trap energies are taken from an exponential distribution, and the trap release time is assumed to be a stochastic variable. We carried out RW simulations to study the effect of the coordination number, the spatial arrangement of the neighbors, and the size of the nano-particles on the statistics of τ_r. We observed that as the coordination number n increases, the average electron residence time τ̄_r rapidly decreases to an asymptotic value. For a fixed coordination number n, the electron's mean residence time does not depend on the neighbors' spatial arrangement. In other words, τ̄_r is a porosity-dependent, local parameter which generally varies remarkably from site to site, unless we are dealing with highly ordered structures. We also examined the effect of the nano-particle size d on the statistical behavior of τ̄_r. Our simulations indicate that for a volume distribution of traps, τ̄_r scales as d². For a surface distribution of traps, τ̄_r increases almost linearly with d. This leads to the prediction of a linear dependence of the diffusion coefficient D on the particle size d in ordered structures, or in random structures above the critical concentration, which is in accordance with experimental observations.
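
    A toy version of such an RW simulation: a walker hops on a cubic lattice inside a spherical particle, occasionally landing on traps whose exponentially distributed depths produce thermally activated release times; the residence time accumulates until the walker exits. All parameters below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def residence_time(R=10, trap_frac=0.05, E0=0.05, kT=0.025, tau0=1.0):
    """Random walk on a cubic lattice inside a sphere of radius R.
    A fraction of sites act as traps with exponentially distributed
    depths; the walk ends when the electron steps outside the particle."""
    pos = np.zeros(3)
    t = 0.0
    steps = np.vstack([np.eye(3), -np.eye(3)])
    while np.linalg.norm(pos) <= R:
        t += 1.0                                  # free-hop time unit
        if rng.random() < trap_frac:              # landed on a trap
            E = rng.exponential(E0)               # exponential trap depth
            t += tau0 * np.exp(E / kT)            # thermally activated release
        pos += steps[rng.integers(6)]
    return t

taus = [residence_time() for _ in range(300)]
print("mean residence time:", round(float(np.mean(taus)), 1))
```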

  9. Late Reduction of Cocaine Cravings in a Randomized, Double-Blind Trial of Aripiprazole vs Perphenazine in Schizophrenia and Comorbid Cocaine Dependence.

    PubMed

    Beresford, Thomas; Buchanan, Jennifer; Thumm, Elizabeth Brie; Emrick, Chad; Weitzenkamp, David; Ronan, Patrick J

    2017-12-01

    Co-occurring schizophrenia spectrum disorder and ICD-10 cocaine dependence present a particularly destructive constellation that is often difficult to treat. Both conditions raise dopamine transmission effects in the brain. Traditional neuroleptics block dopamine receptors, whereas aripiprazole modulates dopamine activity as an agonist/antagonist. We tested whether dopamine modulation is superior to dopamine blocking in dual-diagnosis patients. In a randomized, double-blind, comparison design, cocaine-dependent schizophrenic subjects actively using cocaine received either aripiprazole or perphenazine in an 8-week trial. The primary outcome was the proportion of cocaine-free urine samples, with cocaine craving scores as a secondary variable. Subjects (N = 44) randomized (n = 22 per group) did not differ at baseline. The proportion of cocaine-free urine samples did not differ by medication group. Contrasting weeks 3 to 5 vs 6 to 8 revealed significant late reductions in craving with aripiprazole. On the respective 5-point subscales, craving intensity decreased by 1.53 ± 0.43 (P < .0005) points, craving frequency by 1.4 ± 0.40 (P < .0004) points, and craving duration by 1.76 ± 0.44 (P < .0001) points. A drug effect of aripiprazole on craving items appeared at week 6 of treatment, on average, and was not seen before that length of drug exposure. The data suggest that dopamine modulation reduces cocaine cravings but requires an acclimation period. To understand the mechanism of action better, a trial of depot aripiprazole may be useful. Clinically, a reduction in craving potentially offers a clearer focus for ongoing behavioral treatment. It may also offer a longer-term treatment effect with respect to the severity of relapse.

  10. Hierarchical Bayesian spatial models for alcohol availability, drug "hot spots" and violent crime.

    PubMed

    Zhu, Li; Gorman, Dennis M; Horel, Scott

    2006-12-07

    Ecologic studies have shown a relationship between alcohol outlet densities, illicit drug use, and violence. The present study examined this relationship in the City of Houston, Texas, using a sample of 439 census tracts. Neighborhood sociostructural covariates, alcohol outlet density, drug crime density, and violent crime data were collected for the year 2000 and analyzed using hierarchical Bayesian models. Model selection was accomplished by applying the Deviance Information Criterion. The counts of violent crime in each census tract were modeled as having a conditional Poisson distribution. Four neighbourhood explanatory variables were identified using principal component analysis. The best-fitting model was the one including both unstructured and spatially dependent random effects. The results showed that drug-law violation explained a greater amount of variance in violent crime rates than alcohol outlet densities. The relative risk for drug-law violation was 2.49 and that for alcohol outlet density was 1.16. Of the neighbourhood sociostructural covariates, the proportion of males aged 15 to 24 showed an effect on violence, with a 16% decrease in relative risk for each increase of one standard deviation. Both the unstructured heterogeneity random effect and spatial dependence needed to be included in the model. The analysis presented suggests that activity around illicit drug markets is more strongly associated with violent crime than is alcohol outlet density. Unique among the ecological studies in this field, the present study not only shows the direction and magnitude of the impact of neighbourhood sociostructural covariates as well as alcohol and illicit drug activities in a neighbourhood, it also reveals the importance of applying hierarchical Bayesian models in this research field, as both spatial dependence and heterogeneity random effects need to be considered simultaneously.

  11. Randomized prospective study of the evolution of renal function depending on the anticalcineurin used.

    PubMed

    Moro, J A; Almenar Bonet, L; Martínez-Dolz, L; Raso, R; Sánchez-Lázaro, I; Agüero, J; Salvador, Antonio

    2008-11-01

    Renal failure is one of the primary medium- to long-term morbidities in heart transplant (HT) recipients. To a great extent, this renal deterioration is associated with calcineurin inhibitors, primarily cyclosporine A (CsA). It has been suggested that tacrolimus provides better renal function in these patients. We assessed the medium-term evolution of renal function depending on the calcineurin inhibitor used after HT. We assessed 40 consecutive HT recipients over one year. Patients were randomized to receive CsA (n = 20) or tacrolimus (n = 20) in combination with mycophenolate mofetil (1 g/12 h) and deflazacort in decreasing dosages. We analyzed demographic variables before HT, creatinine values before and six months after HT, and the incidence of acute rejection. No demographic, clinical, or analytical differences were observed between the two groups before HT. Repeated-measures analysis of variance of creatinine values showed no significant differences between the two groups (P = .98). Furthermore, no differences were observed in either the incidence of rejection (P = .02) or rejection-free survival (P = .14). There seems to be no difference in efficacy profile or renal tolerability between CsA and tacrolimus therapy during the first months after HT.

  12. A master equation and moment approach for biochemical systems with creation-time-dependent bimolecular rate functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chevalier, Michael W., E-mail: Michael.Chevalier@ucsf.edu; El-Samad, Hana, E-mail: Hana.El-Samad@ucsf.edu

    Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions, where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-to-cell variability even in clonal populations. Stochastic biochemical networks have been traditionally modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). In diffusion reaction systems on membranes, the Markov formalism, which assumes constant reaction propensities, is not directly appropriate. This is because the instantaneous propensity for a diffusion reaction to occur depends on the creation times of the molecules involved. In this work, we develop a chemical master equation for systems of this type. While this new CME is computationally intractable, we make rational dimensional reductions to form an approximate equation, whose moments are also derived and are shown to yield efficient, accurate results. This new framework forms a more general approach than the Markov CME and expands the realm of possible stochastic biochemical systems that can be efficiently modeled.

  13. Random walks based multi-image segmentation: Quasiconvexity results and GPU-based solutions

    PubMed Central

    Collins, Maxwell D.; Xu, Jia; Grady, Leo; Singh, Vikas

    2012-01-01

    We recast the Cosegmentation problem using Random Walker (RW) segmentation as the core segmentation algorithm, rather than the traditional MRF approach adopted in the literature so far. Our formulation is similar to previous approaches in the sense that it also permits Cosegmentation constraints (which impose consistency between the extracted objects from ≥ 2 images) using a nonparametric model. However, several previous nonparametric cosegmentation methods have the serious limitation that they require adding one auxiliary node (or variable) for every pair of pixels that are similar (which effectively limits such methods to describing only those objects that have high-entropy appearance models). In contrast, our proposed model completely eliminates this restrictive dependence; the resulting improvements are quite significant. Our model further allows an optimization scheme exploiting quasiconvexity for model-based segmentation, with no dependence on the scale of the segmented foreground. Finally, we show that the optimization can be expressed in terms of linear algebra operations on sparse matrices which are easily mapped to GPU architecture. We provide a highly specialized CUDA library for Cosegmentation exploiting this special structure, and report experimental results showing these advantages. PMID:25278742

  14. Do little interactions get lost in dark random forests?

    PubMed

    Wright, Marvin N; Ziegler, Andreas; König, Inke R

    2016-03-31

    Random forests have often been claimed to uncover interaction effects. However, if and how interaction effects can be differentiated from marginal effects remains unclear. In extensive simulation studies, we investigate whether random forest variable importance measures capture or detect gene-gene interactions. By capturing interactions, we mean the ability to identify a variable that acts through an interaction with another one, while detection is the ability to identify an interaction effect as such. Of the single importance measures, the Gini importance captured interaction effects in most of the simulated scenarios; however, these effects were masked by marginal effects of other variables. With the permutation importance, the proportion of captured interactions was lower in all cases. Pairwise importance measures performed about equally, with a slight advantage for the joint variable importance method. However, the overall fraction of detected interactions was low. In almost all scenarios the detection fraction in a model with only marginal effects was larger than in a model with an interaction effect only. Random forests are generally capable of capturing gene-gene interactions, but current variable importance measures are unable to detect them as interactions. In most of the cases, interactions are masked by marginal effects and cannot be differentiated from marginal effects. Consequently, caution is warranted when claiming that random forests uncover interactions.
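
    The capture-versus-detection distinction can be reproduced in a few lines. In this hedged sketch (simulated data, illustrative parameters), a pure X0 XOR X1 interaction drives the outcome alongside a marginal effect of X2; both importance measures rank X0 and X1 above the noise variables (capturing), but neither output indicates that the two variables act jointly (detection):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(1)
    n = 2000
    X = rng.integers(0, 2, size=(n, 5)).astype(float)   # 5 binary "SNP-like" variables
    # pure interaction: outcome driven by X0 XOR X1; X2 has a marginal effect
    logits = 1.5 * (X[:, 0] != X[:, 1]) + 1.0 * X[:, 2]
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(logits - 1.0)))).astype(int)

    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
    print("Gini importance:       ", np.round(rf.feature_importances_, 3))
    perm = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
    print("permutation importance:", np.round(perm.importances_mean, 3))
    ```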

  15. The influence of an uncertain force environment on reshaping trial-to-trial motor variability.

    PubMed

    Izawa, Jun; Yoshioka, Toshinori; Osu, Rieko

    2014-09-10

    Motor memory is updated to generate ideal movements in a novel environment. When the environment changes randomly on every trial, how does the brain incorporate this uncertainty into motor memory? To investigate how the brain adapts to an uncertain environment, we considered a reach adaptation protocol in which individuals practiced moving in a force field into which noise was injected. After they had adapted, we measured the trial-to-trial variability in the temporal profiles of the produced hand force. We found that motor variability was significantly magnified by adaptation to the random force field. Temporal profiles of the motor variance were significantly dissociable between the two different types of random force fields experienced. A model-based analysis suggests that the variability is generated by noise in the gains of the internal model. It further suggests that the trial-to-trial motor variability magnified by adaptation in a random force field is generated by the uncertainty of the internal model formed in the brain as a result of the adaptation.

  16. Multivariate stochastic analysis for Monthly hydrological time series at Cuyahoga River Basin

    NASA Astrophysics Data System (ADS)

    zhang, L.

    2011-12-01

    Copulas have become a very powerful statistical and stochastic methodology for multivariate analysis in environmental and water resources engineering. In recent years, the popular one-parameter Archimedean copulas, e.g. the Gumbel-Hougaard, Cook-Johnson and Frank copulas, and the meta-elliptical copulas, e.g. the Gaussian and Student-t copulas, have been applied in multivariate hydrological analyses, e.g. multivariate rainfall (rainfall intensity, duration and depth), flood (peak discharge, duration and volume), and drought analyses (drought length, mean and minimum SPI values, and drought mean areal extent). Copulas have also been applied in flood frequency analysis at the confluences of river systems by taking into account the dependence among upstream gauge stations rather than by using the hydrological routing technique. In most of the studies above, the annual time series have been treated as stationary signals in which the observations are assumed to be independent identically distributed (i.i.d.) random variables. In reality, however, hydrological time series, especially daily and monthly series, cannot be considered i.i.d. random variables because of the periodicity present in the data structure. The stationarity assumption is also in question due to climate change and land use and land cover (LULC) change in past years. To this end, it is necessary to re-evaluate the classic approach to the study of hydrological time series by relaxing the stationarity assumption in favor of a nonstationary approach. Likewise, in studying the dependence structure of hydrological time series, the assumption of a common type of univariate distribution needs to be relaxed by adopting copula theory. In this paper, the univariate monthly hydrological time series are studied through a nonstationary time series analysis approach, and the dependence structure of the multivariate monthly series is studied through copula theory. For parameter estimation, maximum likelihood estimation (MLE) is applied. To illustrate the method, the univariate time series model and the dependence structure are determined and tested using the monthly discharge time series of the Cuyahoga River Basin.
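
    A hedged sketch of the copula idea on synthetic data: transform each margin to pseudo-observations by ranking, map to normal scores, and read the Gaussian-copula correlation off the transformed pairs. The two "discharge" series, their lognormal margins, and the dependence level are fabricated stand-ins for the Cuyahoga data; a full analysis would also handle periodicity and nonstationarity as the abstract argues:

    ```python
    import numpy as np
    from scipy.stats import rankdata, norm, kendalltau

    def gaussian_copula_corr(x, y):
        """Fit a bivariate Gaussian copula by transforming pseudo-observations
        to normal scores (a common shortcut to full MLE)."""
        u = rankdata(x) / (len(x) + 1.0)   # pseudo-observations in (0, 1)
        v = rankdata(y) / (len(y) + 1.0)
        return np.corrcoef(norm.ppf(u), norm.ppf(v))[0, 1]

    rng = np.random.default_rng(2)
    # stand-ins for two dependent monthly discharge series with lognormal margins
    z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=600)
    q_up, q_down = np.exp(z[:, 0]), np.exp(0.8 * z[:, 1] + 0.3)

    tau, _ = kendalltau(q_up, q_down)
    print("copula correlation:", round(gaussian_copula_corr(q_up, q_down), 3))
    print("Kendall's tau:     ", round(tau, 3))
    ```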

  17. Maximum likelihood estimation for life distributions with competing failure modes

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1979-01-01

    Systems which are placed on test at time zero, function for a period, and die at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables the item is subjected to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
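
    A small sketch of maximum likelihood estimation for two competing failure modes, each with a smallest extreme-value life distribution (scipy's gumbel_l): each observed failure contributes the density of its failing mode times the survival of the competing mode. The data, parameter values, and optimizer choice are illustrative, and the stress-variable dependence from the report is omitted:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import gumbel_l

    # two competing modes, each with a smallest-extreme-value life distribution
    t1 = gumbel_l.rvs(loc=5.0, scale=1.0, size=400, random_state=1)
    t2 = gumbel_l.rvs(loc=5.5, scale=0.8, size=400, random_state=2)
    t = np.minimum(t1, t2)           # observed failure time
    mode = (t2 < t1).astype(int)     # observed failure mode (0 or 1)

    def neg_log_lik(theta):
        """Competing-risks likelihood: density of the failing mode times
        the survival of the other mode."""
        mu, sigma = theta[:2], np.exp(theta[2:])   # log-scale keeps sigma > 0
        ll = 0.0
        for j in (0, 1):
            tj = t[mode == j]
            ll += gumbel_l.logpdf(tj, loc=mu[j], scale=sigma[j]).sum()
            ll += gumbel_l.logsf(tj, loc=mu[1 - j], scale=sigma[1 - j]).sum()
        return -ll

    fit = minimize(neg_log_lik, x0=np.array([4.0, 4.0, 0.0, 0.0]),
                   method="Nelder-Mead")
    print("location estimates:", np.round(fit.x[:2], 2))
    print("scale estimates:   ", np.round(np.exp(fit.x[2:]), 2))
    ```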

  18. A stochastic model for density-dependent microwave Snow- and Graupel scattering coefficients of the NOAA JCSDA community radiative transfer model

    NASA Astrophysics Data System (ADS)

    Stegmann, Patrick G.; Tang, Guanglin; Yang, Ping; Johnson, Benjamin T.

    2018-05-01

    A structural model is developed for the single-scattering properties of snow and graupel particles with a strongly heterogeneous morphology and an arbitrarily variable mass density. This effort aims to provide a mechanism to consider particle mass density variation in the microwave scattering coefficients implemented in the Community Radiative Transfer Model (CRTM). The stochastic model applies a bicontinuous random medium algorithm to a simple base shape and uses the Finite-Difference Time-Domain (FDTD) method to compute the single-scattering properties of the resulting complex morphology.
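
    A rough sketch of a bicontinuous random medium of the kind the abstract describes: superpose cosine waves with random directions, wavenumbers, and phases to build an approximately Gaussian random field, then threshold it at a chosen volume fraction. The wavenumber law, grid size, and ice fraction below are illustrative guesses, not the CRTM/FDTD setup:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def bicontinuous_medium(shape=(64, 64, 64), n_waves=200,
                            mean_wavenumber=0.5, ice_fraction=0.3):
        """Sum random cosine waves to get a Gaussian random field, then
        threshold so a chosen volume fraction becomes 'ice'."""
        grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape],
                                    indexing="ij"), axis=-1)
        field = np.zeros(shape)
        for _ in range(n_waves):
            direction = rng.normal(size=3)
            direction /= np.linalg.norm(direction)
            k = rng.gamma(shape=2.0, scale=mean_wavenumber / 2.0)  # random wavenumber
            phase = rng.uniform(0, 2 * np.pi)
            field += np.cos(k * (grid @ direction) + phase)
        field /= np.sqrt(n_waves)                    # ~ standard Gaussian field
        cut = np.quantile(field, 1.0 - ice_fraction)
        return field > cut                           # boolean ice/air indicator

    medium = bicontinuous_medium()
    print("ice volume fraction:", medium.mean())
    ```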

  19. A model for simulating random atmospheres as a function of latitude, season, and time

    NASA Technical Reports Server (NTRS)

    Campbell, J. W.

    1977-01-01

    An empirical stochastic computer model was developed with the capability of generating random thermodynamic profiles of the atmosphere below an altitude of 99 km which are characteristic of any given season, latitude, and time of day. Samples of temperature, density, and pressure profiles generated by the model are statistically similar to measured profiles in a data base of over 6000 rocket and high-altitude atmospheric soundings; that is, means and standard deviations of modeled profiles and their vertical gradients are in close agreement with data. Model-generated samples can be used for Monte Carlo simulations of aircraft or spacecraft trajectories to predict or account for the effects on a vehicle's performance of atmospheric variability. Other potential uses for the model are in simulating pollutant dispersion patterns, variations in sound propagation, and other phenomena which are dependent on atmospheric properties, and in developing data-reduction software for satellite monitoring systems.

  20. Using an Informative Missing Data Model to Predict the Ability to Assess Recovery of Balance Control after Spaceflight

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Wood, Scott J.; Jain, Varsha

    2008-01-01

    Astronauts show degraded balance control immediately after spaceflight. To assess this change, astronauts' ability to maintain a fixed stance under several challenging stimuli on a movable platform is quantified by "equilibrium" scores (EQs) on a scale of 0 to 100, where 100 represents perfect control (sway angle of 0) and 0 represents data loss where no sway angle is observed because the subject has to be restrained from falling. By comparing post- to pre-flight EQs for actual astronauts vs. controls, we built a classifier for deciding when an astronaut has recovered. Future diagnostic performance depends both on the sampling distribution of the classifier as well as the distribution of its input data. Taking this into consideration, we constructed a predictive ROC by simulation after modeling P(EQ = 0) in terms of a latent EQ-like beta-distributed random variable with random effects.
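
    A hedged sketch of the modeling idea: treat EQ as a zero-inflated, beta-distributed latent score, simulate hypothetical "recovered" and "not recovered" distributions, and trace a predictive ROC by sweeping a classification threshold. All distribution parameters here are invented for illustration:

    ```python
    import numpy as np
    from scipy.stats import beta

    rng = np.random.default_rng(5)

    def simulate_eq(n, p_zero, a, b):
        """Zero-inflated beta stand-in for equilibrium scores: with probability
        p_zero the trial is a data-loss fall (EQ = 0), else EQ/100 ~ Beta(a, b)."""
        eq = 100.0 * beta.rvs(a, b, size=n, random_state=rng)
        eq[rng.random(n) < p_zero] = 0.0
        return eq

    # hypothetical post-flight score distributions
    recovered = simulate_eq(2000, p_zero=0.02, a=8, b=2)
    impaired = simulate_eq(2000, p_zero=0.15, a=4, b=3)

    # sweep a threshold to trace a predictive ROC curve ("flag as not recovered")
    thresholds = np.linspace(0, 100, 101)
    tpr = np.array([(impaired < c).mean() for c in thresholds])
    fpr = np.array([(recovered < c).mean() for c in thresholds])
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)   # trapezoid rule
    print(f"simulated AUC: {auc:.3f}")
    ```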

  1. bicoid mRNA localises to the Drosophila oocyte anterior by random Dynein-mediated transport and anchoring

    PubMed Central

    Trovisco, Vítor; Belaya, Katsiaryna; Nashchekin, Dmitry; Irion, Uwe; Sirinakis, George; Butler, Richard; Lee, Jack J; Gavis, Elizabeth R; St Johnston, Daniel

    2016-01-01

    bicoid mRNA localises to the Drosophila oocyte anterior from stage 9 of oogenesis onwards to provide a local source for Bicoid protein for embryonic patterning. Live imaging at stage 9 reveals that bicoid mRNA particles undergo rapid Dynein-dependent movements near the oocyte anterior, but with no directional bias. Furthermore, bicoid mRNA localises normally in shot2A2, which abolishes the polarised microtubule organisation. FRAP and photo-conversion experiments demonstrate that the RNA is stably anchored at the anterior, independently of microtubules. Thus, bicoid mRNA is localised by random active transport and anterior anchoring. Super-resolution imaging reveals that bicoid mRNA forms 110–120 nm particles with variable RNA content, but constant size. These particles appear to be well-defined structures that package the RNA for transport and anchoring. DOI: http://dx.doi.org/10.7554/eLife.17537.001 PMID:27791980

  2. Treadmill training as an augmentation treatment for Alzheimer's disease: a pilot randomized controlled study.

    PubMed

    Arcoverde, Cynthia; Deslandes, Andrea; Moraes, Helena; Almeida, Cloyra; Araujo, Narahyana Bom de; Vasques, Paulo Eduardo; Silveira, Heitor; Laks, Jerson

    2014-03-01

    To assess the effect of aerobic exercise on cognition and functional capacity in Alzheimer's disease (AD) patients, twenty elderly patients with mild dementia (NINCDS-ADRDA/CDR1) were randomly assigned to an exercise group (EG) walking on a treadmill (30 minutes, twice a week, at a moderate intensity of 60% VO₂max) or a control group (CG) of 10 patients. The primary outcome measure was cognitive function assessed with the Cambridge Cognitive Examination (CAMCOG). Specific instruments were also applied to evaluate executive function, memory, attention and concentration, cognitive flexibility, inhibitory control and functional capacity. After 16 weeks, the EG showed improvement in CAMCOG scores whereas the CG declined. Compared to the CG, the EG also presented significant improvement in functional capacity. The analysis of effect sizes showed a favorable response to physical exercise on all dependent variables. Walking on a treadmill may be recommended as an augmentation treatment for patients with AD.

  3. A statistical assessment of population trends for data deficient Mexican amphibians

    PubMed Central

    Thessen, Anne E.; Arias-Caballero, Paulina; Ayala-Orozco, Bárbara

    2014-01-01

    Background. Mexico hosts the world's fifth-largest amphibian fauna and the second-highest number of threatened amphibian species. About 10% of Mexican amphibians lack enough data to be assigned to a risk category by the IUCN, so in this paper we test a statistical tool that, in the absence of specific demographic data, can assess a species' risk of extinction and population trend and help identify which variables increase its vulnerability. Recent studies have demonstrated that the risk of species decline depends on both extrinsic and intrinsic traits, so including both should render a more accurate assessment of threats. Methods. We harvested data from the Encyclopedia of Life (EOL) and the published literature for Mexican amphibians, and used these data to assess the population trend of some of the Mexican species assigned to the Data Deficient category of the IUCN using Random Forests, a machine learning method that predicts complex processes and identifies the most important variables that account for the predictions. Results. Our results show that most of the data deficient Mexican amphibians that we examined have decreasing population trends. We found that Random Forests is a solid way to identify species with decreasing population trends when no demographic data are available. Moreover, we point to the most important variables that make species more vulnerable to extinction. This exercise is a very valuable first step in assigning conservation priorities for poorly known species. PMID:25548736

  4. A statistical assessment of population trends for data deficient Mexican amphibians.

    PubMed

    Quintero, Esther; Thessen, Anne E; Arias-Caballero, Paulina; Ayala-Orozco, Bárbara

    2014-01-01

    Background. Mexico hosts the world's fifth-largest amphibian fauna and the second-highest number of threatened amphibian species. About 10% of Mexican amphibians lack enough data to be assigned to a risk category by the IUCN, so in this paper we test a statistical tool that, in the absence of specific demographic data, can assess a species' risk of extinction and population trend and help identify which variables increase its vulnerability. Recent studies have demonstrated that the risk of species decline depends on both extrinsic and intrinsic traits, so including both should render a more accurate assessment of threats. Methods. We harvested data from the Encyclopedia of Life (EOL) and the published literature for Mexican amphibians, and used these data to assess the population trend of some of the Mexican species assigned to the Data Deficient category of the IUCN using Random Forests, a machine learning method that predicts complex processes and identifies the most important variables that account for the predictions. Results. Our results show that most of the data deficient Mexican amphibians that we examined have decreasing population trends. We found that Random Forests is a solid way to identify species with decreasing population trends when no demographic data are available. Moreover, we point to the most important variables that make species more vulnerable to extinction. This exercise is a very valuable first step in assigning conservation priorities for poorly known species.

  5. Moderation analysis with missing data in the predictors.

    PubMed

    Zhang, Qian; Wang, Lijuan

    2017-12-01

    The most widely used statistical model for conducting moderation analysis is the moderated multiple regression (MMR) model. In MMR modeling, missing data can pose a challenge, mainly because the interaction term is a product of two or more variables and is thus a nonlinear function of the involved variables. In this study, we consider a simple MMR model, where the effect of the focal predictor X on the outcome Y is moderated by a moderator U. The primary interest is to find ways of estimating and testing the moderation effect in the presence of missing data in X. We mainly focus on cases where X is missing completely at random (MCAR) or missing at random (MAR). Three methods are compared: (a) normal-distribution-based maximum likelihood estimation (NML); (b) normal-distribution-based multiple imputation (NMI); and (c) Bayesian estimation (BE). Via simulations, we found that NML and NMI could lead to biased estimates of moderation effects under the MAR missingness mechanism. The BE method outperformed NMI and NML for MMR modeling with missing data in the focal predictor, with missingness depending on the moderator and/or auxiliary variables, and with correctly specified distributions for the focal predictor. In addition, more robust BE methods are needed to address mis-specification of the focal predictor's distribution. An empirical example was used to illustrate the application of the methods, together with a simple sensitivity analysis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
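
    For concreteness, a minimal MMR model with MAR missingness in the focal predictor can be set up as below. This is only the data-generating side plus a naive listwise-deletion fit (which the study shows can be biased), not the paper's NML, NMI, or Bayesian estimators; all coefficients are illustrative:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)
    n = 500
    u = rng.normal(size=n)                 # moderator U
    x = 0.5 * u + rng.normal(size=n)       # focal predictor X, correlated with U
    y = 0.3 * x + 0.2 * u + 0.4 * x * u + rng.normal(size=n)   # true b3 = 0.4
    df = pd.DataFrame({"x": x, "u": u, "y": y})

    # impose MAR missingness in X: missingness probability depends on U
    df.loc[rng.random(n) < 1.0 / (1.0 + np.exp(-u)), "x"] = np.nan

    # listwise deletion; under MAR this can bias the interaction estimate
    mmr = smf.ols("y ~ x * u", data=df.dropna()).fit()
    print(mmr.params)   # inspect the x:u coefficient
    ```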

  6. A Random Variable Approach to Nuclear Targeting and Survivability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Undem, Halvor A.

    We demonstrate a common mathematical formalism for analyzing problems in nuclear survivability and targeting. This formalism, beginning with a random variable approach, can be used to interpret past efforts in nuclear-effects analysis, including targeting analysis. It can also be used to analyze new problems brought about by the post Cold War Era, such as the potential effects of yield degradation in a permanently untested nuclear stockpile. In particular, we illustrate the formalism through four natural case studies or illustrative problems, linking these to actual past data, modeling, and simulation, and suggesting future uses. In the first problem, we illustrate the case of a deterministically modeled weapon used against a deterministically responding target. Classic "Cookie Cutter" damage functions result. In the second problem, we illustrate, with actual target test data, the case of a deterministically modeled weapon used against a statistically responding target. This case matches many of the results of current nuclear targeting modeling and simulation tools, including the result of distance damage functions as complementary cumulative lognormal functions in the range variable. In the third problem, we illustrate the case of a statistically behaving weapon used against a deterministically responding target. In particular, we show the dependence of target damage on weapon yield for an untested nuclear stockpile experiencing yield degradation. Finally, and using actual unclassified weapon test data, we illustrate in the fourth problem the case of a statistically behaving weapon used against a statistically responding target.
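
    The second problem's result, a distance damage function that is complementary cumulative lognormal in the range variable, is easy to sketch; the median damage radius and log-standard-deviation below are made-up illustrative values:

    ```python
    import numpy as np
    from scipy.stats import lognorm

    def damage_probability(r, r50=1.0, sigma=0.3):
        """Complementary cumulative lognormal distance-damage function:
        P(damage) at range r, with median damage radius r50 and
        log-standard-deviation sigma (illustrative values)."""
        return lognorm.sf(r, s=sigma, scale=r50)

    for r in (0.5, 1.0, 1.5, 2.0):
        print(f"range {r:.1f}: P(damage) = {damage_probability(r):.3f}")
    ```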

  7. Singular Behavior of the Leading Lyapunov Exponent of a Product of Random 2 × 2 Matrices

    NASA Astrophysics Data System (ADS)

    Genovese, Giuseppe; Giacomin, Giambattista; Greenblatt, Rafael Leon

    2017-05-01

    We consider a certain infinite product of random $2 \times 2$ matrices appearing in the solution of some 1- and (1+1)-dimensional disordered models in statistical mechanics, which depends on a parameter $\epsilon > 0$ and on a real random variable with distribution $\mu$. For a large class of $\mu$, we prove the prediction by Derrida and Hilhorst (J Phys A 16:2641, 1983) that the Lyapunov exponent behaves like $C \epsilon^{2\alpha}$ in the limit $\epsilon \searrow 0$, where $\alpha \in (0,1)$ and $C > 0$ are determined by $\mu$. Derrida and Hilhorst performed a two-scale analysis of the integral equation for the invariant distribution of the Markov chain associated to the matrix product and obtained a probability measure that is expected to be close to the invariant one for small $\epsilon$. We introduce suitable norms and exploit contractivity properties to show that such a probability measure is indeed close to the invariant one in a sense that implies a suitable control of the Lyapunov exponent.
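
    The leading Lyapunov exponent of such a product can be estimated numerically by propagating a vector and renormalizing; the 2 × 2 matrix family and disorder distribution below are stand-ins, not the Derrida-Hilhorst transfer matrices:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def lyapunov_exponent(eps, n_steps=100000):
        """Top Lyapunov exponent of a random 2x2 matrix product, estimated by
        propagating a vector and renormalizing at every step."""
        v = np.array([1.0, 0.0])
        log_growth = 0.0
        for _ in range(n_steps):
            z = rng.standard_normal()   # random disorder variable
            m = np.array([[1.0 + eps * z, eps],
                          [eps, 1.0 / (1.0 + eps * z)]])
            v = m @ v
            norm = np.linalg.norm(v)
            log_growth += np.log(norm)
            v /= norm                   # renormalize to avoid overflow
        return log_growth / n_steps

    for eps in (0.02, 0.05, 0.1):
        print(f"eps = {eps:.2f}: lambda ~ {lyapunov_exponent(eps):.5f}")
    ```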

  8. The Timescale-dependent Color Variability of Quasars Viewed with GALEX

    NASA Astrophysics Data System (ADS)

    Zhu, Fei-Fan; Wang, Jun-Xian; Cai, Zhen-Yi; Sun, Yu-Han

    2016-11-01

    In a recent work by Sun et al., the color variation of quasars, namely the bluer-when-brighter trend, was found to be timescale-dependent using the SDSS g/r band light curves in Stripe 82. Such timescale dependence, i.e., bluer variation at shorter timescales, supports the thermal fluctuation origin of the UV/optical variation in quasars, and can be modeled well with the inhomogeneous accretion disk model. In this paper, we extend the study to much shorter wavelengths in the rest frame (down to the extreme UV) using GALaxy Evolution eXplorer (GALEX) photometric data of quasars collected in two ultraviolet bands (near-UV and far-UV). We develop Monte Carlo simulations to correct for possible biases due to the considerably larger photometric uncertainties in the GALEX light curves (particularly in the far-UV, compared with the SDSS g/r bands), which otherwise could produce artificial results. We securely confirm the previously discovered timescale dependence of the color variability with independent data sets and at shorter wavelengths. We further find that the slope of the correlation between the amplitude of the color variation and timescale appears even steeper than predicted by the inhomogeneous disk model, which assumes that disk fluctuations follow a damped random walk (DRW) process. The much flatter structure function observed in the far-UV compared with that at longer wavelengths implies deviation from the DRW process in the inner disk, where rest-frame extreme UV radiation is produced.

  9. Comonotonic bounds on the survival probabilities in the Lee-Carter model for mortality projection

    NASA Astrophysics Data System (ADS)

    Denuit, Michel; Dhaene, Jan

    2007-06-01

    In the Lee-Carter framework, future survival probabilities are random variables with an intricate distribution function. In large homogeneous portfolios of life annuities, value-at-risk or conditional tail expectation of the total yearly payout of the company are approximately equal to the corresponding quantities involving random survival probabilities. This paper aims to derive some bounds in the increasing convex (or stop-loss) sense on these random survival probabilities. These bounds are obtained with the help of comonotonic upper and lower bounds on sums of correlated random variables.
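
    The comonotonic upper bound is simple to exercise numerically: drive every summand with the same uniform variable through its quantile function, then compare a stop-loss quantity against an independence benchmark. The margins and retention level below are illustrative, not Lee-Carter survival probabilities:

    ```python
    import numpy as np
    from scipy.stats import lognorm

    rng = np.random.default_rng(8)
    n = 200000
    margins = [lognorm(s=0.4, scale=1.0),
               lognorm(s=0.3, scale=1.5),
               lognorm(s=0.5, scale=0.8)]

    # comonotonic upper bound: drive every term with the same uniform U
    u = rng.random(n)
    s_comon = sum(m.ppf(u) for m in margins)

    # independent benchmark
    s_indep = sum(m.rvs(n, random_state=rng) for m in margins)

    d = 5.0   # retention level for the stop-loss premium E[(S - d)+]
    print("stop-loss, independent :", np.maximum(s_indep - d, 0).mean())
    print("stop-loss, comonotonic :", np.maximum(s_comon - d, 0).mean())
    ```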

  10. Older People’s Perceptions of Pedestrian Friendliness and Traffic Safety: An Experiment Using Computer-Simulated Walking Environments

    PubMed Central

    Kahlert, Daniela; Schlicht, Wolfgang

    2015-01-01

    Traffic safety and pedestrian friendliness are considered to be important conditions for older people’s motivation to walk through their environment. This study uses an experimental design with computer-simulated living environments to investigate the effect of micro-scale environmental factors (parking spaces and green verges with trees) on older people’s perceptions of both motivational antecedents (dependent variables). Seventy-four consecutively recruited older people were randomly assigned to watch one of two scenarios (independent variable) on a computer screen. The scenarios simulated a stroll on a sidewalk, as is ‘typical’ for a German city. In version ‘A,’ the subjects take a fictive walk on a sidewalk on which a number of cars are partially parked. In version ‘B’, cars are in parking spaces separated from the sidewalk by grass verges and trees. Subjects assessed their impressions of both dependent variables. A multivariate analysis of covariance showed that subjects’ ratings of perceived traffic safety and pedestrian friendliness were higher for version ‘B’ than for version ‘A’. Cohen’s d indicates medium (d = 0.73) and large (d = 1.23) effect sizes for traffic safety and pedestrian friendliness, respectively. The study suggests that elements of the built environment might affect motivational antecedents of older people’s walking behavior. PMID:26308026

  11. How do doctors choose where they want to work? - motives for choice of current workplace among physicians registered in Finland 1977-2006.

    PubMed

    Heikkilä, Teppo Juhani; Hyppölä, Harri; Aine, Tiina; Halila, Hannu; Vänskä, Jukka; Kujala, Santero; Virjo, Irma; Mattila, Kari

    2014-02-01

    Though there are a number of studies investigating the career choices of physicians, there are only a few concerning doctors' choices of workplace. A random sample (N=7758) of physicians licensed in Finland during the years 1977-2006 was surveyed. Respondents were asked: "To what extent did the following motives affect your choice of your current workplace?" Respondents were grouped based on several background variables. The groups were used as independent variables in univariate analysis of covariance (ANCOVA). The factors Good workplace, Career and professional development, Non-work related issues, Personal contacts and Salary were formed and used as dependent variables. There were significant differences between groups of physicians, especially in terms of gender, working sector and specialties. The association of Good workplace, Career and professional development, and Non-work related issues with the choice of a workplace significantly decreased with age. Female physicians were more concerned with Career and professional development and Non-work related issues. Since more females are entering the medical profession and there is an ongoing change of generations, health care organizations and policy makers need to develop a new philosophy in order to attract physicians. This will need to include more human-centric management and leadership, better possibilities for continuous professional development, and more personalized working arrangements depending on physicians' personal motives. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  12. Multilevel modeling and panel data analysis in educational research (Case study: National examination data senior high school in West Java)

    NASA Astrophysics Data System (ADS)

    Zulvia, Pepi; Kurnia, Anang; Soleh, Agus M.

    2017-03-01

    Individuals and their environments form a hierarchical structure consisting of units grouped at different levels. Hierarchical data structures are analyzed at several levels, with the lowest level nested in the highest level. This approach is commonly called multilevel modeling. Multilevel modeling is widely used in educational research, for example to model the average score of the National Examination (UN). In Indonesia, the UN for senior high school students is divided into natural science and social science streams. The purpose of this research is to develop multilevel and panel data modeling using linear mixed models on educational data. The first step is data exploration and identification of relationships between independent and dependent variables by checking correlation coefficients and variance inflation factors (VIF). We then use a simple model in which the highest level of the hierarchy (level 2) is the regency/city and the school is the lowest level (level 1). The best model was determined by comparing goodness-of-fit and checking assumptions from residual plots and predictions for each model. We find that, for both natural science and social science, the regression with random effects for regency/city and fixed effects for time, i.e., the multilevel model, performs better than the linear mixed model in explaining the variability of the dependent variable, the average UN score.
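
    A minimal random-intercept fit of the kind described, using statsmodels on synthetic data; the regency effects, school counts, and score model below are invented stand-ins for the UN data:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(9)
    # synthetic stand-in: schools (level 1) nested in regencies/cities (level 2)
    rows = []
    for r in range(20):                         # 20 regencies
        u_r = rng.normal(0, 3)                  # regency random effect
        for s in range(15):                     # 15 schools each
            for t in range(3):                  # 3 exam years
                rows.append({"regency": r, "year": t,
                             "score": 70 + u_r + 1.5 * t + rng.normal(0, 5)})
    df = pd.DataFrame(rows)

    # random intercept for regency/city, fixed effect of time
    model = sm.MixedLM.from_formula("score ~ year", groups="regency", data=df)
    print(model.fit().summary())
    ```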

  13. Spatial analysis of macro-level bicycle crashes using the class of conditional autoregressive models.

    PubMed

    Saha, Dibakar; Alluri, Priyanka; Gan, Albert; Wu, Wanyang

    2018-02-21

    The objective of this study was to investigate the relationship between bicycle crash frequency and its contributing factors at the census block group level in Florida, USA. Crashes aggregated over census block groups tend to be clustered (i.e., spatially dependent) rather than randomly distributed. To account for the effect of spatial dependence across the census block groups, the class of conditional autoregressive (CAR) models was employed within a hierarchical Bayesian framework. Based on four years (2011-2014) of crash data, total and fatal-and-severe injury bicycle crash frequencies were modeled as a function of a large number of variables representing demographic and socio-economic characteristics, roadway infrastructure and traffic characteristics, and bicycle activity characteristics. This study explored and compared the performance of two CAR models, namely the Besag model and the Leroux model, in crash prediction. The Besag models, which differ from the Leroux models in how spatial autocorrelation is specified, were found to fit the data better. A 95% Bayesian credible interval was used to identify the variables that had a credible impact on bicycle crashes. A total of 21 variables were found to be credible in the total crash model, while 18 variables were found to be credible in the fatal-and-severe injury crash model. Population, daily vehicle miles traveled, age cohorts, household automobile ownership, density of urban roads by functional class, bicycle trip miles, and bicycle trip intensity had positive effects in both the total and fatal-and-severe crash models. Educational attainment variables, truck percentage, and density of rural roads by functional class were found to be negatively associated with both total and fatal-and-severe bicycle crash frequencies. Published by Elsevier Ltd.

  14. Variance approach for multi-objective linear programming with fuzzy random of objective function coefficients

    NASA Astrophysics Data System (ADS)

    Indarsih, Indrati, Ch. Rini

    2016-02-01

    In this paper, we define the variance of fuzzy random variables through alpha levels. We present a theorem establishing that the variance of a fuzzy random variable is a fuzzy number. We consider a multi-objective linear programming (MOLP) problem whose objective function coefficients are fuzzy random variables and solve it by a variance approach. The approach transforms the MOLP with fuzzy random objective function coefficients into an MOLP with fuzzy objective function coefficients. By the weighting method, we obtain a linear program with fuzzy coefficients, which we solve with the simplex method for fuzzy linear programming.
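
    A sketch of the weighted-sum step only: once the fuzzy-random objective coefficients have been reduced to point values (which is where the paper's variance approach and alpha levels do the real work), the scalarized MOLP is an ordinary linear program. The coefficients and constraints here are illustrative:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # two objectives with (defuzzified) point-valued coefficients; the
    # fuzzy-random layer is reduced to crisp numbers for illustration
    c1 = np.array([-3.0, -1.0])    # maximize 3*x1 + x2  -> minimize negative
    c2 = np.array([-1.0, -2.0])    # maximize x1 + 2*x2
    A = np.array([[1.0, 1.0], [2.0, 1.0]])
    b = np.array([10.0, 15.0])

    for w in (0.2, 0.5, 0.8):
        c = w * c1 + (1 - w) * c2  # weighted-sum scalarization of the MOLP
        res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
        print(f"w = {w:.1f}: x = {np.round(res.x, 2)}, objective = {-res.fun:.2f}")
    ```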

  15. Mid- and Long-Term Efficacy of Non-Invasive Ventilation in Obesity Hypoventilation Syndrome: The Pickwick's Study.

    PubMed

    López-Jiménez, María José; Masa, Juan F; Corral, Jaime; Terán, Joaquín; Ordaz, Estrella; Troncoso, Maria F; González-Mangado, Nicolás; González, Mónica; Lopez-Martínez, Soledad; De Lucas, Pilar; Marín, José M; Martí, Sergi; Díaz-Cambriles, Trinidad; Díaz-de-Atauri, Josefa; Chiner, Eusebi; Aizpuru, Felipe; Egea, Carlos; Romero, Auxiliadora; Benítez, José M; Sánchez-Gómez, Jesús; Golpe, Rafael; Santiago-Recuerda, Ana; Gómez, Silvia; Barbe, Ferrán; Bengoa, Mónica

    2016-03-01

    The Pickwick project was a prospective, randomized and controlled study addressing obesity hypoventilation syndrome (OHS), a growing problem in developed countries. OHS patients were divided according to apnea-hypopnea index (AHI) ≥30 or <30 determined by polysomnography. The group with AHI≥30 was randomized to intervention with lifestyle changes, noninvasive ventilation (NIV) or continuous positive airway pressure (CPAP); the group with AHI<30 received NIV or lifestyle changes. The aim of the study was to evaluate the efficacy of NIV, CPAP and lifestyle changes (control) in the medium- and long-term management of patients with OHS. The primary variables were PaCO2 and days of hospitalization, and the operating variables were the percentage of dropouts for medical reasons and mortality. Secondary medium-term objectives were: (i) to evaluate clinical-functional effectiveness on quality of life and on echocardiographic and polysomnographic variables; (ii) to investigate the importance of apneic events and leptin in the pathogenesis of daytime alveolar hypoventilation and their change under the different treatments; (iii) to investigate whether metabolic, biochemical and vascular endothelial dysfunction disorders depend on the presence of apneas and hypopneas; and (iv) to assess changes in inflammatory markers and endothelial damage according to treatment. Secondary long-term objectives were to evaluate: (i) clinical and functional effectiveness and quality of life with NIV and CPAP; (ii) changes in leptin, inflammatory markers and endothelial damage according to treatment; (iii) changes in pulmonary hypertension and other echocardiographic variables, as well as blood pressure and the incidence of cardiovascular events; and (iv) the dropout rate and mortality. Copyright © 2015 SEPAR. Published by Elsevier Espana. All rights reserved.

  16. Sampling design for spatially distributed hydrogeologic and environmental processes

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1992-01-01

    A methodology for the design of sampling networks over space is proposed. The methodology is based on spatial random field representations of nonhomogeneous natural processes, and on optimal spatial estimation techniques. One of the most important results of random field theory for the physical sciences is its rationalization of correlations in the spatial variability of natural processes. This correlation is extremely important both for interpreting spatially distributed observations and for predictive performance. The extent of site sampling and the types of data to be collected will depend on the relationship of subsurface variability to predictive uncertainty. While hypothesis formulation and initial identification of spatial variability characteristics are based on scientific understanding (such as knowledge of the physics of the underlying phenomena, geological interpretations, intuition and experience), the support offered by field data is statistically modelled. This model is not limited by the geometric nature of sampling and covers a wide range of subsurface uncertainties. A factorization scheme of the sampling error variance is derived, which possesses certain attractive properties allowing significant savings in computations. By means of this scheme, a practical sampling design procedure providing suitable indices of the sampling error variance is established. These indices can be used by way of multiobjective decision criteria to obtain the best sampling strategy. Neither the actual implementation of the in-situ sampling nor the solution of the large spatial estimation systems of equations is necessary. The required values of the accuracy parameters involved in the network design are derived using reference charts (readily available for various combinations of data configurations and spatial variability parameters) and certain simple yet accurate analytical formulas. Insight is gained by applying the proposed sampling procedure to realistic examples related to sampling problems in two dimensions. © 1992.

  17. Ising Critical Behavior of Inhomogeneous Curie-Weiss Models and Annealed Random Graphs

    NASA Astrophysics Data System (ADS)

    Dommers, Sander; Giardinà, Cristian; Giberti, Claudio; van der Hofstad, Remco; Prioriello, Maria Luisa

    2016-11-01

    We study the critical behavior of inhomogeneous versions of the Curie-Weiss model, where the coupling constant $J_{ij}(\beta)$ for the edge $ij$ on the complete graph is given by $J_{ij}(\beta) = \beta w_i w_j / \sum_{k \in [N]} w_k$. We call the product form of these couplings the rank-1 inhomogeneous Curie-Weiss model. This model also arises (with inverse temperature $\beta$ replaced by $\sinh(\beta)$) from the annealed Ising model on the generalized random graph. We assume that the vertex weights $(w_i)_{i \in [N]}$ are regular, in the sense that their empirical distribution converges and the second moment converges as well. We identify the critical temperatures and exponents for these models, as well as a non-classical limit theorem for the total spin at the critical point. These depend sensitively on the number of finite moments of the weight distribution. When the fourth moment of the weight distribution converges, the critical behavior is the same as in the (homogeneous) Curie-Weiss model, so that the inhomogeneity is weak. When the fourth moment of the weights converges to infinity, and the weights satisfy an asymptotic power law with exponent $\tau \in (3,5)$, the critical exponents depend sensitively on $\tau$. In addition, at criticality, the total spin $S_N$ satisfies that $S_N/N^{(\tau-2)/(\tau-1)}$ converges in law to some limiting random variable whose distribution we explicitly characterize.

  18. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

    USGS Publications Warehouse

    Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.

    2015-01-01

    Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.

  19. Perturbed effects at radiation physics

    NASA Astrophysics Data System (ADS)

    Külahcı, Fatih; Şen, Zekâi

    2013-09-01

    Perturbation methodology is applied in order to assess the behavior of the linear attenuation coefficient, mass attenuation coefficient and cross-section with random components in the basic variables, such as the radiation amounts frequently used in radiation physics and chemistry. Additionally, the layer attenuation coefficient (LAC) and perturbed LAC (PLAC) are proposed for different contact materials. Perturbation methodology provides the opportunity to obtain results with random deviations from the average behavior of each variable that enters the whole mathematical expression. The basic photon intensity variation expression, the inverse exponential power law (the Beer-Lambert law), is adopted for the exposition of the perturbation method. Perturbed results are presented not only in terms of the mean but also the standard deviation and the correlation coefficients. Such perturbation expressions allow one to assess small random variability in the basic variables.
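
    A Monte Carlo rendition of the same idea under assumed Gaussian fluctuations of the attenuation coefficient; the mean coefficient, coefficient of variation, and thickness below are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    def perturbed_beer_lambert(i0=1.0, mu_mean=0.2, mu_cv=0.1, x=5.0, n=100000):
        """Let the linear attenuation coefficient fluctuate randomly about its
        mean and report the mean, standard deviation, and correlation of the
        transmitted intensity with the coefficient."""
        mu = rng.normal(mu_mean, mu_cv * mu_mean, size=n)   # perturbed coefficient
        i = i0 * np.exp(-mu * x)                            # Beer-Lambert law
        return i.mean(), i.std(), np.corrcoef(mu, i)[0, 1]

    m, s, r = perturbed_beer_lambert()
    print(f"mean I = {m:.4f}, std I = {s:.4f}, corr(mu, I) = {r:.3f}")
    ```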

  20. State Estimation Using Dependent Evidence Fusion: Application to Acoustic Resonance-Based Liquid Level Measurement.

    PubMed

    Xu, Xiaobin; Li, Zhenghui; Li, Guo; Zhou, Zhe

    2017-04-21

    Estimating the state of a dynamic system from noisy sensor measurements is a common problem in sensor methods and applications. Most state estimation methods assume that measurement noise and state perturbations can be modeled as random variables with known statistical properties. However, in some practical applications engineers can only obtain the range of the noises, not their precise statistical distributions. Hence, in the framework of Dempster-Shafer (DS) evidence theory, a novel state estimation method is presented that fuses dependent evidence generated from the state equation, the observation equation and the actual observations of the system states under bounded noises. It can be implemented iteratively to provide state estimates computed from the fusion results at every time step. Finally, the proposed method is applied to a low-frequency acoustic resonance level gauge to obtain high-accuracy measurement results.
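
    For background, the classic Dempster rule for combining two independent mass functions looks as follows; the paper's contribution is precisely a fusion rule for dependent evidence, which this sketch does not implement, and the liquid-level frame and mass values are invented:

    ```python
    from itertools import product

    def dempster_combine(m1, m2):
        """Dempster's rule of combination for two mass functions over the same
        frame of discernment; focal elements are frozensets of hypotheses."""
        combined, conflict = {}, 0.0
        for (a, p), (b, q) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:
                conflict += p * q   # mass assigned to the empty set
        if conflict >= 1.0:
            raise ValueError("total conflict; evidence cannot be combined")
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    # two pieces of evidence about a liquid level being low/mid/high
    m1 = {frozenset({"low"}): 0.6, frozenset({"low", "mid"}): 0.3,
          frozenset({"low", "mid", "high"}): 0.1}
    m2 = {frozenset({"mid"}): 0.5, frozenset({"low", "mid"}): 0.4,
          frozenset({"low", "mid", "high"}): 0.1}
    print(dempster_combine(m1, m2))
    ```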

  1. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    EPA Science Inventory

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  2. Point process analyses of variations in smoking rate by setting, mood, gender, and dependence

    PubMed Central

    Shiffman, Saul; Rathbun, Stephen L.

    2010-01-01

    The immediate emotional and situational antecedents of ad libitum smoking are still not well understood. We re-analyzed data from Ecological Momentary Assessment using novel point-process analyses, to assess how craving, mood, and social setting influence smoking rate, as well as assessing the moderating effects of gender and nicotine dependence. 304 smokers recorded craving, mood, and social setting using electronic diaries when smoking and at random nonsmoking times over 16 days of smoking. Point-process analysis, which makes use of the known random sampling scheme for momentary variables, examined main effects of setting and interactions with gender and dependence. Increased craving was associated with higher rates of smoking, particularly among women. Negative affect was not associated with smoking rate, even in interaction with arousal, but restlessness was associated with substantially higher smoking rates. Women's smoking tended to be less affected by negative affect. Nicotine dependence had little moderating effect on situational influences. Smoking rates were higher when smokers were alone or with others smoking, and smoking restrictions reduced smoking rates. However, the presence of others smoking undermined the effects of restrictions. The more sensitive point-process analyses confirmed earlier findings, including the surprising conclusion that negative affect by itself was not related to smoking rates. Contrary to hypothesis, men's and not women's smoking was influenced by negative affect. Both smoking restrictions and the presence of others who are not smoking suppress smoking, but others’ smoking undermines the effects of restrictions. Point-process analyses of EMA data can bring out even small influences on smoking rate. PMID:21480683

  3. [Discriminating power of socio-demographic and psychological variables on addictive use of cellular phones among middle school students].

    PubMed

    Lee, Haejung; Kim, Myoung Soo; Son, Hyun Kyung; Ahn, Sukhee; Kim, Jung Soon; Kim, Young Hae

    2007-10-01

    The purpose of this study was to examine the degree of cellular phone usage among middle school students and to identify discriminating factors for addictive use of cellular phones among sociodemographic and psychological variables. From 123 middle schools in Busan, potential participants were identified through stratified random sampling, and 747 middle school students participated in the study. The data were collected from December 1, 2004 to December 30, 2004. Descriptive and discriminant analyses were used. Fifty-seven percent of the participants were male and 89.7% used cellular phones at school. The participants were grouped into three groups depending on their level of cellular phone usage: addicted (n=117), dependent (n=418), and non-addicted (n=212). Within the three groups, two functions were produced and only one was significant, discriminating the addiction group from the non-addiction group. An additional discriminant analysis with only two groups produced one function that classified 81.2% of the participants correctly into the two groups. Impulsiveness, anxiety, and stress were significant discriminating factors. Based on the findings of this study, developing intervention programs focusing on impulsiveness, anxiety and stress to reduce possible addictive use of cellular phones is suggested.

  4. How Far Is Quasar UV/Optical Variability from a Damped Random Walk at Low Frequency?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo Hengxiao; Wang Junxian; Cai Zhenyi

    Studies have shown that UV/optical light curves of quasars can be described using the prevalent damped random walk (DRW) model, also known as the Ornstein-Uhlenbeck process. A white noise power spectral density (PSD) is expected at low frequency in this model; however, a direct observational constraint on the low-frequency PSD slope is difficult due to the limited lengths of the light curves available. Meanwhile, quasars show scatter in their DRW parameters that is too large to be attributed to uncertainties in the measurements and dependence on the variation of known physical factors. In this work we present simulations showing that, if the low-frequency PSD deviates from the DRW, red noise leakage can naturally produce large scatter in the variation parameters measured from simulated light curves. The steeper the low-frequency PSD slope, the larger the scatter we expect. Based on observations of SDSS Stripe 82 quasars, we find that the low-frequency PSD slope should be no steeper than −1.3. The actual slope could be flatter, which consequently requires that quasar variability be influenced by other unknown factors. We speculate that the magnetic field and/or metallicity could be such additional factors.

  5. Stochastic transfer of polarized radiation in finite cloudy atmospheric media with reflective boundaries

    NASA Astrophysics Data System (ADS)

    Sallah, M.

    2014-03-01

    The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is considered. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution in order to exclude probable negative values of the optical variable. The Pomraning-Eddington approximation is used first to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specularly reflecting boundaries and an angular-dependent external flux incident upon the medium from one side, with no flux from the other side. For the sake of comparison, two different forms of the weight function, which is introduced to force the boundary conditions to be fulfilled, are used. Numerical results for the average reflectivity and average transmissivity are obtained for both the Gaussian and modified Gaussian probability density functions at different degrees of polarization.

  6. A predictability study of Lorenz's 28-variable model as a dynamical system

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, V.

    1993-01-01

    The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.

  7. Modeling of Internet Influence on Group Emotion

    NASA Astrophysics Data System (ADS)

    Czaplicka, Agnieszka; Hołyst, Janusz A.

    Long-range interactions are introduced to a two-dimensional model of agents with time-dependent internal variables ei = 0, ±1 corresponding to valencies of agent emotions. Effects of spontaneous emotion emergence and emotional relaxation processes are taken into account. The valence of agent i depends on the valencies of its four nearest neighbors, but it is also influenced by long-range interactions corresponding to social relations developed, for example, by Internet contacts with a randomly chosen community. Two types of such interactions are considered. In the first model, the community's emotional influence depends only on the sign of its temporary emotion. When the coupling parameter approaches a critical value, a phase transition takes place, and as a result, for larger coupling constants the mean group emotion of all agents is nonzero over long time periods. In the second model, the community influence is proportional to the magnitude of the community's average emotion. The ordered emotional phase is observed here for a narrow set of system parameters.

  8. MODELING THE TIME VARIABILITY OF SDSS STRIPE 82 QUASARS AS A DAMPED RANDOM WALK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacLeod, C. L.; Ivezic, Z.; Bullock, E.

    2010-10-01

    We model the time variability of ~9000 spectroscopically confirmed quasars in SDSS Stripe 82 as a damped random walk (DRW). Using 2.7 million photometric measurements collected over 10 yr, we confirm the results of Kelly et al. and Kozlowski et al. that this model can explain quasar light curves at an impressive fidelity level (0.01-0.02 mag). The DRW model provides a simple, fast (O(N) for N data points), and powerful statistical description of quasar light curves by a characteristic timescale (τ) and an asymptotic rms variability on long timescales (SF∞). We searched for correlations between these two variability parameters and physical parameters such as luminosity, black hole mass, and rest-frame wavelength. Our analysis shows SF∞ to increase with decreasing luminosity and rest-frame wavelength as observed previously, and without a correlation with redshift. We find a correlation between SF∞ and black hole mass with a power-law index of 0.18 ± 0.03, independent of the anti-correlation with luminosity. We find that τ increases with increasing wavelength with a power-law index of 0.17, remains nearly constant with redshift and luminosity, and increases with increasing black hole mass with a power-law index of 0.21 ± 0.07. The amplitude of variability is anti-correlated with the Eddington ratio, which suggests a scenario where optical fluctuations are tied to variations in the accretion rate. However, we find an additional dependence on luminosity and/or black hole mass that cannot be explained by the trend with Eddington ratio. The radio-loudest quasars have systematically larger variability amplitudes by about 30%, when corrected for the other observed trends, while the distribution of their characteristic timescale is indistinguishable from that of the full sample. We do not detect any statistically robust differences in the characteristic timescale and variability amplitude between the full sample and the small subsample of quasars detected by ROSAT. Our results provide a simple quantitative framework for generating mock quasar light curves, such as those currently used in LSST image simulations.
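
    Mock DRW light curves of the kind the last sentence mentions can be generated exactly on an irregular time grid with an AR(1) update; τ, SF∞, the mean magnitude, and the sampling below are illustrative values, not fits from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def simulate_drw(t, tau=200.0, sf_inf=0.2, mean_mag=19.0):
        """Exact damped-random-walk (Ornstein-Uhlenbeck) light curve on an
        arbitrary time grid t (days), parametrized by the characteristic
        timescale tau and asymptotic rms variability SF_inf."""
        sigma2 = sf_inf**2 / 2.0                     # stationary variance
        mag = np.empty(len(t))
        mag[0] = mean_mag + rng.normal(0.0, np.sqrt(sigma2))
        for i in range(1, len(t)):
            rho = np.exp(-(t[i] - t[i - 1]) / tau)   # exact AR(1) update
            mag[i] = mean_mag + rho * (mag[i - 1] - mean_mag) \
                     + rng.normal(0.0, np.sqrt(sigma2 * (1.0 - rho**2)))
        return mag

    t = np.sort(rng.uniform(0, 3650, size=300))      # irregular 10-yr sampling
    lc = simulate_drw(t)
    print("rms variability:", lc.std())
    ```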

  9. Anger, frustration, boredom and the Department of Motor Vehicles: Can negative emotions impede organ donor registration?

    PubMed

    Siegel, Jason T; Tan, Cara N; Rosenberg, Benjamin D; Navarro, Mario A; Thomson, Andrew L; Lyrintzis, Elena A; Alvaro, Eusebio M; Jones, Natalie D

    2016-03-01

    The IIFF Model (Information, Immediate and Complete Registration Mechanism, Focused Engagement, Favorable Activation) offers a checklist of considerations for interventions seeking to influence organ donor registration behavior. One aspect of the model, favorable activation, recommends considering the emotional and motivational state of a potential donor registrant. Given that most donor registrations occur at the Department of Motor Vehicles (DMV), we considered whether emotions experienced while at the DMV could influence registration rates. The current research effort investigated the emotions people experience while visiting the DMV, explored whether these emotions are associated with donor registration intentions, and experimentally assessed whether DMV experiences influence donor registration. Three studies were conducted through Amazon's Mechanical Turk. In Study 1, we randomly assigned participants to either recall a prior DMV experience or to a comparison condition. Emotions associated with the recalled experiences were the dependent variable. Study 2 assessed the correlations between nine different emotions and donor registration intentions. Study 3 randomly assigned participants to recall a prior frustrating DMV experience or to a comparison condition. Intention to register to donate was the dependent variable. Study 1 found that recalling a prior DMV experience was associated with more negative and less positive emotions than the comparison condition. Study 2 found that increased levels of negative emotion could be problematic, as negative emotions were associated with decreased donor intentions. Study 3 found that recalling a frustrating DMV experience resulted in significantly lower intentions to register as an organ donor (vs. a control condition). Although not all DMV experiences are negative, these data indicated a relationship between the DMV and negative emotions; an association between negative emotions and lower donor registration intentions; and, a causal relationship between negative DMV experiences and decreased registration intentions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Effects of racial and ethnic group and health literacy on responses to genomic risk information in a medically underserved population.

    PubMed

    Kaphingst, Kimberly A; Stafford, Jewel D; McGowan, Lucy D'Agostino; Seo, Joann; Lachance, Christina R; Goodman, Melody S

    2015-02-01

    Few studies have examined how individuals respond to genomic risk information for common, chronic diseases. This randomized study examined differences in responses by type of genomic information (genetic test/family history) and disease condition (diabetes/heart disease), and by race/ethnicity in a medically underserved population. 1,057 English-speaking adults completed a survey containing 1 of 4 vignettes (2-by-2 randomized design). Differences in dependent variables (i.e., interest in receiving genomic assessment, discussing with doctor or family, changing health habits) by experimental condition and race/ethnicity were examined using chi-squared tests and multivariable regression analysis. No significant differences were found in dependent variables by type of genomic information or disease condition. In multivariable models, Hispanics were more interested in receiving a genomic assessment than Whites (OR = 1.93; p < .0001); respondents with marginal (OR = 1.54; p = .005) or limited (OR = 1.85; p = .009) health literacy had greater interest than those with adequate health literacy. Blacks (OR = 1.78; p = .001) and Hispanics (OR = 1.85; p = .001) had greater interest in discussing information with family than Whites. Non-Hispanic Blacks (OR = 1.45; p = .04) had greater interest in discussing genomic information with a doctor than Whites. Blacks (β = -0.41; p < .001) and Hispanics (β = -0.25; p = .033) intended to change fewer health habits than Whites; health literacy was negatively associated with number of health habits participants intended to change. Findings suggest that race/ethnicity may affect responses to genomic risk information. Additional research could examine how cognitive representations of this information differ across racial/ethnic groups. Health literacy is also critical to consider in developing approaches to communicating genomic information.

  11. Territory surveillance and prey management: Wolves keep track of space and time.

    PubMed

    Schlägel, Ulrike E; Merrill, Evelyn H; Lewis, Mark A

    2017-10-01

    Identifying behavioral mechanisms that underlie observed movement patterns is difficult when animals employ sophisticated cognitive-based strategies. Such strategies may arise when timing of return visits is important, for instance to allow for resource renewal or territorial patrolling. We fitted spatially explicit random-walk models to GPS movement data of six wolves (Canis lupus; Linnaeus, 1758) from Alberta, Canada, to investigate the importance of the following: (1) territorial surveillance likely related to renewal of scent marks along territorial edges, to reduce intraspecific risk among packs, and (2) delay in return to recently hunted areas, which may be related to anti-predator responses of prey under varying prey densities. The movement models incorporated the spatiotemporal variable "time since last visit," which acts as a wolf's memory index of its travel history and is integrated into the movement decision along with its position in relation to territory boundaries and information on local prey densities. We used a model selection framework to test hypotheses about the combined importance of these variables in wolf movement strategies. Time-dependent movement for territory surveillance was supported by all wolf movement tracks. Wolves generally avoided territory edges, but this avoidance was reduced as time since last visit increased. Time-dependent prey management was weak except in one wolf. This wolf selected locations with longer time since last visit and lower prey density, which led to a longer delay in revisiting high prey density sites. Our study shows that we can use spatially explicit random walks to identify behavioral strategies that merge environmental information and explicit spatiotemporal information on past movements (i.e., "when" and "where") to make movement decisions. The approach allows us to better understand cognition-based movement in relation to dynamic environments and resources.

  12. MHC-correlated mate choice in humans: a review.

    PubMed

    Havlicek, Jan; Roberts, S Craig

    2009-05-01

    Extremely high variability in genes of the major histocompatibility complex (MHC) in vertebrates is assumed to be a consequence of frequency-dependent parasite-driven selection and mate preferences based on promotion of offspring heterozygosity at MHC, or potentially, genome-wide inbreeding avoidance. Where effects have been found, mate choice studies on rodents and other species usually find preference for MHC-dissimilarity in potential partners. Here we critically review studies on MHC-associated mate choice in humans. These are based on three broadly different aspects: (1) odor preferences, (2) facial preferences and (3) actual mate choice surveys. As in animal studies, most odor-based studies demonstrate disassortative preferences, although there is variation in the strength and nature of the effects. In contrast, facial attractiveness research indicates a preference for MHC-similar individuals. Results concerning MHC in actual couples show a bias towards similarity in one study, dissimilarity in two studies and random distribution in several other studies. These vary greatly in sample size and heterogeneity of the sample population, both of which may significantly bias the results. This pattern of mixed results across studies may reflect context-dependent and/or life history sensitive preference expression, in addition to higher level effects arising out of population differences in genetic heterogeneity or cultural and ethnic restrictions on random mating patterns. Factors of special relevance in terms of individual preferences are reproductive status and long- vs. short-term mating context. We discuss the idea that olfactory and visual channels may work in a complementary way (i.e. odor preference for MHC-dissimilarity and visual preference for MHC-similarity) to achieve an optimal level of genetic variability, methodological issues and interesting avenues for further research.

  13. Study the Cyclic Plasticity Behavior of 508 LAS under Constant, Variable and Grid-Load-Following Loading Cycles for Fatigue Evaluation of PWR Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanty, Subhasish; Barua, Bipul; Soppet, William K.

    This report provides an update of an earlier assessment of environmentally assisted fatigue for components in light water reactors. This report is a deliverable in September 2016 under the work package for environmentally assisted fatigue under DOE’s Light Water Reactor Sustainability program. In an April 2016 report, we presented a detailed thermal-mechanical stress analysis model for simulating the stress-strain state of a reactor pressure vessel and its nozzles under grid-load-following conditions. In this report, we provide stress-controlled fatigue test data for 508 LAS base metal alloy under different loading amplitudes (constant, variable, and random grid-load-following) and environmental conditions (in air or pressurized water reactor coolant water at 300°C). Also presented is a cyclic plasticity-based analytical model that can simultaneously capture the amplitude and time dependency of the component behavior under fatigue loading. Results related to both amplitude-dependent and amplitude-independent parameters are presented. The validation results for the analytical/mechanistic model are discussed. This report provides guidance for estimating time-dependent, amplitude-independent parameters related to material behavior under different service conditions. The developed mechanistic models and the reported material parameters can be used to conduct more accurate fatigue and ratcheting evaluation of reactor components.

  14. Random function representation of stationary stochastic vector processes for probability density evolution analysis of wind-induced structures

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui

    2018-06-01

    This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. On this basis, satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured with just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach enables dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulence wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structure have been conducted to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach is robust.
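
To make the OSR baseline concrete, here is a minimal sketch of the classical spectral representation the paper reduces: the simulated process is a sum of cosines with one independent random phase per frequency, which is exactly the high-dimensional randomness that the random-function constraint collapses to two elementary variables. The function names and toy PSD are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Classical spectral representation of a stationary Gaussian process:
# X(t) = sum_k sqrt(2*S(f_k)*df) * cos(2*pi*f_k*t + phi_k),
# with independent uniform phases phi_k (the N random variables).
def simulate_osr(psd, f_max, n_terms, t, rng):
    freqs = np.linspace(f_max / n_terms, f_max, n_terms)
    df = freqs[1] - freqs[0]
    phases = rng.uniform(0.0, 2.0 * np.pi, n_terms)    # high-dimensional randomness
    amps = np.sqrt(2.0 * psd(freqs) * df)
    return (amps[:, None] * np.cos(2 * np.pi * freqs[:, None] * t + phases[:, None])).sum(0)

rng = np.random.default_rng(0)
t = np.arange(0.0, 60.0, 0.01)
psd = lambda f: 1.0 / (1.0 + (2.0 * np.pi * f) ** 2)   # toy one-sided PSD
x = simulate_osr(psd, f_max=5.0, n_terms=512, t=t, rng=rng)
```

The paper's contribution is to replace the 512 independent phases above with deterministic functions of two elementary random variables, so that a small representative point set covers the probability space.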

  15. Quantifying variability in delta experiments

    NASA Astrophysics Data System (ADS)

    Miller, K. L.; Berg, S. R.; McElroy, B. J.

    2017-12-01

    Large populations of people and wildlife make their homes on river deltas, so it is important to be able to make useful and accurate predictions of how these landforms will change over time. However, making predictions can be a challenge due to the inherent variability of the natural system. Furthermore, when we extrapolate results from the laboratory to the field setting, we bring with them the random and systematic errors of the experiment. We seek to understand both the intrinsic and experimental variability of river delta systems to help better inform predictions of how these landforms will evolve. We run exact replicates of experiments with steady sediment and water discharge and record delta evolution with overhead time-lapse imaging. We measure aspects of topset progradation and channel dynamics and compare these metrics of delta morphology between the 6 replicated experimental runs. We also use the data from all experimental runs collectively to build a large dataset from which to extract statistics of the system properties. We find that although natural variability exists, the processes in the experiments must have outcomes that, after some time, no longer depend on their initial conditions. Applying these results to the field scale will aid in our ability to make forecasts of how these landforms will progress.

  16. A respiratory alert model for the Shenandoah Valley, Virginia, USA

    NASA Astrophysics Data System (ADS)

    Hondula, David M.; Davis, Robert E.; Knight, David B.; Sitka, Luke J.; Enfield, Kyle; Gawtry, Stephen B.; Stenger, Phillip J.; Deaton, Michael L.; Normile, Caroline P.; Lee, Temple R.

    2013-01-01

    Respiratory morbidity (particularly COPD and asthma) can be influenced by short-term weather fluctuations that affect air quality and lung function. We developed a model to evaluate meteorological conditions associated with respiratory hospital admissions in the Shenandoah Valley of Virginia, USA. We generated ensembles of classification trees based on six years of respiratory-related hospital admissions (64,620 cases) and a suite of 83 potential environmental predictor variables. As our goal was to identify short-term weather linkages to high admission periods, the dependent variable was formulated as a binary classification of five-day moving average respiratory admission departures from the seasonal mean value. Accounting for seasonality removed the long-term apparent inverse relationship between temperature and admissions. We generated eight total models specific to the northern and southern portions of the valley for each season. All eight models demonstrate predictive skill (mean odds ratio = 3.635) when evaluated using a randomization procedure. The predictor variables selected by the ensembling algorithm vary across models, and both meteorological and air quality variables are included. In general, the models indicate complex linkages between respiratory health and environmental conditions that may be difficult to identify using more traditional approaches.

  17. Inequality, income, and poverty: comparative global evidence.

    PubMed

    Fosu, Augustin Kwasi

    2010-01-01

    Objectives. The study seeks to provide comparative global evidence on the role of income inequality, relative to income growth, in poverty reduction. Methods. An analysis-of-covariance model is estimated using a large global sample of 1980–2004 unbalanced panel data, with the headcount measure of poverty as the dependent variable, and the Gini coefficient and PPP-adjusted mean income as explanatory variables. Both random-effects and fixed-effects methods are employed in the estimation. Results. The responsiveness of poverty to income is a decreasing function of inequality, and the inequality elasticity of poverty is actually larger than the income elasticity of poverty. Furthermore, there is a large variation across regions (and countries) in the relative effects of inequality on poverty. Conclusion. Income distribution plays a more important role than might be traditionally acknowledged in poverty reduction, though this importance varies widely across regions and countries.

  18. A Review of Spectral Methods for Variable Amplitude Fatigue Prediction and New Results

    NASA Technical Reports Server (NTRS)

    Larsen, Curtis E.; Irvine, Tom

    2013-01-01

    A comprehensive review of the available methods for estimating fatigue damage from variable amplitude loading is presented. The dependence of fatigue damage accumulation on power spectral density (psd) is investigated for random processes relevant to real structures such as in offshore or aerospace applications. Beginning with the Rayleigh (or narrow band) approximation, attempts at improved approximations or corrections to the Rayleigh approximation are examined by comparison to rainflow analysis of time histories simulated from psd functions representative of simple theoretical and real world applications. Spectral methods investigated include corrections by Wirsching and Light, Ortiz and Chen, the Dirlik formula, and the Single-Moment method, among other more recent proposed methods. Good agreement is obtained between the spectral methods and the time-domain rainflow identification for most cases, with some limitations. Guidelines are given for using the several spectral methods to increase confidence in the damage estimate.
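
As a concrete starting point for the methods this review compares, here is a hedged sketch of the spectral moments of a PSD and the Rayleigh (narrow-band) damage-rate estimate built from them, which treats stress amplitudes as Rayleigh-distributed with scale sqrt(m0). The S-N constants and toy PSD are assumptions for illustration, not values from the paper.

```python
import numpy as np
from math import gamma

def spectral_moment(freqs, psd_vals, n):
    """n-th spectral moment m_n = integral of f^n * S(f) df."""
    return np.trapz(freqs ** n * psd_vals, freqs)

def rayleigh_damage_rate(freqs, psd_vals, b, C):
    """Narrow-band damage per unit time for an S-N curve N * S^b = C."""
    m0 = spectral_moment(freqs, psd_vals, 0)
    m2 = spectral_moment(freqs, psd_vals, 2)
    nu0 = np.sqrt(m2 / m0)                    # mean zero-upcrossing rate
    # Rayleigh amplitudes: E[S^b] = (sqrt(2*m0))^b * Gamma(1 + b/2)
    return nu0 * (np.sqrt(2.0 * m0)) ** b * gamma(1.0 + b / 2.0) / C

freqs = np.linspace(0.1, 50.0, 2000)                          # Hz
psd_vals = np.where((freqs > 5) & (freqs < 15), 1.0, 0.0)     # toy band-limited PSD
print(rayleigh_damage_rate(freqs, psd_vals, b=3.0, C=1e12))
```

The corrections the review surveys (Wirsching-Light, Ortiz-Chen, Dirlik, Single-Moment) adjust this narrow-band estimate for wide-band spectra, typically using higher moments such as m1 and m4.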

  19. Logit-normal mixed model for Indian Monsoon rainfall extremes

    NASA Astrophysics Data System (ADS)

    Dietz, L. R.; Chatterjee, S.

    2014-03-01

    Describing the nature and variability of Indian monsoon rainfall extremes is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Several GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data procured from the National Climatic Data Center. The logit-normal model was applied with fixed covariates of latitude, longitude, elevation, and daily minimum and maximum temperatures, and a random intercept by weather station. In general, the estimation methods concurred in their suggestion of a relationship between the El Niño Southern Oscillation (ENSO) and extreme rainfall variability estimates. This work provides a valuable starting point for extending GLMM to incorporate the intricate dependencies in extreme climate events.
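
A small simulation makes the logit-normal mixed model structure concrete: a binary extreme-rainfall indicator whose logit combines fixed covariate effects with a normal random intercept per weather station. All names and parameter values below are illustrative assumptions, not the fitted model.

```python
import numpy as np

rng = np.random.default_rng(42)
n_stations, n_days = 50, 365
station = np.repeat(np.arange(n_stations), n_days)     # station id per record

temp = rng.normal(30.0, 5.0, station.size)             # daily max temperature
elev = rng.normal(0.5, 0.2, n_stations)[station]       # station elevation (km)
u = rng.normal(0.0, 0.8, n_stations)                   # random intercepts u_j ~ N(0, s^2)

# Logit of the extreme-rainfall probability: fixed effects + random intercept
logit = -2.0 + 0.03 * temp + 0.5 * elev + u[station]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))      # extreme-rainfall indicator
```

The "logit-normal" name comes from the normal random intercept entering on the logit scale; the estimation algorithms the paper compares differ in how they integrate over u.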

  20. On nonstationarity and antipersistency in global temperature series

    NASA Astrophysics Data System (ADS)

    KäRner, O.

    2002-10-01

    Statistical analysis is carried out for satellite-based global daily tropospheric and stratospheric temperature anomaly and solar irradiance data sets. The behavior of the series appears to be nonstationary with stationary daily increments. Estimating long-range dependence between the increments reveals a remarkable difference between the two temperature series. The global average tropospheric temperature anomaly behaves similarly to the solar irradiance anomaly. Their daily increments show antipersistency for scales longer than 2 months. The property points to a cumulative negative feedback in the Earth climate system governing the tropospheric variability during the last 22 years. The result emphasizes a dominating role of solar irradiance variability in variations of the tropospheric temperature and gives no support to the theory of anthropogenic climate change. The global average stratospheric temperature anomaly proceeds like a 1-dim random walk at least up to 11 years, allowing good representation by means of autoregressive integrated moving average (ARIMA) models for the monthly series.
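
Antipersistency of increments corresponds to a Hurst exponent H below 0.5. As an illustration of one common way to estimate H (an aggregated-variance estimator, not necessarily the authors' method), consider:

```python
import numpy as np

def hurst_aggvar(increments, scales):
    """Aggregated-variance Hurst estimate: Var(block means) ~ m^(2H-2)."""
    variances = []
    for m in scales:
        n_blocks = increments.size // m
        blocks = increments[: n_blocks * m].reshape(n_blocks, m)
        variances.append(blocks.mean(axis=1).var())
    slope, _ = np.polyfit(np.log(scales), np.log(variances), 1)
    return 1.0 + slope / 2.0

rng = np.random.default_rng(1)
white = rng.normal(size=20000)                # uncorrelated increments: H = 0.5
print(hurst_aggvar(white, scales=[4, 8, 16, 32, 64, 128]))   # ~0.5
# Antipersistent increments, as reported for the tropospheric series,
# would give H < 0.5; a pure random walk has H = 0.5 increments.
```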

  1. Equity in specialist waiting times by socioeconomic groups: evidence from Spain.

    PubMed

    Abásolo, Ignacio; Negrín-Hernández, Miguel A; Pinilla, Jaime

    2014-04-01

    In countries with publicly financed health care systems, waiting time--rather than price--is the rationing mechanism for access to health care services. The normative statement underlying such a rationing device is that patients should wait according to need and irrespective of socioeconomic status or other non-need characteristics. The aim of this paper is to test empirically whether waiting times for publicly funded specialist care are independent of patients' socioeconomic status. Waiting times for specialist care can vary according to the type of medical specialty, the type of consultation (review or diagnosis) and the region where patients reside. In order to take into account such variability, we use Bayesian random parameter models to explain waiting times for specialist care in terms of need and non-need variables. We find that individuals with lower education and income levels wait significantly longer than their counterparts.

  2. Considerations of multiple imputation approaches for handling missing data in clinical trials.

    PubMed

    Quan, Hui; Qi, Li; Luo, Xiaodong; Darchy, Loic

    2018-07-01

    Missing data exist in all clinical trials, and the missing data issue is a serious one for the interpretability of trial results. There is no universally applicable solution for all missing data problems. Methods used for handling missing data depend on the circumstances, particularly the assumptions about the missing data mechanism. In recent years, if the missing at random mechanism cannot be assumed, conservative approaches such as the control-based and return-to-baseline multiple imputation approaches have been applied to deal with missing data. In this paper, we focus on the variability in data analysis of these approaches. As demonstrated by examples, the choice of variability can impact the conclusion of the analysis. Besides methods for continuous endpoints, we also discuss methods for binary and time-to-event endpoints, as well as considerations for non-inferiority assessment. Copyright © 2018. Published by Elsevier Inc.

  3. On generalisations of the log-Normal distribution by means of a new product definition in the Kapteyn process

    NASA Astrophysics Data System (ADS)

    Duarte Queirós, Sílvio M.

    2012-07-01

    We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q, a generalisation of the log-Normal distribution is obtained. Namely, the distribution has a heavier tail for small (when q<1) or large (when q>1) values of the variable under analysis. The usual log-Normal distribution is retrieved when q=1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution, as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance, are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.
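
For reference, Borges' q-product used in this entry (as defined in the cited Physica A paper; here [z]_+ denotes max(z, 0)) reduces to the ordinary product as q approaches 1:

```latex
x \otimes_q y \;\equiv\; \left[\, x^{1-q} + y^{1-q} - 1 \,\right]_{+}^{\frac{1}{1-q}},
\qquad
\lim_{q \to 1}\, x \otimes_q y \;=\; x\, y .
```

Replacing ordinary multiplication with the q-product in the Kapteyn process is what deforms the limiting log-Normal into the heavier- or lighter-tailed generalisation described above.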

  4. A double hit model for the distribution of time to AIDS onset

    NASA Astrophysics Data System (ADS)

    Chillale, Nagaraja Rao

    2013-09-01

    Incubation time is a key epidemiologic descriptor of an infectious disease. In the case of HIV infection this is a random variable and is probably the longest one. The probability distribution of incubation time is the major determinant of the relation between the incidence of HIV infection and its manifestation as AIDS. This is also one of the key factors used for accurate estimation of AIDS incidence in a region. The present article (i) briefly reviews the work done, points out uncertainties in the estimation of AIDS onset time, and stresses the need for its precise estimation; (ii) highlights some of the modelling features of the onset distribution, including the immune failure mechanism; and (iii) proposes a 'Double Hit' model for the distribution of time to AIDS onset in the cases of (a) independent and (b) dependent time variables of the two markers, and examines the applicability of a few standard probability models.
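
One natural formalization of the "Double Hit" idea, offered here as an assumption since the abstract does not spell it out: if onset occurs only once both markers have failed, the onset time is T = max(T1, T2), and its distribution function follows directly from the marker-time distributions:

```latex
F_T(t) \;=\; P\!\left(T_1 \le t,\; T_2 \le t\right) \;=\;
\begin{cases}
F_1(t)\, F_2(t) & \text{case (a): independent marker times,}\\[4pt]
C\!\left(F_1(t),\, F_2(t)\right) & \text{case (b): dependence captured by a copula } C.
\end{cases}
```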

  5. On the comparison of the strength of morphological integration across morphometric datasets.

    PubMed

    Adams, Dean C; Collyer, Michael L

    2016-11-01

    Evolutionary morphologists frequently wish to understand the extent to which organisms are integrated, and whether the strength of morphological integration among subsets of phenotypic variables differ among taxa or other groups. However, comparisons of the strength of integration across datasets are difficult, in part because the summary measures that characterize these patterns (RV coefficient and r PLS ) are dependent both on sample size and on the number of variables. As a solution to this issue, we propose a standardized test statistic (a z-score) for measuring the degree of morphological integration between sets of variables. The approach is based on a partial least squares analysis of trait covariation, and its permutation-based sampling distribution. Under the null hypothesis of a random association of variables, the method displays a constant expected value and confidence intervals for datasets of differing sample sizes and variable number, thereby providing a consistent measure of integration suitable for comparisons across datasets. A two-sample test is also proposed to statistically determine whether levels of integration differ between datasets, and an empirical example examining cranial shape integration in Mediterranean wall lizards illustrates its use. Some extensions of the procedure are also discussed. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
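
A hedged sketch of the permutation z-score idea described above, for the PLS correlation between two trait blocks (illustrative code in the spirit of the approach, not the authors' implementation):

```python
import numpy as np

def rpls(X, Y):
    """Correlation between the first pair of PLS scores of two blocks."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    U, s, Vt = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
    a, b = Xc @ U[:, 0], Yc @ Vt[0]
    return np.corrcoef(a, b)[0, 1]

def integration_z(X, Y, n_perm=999, seed=0):
    """Standardize the observed r_PLS against its permutation distribution."""
    rng = np.random.default_rng(seed)
    obs = rpls(X, Y)
    perm = np.array([rpls(X, Y[rng.permutation(len(Y))]) for _ in range(n_perm)])
    return (obs - perm.mean()) / perm.std(ddof=1)

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 6))
Y = 0.5 * X[:, :3] + rng.normal(size=(40, 3))   # integrated with X by construction
print(integration_z(X, Y))
```

Because the z-score is expressed relative to the null permutation distribution for that specific dataset, it remains comparable across datasets with different sample sizes and variable counts, which is the point of the proposed statistic.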

  6. A Unifying Probability Example.

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.

    2002-01-01

    Presents an example from probability and statistics that ties together several topics including the mean and variance of a discrete random variable, the binomial distribution and its particular mean and variance, the sum of independent random variables, the mean and variance of the sum, and the central limit theorem. Uses Excel to illustrate these…
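
A Python analogue of the kind of unifying example described (the article itself uses Excel): the sum of n independent Bernoulli(p) variables is Binomial(n, p) with mean np and variance np(1-p), and by the central limit theorem its standardized form is approximately normal.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 30, 0.4, 100_000
sums = rng.binomial(1, p, size=(reps, n)).sum(axis=1)   # sums of Bernoulli draws

print(sums.mean(), n * p)                    # ~12.0 vs 12.0 (binomial mean)
print(sums.var(), n * p * (1 - p))           # ~7.2 vs 7.2 (binomial variance)

z = (sums - n * p) / np.sqrt(n * p * (1 - p))
print(np.mean(np.abs(z) < 1.96))             # ~0.95, as the CLT predicts
```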

  7. Variable selection with random forest: Balancing stability, performance, and interpretation in ecological and environmental modeling

    EPA Science Inventory

    Random forest (RF) is popular in ecological and environmental modeling, in part, because of its insensitivity to correlated predictors and resistance to overfitting. Although variable selection has been proposed to improve both performance and interpretation of RF models, it is u...
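
Since the abstract above is truncated, the following is only an illustrative sketch of the usual starting point for RF variable selection: rank predictors by random-forest importance, then retain the leading ones. Names and data are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
# Only predictors 0 and 1 carry signal; the rest are noise
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=300)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
print(ranking[:4])    # the informative predictors 0 and 1 should rank first
```

Stability-oriented selection procedures typically repeat this ranking over resamples and keep only predictors that rank highly consistently.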

  8. Correlated resistive/capacitive state variability in solid TiO2 based memory devices

    NASA Astrophysics Data System (ADS)

    Li, Qingjiang; Salaoru, Iulia; Khiat, Ali; Xu, Hui; Prodromakis, Themistoklis

    2017-05-01

    In this work, we experimentally demonstrated correlated resistive/capacitive switching and state variability in practical TiO2-based memory devices. Based on the filamentary functional mechanism, we argue that the impedance state variability stems from randomly distributed defects inside the oxide bulk. Finally, our assumption was verified via a current percolation circuit model, taking into account the random distribution of defects and the coexistence of memristor and memcapacitor behavior.

  9. The Statistical Fermi Paradox

    NASA Astrophysics Data System (ADS)

    Maccone, C.

    In this paper is provided the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original and by now too simplistic Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for "Statistical Equation for Habitables". The proof in this paper is based on the Central Limit Theorem (CLT) of Statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log-normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH it is shown that the (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D. The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in 2008. 4. A practical example is then given of how the SEH works numerically. Each of the ten random variables is uniformly distributed around its own mean value as given by Dole (1964) and a standard deviation of 10% is assumed. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ±200 million, and the average distance between any two nearby habitable planets should be about 88 light years ±40 light years. 5. The SEH results are matched against the results of the Statistical Drake Equation from reference 4. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). The average distance between any two nearby habitable planets is much smaller than the average distance between any two neighbouring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an ET average distance about 20 times larger than the average distance between any pair of adjacent habitable planets. 6. Finally, a statistical model of the Fermi Paradox is derived by applying the above results to the coral expansion model of Galactic colonization. The symbolic manipulator "Macsyma" is used to solve these difficult equations. A new random variable Tcol, representing the time needed to colonize a new planet, is introduced, which follows the lognormal distribution. Then the new quotient random variable Tcol/D is studied and its probability density function is derived by Macsyma. Finally, a linear transformation of random variables yields the overall time TGalaxy needed to colonize the whole Galaxy. We believe that our mathematical work in deriving this STATISTICAL Fermi Paradox is highly innovative and fruitful for the future.
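
A quick Monte Carlo sketch of the SEH's central step: the product of many independent, arbitrarily distributed positive factors is approximately log-normal, since the log of the product is a sum covered by the CLT. The factor means below are arbitrary placeholders, not Dole's values.

```python
import numpy as np

rng = np.random.default_rng(7)
n_factors, reps = 10, 200_000
means = rng.uniform(0.1, 10.0, n_factors)
# Each factor uniform around its own mean (cf. the uniform-factor example above)
factors = rng.uniform(0.8 * means, 1.2 * means, size=(reps, n_factors))
n_hab = factors.prod(axis=1)

logs = np.log(n_hab)
skew = ((logs - logs.mean()) ** 3).mean() / logs.std() ** 3
print(skew)   # near 0: log(N_hab) is close to Gaussian, i.e. N_hab ~ log-normal
```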

  10. Testing Pairwise Association between Spatially Autocorrelated Variables: A New Approach Using Surrogate Lattice Data

    PubMed Central

    Deblauwe, Vincent; Kennel, Pol; Couteron, Pierre

    2012-01-01

    Background Independence between observations is a standard prerequisite of traditional statistical tests of association. This condition is, however, violated when autocorrelation is present within the data. In the case of variables that are regularly sampled in space (i.e. lattice data or images), such as those provided by remote-sensing or geographical databases, this problem is particularly acute. Because analytic derivation of the null probability distribution of the test statistic (e.g. Pearson's r) is not always possible when autocorrelation is present, we propose instead the use of a Monte Carlo simulation with surrogate data. Methodology/Principal Findings The null hypothesis that two observed mapped variables are the result of independent pattern generating processes is tested here by generating sets of random image data while preserving the autocorrelation function of the original images. Surrogates are generated by matching the dual-tree complex wavelet spectra (and hence the autocorrelation functions) of white noise images with the spectra of the original images. The generated images can then be used to build the probability distribution function of any statistic of association under the null hypothesis. We demonstrate the validity of a statistical test of association based on these surrogates with both actual and synthetic data and compare it with a corrected parametric test and three existing methods that generate surrogates (randomization, random rotations and shifts, and iterative amplitude adjusted Fourier transform). Type I error control was excellent, even with strong and long-range autocorrelation, which is not the case for alternative methods. Conclusions/Significance The wavelet-based surrogates are particularly appropriate in cases where autocorrelation appears at all scales or is direction-dependent (anisotropy). We explore the potential of the method for association tests involving a lattice of binary data and discuss its potential for validation of species distribution models. An implementation of the method in Java for the generation of wavelet-based surrogates is available online as supporting material. PMID:23144961
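
A simpler Fourier analogue of the surrogate idea in this entry (the paper itself matches dual-tree complex wavelet spectra): keep the image's amplitude spectrum, and hence its autocorrelation, while randomizing phases. Taking the real part below is a common shortcut; exact schemes enforce Hermitian symmetry of the randomized phases. All names and data are illustrative.

```python
import numpy as np

def fourier_surrogate(image, rng):
    """Random image with (approximately) the same autocorrelation as `image`."""
    amplitude = np.abs(np.fft.fft2(image))
    phases = np.exp(2j * np.pi * rng.random(image.shape))
    return np.fft.ifft2(amplitude * phases).real

rng = np.random.default_rng(0)
img_a = rng.normal(size=(64, 64))                        # stand-in mapped variable
img_b = img_a + rng.normal(scale=0.5, size=(64, 64))     # correlated second map

obs_r = np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1]
null_r = [np.corrcoef(fourier_surrogate(img_a, rng).ravel(), img_b.ravel())[0, 1]
          for _ in range(999)]
p_value = (1 + sum(abs(r) >= abs(obs_r) for r in null_r)) / (1 + len(null_r))
```

As in the paper's Monte Carlo test, the surrogates provide the null distribution of Pearson's r under independent pattern-generating processes with the observed autocorrelation.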

  11. Non-linear resonant coupling of tsunami edge waves using stochastic earthquake source models

    USGS Publications Warehouse

    Geist, Eric L.

    2016-01-01

    Non-linear resonant coupling of edge waves can occur with tsunamis generated by large-magnitude subduction zone earthquakes. Earthquake rupture zones that straddle beneath the coastline of continental margins are particularly efficient at generating tsunami edge waves. Using a stochastic model for earthquake slip, it is shown that a wide range of edge-wave modes and wavenumbers can be excited, depending on the variability of slip. If two modes are present that satisfy resonance conditions, then a third mode can gradually increase in amplitude over time, even if the earthquake did not originally excite that edge-wave mode. These three edge waves form a resonant triad that can cause unexpected variations in tsunami amplitude long after the first arrival. An M ∼ 9, 1100 km-long continental subduction zone earthquake is considered as a test case. For the least variable slip examined, involving a Gaussian random variable, the dominant resonant triad includes a high-amplitude fundamental mode wave with a wavenumber associated with the along-strike dimension of rupture. The two other waves that make up this triad include subharmonic waves, one of fundamental mode and the other of mode 2 or 3. For the most variable slip examined, involving a Cauchy-distributed random variable, the dominant triads involve higher wavenumbers and modes because subevents, rather than the overall rupture dimension, control the excitation of edge waves. Calculation of the resonant period for energy transfer determines in which cases resonant coupling may be instrumentally observed. For low-mode triads, the maximum transfer of energy occurs approximately 20–30 wave periods after the first arrival and thus may be observed before the tsunami coda is completely attenuated. Therefore, under certain circumstances the necessary ingredients for resonant coupling of tsunami edge waves exist, indicating that resonant triads may be observable and implicated in late, large-amplitude tsunami arrivals.

  12. One-Year Efficacy Testing of Enabling Mothers to Prevent Pediatric Obesity Through Web-Based Education and Reciprocal Determinism (EMPOWER) Randomized Control Trial.

    PubMed

    Knowlden, Adam; Sharma, Manoj

    2016-02-01

    The purpose of this study was to evaluate the efficacy of the Enabling Mothers to Prevent Pediatric Obesity through Web-Based Education and Reciprocal Determinism (EMPOWER) intervention at 1-year, postintervention follow-up. A mixed between-within subjects design was used to evaluate the trial. Independent variables included a two-level, group assignment: EMPOWER (experimental intervention) based on social cognitive theory (SCT) as well as a knowledge-based intervention Healthy Lifestyles (active control intervention). Dependent variables were evaluated across four levels of time: baseline (Week 0), posttest (Week 4), 1-month follow-up (Week 8), and 1-year follow-up (Week 60). Dependent variables included five maternal-facilitated SCT constructs (environment, emotional coping, expectations, self-control, and self-efficacy) as well as four child behaviors (minutes of child physical activity, cups of fruits and vegetables consumed, 8-ounce glasses of sugar-sweetened beverages consumed, and minutes of screen time). Null hypotheses implied no significant group-by-time interactions for the dependent variables under investigation. A significant group-by-time interaction for child fruit and vegetable consumption was found in the experimental group (p = .012) relative to the control group. At 1 year, results suggested an overall increase of 1.847 cups of fruits and vegetables (95% confidence interval = 1.207-2.498) in the experimental group (p < .001). Analysis suggested changes in the maternal-facilitated home environment accounted for 13.3% of the variance in the change in child fruit and vegetable consumption. Improvements in child physical activity, sugar-free beverage intake, and screen time first detected at 1-month follow-up in both groups were no longer significant at 1-year follow-up. An online family-and-home-based intervention was efficacious for improving child fruit and vegetable consumption. Follow-up booster sessions may assist in maintaining treatment effects. © 2015 Society for Public Health Education.

  13. The distribution of the intervals between neural impulses in the maintained discharges of retinal ganglion cells.

    PubMed

    Levine, M W

    1991-01-01

    Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.(ABSTRACT TRUNCATED AT 250 WORDS)
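
A minimal digital integrate-and-fire realization in the spirit of the one described above: integrate a constant drive plus noise from a chosen distribution, fire and reset at threshold, and collect inter-impulse intervals. Parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_isi(noise, drive=0.02, threshold=1.0, n_steps=200_000):
    """Integrate-and-fire: accumulate drive + noise, reset at threshold."""
    v, last, intervals = 0.0, 0, []
    for t in range(n_steps):
        v += drive + noise()
        if v >= threshold:
            intervals.append(t - last)
            last, v = t, 0.0
    return np.array(intervals)

rng = np.random.default_rng(0)
isi_gauss = simulate_isi(lambda: rng.normal(0.0, 0.05))          # normokurtic noise
isi_unif = simulate_isi(lambda: rng.uniform(-0.0866, 0.0866))    # platykurtic, same SD
# Histograms of isi_gauss and isi_unif come out nearly indistinguishable,
# echoing the insensitivity to the noise distribution reported above.
```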

  14. Does the bracket–ligature combination affect the amount of orthodontic space closure over three months? A randomized controlled trial

    PubMed Central

    Wong, Henry; Collins, Jill; Tinsley, David; Sandler, Jonathan; Benson, Philip

    2013-01-01

    Objective: To investigate the effect of bracket–ligature combination on the amount of orthodontic space closure over three months. Design: Randomized clinical trial with three parallel groups. Setting: A hospital orthodontic department (Chesterfield Royal Hospital, UK). Participants: Forty-five patients requiring upper first premolar extractions. Methods: Informed consent was obtained and participants were randomly allocated into one of three groups: (1) conventional pre-adjusted edgewise brackets and elastomeric ligatures; (2) conventional pre-adjusted edgewise brackets and Super Slick® low friction elastomeric ligatures; (3) Damon 3MX® passive self-ligating brackets. Space closure was undertaken on 0·019×0·025-inch stainless steel archwires with nickel–titanium coil springs. Participants were recalled at four weekly intervals. Upper alginate impressions were taken at each visit (maximum three). The primary outcome measure was the mean amount of space closure in a 3-month period. Results: A one-way ANOVA was undertaken [dependent variable: mean space closure (mm); independent variable: group allocation]. The amount of space closure was very similar between the three groups (1 mm per 28 days); however, there was a wide variation in the rate of space closure between individuals. The differences in the amount of space closure over three months between the three groups was very small and non-significant (P = 0·718). Conclusion: The hypothesis that reducing friction by modifying the bracket/ligature interface increases the rate of space closure was not supported. The major determinant of orthodontic tooth movement is probably the individual patient response. PMID:23794696

  15. Systematic Evaluation of the Dependence of Deoxyribozyme Catalysis on Random Region Length

    PubMed Central

    Velez, Tania E.; Singh, Jaydeep; Xiao, Ying; Allen, Emily C.; Wong, On Yi; Chandra, Madhavaiah; Kwon, Sarah C.; Silverman, Scott K.

    2012-01-01

    Functional nucleic acids are DNA and RNA aptamers that bind targets, or they are deoxyribozymes and ribozymes that have catalytic activity. These functional DNA and RNA sequences can be identified from random-sequence pools by in vitro selection, which requires choosing the length of the random region. Shorter random regions allow more complete coverage of sequence space but may not permit the structural complexity necessary for binding or catalysis. In contrast, longer random regions are sampled incompletely but may allow adoption of more complicated structures that enable function. In this study, we systematically examined random region length (N20 through N60) for two particular deoxyribozyme catalytic activities, DNA cleavage and tyrosine-RNA nucleopeptide linkage formation. For both activities, we previously identified deoxyribozymes using only N40 regions. In the case of DNA cleavage, here we found that shorter N20 and N30 regions allowed robust catalytic function, either by DNA hydrolysis or by DNA deglycosylation and strand scission via β-elimination, whereas longer N50 and N60 regions did not lead to catalytically active DNA sequences. Follow-up selections with N20, N30, and N40 regions revealed an interesting interplay of metal ion cofactors and random region length. Separately, for Tyr-RNA linkage formation, N30 and N60 regions provided catalytically active sequences, whereas N20 was unsuccessful, and the N40 deoxyribozymes were functionally superior (in terms of rate and yield) to N30 and N60. Collectively, the results indicate that with future in vitro selection experiments for DNA and RNA catalysts, and by extension for aptamers, random region length should be an important experimental variable. PMID:23088677

  16. On the distribution of a product of N Gaussian random variables

    NASA Astrophysics Data System (ADS)

    Stojanac, Željka; Suess, Daniel; Kliesch, Martin

    2017-08-01

    The product of Gaussian random variables appears naturally in many applications in probability theory and statistics. It has been known that the distribution of a product of N such variables can be expressed in terms of a Meijer G-function. Here, we compute a similar representation for the corresponding cumulative distribution function (CDF) and provide a power-log series expansion of the CDF based on the theory of the more general Fox H-functions. Numerical computations show that for small values of the argument the CDF of products of Gaussians is well approximated by the lowest orders of this expansion. Analogous results are also shown for the absolute value as well as the square of such products of N Gaussian random variables. For the latter two settings, we also compute the moment generating functions in terms of Meijer G-functions.
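
The closed forms in this entry use Meijer G and Fox H functions, but the empirical CDF of a product of N standard Gaussians is easy to sample as a sanity check. N and the evaluation points below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, reps = 3, 1_000_000
prod = rng.standard_normal((reps, N)).prod(axis=1)   # products of N Gaussians

for x in (-1.0, 0.0, 1.0):
    print(x, np.mean(prod <= x))     # symmetry about 0 gives CDF(0) ~ 0.5
# The same samples also yield the CDFs of |prod| and prod**2 treated above.
```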

  17. Land area change and fractional water maps in the Chenier Plain, Louisiana, following hurricane Rita

    NASA Astrophysics Data System (ADS)

    Palaseanu-Lovejoy, M.; Kranenburg, C.; Brock, J. C.

    2009-12-01

    The objective of this study is to develop a fractional water map at 30-m resolution using QuickBird and/or IKONOS high-resolution imagery as the dependent variable to investigate the impact of hurricane Rita in the Chenier Plain, Louisiana. Eleven different indices were tested to obtain a high-resolution land/water classification on QuickBird (acquired on 05/23/2003) and IKONOS (acquired on 03/25/2006) images. The percent area covered by water in the high resolution images varied from 22 to 26% depending on the index used, with the simple ratio index (red band / NIR band) accounting for the lowest percent and the blue ratio index (blue band / sum(all bands)) for the highest percent. Using the ERDAS NLCD (National Land Cover Data) Mapping tool module, 100,000 stratified random sample points with a minimum of 1000 points per stratum were selected from the high resolution dependent variable as training information for the independent variable layers. The rules for the regression tree were created using the data mining software Rulequest Cubist v. 2.05. This information was used to generate a fractional water map for the entire Landsat scene. The increase in water areas of about 10-15% between 2003 and 2006, as well as temporary changes in the water-land configurations, are attributed to remnant flooding and removal of aquatic vegetation caused by hurricane Rita, and to water level variations caused by tidal and/or meteorological variations between the acquisition dates of the satellite images. This analysis can assist in monitoring post-hurricane wetland recovery and assessing trends in land loss due to extreme storm events, although estimation of permanent land loss cannot be made until wetland areas have had the opportunity to recover from hurricane impacts.

  18. Parameter estimation of history-dependent leaky integrate-and-fire neurons using maximum-likelihood methods

    PubMed Central

    Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst

    2012-01-01

    When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282

  19. Physical activity levels of older community-dwelling adults are influenced by summer weather variables.

    PubMed

    Brandon, Caitlin A; Gill, Dawn P; Speechley, Mark; Gilliland, Jason; Jones, Gareth R

    2009-04-01

    Adequate daily physical activity (PA) is important for maintaining functional capacity and independence in older adults. However, most older adults in Canada do not engage in enough PA to sustain fitness and functional independence. Environmental influences, such as warmer daytime temperatures, may influence PA participation; however, few studies have examined the effect of summertime temperatures on PA levels in older adults. This investigation measured the influence of summertime weather variables on PA in 48 community-dwelling older adults who were randomly recruited from a local seniors' community centre. Each participant wore an accelerometer for a single 7-consecutive-day period (between 30 May and 9 August 2006) during waking hours, and completed a PA logbook to record major daily PA events. Local weather variables were collected from a national weather service and compared with PA counts per minute. Regression analysis revealed a curvilinear relationship between log-transformed PA and mean daily temperature (r2 = 0.025; p < 0.05). Linear mixed effects models that accounted for repeated measures nested within individuals were performed for monthly periods, meteorological variables, sex, age, and estimated maximal oxygen consumption, with PA as the dependent variable. Age and Air Quality Index remained significant variables within the model. Higher fitness levels did not enable individuals to perform more vigorous PA in warmer temperatures.

  20. Timing matters: change depends on the stage of treatment in cognitive behavioral therapy for panic disorder with agoraphobia.

    PubMed

    Gloster, Andrew T; Klotsche, Jens; Gerlach, Alexander L; Hamm, Alfons; Ströhle, Andreas; Gauggel, Siegfried; Kircher, Tilo; Alpers, Georg W; Deckert, Jürgen; Wittchen, Hans-Ulrich

    2014-02-01

    The mechanisms of action underlying treatment are inadequately understood. This study examined 5 variables implicated in the treatment of panic disorder with agoraphobia (PD/AG): catastrophic agoraphobic cognitions, anxiety about bodily sensations, agoraphobic avoidance, anxiety sensitivity, and psychological flexibility. The relative importance of these process variables was examined across treatment phases: (a) psychoeducation/interoceptive exposure, (b) in situ exposure, and (c) generalization/follow-up. Data came from a randomized controlled trial of cognitive behavioral therapy for PD/AG (n = 301). Outcomes were the Panic and Agoraphobia Scale (Bandelow, 1995) and functioning as measured in the Clinical Global Impression scale (Guy, 1976). The effect of process variables on subsequent change in outcome variables was calculated using bivariate latent difference score modeling. Change in panic symptomatology was preceded by catastrophic appraisal and agoraphobic avoidance across all phases of treatment, by anxiety sensitivity during generalization/follow-up, and by psychological flexibility during exposure in situ. Change in functioning was preceded by agoraphobic avoidance and psychological flexibility across all phases of treatment, by fear of bodily symptoms during generalization/follow-up, and by anxiety sensitivity during exposure. The effects of process variables on outcomes differ across treatment phases and outcomes (i.e., symptomatology vs. functioning). Agoraphobic avoidance and psychological flexibility should be investigated and therapeutically targeted in addition to cognitive variables. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  1. Random parameter models for accident prediction on two-lane undivided highways in India.

    PubMed

    Dinu, R R; Veeraragavan, A

    2011-02-01

    Generalized linear modeling (GLM), with the assumption of a Poisson or negative binomial error structure, has been widely employed in road accident modeling. A number of explanatory variables related to traffic, road geometry, and environment that contribute to accident occurrence have been identified, and accident prediction models have been proposed. The accident prediction models reported in the literature largely employ the fixed parameter modeling approach, where the magnitude of influence of an explanatory variable is considered to be fixed for any observation in the population. Similar models have been proposed for Indian highways too, which include additional variables representing traffic composition. The mixed traffic on Indian highways contains considerable variability, ranging from differences in vehicle types to variability in driver behavior. This could result in variability in the effect of explanatory variables on accidents across locations. Random parameter models, which can capture some of this variability, are expected to be more appropriate for the Indian situation. The present study is an attempt to employ random parameter modeling for accident prediction on two-lane undivided rural highways in India. Three years of accident history, from nearly 200 km of highway segments, are used to calibrate and validate the models. The results of the analysis suggest that the model coefficients for traffic volume, proportion of cars, motorized two-wheelers and trucks in traffic, driveway density, and horizontal and vertical curvature are randomly distributed across locations. The paper concludes with a discussion of the modeling results and the limitations of the present study. Copyright © 2010 Elsevier Ltd. All rights reserved.
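
A sketch of the random-parameter idea in this entry: the coefficient on traffic volume varies across road segments instead of being fixed population-wide, and counts are Poisson around the segment-specific mean. All values are illustrative, not the fitted Indian-highway model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_segments = 200
log_aadt = rng.normal(9.0, 0.5, n_segments)       # log annual average daily traffic
beta_vol = rng.normal(0.8, 0.15, n_segments)      # random parameter across segments
mu = np.exp(-6.0 + beta_vol * log_aadt)           # expected crashes per segment
crashes = rng.poisson(mu)
# A fixed-parameter model would force beta_vol to a single shared value;
# the random-parameter model lets the volume effect vary by location.
```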

  2. Random dopant fluctuations and statistical variability in n-channel junctionless FETs

    NASA Astrophysics Data System (ADS)

    Akhavan, N. D.; Umana-Membreno, G. A.; Gu, R.; Antoszewski, J.; Faraone, L.

    2018-01-01

    The influence of random dopant fluctuations on the statistical variability of the electrical characteristics of n-channel silicon junctionless nanowire transistors (JNTs) has been studied using three-dimensional quantum simulations based on the non-equilibrium Green’s function (NEGF) formalism. Average randomly distributed body doping densities of 2 × 10^19, 6 × 10^19 and 1 × 10^20 cm^-3 have been considered, employing an atomistic model for JNTs with gate lengths of 5, 10 and 15 nm. We demonstrate that by properly adjusting the doping density in the JNT, near-ideal statistical variability and electrical performance can be achieved, which can pave the way for the continuation of scaling in silicon CMOS technology.

  3. Modeling the influence of preferential flow on the spatial variability and time-dependence of mineral weathering rates

    DOE PAGES

    Pandey, Sachin; Rajaram, Harihar

    2016-12-05

    Inferences of weathering rates from laboratory and field observations suggest significant scale and time-dependence. Preferential flow induced by heterogeneity (manifest as permeability variations or discrete fractures) has been suggested as one potential mechanism causing scale/time-dependence. In this paper, we present a quantitative evaluation of the influence of preferential flow on weathering rates using reactive transport modeling. Simulations were performed in discrete fracture networks (DFNs) and correlated random permeability fields (CRPFs), and compared to simulations in homogeneous permeability fields. The simulations reveal spatial variability in the weathering rate, multidimensional distribution of reaction zones, and the formation of rough weathering interfaces and corestones due to preferential flow. In the homogeneous fields and CRPFs, the domain-averaged weathering rate is initially constant as long as the weathering front is contained within the domain, reflecting equilibrium-controlled behavior. The behavior in the CRPFs was influenced by macrodispersion, with more spread-out weathering profiles, an earlier departure from the initial constant rate and longer persistence of weathering. DFN simulations exhibited a sustained time-dependence resulting from the formation of diffusion-controlled weathering fronts in matrix blocks, which is consistent with the shrinking core mechanism. A significant decrease in the domain-averaged weathering rate is evident despite high remaining mineral volume fractions, but the decline does not follow a t^(-1/2) dependence, characteristic of diffusion, due to network scale effects and advection-controlled behavior near the inflow boundary. Finally, the DFN simulations also reveal relatively constant horizontally averaged weathering rates over a significant depth range, challenging the very notion of a weathering front.

  4. Context-dependent plasticity in the subcortical encoding of linguistic pitch patterns

    PubMed Central

    Lau, Joseph C. Y.; Wong, Patrick C. M.

    2016-01-01

    We examined the mechanics of online experience-dependent auditory plasticity by assessing the influence of prior context on the frequency-following responses (FFRs), which reflect phase-locked responses from neural ensembles within the subcortical auditory system. FFRs were elicited to a Cantonese falling lexical pitch pattern from 24 native speakers of Cantonese in a variable context, wherein the falling pitch pattern randomly occurred in the context of two other linguistic pitch patterns; in a patterned context, wherein, the falling pitch pattern was presented in a predictable sequence along with two other pitch patterns, and in a repetitive context, wherein the falling pitch pattern was presented with 100% probability. We found that neural tracking of the stimulus pitch contour was most faithful and accurate when listening context was patterned and least faithful when the listening context was variable. The patterned context elicited more robust pitch tracking relative to the repetitive context, suggesting that context-dependent plasticity is most robust when the context is predictable but not repetitive. Our study demonstrates a robust influence of prior listening context that works to enhance online neural encoding of linguistic pitch patterns. We interpret these results as indicative of an interplay between contextual processes that are responsive to predictability as well as novelty in the presentation context. NEW & NOTEWORTHY Human auditory perception in dynamic listening environments requires fine-tuning of sensory signal based on behaviorally relevant regularities in listening context, i.e., online experience-dependent plasticity. Our finding suggests what partly underlie online experience-dependent plasticity are interplaying contextual processes in the subcortical auditory system that are responsive to predictability as well as novelty in listening context. These findings add to the literature that looks to establish the neurophysiological bases of auditory system plasticity, a central issue in auditory neuroscience. PMID:27832606

  5. Context-dependent plasticity in the subcortical encoding of linguistic pitch patterns.

    PubMed

    Lau, Joseph C Y; Wong, Patrick C M; Chandrasekaran, Bharath

    2017-02-01

    We examined the mechanics of online experience-dependent auditory plasticity by assessing the influence of prior context on the frequency-following responses (FFRs), which reflect phase-locked responses from neural ensembles within the subcortical auditory system. FFRs were elicited to a Cantonese falling lexical pitch pattern from 24 native speakers of Cantonese in a variable context, wherein the falling pitch pattern randomly occurred in the context of two other linguistic pitch patterns; in a patterned context, wherein, the falling pitch pattern was presented in a predictable sequence along with two other pitch patterns, and in a repetitive context, wherein the falling pitch pattern was presented with 100% probability. We found that neural tracking of the stimulus pitch contour was most faithful and accurate when listening context was patterned and least faithful when the listening context was variable. The patterned context elicited more robust pitch tracking relative to the repetitive context, suggesting that context-dependent plasticity is most robust when the context is predictable but not repetitive. Our study demonstrates a robust influence of prior listening context that works to enhance online neural encoding of linguistic pitch patterns. We interpret these results as indicative of an interplay between contextual processes that are responsive to predictability as well as novelty in the presentation context. Human auditory perception in dynamic listening environments requires fine-tuning of sensory signal based on behaviorally relevant regularities in listening context, i.e., online experience-dependent plasticity. Our finding suggests what partly underlie online experience-dependent plasticity are interplaying contextual processes in the subcortical auditory system that are responsive to predictability as well as novelty in listening context. These findings add to the literature that looks to establish the neurophysiological bases of auditory system plasticity, a central issue in auditory neuroscience. Copyright © 2017 the American Physiological Society.

  6. Modeling the influence of preferential flow on the spatial variability and time-dependence of mineral weathering rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandey, Sachin; Rajaram, Harihar

    Inferences of weathering rates from laboratory and field observations suggest significant scale and time-dependence. Preferential flow induced by heterogeneity (manifest as permeability variations or discrete fractures) has been suggested as one potential mechanism causing scale/time-dependence. In this paper, we present a quantitative evaluation of the influence of preferential flow on weathering rates using reactive transport modeling. Simulations were performed in discrete fracture networks (DFNs) and correlated random permeability fields (CRPFs), and compared to simulations in homogeneous permeability fields. The simulations reveal spatial variability in the weathering rate, multidimensional distribution of reaction zones, and the formation of rough weathering interfaces and corestones due to preferential flow. In the homogeneous fields and CRPFs, the domain-averaged weathering rate is initially constant as long as the weathering front is contained within the domain, reflecting equilibrium-controlled behavior. The behavior in the CRPFs was influenced by macrodispersion, with more spread-out weathering profiles, an earlier departure from the initial constant rate, and longer persistence of weathering. DFN simulations exhibited a sustained time-dependence resulting from the formation of diffusion-controlled weathering fronts in matrix blocks, which is consistent with the shrinking-core mechanism. A significant decrease in the domain-averaged weathering rate is evident despite high remaining mineral volume fractions, but the decline does not follow the t^(-1/2) dependence characteristic of diffusion, due to network-scale effects and advection-controlled behavior near the inflow boundary. Finally, the DFN simulations also reveal relatively constant horizontally averaged weathering rates over a significant depth range, challenging the very notion of a weathering front.
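
    To make the shrinking-core argument concrete, here is a minimal numerical sketch (not the paper's model) of diffusion-limited weathering: a reaction front at depth s, fed by diffusion through the weathered rind, advances as the square root of time, so the rate decays as t^(-1/2). The constants D, c_eq, and rho are illustrative placeholders, not values from the study.

        import numpy as np

        # Quasi-steady shrinking-core mass balance:
        #   ds/dt = D * c_eq / (rho * s)  =>  s(t) = sqrt(2 * D * c_eq * t / rho)
        # so the instantaneous weathering rate ds/dt decays as t**(-0.5).
        D, c_eq, rho = 1e-9, 1.0, 1e3        # illustrative values only

        t = np.logspace(8, 12, 200)          # seconds (~3 yr to ~30 kyr)
        s = np.sqrt(2 * D * c_eq * t / rho)  # front depth
        rate = D * c_eq / (rho * s)          # front velocity ~ weathering rate

        # Verify the t^(-1/2) scaling from the log-log slope.
        slope = np.polyfit(np.log(t), np.log(rate), 1)[0]
        print(f"log-log slope = {slope:.3f} (diffusion control predicts -0.5)")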

  7. Measures of Residual Risk with Connections to Regression, Risk Tracking, Surrogate Models, and Ambiguity

    DTIC Science & Technology

    2015-04-15

    The random variable of interest is viewed in concert with a related random vector that helps to manage, predict, and mitigate the risk in the original variable. Residual risk can be exemplified as a quantification of the improved situation faced…

  8. Evaluation of visual acuity measurements after autorefraction vs manual refraction in eyes with and without diabetic macular edema.

    PubMed

    Sun, Jennifer K; Qin, Haijing; Aiello, Lloyd Paul; Melia, Michele; Beck, Roy W; Andreoli, Christopher M; Edwards, Paul A; Glassman, Adam R; Pavlica, Michael R

    2012-04-01

    To compare visual acuity (VA) scores after autorefraction vs manual refraction in eyes of patients with diabetes mellitus and a wide range of VAs. The letter score from the Electronic Visual Acuity (EVA) test from the electronic Early Treatment Diabetic Retinopathy Study was measured after autorefraction (AR-EVA score) and after manual refraction (MR-EVA score), which is the research protocol of the Diabetic Retinopathy Clinical Research Network. Testing order was randomized, study participants and VA examiners were masked to refraction source, and a second EVA test using an identical supplemental manual refraction (MR-EVAsuppl score) was performed to determine test-retest variability. In 878 eyes of 456 study participants, the median MR-EVA score was 74 (Snellen equivalent, approximately 20/32). The spherical equivalent was often similar for manual refraction and autorefraction (median difference, 0.00; 5th-95th percentile range, -1.75 to 1.13 diopters). However, on average, the MR-EVA scores were slightly better than the AR-EVA scores, across the entire VA range. Furthermore, the variability between the AR-EVA scores and the MR-EVA scores was substantially greater than the test-retest variability of the MR-EVA scores (P < .001). The variability of differences was highly dependent on the autorefractor model. Across a wide range of VAs at multiple sites using a variety of autorefractors, VA measurements tend to be worse with autorefraction than manual refraction. Differences between individual autorefractor models were identified. However, even among autorefractor models that compare most favorably with manual refraction, VA variability between autorefraction and manual refraction is higher than the test-retest variability of manual refraction. The results suggest that, with current instruments, autorefraction is not an acceptable substitute for manual refraction for most clinical trials with primary outcomes dependent on best-corrected VA.

  9. Method Improving Reading Comprehension In Primary Education Program Students

    NASA Astrophysics Data System (ADS)

    Rohana

    2018-01-01

    This study aims to determine the influence of the SQ3R learning method on the English reading comprehension skills of PGSD students. This is a pre-experimental study rather than a true experiment: there is no control variable, the sample is not chosen randomly, and external variables may therefore influence the dependent variable. The research design used is a one-group pretest-posttest design involving a single experimental group. In this design, observations are made twice, before and after the experiment. The observation made before the experiment (O1) is called the pretest and the post-experimental observation (O2) is called the posttest. The difference between O1 and O2, i.e., O2 - O1, is the effect of the treatment. The results showed an improvement in the reading comprehension skills of the PGSD students in Class M.4.3 using the SQ3R method, indicating that SQ3R can improve English comprehension skills.

  10. Demographic and psychological variables affecting test subject evaluations of ride quality

    NASA Technical Reports Server (NTRS)

    Duncan, N. C.; Conley, H. W.

    1975-01-01

    Ride-quality experiments similar in objectives, design, and procedure were conducted, one using the U.S. Air Force Total In-Flight Simulator and the other using the Langley Passenger Ride Quality Apparatus to provide the motion environments. Large samples (80 or more per experiment) of test subjects were recruited from the Tidewater Virginia area and asked to rate the comfort (on a 7-point scale) of random aircraft motion typical of that encountered during STOL flights. Test subject characteristics of age, sex, and previous flying history (number of previous airplane flights) were studied in a two by three by three factorial design. Correlations were computed between one dependent measure, the subject's mean comfort rating, and various demographic characteristics, attitudinal variables, and the scores on Spielberger's State-Trait Anxiety Inventory. An effect of sex was found in one of the studies. Males made higher (more uncomfortable) ratings of the ride than females. Age and number of previous flights were not significantly related to comfort ratings. No significant interactions between the variables of age, sex, or previous number of flights were observed.

  11. Multivariate hydrological frequency analysis for extreme events using Archimedean copula. Case study: Lower Tunjuelo River basin (Colombia)

    NASA Astrophysics Data System (ADS)

    Gómez, Wilmar

    2017-04-01

    By analyzing the spatial and temporal variability of extreme precipitation events we can prevent or reduce the associated threat and risk. Many water resources projects require joint probability distributions of random variables such as precipitation intensity and duration, which cannot be assumed independent of each other. The problem of defining a probability model for observations of several dependent variables is greatly simplified by expressing the joint distribution in terms of the marginals by means of copulas. This document presents a general framework for bivariate and multivariate frequency analysis using Archimedean copulas for extreme events of a hydroclimatological nature such as severe storms. The analysis was conducted for precipitation events in the lower Tunjuelo River basin in Colombia. The results show that for a joint study of intensity-duration-frequency, IDF curves can be obtained through copulas, establishing more accurate and reliable design storms and associated risks. It also shows how the use of copulas greatly simplifies the study of multivariate distributions and introduces the concept of the joint return period, used to properly represent the needs of hydrological design in frequency analysis.
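
    As a concrete illustration of the copula-based joint return period the abstract describes, the following sketch evaluates a Gumbel-Hougaard (Archimedean) copula for an intensity-duration pair; the marginal probabilities u and v, the parameter theta, and the mean interarrival time mu are made-up values, not results from the study.

        import numpy as np

        def gumbel_copula(u, v, theta):
            """Gumbel-Hougaard copula C(u, v), valid for theta >= 1."""
            return np.exp(-(((-np.log(u)) ** theta
                             + (-np.log(v)) ** theta) ** (1.0 / theta)))

        # Illustrative marginal non-exceedance probabilities for a storm's
        # intensity and duration (placeholders, not values from the study).
        u, v, theta, mu = 0.99, 0.98, 2.0, 1.0   # mu = mean interarrival (yr)

        C = gumbel_copula(u, v, theta)
        T_or = mu / (1 - C)                  # intensity OR duration exceeded
        T_and = mu / (1 - u - v + C)         # both exceeded jointly
        print(f"OR return period:  {T_or:.1f} yr")
        print(f"AND return period: {T_and:.1f} yr")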

  12. Sums and Products of Jointly Distributed Random Variables: A Simplified Approach

    ERIC Educational Resources Information Center

    Stein, Sheldon H.

    2005-01-01

    Three basic theorems concerning expected values and variances of sums and products of random variables play an important role in mathematical statistics and its applications in education, business, the social sciences, and the natural sciences. A solid understanding of these theorems requires that students be familiar with the proofs of these…
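
    The three theorems alluded to can be checked numerically; a quick Monte Carlo sketch with deliberately correlated variables:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1_000_000
        x = rng.normal(2.0, 1.0, n)
        y = 0.5 * x + rng.normal(1.0, 2.0, n)   # correlated with x on purpose
        cov = np.cov(x, y)[0, 1]

        # Theorem 1: E[X + Y] = E[X] + E[Y], regardless of dependence.
        print(np.mean(x + y), np.mean(x) + np.mean(y))
        # Theorem 2: Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y).
        print(np.var(x + y), np.var(x) + np.var(y) + 2 * cov)
        # Theorem 3: E[XY] = E[X] E[Y] + Cov(X, Y).
        print(np.mean(x * y), np.mean(x) * np.mean(y) + cov)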

  13. A Strategy to Use Soft Data Effectively in Randomized Controlled Clinical Trials.

    ERIC Educational Resources Information Center

    Kraemer, Helena Chmura; Thiemann, Sue

    1989-01-01

    Sees soft data, measures having substantial intrasubject variability due to errors of measurement or response inconsistency, as important measures of response in randomized clinical trials. Shows that using intensive design and slope of response on time as outcome measure maximizes sample retention and decreases within-group variability, thus…

  14. Statistical Analysis for Multisite Trials Using Instrumental Variables with Random Coefficients

    ERIC Educational Resources Information Center

    Raudenbush, Stephen W.; Reardon, Sean F.; Nomi, Takako

    2012-01-01

    Multisite trials can clarify the average impact of a new program and the heterogeneity of impacts across sites. Unfortunately, in many applications, compliance with treatment assignment is imperfect. For these applications, we propose an instrumental variable (IV) model with person-specific and site-specific random coefficients. Site-specific IV…

  15. Bayesian approach to non-Gaussian field statistics for diffusive broadband terahertz pulses.

    PubMed

    Pearce, Jeremy; Jian, Zhongping; Mittleman, Daniel M

    2005-11-01

    We develop a closed-form expression for the probability distribution function for the field components of a diffusive broadband wave propagating through a random medium. We consider each spectral component to provide an individual observation of a random variable, the configurationally averaged spectral intensity. Since the intensity determines the variance of the field distribution at each frequency, this random variable serves as the Bayesian prior that determines the form of the non-Gaussian field statistics. This model agrees well with experimental results.
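
    A minimal sketch of the compounding idea, assuming (for illustration only) an exponential prior on the spectral intensity: a Gaussian field whose variance is itself random has heavy-tailed, Laplace-like marginals rather than Gaussian ones.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n = 200_000

        # Treat the configurationally averaged spectral intensity as a random
        # variable (here: exponentially distributed, an illustrative choice).
        intensity = rng.exponential(scale=1.0, size=n)

        # Conditional on the intensity, each field component is Gaussian with
        # variance set by that intensity; the marginal mixture is non-Gaussian.
        field = rng.normal(0.0, np.sqrt(intensity))

        # ~3 for a Laplace law, 0 for a Gaussian.
        print("excess kurtosis:", stats.kurtosis(field))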

  16. CuidaCare: effectiveness of a nursing intervention on caregivers' quality of life: cluster-randomized clinical trial

    PubMed Central

    2014-01-01

    Background In Spain, family is the main source of care for dependent people. Numerous studies suggest that providing informal (unpaid) care during a prolonged period of time results in a morbidity-generating burden. Caregivers constitute a high-risk group that experiences elevated stress levels, which reduce their quality of life. Different strategies have been proposed to improve management of this phenomenon in order to minimize its impact, but definitive conclusions regarding their effectiveness are lacking. Methods/Design A community clinical trial is proposed, with a 1-year follow-up period, that is multicentric, controlled, parallel, and with randomized allocation of clusters in 20 health care centers within the Community of Madrid. The study's objective is to evaluate the effectiveness of a standard care intervention in primary health care (intervention CuidaCare) to improve the quality of life of the caregivers, measured at 0, 6, and 12 months after the intervention. One hundred forty-two subjects (71 from each group), aged ≥65 years, identified by the nurse as the main caregivers and providing consent to participate in the study, will be included. The main outcome variable will be perceived quality of life as measured by the Visual Analogue Scale (VAS) of EuroQol-5D (EQ-5D). The secondary outcome variables will be the EQ-5D dimensions, the EQ-5D index, nursing diagnoses, and Zarit's test. Prognostic variables will be recorded for the dependent patient and the caregiver. The principal analysis will compare the average change in EQ-5D VAS value before and after the intervention between the two groups. All statistical tests will be performed as intention-to-treat. Prognostic factors' estimates will be adjusted by mixed-effects regression models. Possible confounding or effect-modifying factors will be taken into account. Discussion Assistance for the caregiver should be integrated into primary care services. In order to do so, it is necessary to incorporate standard, effective interventions with relevant outcome variables such as quality of life. Community care nurses are in a privileged position to develop interventions like the one proposed. Trial registration This trial has been registered in ClinicalTrials.gov under code number NCT 01478295. PMID:24467767

  17. Chaos Versus Noisy Periodicity: Alternative Hypotheses for Childhood Epidemics

    NASA Astrophysics Data System (ADS)

    Olsen, L. F.; Schaffer, W. M.

    1990-08-01

    Whereas case rates for some childhood diseases (chickenpox) often vary according to an almost regular annual cycle, the incidence of more efficiently transmitted infections such as measles is more variable. Three hypotheses have been proposed to account for such fluctuations. (i) Irregular dynamics result from random shocks to systems with stable equilibria. (ii) The intrinsic dynamics correspond to biennial cycles that are subject to stochastic forcing. (iii) Aperiodic fluctuations are intrinsic to the epidemiology. Comparison of real world data and epidemiological models suggests that measles epidemics are inherently chaotic. Conversely, the extent to which chickenpox outbreaks approximate a yearly cycle depends inversely on the population size.

  18. Efficient image projection by Fourier electroholography.

    PubMed

    Makowski, Michał; Ducin, Izabela; Kakarenko, Karol; Kolodziejczyk, Andrzej; Siemion, Agnieszka; Siemion, Andrzej; Suszek, Jaroslaw; Sypek, Maciej; Wojnowski, Dariusz

    2011-08-15

    An improved efficient projection of color images is presented. It uses a phase spatial light modulator with three iteratively optimized Fourier holograms displayed simultaneously, one for each primary color. This spatial division instead of time division provides stable images. The pixelated structure of the modulator and fluctuations of the liquid crystal molecules cause a zeroth-order peak, which is eliminated by additional wavelength-dependent phase factors that shift it before the image plane, where it is blocked with a matched filter. Speckles are suppressed by time integration of variable speckle patterns generated by additional randomizations of an initial phase and minor changes of the signal. © 2011 Optical Society of America

  19. A use of regression analysis in acoustical diagnostics of gear drives

    NASA Technical Reports Server (NTRS)

    Balitskiy, F. Y.; Genkin, M. D.; Ivanova, M. A.; Kobrinskiy, A. A.; Sokolova, A. G.

    1973-01-01

    A study is presented of components of the vibration spectrum as the filtered first and second harmonics of the tooth frequency which permits information to be obtained on the physical characteristics of the vibration excitation process, and an approach to be made to comparison of models of the gearing. Regression analysis of two random processes has shown a strong dependence of the second harmonic on the first, and independence of the first from the second. The nature of change in the regression line, with change in loading moment, gives rise to the idea of a variable phase shift between the first and second harmonics.

  20. Counterfeit-resistant materials and a method and apparatus for authenticating materials

    DOEpatents

    Ramsey, J. Michael; Klatt, Leon N.

    2001-01-01

    Fluorescent dichroic fibers randomly incorporated within a medium provide an improved method for authentication and counterfeiting protection. The dichroism is provided by an alignment of fluorescent molecules along the length of the fibers. The fluorescent fibers provide an authentication mechanism of varying levels of capability. The authentication signature depends on four parameters: the x,y position, the dichroism, and the local environment. The availability of so many non-deterministic variables makes production of counterfeit articles (e.g., currency, credit cards, etc.) essentially impossible. Counterfeit-resistant articles, an apparatus for authenticating articles, and a process for forming counterfeit-resistant media are also provided.

  1. Counterfeit-resistant materials and a method and apparatus for authenticating materials

    DOEpatents

    Ramsey, J. Michael; Klatt, Leon N.

    2000-01-01

    Fluorescent dichroic fibers randomly incorporated within a medium provide an improved method for authentication and counterfeiting protection. The dichroism is provided by an alignment of fluorescent molecules along the length of the fibers. The fluorescent fibers provide an authentication mechanism of varying levels of capability. The authentication signature depends on four parameters: the x,y position, the dichroism, and the local environment. The availability of so many non-deterministic variables makes production of counterfeit articles (e.g., currency, credit cards, etc.) essentially impossible. Counterfeit-resistant articles, an apparatus for authenticating articles, and a process for forming counterfeit-resistant media are also provided.

  2. General statistical considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eberhardt, L L; Gilbert, R O

    From NAEG plutonium environmental studies program meeting; Las Vegas, Nevada, USA (2 Oct 1973). The high sampling variability encountered in environmental plutonium studies, along with high analytical costs, makes it very important that efficient soil sampling plans be used. However, efficient sampling depends on explicit and simple statements of the objectives of the study. When there are multiple objectives it may be difficult to devise a wholly suitable sampling scheme. Sampling for long-term changes in plutonium concentration in soils may also be complex and expensive. Further attention to problems associated with compositing samples is recommended, as is the consistent use of random sampling as a basic technique. (auth)

  3. Simulated Annealing in the Variable Landscape

    NASA Astrophysics Data System (ADS)

    Hasegawa, Manabu; Kim, Chang Ju

    An experimental analysis is conducted to test whether the appropriate introduction of the smoothness-temperature schedule enhances the optimizing ability of the MASSS method, the combination of the Metropolis algorithm (MA) and the search-space smoothing (SSS) method. The test is performed on two types of random traveling salesman problems. The results show that the optimization performance of the MA is substantially improved by a single smoothing alone and slightly more by a single smoothing with cooling and by a de-smoothing process with heating. The performance is compared to that of the parallel tempering method and a clear advantage of the idea of smoothing is observed depending on the problem.

  4. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 1. Theory

    NASA Astrophysics Data System (ADS)

    Graham, Wendy D.; Tankersley, Claude D.

    1994-05-01

    Stochastic methods are used to analyze two-dimensional steady groundwater flow subject to spatially variable recharge and transmissivity. Approximate partial differential equations are developed for the covariances and cross-covariances between the random head, transmissivity and recharge fields. Closed-form solutions of these equations are obtained using Fourier transform techniques. The resulting covariances and cross-covariances can be incorporated into a Bayesian conditioning procedure which provides optimal estimates of the recharge, transmissivity and head fields given available measurements of any or all of these random fields. Results show that head measurements contain valuable information for estimating the random recharge field. However, when recharge is treated as a spatially variable random field, the value of head measurements for estimating the transmissivity field can be reduced considerably. In a companion paper, the method is applied to a case study of the Upper Floridan Aquifer in NE Florida.
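
    The Bayesian conditioning step the abstract refers to is, at its core, the standard Gaussian conditional update; a generic sketch follows, with a made-up 3-variable covariance standing in for the Fourier-derived covariances of the paper.

        import numpy as np

        def condition_gaussian(mu, C, obs_idx, y_obs, noise_var=1e-6):
            """Posterior mean/covariance of the unobserved components of a
            jointly Gaussian vector after observing components obs_idx."""
            i = np.asarray(obs_idx)
            u = np.setdiff1d(np.arange(len(mu)), i)
            Coo = C[np.ix_(i, i)] + noise_var * np.eye(len(i))  # obs block
            Cuo = C[np.ix_(u, i)]                               # cross block
            mu_post = mu[u] + Cuo @ np.linalg.solve(Coo, y_obs - mu[i])
            C_post = C[np.ix_(u, u)] - Cuo @ np.linalg.solve(Coo, Cuo.T)
            return mu_post, C_post

        # Toy single-location example: [head, recharge, log-transmissivity]
        # with invented cross-covariances (the paper derives these from the
        # groundwater flow equations via Fourier transforms).
        mu = np.zeros(3)
        C = np.array([[1.0, 0.6, -0.4],
                      [0.6, 1.0, 0.0],
                      [-0.4, 0.0, 1.0]])
        m, P = condition_gaussian(mu, C, obs_idx=[0], y_obs=np.array([1.5]))
        print("posterior [recharge, logT] mean given head = 1.5:", m)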

  5. Sampling Strategies for Evaluating the Rate of Adventitious Transgene Presence in Non-Genetically Modified Crop Fields.

    PubMed

    Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine

    2017-09-01

    According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
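
    A toy sketch of the ratio-reweighting idea, with a gamma-distributed stand-in for the gene-flow model output; the gain over naive random sampling grows with how much of the rate variation the auxiliary variable explains. All parameters are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        n_field = 50_000

        # Auxiliary variable: modeled cross-pollination probability at each
        # grain's location (a stand-in for the gene-flow model output).
        aux = np.minimum(rng.gamma(shape=0.1, scale=0.2, size=n_field), 1.0)
        truth = rng.random(n_field) < aux      # "true" transgene presence
        true_rate, aux_mean = truth.mean(), aux.mean()

        naive, ratio = [], []
        for _ in range(1000):
            pick = rng.choice(n_field, 400, replace=False)
            y, x = truth[pick], aux[pick]
            naive.append(y.mean())
            # Ratio reweighting: rescale by field-wide vs in-sample aux mean.
            ratio.append(y.mean() * aux_mean / x.mean())

        rmse = lambda e: np.sqrt(np.mean((np.array(e) - true_rate) ** 2))
        print(f"true rate {true_rate:.4f}")
        print(f"naive rmse {rmse(naive):.4f}  ratio rmse {rmse(ratio):.4f}")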

  6. Random variable transformation for generalized stochastic radiative transfer in finite participating slab media

    NASA Astrophysics Data System (ADS)

    El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.

    2015-10-01

    The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specular- and diffusly-reflecting boundaries with linear anisotropic scattering. Random variable transformation (RVT) technique is used to get the complete average for the solution functions, that are represented by the probability-density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation to the input stochastic process (the extinction function of the medium) is applied. This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). Then the transport equation is solved deterministically to get a closed form for the solution as a function of x and L. So, the solution is used to obtain the PDF of the solution functions applying the RVT technique among the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to get the complete analytical averages for some interesting physical quantities, namely, reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the average of the partial heat fluxes for the generalized problem with internal source of radiation are obtained and represented graphically.

  7. Evaluation of two-fold fully conditional specification multiple imputation for longitudinal electronic health record data

    PubMed Central

    Welch, Catherine A; Petersen, Irene; Bartlett, Jonathan W; White, Ian R; Marston, Louise; Morris, Richard W; Nazareth, Irwin; Walters, Kate; Carpenter, James

    2014-01-01

    Most implementations of multiple imputation (MI) of missing data are designed for simple rectangular data structures ignoring temporal ordering of data. Therefore, when applying MI to longitudinal data with intermittent patterns of missing data, some alternative strategies must be considered. One approach is to divide data into time blocks and implement MI independently at each block. An alternative approach is to include all time blocks in the same MI model. With increasing numbers of time blocks, this approach is likely to break down because of co-linearity and over-fitting. The new two-fold fully conditional specification (FCS) MI algorithm addresses these issues, by only conditioning on measurements, which are local in time. We describe and report the results of a novel simulation study to critically evaluate the two-fold FCS algorithm and its suitability for imputation of longitudinal electronic health records. After generating a full data set, approximately 70% of selected continuous and categorical variables were made missing completely at random in each of ten time blocks. Subsequently, we applied a simple time-to-event model. We compared efficiency of estimated coefficients from a complete records analysis, MI of data in the baseline time block and the two-fold FCS algorithm. The results show that the two-fold FCS algorithm maximises the use of data available, with the gain relative to baseline MI depending on the strength of correlations within and between variables. Using this approach also increases plausibility of the missing at random assumption by using repeated measures over time of variables whose baseline values may be missing. PMID:24782349

  8. Towards practical application of sensors for monitoring animal health; design and validation of a model to detect ketosis.

    PubMed

    Steensels, Machteld; Maltz, Ephraim; Bahr, Claudia; Berckmans, Daniel; Antler, Aharon; Halachmi, Ilan

    2017-05-01

    The objective of this study was to design and validate a mathematical model to detect post-calving ketosis. The validation was conducted in four commercial dairy farms in Israel, on a total of 706 multiparous Holstein dairy cows: 203 cows clinically diagnosed with ketosis and 503 healthy cows. A logistic binary regression model was developed, where the dependent variable is categorical (healthy/diseased) and a set of explanatory variables were measured with existing commercial sensors: rumination duration, activity and milk yield of each individual cow. In a first validation step (within-farm), the model was calibrated on the database of each farm separately. Two thirds of the sick cows and an equal number of healthy cows were randomly selected for model validation. The remaining one third of the cows, which did not participate in the model validation, were used for model calibration. In order to overcome the random selection effect, this procedure was repeated 100 times. In a second (between-farms) validation step, the model was calibrated on one farm and validated on another farm. Within-farm accuracy, ranging from 74 to 79%, was higher than between-farm accuracy, ranging from 49 to 72%, in all farms. The within-farm sensitivities ranged from 78 to 90%, and specificities ranged from 71 to 74%. The between-farms sensitivities ranged from 65 to 95%. The developed model can be improved in future research, by employing other variables that can be added; or by exploring other models to achieve greater sensitivity and specificity.
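
    A minimal sketch of such a logistic binary regression on synthetic stand-ins for the three sensor streams, using scikit-learn's LogisticRegression; none of the numbers or coefficients are from the study.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        n = 700

        # Synthetic stand-ins for the three sensor streams (invented units):
        rumination = rng.normal(480, 60, n)     # min/day
        activity = rng.normal(100, 15, n)       # activity index
        milk_yield = rng.normal(38, 6, n)       # kg/day

        # Assume ketotic cows show depressed rumination, activity, and yield.
        logit = (-(rumination - 480) / 40 - (activity - 100) / 12
                 - (milk_yield - 38) / 5 - 1.0)
        ketosis = rng.random(n) < 1 / (1 + np.exp(-logit))

        X = np.column_stack([rumination, activity, milk_yield])
        model = LogisticRegression(max_iter=1000)
        print("CV accuracy:", cross_val_score(model, X, ketosis, cv=5).mean())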

  9. Image discrimination models predict detection in fixed but not random noise

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J. Jr; Beard, B. L.; Watson, A. B. (Principal Investigator)

    1997-01-01

    By means of a two-interval forced-choice procedure, contrast detection thresholds for an aircraft positioned on a simulated airport runway scene were measured with fixed and random white-noise masks. The term fixed noise refers to a constant, or unchanging, noise pattern for each stimulus presentation. The random noise was either the same or different in the two intervals. Contrary to simple image discrimination model predictions, the same random noise condition produced greater masking than the fixed noise. This suggests that observers seem unable to hold a new noisy image for comparison. Also, performance appeared limited by internal process variability rather than by external noise variability, since similar masking was obtained for both random noise types.

  10. Determining Directional Dependency in Causal Associations

    PubMed Central

    Pornprasertmanit, Sunthud; Little, Todd D.

    2014-01-01

    Directional dependency is a method to determine the likely causal direction of effect between two variables. This article aims to critique and improve upon the use of directional dependency as a technique to infer causal associations. We comment on several issues raised by von Eye and DeShon (2012), including: encouraging the use of the signs of skewness and excess kurtosis of both variables, discouraging the use of D'Agostino's K2, and encouraging the use of directional dependency to compare variables only within time points. We offer improved steps for determining directional dependency that fix the problems we note. Next, we discuss how to integrate directional dependency into longitudinal data analysis with two variables. We also examine the accuracy of directional dependency evaluations when several regression assumptions are violated. Directional dependency can suggest the direction of a relation if (a) the regression error in the population is normal, (b) any unobserved explanatory variable correlates with the observed variables at .2 or less, (c) any curvilinear relation between the variables is not strong (standardized regression coefficient ≤ .2), (d) there are no bivariate outliers, and (e) both variables are continuous. PMID:24683282
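
    One simple variant of the directional-dependency idea (a sketch, not the authors' exact procedure): when the true predictor is skewed and the error is normal, residuals from the correct regression direction are closer to symmetric than residuals from the reverse regression.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        n = 50_000

        # True direction: skewed x causes y, with normal error.
        x = rng.exponential(1.0, n)
        y = 0.7 * x + rng.normal(0.0, 1.0, n)

        def residual_skew(a, b):
            """|skewness| of the residuals when b is regressed on a."""
            slope, intercept = np.polyfit(a, b, 1)
            return abs(stats.skew(b - (slope * a + intercept)))

        # Correct direction: residuals inherit the normal error (skew ~ 0);
        # the reverse regression leaves distinctly skewed residuals.
        print("residual skew, y~x:", residual_skew(x, y))
        print("residual skew, x~y:", residual_skew(y, x))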

  11. "Congratulations, you have been randomized into the control group!(?)": issues to consider when recruiting schools for matched-pair randomized control trials of prevention programs.

    PubMed

    Ji, Peter; DuBois, David L; Flay, Brian R; Brechling, Vanessa

    2008-03-01

    Recruiting schools into a matched-pair randomized control trial (MP-RCT) to evaluate the efficacy of a school-level prevention program presents challenges for researchers. We considered which of 2 procedures would be most effective for recruiting schools into the study and assigning them to conditions. In 1 procedure (recruit and match/randomize), we would recruit schools and match them prior to randomization, and in the other (match/randomize and recruitment), we would match schools and randomize them prior to recruitment. We considered how each procedure impacted the randomization process and our ability to recruit schools into the study. After implementing the selected procedure, the equivalence of both treatment and control group schools and the participating and nonparticipating schools on school demographic variables was evaluated. We decided on the recruit and match/randomize procedure because we thought it would provide the opportunity to build rapport with the schools and prepare them for the randomization process, thereby increasing the likelihood that they would accept their randomly assigned conditions. Neither the treatment and control group schools nor the participating and nonparticipating schools exhibited statistically significant differences from each other on any of the school demographic variables. Recruitment of schools prior to matching and randomization in an MP-RCT may facilitate the recruitment of schools and thus enhance both the statistical power and the representativeness of study findings. Future research would benefit from the consideration of a broader range of variables (eg, readiness to implement a comprehensive prevention program) both in matching schools and in evaluating their representativeness to nonparticipating schools.

  12. An integrated supply chain model for new products with imprecise production and supply under scenario dependent fuzzy random demand

    NASA Astrophysics Data System (ADS)

    Nagar, Lokesh; Dutta, Pankaj; Jain, Karuna

    2014-05-01

    In the present-day business scenario, rapid changes in market demand, multiple sources of materials, and new manufacturing technologies force many companies to change their supply chain planning in order to tackle real-world uncertainty. The purpose of this paper is to develop a multi-objective two-stage stochastic programming supply chain model that incorporates imprecise production rates and supplier capacity under scenario-dependent fuzzy random demand associated with new product supply chains. The objectives are to maximise the supply chain profit, achieve the desired service level, and minimise financial risk. The proposed model allows simultaneous determination of the optimum supply chain design, procurement and production quantities across the different plants, and trade-offs between inventory and transportation modes for both inbound and outbound logistics. Analogously to chance constraints, we have used the possibility measure to quantify the demand uncertainties, and the model is solved using a fuzzy linear programming approach. An illustration is presented to demonstrate the effectiveness of the proposed model. Sensitivity analysis is performed for maximisation of the supply chain profit with respect to different confidence levels of service, risk, and possibility measure. It is found that when one considers the service level and risk as robustness measures, the variability in profit is reduced.

  13. Immediate changes in widespread pressure pain sensitivity, neck pain, and cervical range of motion after cervical or thoracic thrust manipulation in patients with bilateral chronic mechanical neck pain: a randomized clinical trial.

    PubMed

    Martínez-Segura, Raquel; De-la-Llave-Rincón, Ana I; Ortega-Santiago, Ricardo; Cleland, Joshua A; Fernández-de-Las-Peñas, César

    2012-09-01

    Randomized clinical trial. To compare the effects of cervical versus thoracic thrust manipulation in patients with bilateral chronic mechanical neck pain on pressure pain sensitivity, neck pain, and cervical range of motion (CROM). Evidence suggests that spinal interventions can stimulate descending inhibitory pain pathways. To our knowledge, no study has investigated the neurophysiological effects of thoracic thrust manipulation in individuals with bilateral chronic mechanical neck pain, including widespread changes in pressure sensitivity. Ninety patients (51% female) were randomly assigned to 1 of 3 groups: cervical thrust manipulation on the right, cervical thrust manipulation on the left, or thoracic thrust manipulation. Pressure pain thresholds (PPTs) over the C5-6 zygapophyseal joint, lateral epicondyle, and tibialis anterior muscle, neck pain (11-point numeric pain rating scale), and cervical spine range of motion (CROM) were collected at baseline and 10 minutes after the intervention by an assessor blinded to the treatment allocation of the patients. Mixed-model analyses of covariance were used to examine the effects of the treatment on each outcome variable, with group as the between-subjects variable, time and side as the within-subject variables, and gender as the covariate. The primary analysis was the group-by-time interaction. No significant interactions were found with the mixed-model analyses of covariance for PPT level (C5-6, P>.210; lateral epicondyle, P>.186; tibialis anterior muscle, P>.268), neck pain intensity (P = .923), or CROM (flexion, P = .700; extension, P = .387; lateral flexion, P>.672; rotation, P>.192) as dependent variables. All groups exhibited similar changes in PPT, neck pain, and CROM (all, P<.001). Gender did not influence the main effects or the interaction effects in the analyses of the outcomes (P>.10). The results of the current randomized clinical trial suggest that cervical and thoracic thrust manipulation induce similar changes in PPT, neck pain intensity, and CROM in individuals with bilateral chronic mechanical neck pain. However, changes in PPT and CROM were small and did not surpass their respective minimal detectable change values. Further, because we did not include a control group, we cannot rule out a placebo effect of the thrust interventions on the outcomes. Therapy, level 1b. J Orthop Sports Phys Ther 2012;42(9):806-814. Epub 18 June 2012. doi:10.2519/jospt.2012.4151.

  14. Random vectors and spatial analysis by geostatistics for geotechnical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, D.S.

    1987-08-01

    Geostatistics is extended to the spatial analysis of vector variables by defining the estimation variance and vector variogram in terms of the magnitude of difference vectors. Many random variables in geotechnology are vectors rather than scalars, and structural analysis requires interpolation of sampled variables to construct and characterize structural models. A better local estimator results in higher-quality input models, and geostatistics can provide such estimators: kriging estimators. The efficiency of geostatistics for vector variables is demonstrated in a case study of rock joint orientations in geological formations. The positive cross-validation encourages application of geostatistics to spatial analysis of random vectors in geoscience as well as various geotechnical fields, including optimum site characterization, rock mechanics for mining and civil structures, cavability analysis of block caving, petroleum engineering, and hydrologic and hydraulic modeling.
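
    A minimal sketch of a vector variogram computed from magnitudes of difference vectors, on a toy orientation field; the lag bins, tolerance, and field itself are all illustrative assumptions.

        import numpy as np

        def vector_variogram(coords, vectors, lags, tol):
            """Empirical vector variogram: half the mean squared magnitude of
            the difference vector between samples separated by ~lag."""
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            dv2 = np.sum((vectors[:, None, :] - vectors[None, :, :]) ** 2, axis=-1)
            gamma = []
            for h in lags:
                m = (np.abs(d - h) < tol) & (d > 0)
                gamma.append(0.5 * dv2[m].mean() if m.any() else np.nan)
            return np.array(gamma)

        # Toy joint-orientation field: unit vectors whose angle drifts in x.
        rng = np.random.default_rng(5)
        coords = rng.uniform(0, 100, (300, 2))
        angle = coords[:, 0] / 30 + rng.normal(0, 0.2, 300)
        vectors = np.column_stack([np.cos(angle), np.sin(angle)])
        print(vector_variogram(coords, vectors, lags=[5, 20, 50], tol=2.5))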

  15. Engagement and Retention in Outpatient Alcoholism Treatment for Women

    PubMed Central

    Graff, Fiona S.; Morgan, Thomas J.; Epstein, Elizabeth E.; McCrady, Barbara S.; Cook, Sharon M.; Jensen, Noelle K.; Kelly, Shalonda

    2011-01-01

    Reviews of the dropout literature note significant attrition from addiction treatment. However, consistent predictors have not been identified and few studies have examined factors related to retention and engagement for women in gender-specific treatment. The current study consisted of 102 women and their partners randomized to individual or couples outpatient alcoholism treatment. Women attended more treatment sessions if they were assigned to individual treatment, older, had fewer symptoms of alcohol dependence, had more satisfying marital relationships, had spouses who drank, and had matched preference for treatment condition. Women were more engaged in treatment (i.e., completed more assigned homework) if they had fewer children at home, fewer alcohol dependence symptoms, later age of onset of alcohol diagnosis, more satisfying marital relationships, and spouses who accepted or encouraged their drinking. Results highlight important associations of treatment and relationship variables with treatment retention and engagement. PMID:19444731

  16. Incorporating signal-dependent noise for hyperspectral target detection

    NASA Astrophysics Data System (ADS)

    Morman, Christopher J.; Meola, Joseph

    2015-05-01

    The majority of hyperspectral target detection algorithms are developed from statistical data models employing stationary background statistics or white Gaussian noise models. Stationary background models are inaccurate as a result of two separate physical processes. First, varying background classes often exist in the imagery that possess different clutter statistics. Many algorithms can account for this variability through the use of subspaces or clustering techniques. The second physical process, which is often ignored, is a signal-dependent sensor noise term. For photon counting sensors that are often used in hyperspectral imaging systems, sensor noise increases as the measured signal level increases as a result of Poisson random processes. This work investigates the impact of this sensor noise on target detection performance. A linear noise model is developed describing sensor noise variance as a linear function of signal level. The linear noise model is then incorporated for detection of targets using data collected at Wright Patterson Air Force Base.
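
    A sketch of the linear signal-dependent noise model described here: Poisson shot noise plus Gaussian read noise gives a measurement variance that is linear in the signal level, recoverable by a straight-line fit. The signal levels and read-noise value are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(6)

        # Photon-counting sketch: Var(measurement) = read_var + signal,
        # i.e., variance linear in signal level.
        levels = np.linspace(100, 5000, 25)       # illustrative signal levels
        signal = np.repeat(levels, 2000)
        measured = rng.poisson(signal) + rng.normal(0, 5, signal.size)

        sample = measured.reshape(25, 2000)
        means, varis = sample.mean(axis=1), sample.var(axis=1)
        b, a = np.polyfit(means, varis, 1)        # var = a + b * signal
        print(f"fitted: var = {a:.1f} + {b:.3f} * signal "
              f"(expect ~25 + 1.000 * signal)")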

  17. Evaluating Nicotine Craving, Withdrawal, and Substance Use as Mediators of Smoking Cessation in Cocaine- and Methamphetamine-Dependent Patients.

    PubMed

    Magee, Joshua C; Lewis, Daniel F; Winhusen, Theresa

    2016-05-01

    Smoking is highly prevalent in substance dependence, but smoking-cessation treatment (SCT) is more challenging in this population. To increase the success of smoking cessation services, it is important to understand potential therapeutic targets like nicotine craving that have meaningful but highly variable relationships with smoking outcomes. This study characterized the presence, magnitude, and specificity of nicotine craving as a mediator of the relationship between SCT and smoking abstinence in the context of stimulant-dependence treatment. This study was a secondary analysis of a randomized, 10-week trial conducted at 12 outpatient SUD treatment programs. Adults with cocaine and/or methamphetamine dependence (N = 538) were randomized to SUD treatment as usual (TAU) or TAU+SCT. Participants reported nicotine craving, nicotine withdrawal symptoms, and substance use in the week following a uniform quit attempt of the TAU+SCT group, and self-reported smoking 7-day point prevalence abstinence (verified by carbon monoxide) at end-of-treatment. Bootstrapped regression models indicated that, as expected, nicotine craving following a quit attempt mediated the relationship between SCT and end-of-treatment smoking point prevalence abstinence (mediation effect = 0.09, 95% CI = 0.04 to 0.14, P < .05, 14% of total effect). Nicotine withdrawal symptoms and substance use were not significant mediators (Ps > .05, <1% of total effect). This pattern held for separate examinations of cocaine and methamphetamine dependence. Nicotine craving accounts for a small but meaningful portion of the relationship between smoking-cessation treatment and smoking abstinence during SUD treatment. Nicotine craving following a quit attempt may be a useful therapeutic target for increasing the effectiveness of smoking-cessation treatment in substance dependence. © The Author 2015. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
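
    A sketch of a bootstrapped mediation analysis in the spirit of the study, on synthetic data and with a linear-probability approximation in place of the paper's regression models; all coefficients and the sample size layout are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(7)
        n, n_boot = 538, 5000

        # Synthetic stand-ins: treatment -> craving -> abstinence.
        treat = rng.integers(0, 2, n)
        craving = -0.5 * treat + rng.normal(0, 1, n)   # SCT lowers craving
        abstain = (0.2 * treat - 0.6 * craving + rng.normal(0, 1, n)) > 0.8

        def indirect_effect(t, m, y):
            a = np.polyfit(t, m, 1)[0]            # path a: treatment -> mediator
            X = np.column_stack([np.ones_like(t), t, m])
            # path b: mediator -> outcome, treatment held fixed
            b = np.linalg.lstsq(X, y.astype(float), rcond=None)[0][2]
            return a * b

        boot = np.empty(n_boot)
        for i in range(n_boot):
            k = rng.integers(0, n, n)             # resample with replacement
            boot[i] = indirect_effect(treat[k], craving[k], abstain[k])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"indirect effect a*b: "
              f"{indirect_effect(treat, craving, abstain):.3f} "
              f"(95% bootstrap CI {lo:.3f} to {hi:.3f})")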

  18. Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin

    2006-01-01

    This study presents the random-effects rating scale model (RE-RSM) which takes into account randomness in the thresholds over persons by treating them as random-effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…

  19. Patient satisfaction and willingness to return to the provider among women undergoing gynecological surgery.

    PubMed

    Schoenfelder, Tonio; Schaal, Tom; Klewer, Jörg; Kugler, Joachim

    2014-10-01

    To identify factors associated with 'patient satisfaction' and 'willingness to return to the provider' in gynecology and to assess similarities as well as differences between the two concepts. Study data were obtained from 968 randomly selected gynecology patients discharged from 22 hospitals who responded to a mailed survey. The validated instrument consisted of 37 items and assessed medical and service aspects of care, patient and visit characteristics. The dependent variables consisted of ratings of willingness to return to the provider and overall satisfaction. Bivariate and multivariate techniques were used to reveal relationships between indicators and both dependent variables. The multivariate analyses identified individualized medical care, kindness of medical practitioners, treatment outcome and organization of discharge as the most consistent predictors of the patients' likelihood to return and overall satisfaction. Differences between both concepts pertained to the significance of service variables (cleanliness and quality of food) for patient satisfaction and visit-related characteristics (length of stay and occurrence of complications) for willingness to return. Study findings suggest that patient satisfaction and willingness to return to the provider do not reflect the same concepts. Although service aspects such as quality of food influence satisfaction ratings, they do not increase the likelihood that patients choose the same hospital in case of another treatment. Communication between patients and medical practitioners is highly important. Revealed predictors of both concepts are alterable by healthcare professionals and should be focused on to enhance patient satisfaction and to increase the probability patients return to their provider.

  20. Modeling the response of a standard accretion disc to stochastic viscous fluctuations

    NASA Astrophysics Data System (ADS)

    Ahmad, Naveel; Misra, Ranjeev; Iqbal, Naseer; Maqbool, Bari; Hamid, Mubashir

    2018-01-01

    The observed variability of X-ray binaries over a wide range of time-scales can be understood in the framework of a stochastic propagation model, where viscous fluctuations at different radii induce accretion rate variability that propagates inwards to the X-ray producing region. The scenario successfully explains the power spectra, the linear rms-flux relation, as well as the time-lag between photons of different energies. The predictions of this model have been obtained using approximate analytical solutions or empirically motivated models which take into account the effect of this propagating variability on the radiative processes of complex accretion flows. Here, we study the variation of the accretion rate due to such viscous fluctuations using a hydrodynamical code for the standard geometrically thin, gas pressure dominated α-disc with a zero-torque boundary condition. Our results confirm earlier findings that the time-lag between a perturbation and the resultant inner accretion rate variation depends on the frequency (or time-period) of the perturbation. Here we have quantified that the time-lag scales as t_lag ∝ f^(-0.54) for time-periods less than the viscous time-scale of the perturbation radius, and is nearly constant otherwise. This, coupled with radiative processes, would produce the observed frequency-dependent time-lag between different energy bands. We also confirm that if there are random Gaussian fluctuations of the α-parameter at different radii, the resultant inner accretion rate has a power spectrum which is a power-law.

  1. Randomized trial of intermittent or continuous amnioinfusion for variable decelerations.

    PubMed

    Rinehart, B K; Terrone, D A; Barrow, J H; Isler, C M; Barrilleaux, P S; Roberts, W E

    2000-10-01

    To determine whether continuous or intermittent bolus amnioinfusion is more effective in relieving variable decelerations. Patients with repetitive variable decelerations were randomized to an intermittent bolus or continuous amnioinfusion. The intermittent bolus infusion group received boluses of 500 mL of normal saline, each over 30 minutes, with boluses repeated if variable decelerations recurred. The continuous infusion group received a bolus infusion of 500 mL of normal saline over 30 minutes and then 3 mL per minute until delivery occurred. The ability of the amnioinfusion to abolish variable decelerations was analyzed, as were maternal demographic and pregnancy outcome variables. Power analysis indicated that 64 patients would be required. Thirty-five patients were randomized to intermittent infusion and 30 to continuous infusion. There were no differences between groups in terms of maternal demographics, gestational age, delivery mode, neonatal outcome, median time to resolution of variable decelerations, or the number of times variable decelerations recurred. The median volume infused in the intermittent infusion group (500 mL) was significantly less than that in the continuous infusion group (905 mL, P =.003). Intermittent bolus amnioinfusion is as effective as continuous infusion in relieving variable decelerations in labor. Further investigation is necessary to determine whether either of these techniques is associated with increased occurrence of rare complications such as cord prolapse or uterine rupture.

  2. Naltrexone and Cognitive Behavioral Therapy for the Treatment of Alcohol Dependence

    PubMed Central

    Baros, AM; Latham, PK; Anton, RF

    2008-01-01

    Background Sex differences in pharmacotherapy for alcoholism are a topic of concern following publications suggesting that naltrexone, one of the longest-approved treatments of alcoholism, is not as effective in women as in men. This study combined two randomized placebo-controlled clinical trials utilizing similar methodologies and personnel, amalgamating the data to evaluate sex effects in a reasonably sized sample. Methods 211 alcoholics (57 female; 154 male) were randomized to the naltrexone/CBT or placebo/CBT arm of the two clinical trials analyzed. Baseline variables were examined for differences between sex and treatment groups via analysis of variance (ANOVA) for continuous variables or the chi-square test for categorical variables. All initial outcome analysis was conducted under an intent-to-treat analysis plan. Effect sizes for naltrexone over placebo were determined by Cohen's d. Results The effect size of naltrexone over placebo for the following outcome variables was similar in men and women (% days abstinent (PDA) d=0.36, % heavy drinking days (PHDD) d=0.36, and total standard drinks (TSD) d=0.36). Only for men were the differences significant, secondary to the larger sample size (PDA p=0.03; PHDD p=0.03; TSD p=0.04). There were a few variables (GGT change from baseline to week 12: men d=0.36, p=0.05; women d=0.20, p=0.45; and drinks per drinking day: men d=0.36, p=0.05; women d=0.28, p=0.34) where the naltrexone effect size for men was greater than for women. In women, naltrexone tended to increase continuous abstinent days before a first drink (women d=0.46, p=0.09; men d=0.00, p=0.44). Conclusions The effect size of naltrexone over placebo appeared similar in women and men in our hands, suggesting that reported sex differences in naltrexone response might have to do with sample size and/or endpoint drinking variables rather than any inherent pharmacological or biological differences in response. PMID:18336635
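
    For reference, the effect sizes quoted are Cohen's d with a pooled standard deviation; a minimal sketch follows, with group means, SDs, and sizes invented to land near d = 0.36 (they are not the trial's data).

        import numpy as np

        def cohens_d(a, b):
            """Cohen's d with a pooled standard deviation."""
            na, nb = len(a), len(b)
            pooled = np.sqrt(((na - 1) * np.var(a, ddof=1)
                              + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
            return (np.mean(a) - np.mean(b)) / pooled

        rng = np.random.default_rng(8)
        naltrexone = rng.normal(62, 25, 104)   # illustrative %days-abstinent
        placebo = rng.normal(53, 25, 107)
        print(f"d = {cohens_d(naltrexone, placebo):.2f}")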

  3. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    PubMed

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of local search back into the genetic algorithm with a minimum relocation operation of the genes' random key numbers; this is the second contribution of the paper. The third contribution is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP, which lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms.
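
    A sketch of the random-key idea behind such chromosomes (one common decoding, not necessarily the paper's exact GAspLA encoding): the integer part of each key assigns a subjob to a machine, and sorting the fractional parts gives the processing order on each machine.

        import numpy as np

        def decode_random_keys(keys, n_machines):
            """Decode one random-key chromosome into a machine schedule."""
            machines = np.minimum(keys.astype(int), n_machines - 1)
            order = np.argsort(keys - keys.astype(int))   # fractional part
            schedule = {m: [] for m in range(n_machines)}
            for job in order:
                schedule[machines[job]].append(job)
            return schedule

        rng = np.random.default_rng(9)
        keys = rng.uniform(0, 3, size=8)   # 8 subjobs, 3 unrelated machines
        print(decode_random_keys(keys, 3))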

  4. Random diffusivity from stochastic equations: comparison of two models for Brownian yet non-Gaussian diffusion

    NASA Astrophysics Data System (ADS)

    Sposini, Vittoria; Chechkin, Aleksei V.; Seno, Flavio; Pagnini, Gianni; Metzler, Ralf

    2018-04-01

    A considerable number of systems have recently been reported in which Brownian yet non-Gaussian dynamics was observed. These are processes characterised by a linear growth in time of the mean squared displacement, yet the probability density function of the particle displacement is distinctly non-Gaussian, and often of exponential (Laplace) shape. This apparently ubiquitous behaviour observed in very different physical systems has been interpreted as resulting from diffusion in inhomogeneous environments and mathematically represented through a variable, stochastic diffusion coefficient. Indeed different models describing a fluctuating diffusivity have been studied. Here we present a new view of the stochastic basis describing time-dependent random diffusivities within a broad spectrum of distributions. Concretely, our study is based on the very generic class of the generalised Gamma distribution. Two models for the particle spreading in such random diffusivity settings are studied. The first belongs to the class of generalised grey Brownian motion while the second follows from the idea of diffusing diffusivities. The two processes exhibit significant characteristics which reproduce experimental results from different biological and physical systems. We promote these two physical models for the description of stochastic particle motion in complex environments.
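
    A superstatistical sketch of the diffusing-diffusivity picture, using a plain Gamma (here exponential) diffusivity distribution as a special case of the generalised Gamma class: the MSD grows linearly while displacements are distinctly non-Gaussian (Laplace-like). Parameters are arbitrary.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(10)
        n_traj, n_steps, dt = 20_000, 100, 0.01

        # Each trajectory gets a diffusivity drawn from a Gamma distribution,
        # then diffuses as ordinary Brownian motion with that diffusivity.
        D = rng.gamma(shape=1.0, scale=1.0, size=n_traj)
        steps = rng.normal(0.0, np.sqrt(2 * D[:, None] * dt),
                           (n_traj, n_steps))
        x = steps.cumsum(axis=1)

        # Linear MSD in time (Brownian scaling) ...
        msd = (x ** 2).mean(axis=0)
        times = dt * np.arange(1, n_steps + 1)
        print("MSD slope:", np.polyfit(times, msd, 1)[0])   # ~2 * E[D]
        # ... yet a distinctly non-Gaussian displacement distribution.
        print("excess kurtosis:", stats.kurtosis(x[:, -1]))  # ~3 (Laplace)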

  5. Music therapy improves the mood of patients undergoing hematopoietic stem cells transplantation (controlled randomized study).

    PubMed

    Dóro, Carlos Antonio; Neto, José Zanis; Cunha, Rosemyriam; Dóro, Maribel Pelaez

    2017-03-01

    The allogeneic hematopoietic stem cell transplantation (allo-HSCT) is a therapeutic medical treatment for various neoplastic hematologic, congenital, genetic, or acquired disorders. In this procedure, which combines high-dose chemotherapy and/or radiotherapy and has a high degree of cytotoxicity, the patient experiences solitary confinement, which causes psychological distress, pain, anxiety, and mood disorders and can lead to depression. Music therapy was applied with the purpose of mitigating the effects of this social confinement. This is a randomized controlled trial. Patients (n = 100) were selected randomly: n = 50 for the experimental music therapy group (EMG) and n = 50 for the control group (CG), which received the standard treatment. Live music interventions were applied using music therapy techniques. Assessment and quantification were made using the visual analog scale (VAS). The dependent variables were pain, anxiety, and mood of the patients. The Mann-Whitney test (significance at p < 0.05) was applied to compare the groups; mood improved significantly in the EMG. Music therapy proved to be a strong ally in the treatment of patients undergoing allo-HSCT, providing bio-psychosocial welfare.
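
    The group comparison uses the Mann-Whitney U test; a minimal sketch with invented 0-10 VAS mood scores (not the trial's data):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(11)
        # Illustrative 0-10 VAS mood scores, higher = better mood.
        emg = np.clip(rng.normal(7.0, 1.8, 50).round(), 0, 10)  # music therapy
        cg = np.clip(rng.normal(5.8, 1.8, 50).round(), 0, 10)   # standard care

        u, p = stats.mannwhitneyu(emg, cg, alternative="two-sided")
        print(f"U = {u:.0f}, p = {p:.4f}")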

  6. Decision tree modeling using R.

    PubMed

    Zhang, Zhongheng

    2016-08-01

    In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable; the process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. Because a single grown tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem; its sources of diversity are random sampling of observations and the restricted set of input variables available at each split. Finally, I introduce R functions to perform model-based recursive partitioning, which incorporates recursive partitioning into conventional parametric model building.
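
    The tutorial itself works in R; as a language-neutral illustration of the single-tree versus random-forest contrast described above, here is a scikit-learn sketch (dataset and hyperparameters are arbitrary):

        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_breast_cancer(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Single tree: recursive binary partitioning on the strongest predictor.
        tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

        # Random forest: many trees grown on bootstrap samples, each split
        # restricted to a random subset of input variables (the two sources
        # of diversity named in the abstract).
        forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

        print("tree  :", tree.score(X_te, y_te))
        print("forest:", forest.score(X_te, y_te))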

  7. Variable-length analog of Stavskaya process: A new example of misleading simulation

    NASA Astrophysics Data System (ADS)

    Ramos, A. D.; Silva, F. S. G.; Sousa, C. S.; Toom, A.

    2017-05-01

    This article presents a new example intended to showcase the limitations of computer simulations in the study of random processes with local interaction. For this purpose, we examine a new version of the well-known Stavskaya process, which is a discrete-time analog of the contact process. Like the bulk of random processes studied till now, the Stavskaya process is constant-length, that is, its components do not appear or disappear in the course of its functioning. The process which we study here and call Variable Stavskaya, VS, is similar to Stavskaya: it is discrete-time; its states are bi-infinite sequences whose terms take only two values (denoted here as "minus" and "plus"); and the measure concentrated in the configuration "all pluses" is invariant. However, it is a variable-length process, which means that its components, also called particles, may appear and disappear under its action. The operator VS is a composition of the following two operators. The first operator, called "birth," depends on a real parameter β; it creates a new component in the state "plus" between every two neighboring components with probability β, independently of what happens at other places. The second operator, called "murder," depends on a real parameter α and acts in the following way: whenever a plus is the left neighbor of a minus, this plus disappears (as if murdered by the minus which is its right neighbor) with probability α, independently of what happens to other particles. We prove, for any α < 1, any β > 0, and any initial measure μ, that the sequence μ(VS)^t (the result of t iterative applications of VS to μ) tends to the measure δ⊕ (concentrated in "all pluses") as t → ∞. Such behavior is often called ergodic. However, the Monte Carlo simulations and mean-field approximations which we performed behaved as if μ(VS)^t tended to δ⊕ much more slowly for some α, β, μ than for others. Based on these numerical results, we conjecture that VS has phases, but not in the simple sense in which the classical Stavskaya process does.
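
    A finite-window Monte Carlo sketch of the VS dynamics (the true process lives on a bi-infinite sequence, so the boundary handling, window size, and parameter values here are illustrative only):

        import random

        rng = random.Random(0)

        def birth(state, beta):
            """Between every two neighboring components, create a new '+'
            with probability beta, independently of the rest."""
            out = []
            for i, s in enumerate(state):
                out.append(s)
                if i + 1 < len(state) and rng.random() < beta:
                    out.append('+')
            return out

        def murder(state, alpha):
            """Every '+' whose right neighbor is '-' disappears with
            probability alpha; decisions use the pre-murder configuration."""
            return [s for i, s in enumerate(state)
                    if not (s == '+' and i + 1 < len(state)
                            and state[i + 1] == '-' and rng.random() < alpha)]

        state = list('+-' * 200)   # finite window of the bi-infinite sequence
        for t in range(1, 21):
            state = murder(birth(state, beta=0.3), alpha=0.9)
            if t % 5 == 0:
                # minuses are never removed, but births dilute them toward 0
                print(t, 'fraction of minuses:',
                      round(state.count('-') / len(state), 4))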

  8. Small-world bias of correlation networks: From brain to climate

    NASA Astrophysics Data System (ADS)

    Hlinka, Jaroslav; Hartman, David; Jajcay, Nikola; Tomeček, David; Tintěra, Jaroslav; Paluš, Milan

    2017-03-01

    Complex systems are commonly characterized by the properties of their graph representation. Dynamical complex systems are then typically represented by a graph of temporal dependencies between time series of state variables of their subunits. It has been shown recently that graphs constructed in this way tend to have relatively clustered structure, potentially leading to spurious detection of small-world properties even in the case of systems with no or randomly distributed true interactions. However, the strength of this bias depends heavily on a range of parameters and its relevance for real-world data has not yet been established. In this work, we assess the relevance of the bias using two examples of multivariate time series recorded in natural complex systems. The first is the time series of local brain activity as measured by functional magnetic resonance imaging in resting healthy human subjects, and the second is the time series of average monthly surface air temperature coming from a large reanalysis of climatological data over the period 1948-2012. In both cases, the clustering in the thresholded correlation graph is substantially higher compared with a realization of a density-matched random graph, while the shortest paths are relatively short, thus showing the distinguishing features of small-world structure. However, comparable or even stronger small-world properties were reproduced in correlation graphs of model processes with randomly scrambled interconnections. This suggests that the small-world properties of the correlation matrices of these real-world systems indeed do not genuinely reflect the properties of the underlying interaction structure, but rather result from the inherent properties of the correlation matrix.
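
    The bias is easy to reproduce in a few lines: the hedged sketch below builds a thresholded correlation graph from independent autocorrelated surrogate series (so there are no true interactions at all) and compares its clustering with a density-matched random graph; the sizes, autocorrelation, and threshold are arbitrary:

        import networkx as nx
        import numpy as np

        rng = np.random.default_rng(0)
        n_nodes, n_time = 60, 500

        # Independent autocorrelated series: no true interactions.
        x = rng.standard_normal((n_nodes, n_time))
        for t in range(1, n_time):
            x[:, t] += 0.9 * x[:, t - 1]

        corr = np.abs(np.corrcoef(x))
        np.fill_diagonal(corr, 0)
        adj = (corr > np.quantile(corr, 0.90)).astype(int)  # keep strongest edges
        g = nx.from_numpy_array(adj)
        r = nx.gnm_random_graph(n_nodes, g.number_of_edges(), seed=0)  # density-matched

        print('clustering, correlation graph:', round(nx.average_clustering(g), 3))
        print('clustering, random graph     :', round(nx.average_clustering(r), 3))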

  9. Small-world bias of correlation networks: From brain to climate.

    PubMed

    Hlinka, Jaroslav; Hartman, David; Jajcay, Nikola; Tomeček, David; Tintěra, Jaroslav; Paluš, Milan

    2017-03-01

    Complex systems are commonly characterized by the properties of their graph representation. Dynamical complex systems are then typically represented by a graph of temporal dependencies between time series of state variables of their subunits. It has been shown recently that graphs constructed in this way tend to have relatively clustered structure, potentially leading to spurious detection of small-world properties even in the case of systems with no or randomly distributed true interactions. However, the strength of this bias depends heavily on a range of parameters and its relevance for real-world data has not yet been established. In this work, we assess the relevance of the bias using two examples of multivariate time series recorded in natural complex systems. The first is the time series of local brain activity as measured by functional magnetic resonance imaging in resting healthy human subjects, and the second is the time series of average monthly surface air temperature coming from a large reanalysis of climatological data over the period 1948-2012. In both cases, the clustering in the thresholded correlation graph is substantially higher compared with a realization of a density-matched random graph, while the shortest paths are relatively short, thus showing the distinguishing features of small-world structure. However, comparable or even stronger small-world properties were reproduced in correlation graphs of model processes with randomly scrambled interconnections. This suggests that the small-world properties of the correlation matrices of these real-world systems indeed do not genuinely reflect the properties of the underlying interaction structure, but rather result from the inherent properties of the correlation matrix.

  10. Random walks with shape prior for cochlea segmentation in ex vivo μCT.

    PubMed

    Ruiz Pujadas, Esmeralda; Kjer, Hans Martin; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel Angel

    2016-09-01

    Cochlear implantation is a safe and effective surgical procedure to restore hearing in deaf patients. However, the level of restoration achieved may vary due to differences in anatomy, implant type and surgical access. In order to reduce the variability of the surgical outcomes, we previously proposed the use of a high-resolution model built from μCT images and then adapted to patient-specific clinical CT scans. As the accuracy of the model is dependent on the precision of the original segmentation, it is extremely important to have accurate μCT segmentation algorithms. We propose a new framework for cochlea segmentation in ex vivo μCT images using random walks, where a distance-based shape prior is combined with a region term estimated by a Gaussian mixture model. The prior is also weighted by a confidence map to adjust its influence according to the strength of the image contour. The random walks algorithm is performed iteratively, and the prior mask is realigned in every iteration. We tested the proposed approach on ten μCT data sets and compared it with other random walks-based segmentation techniques such as guided random walks (Eslami et al. in Med Image Anal 17(2):236-253, 2013) and constrained random walks (Li et al. in Advances in image and video technology. Springer, Berlin, pp 215-226, 2012). Our approach demonstrated higher accuracy due to the probability density model constituted by the region term and the shape prior information weighted by a confidence map. The weighted combination of the distance-based shape prior with a region term in random walks provides accurate segmentations of the cochlea. The experiments suggest that the proposed approach is robust for cochlea segmentation.
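
    The paper's contribution is the shape prior and the GMM region term; the baseline random-walker step itself is available in scikit-image, as in this toy sketch on synthetic data (the image, seed placement, and beta are arbitrary):

        import numpy as np
        from skimage.segmentation import random_walker

        rng = np.random.default_rng(0)

        # Synthetic 2-D "volume": a bright disc on a noisy background.
        yy, xx = np.mgrid[:128, :128]
        image = ((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2).astype(float)
        image += 0.8 * rng.standard_normal(image.shape)

        # Seeds: 1 = object, 2 = background, 0 = unlabeled (to be resolved).
        labels = np.zeros(image.shape, dtype=int)
        labels[60:68, 60:68] = 1
        labels[:8, :8] = 2

        segmentation = random_walker(image, labels, beta=130)
        print("object pixels:", int((segmentation == 1).sum()))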

  11. Placement matching of alcohol-dependent patients based on a standardized intake assessment: rationale and design of a randomized controlled trial.

    PubMed

    Buchholz, Angela; Friedrichs, Anke; Berner, Michael; König, Hans-Helmut; Konnopka, Alexander; Kraus, Ludwig; Kriston, Levente; Küfner, Heinrich; Piontek, Daniela; Rist, Fred; Röhrig, Jeanette

    2014-10-14

    Despite considerable research on substance-abuse placement matching, evidence is still inconclusive. The aims of this exploratory trial are to evaluate (a) the effects of following matching guidelines on health-care costs and heavy drinking, and (b) factors affecting the implementation of matching guidelines in the treatment of alcohol-dependent patients. A total of 286 alcohol-dependent patients entering one of four participating detoxification units and having no arrangements for further treatment will be recruited. During the first week of treatment, all patients will be administered the Measurements in the Addictions for Triage and Evaluation (MATE), the European Quality of Life-Five Dimensions health status questionnaire (EQ-5D), and the Client Socio-Demographic and Service Receipt Inventory-European Version (CSSRI-EU). Patients who are randomly allocated to the intervention group will receive feedback regarding their assessment results, including clear recommendations for subsequent treatment. Patients of the control group will receive treatment as usual and, if requested, global feedback regarding their assessment results, but no recommendations for subsequent treatment. At discharge, treatment outcome and referral decisions will be recorded. Six months after discharge, patients will be administered MATE-Outcome, EQ-5D, and CSSRI-EU during a telephone interview. This trial will provide evidence on the effects and costs of using placement-matching guidelines based on a standardized assessment with structured feedback in the treatment of alcohol-dependent patients. A process evaluation will be conducted to facilitate better understanding of the relationship between the use of guidelines, outcomes, and potential mediating variables. German Clinical Trials Register DRKS00005035. Registered 03 June 2013.

  12. Electrical Switching of Perovskite Thin-Film Resistors

    NASA Technical Reports Server (NTRS)

    Liu, Shangqing; Wu, Juan; Ignatiev, Alex

    2010-01-01

    Electronic devices that exploit electrical switching of physical properties of thin films of perovskite materials (especially colossal magnetoresistive materials) have been invented. Unlike some related prior devices, these devices function at room temperature and do not depend on externally applied magnetic fields. Devices of this type can be designed to function as sensors (exhibiting varying electrical resistance in response to varying temperature, magnetic field, electric field, and/or mechanical pressure) and as elements of electronic memories. The underlying principle is that the application of one or more short electrical pulse(s) can induce a reversible, irreversible, or partly reversible change in the electrical, thermal, mechanical, and magnetic properties of a thin perovskite film. The energy in the pulse must be large enough to induce the desired change but not so large as to destroy the film. Depending on the requirements of a specific application, the pulse(s) can have any of a large variety of waveforms (e.g., square, triangular, or sine) and be of positive, negative, or alternating polarity. In some applications, it could be necessary to use multiple pulses to induce successive incremental physical changes. In one class of applications, electrical pulses of suitable shapes, sizes, and polarities are applied to vary the detection sensitivities of sensors. Another class of applications arises in electronic circuits in which certain resistance values are required to be variable: Incorporating the affected resistors into devices of the present type makes it possible to control their resistances electrically over wide ranges, and the lifetimes of electrically variable resistors exceed those of conventional mechanically variable resistors. Another and potentially the most important class of applications is that of resistance-based nonvolatile-memory devices, such as a resistance random access memory (RRAM) described in the immediately following article, Electrically Variable Resistive Memory Devices (MFS-32511-1).

  13. Methodological Overview of an African American Couple-Based HIV/STD Prevention Trial

    PubMed Central

    2010-01-01

    Objective To provide an overview of the NIMH Multisite HIV/STD Prevention Trial for African American Couples conducted in four urban areas: Atlanta, Los Angeles, New York, and Philadelphia. The rationale, study design methods, proposed data analyses, and study management are described. Design This is a two-arm randomized trial, implementing a modified randomized block design, to evaluate the efficacy of a couples-based intervention designed for HIV-serodiscordant African American couples. Methods The study phases consisted of formative work, pilot studies, and a randomized clinical trial. The sample is 535 HIV-serodiscordant heterosexual African American couples. There are two theoretically derived behavioral interventions with eight group and individual sessions: the Eban HIV/STD Risk Reduction Intervention (treatment) versus the Eban Health Promotion Intervention (control). The treatment intervention was couples-based and focused on HIV/STD risk reduction, while the control was individual-based and focused on health promotion. The two study conditions were structurally similar in length and types of activities. At baseline, participants completed an Audio Computer-Assisted Self-Interview (ACASI) as well as an interviewer-administered questionnaire, and provided biological specimens to assess for STDs. Similar follow-up assessments were conducted immediately after the intervention, at 6 months, and at 12 months. Results The trial results will be analyzed across the four sites by randomization assignment. Generalized estimating equations (GEE) and mixed effects modeling (MEM) are planned to test: (1) the effects of the intervention on STD incidence and condom use as well as on mediator variables of these outcomes, and (2) whether the effects of the intervention differ depending on key moderator variables (e.g., gender of the HIV-seropositive partners, length of relationship, psychological distress, sexual abuse history, and substance abuse history). Conclusions The lessons learned from the design and conduct of this clinical trial provide guidelines for future couples-based clinical trials in HIV/STD risk reduction and can be generalized to other couples-based behavioral interventions. PMID:18724188

  14. Do Evidence-Based Youth Psychotherapies Outperform Usual Clinical Care? A Multilevel Meta-Analysis

    PubMed Central

    Weisz, John R.; Kuppens, Sofie; Eckshtain, Dikla; Ugueto, Ana M.; Hawley, Kristin M.; Jensen-Doss, Amanda

    2013-01-01

    Context Research across four decades has produced numerous empirically tested evidence-based psychotherapies (EBPs) for youth psychopathology, developed to improve upon usual clinical interventions. Advocates argue that these should replace usual care; but do the EBPs produce better outcomes than usual care? Objective This question was addressed in a meta-analysis of 52 randomized trials directly comparing EBPs to usual care. Analyses assessed the overall effect of EBPs vs. usual care, and candidate moderators; multilevel analysis was used to address the dependency among effect sizes that is common but typically unaddressed in psychotherapy syntheses. Data Sources The PubMed, PsycINFO, and Dissertation Abstracts International databases were searched for studies from January 1, 1960 - December 31, 2010. Study Selection 507 randomized youth psychotherapy trials were identified. Of these, the 52 studies that compared EBPs to usual care were included in the meta-analysis. Data Extraction Sixteen variables (participant, treatment, and study characteristics) were extracted from each study, and effect sizes were calculated for all EBP versus usual care comparisons. Data Synthesis EBPs outperformed usual care. Mean effect size was 0.29; the probability was 58% that a randomly selected youth receiving an EBP would be better off after treatment than a randomly selected youth receiving usual care. Three variables moderated treatment benefit: effect sizes decreased for studies conducted outside North America, for studies in which all participants were impaired enough to qualify for diagnoses, and for outcomes reported by people other than the youths and parents in therapy. For certain key groups (e.g., studies using clinically referred samples and diagnosed samples), significant EBP effects were not demonstrated. Conclusions EBPs outperformed usual care, but the EBP advantage was modest and moderated by youth, location, and assessment characteristics. There is room for improvement in EBPs, both in the magnitude and range of their benefit, relative to usual care. PMID:23754332

  15. Spike Pattern Structure Influences Synaptic Efficacy Variability under STDP and Synaptic Homeostasis. II: Spike Shuffling Methods on LIF Networks

    PubMed Central

    Bi, Zedong; Zhou, Changsong

    2016-01-01

    Synapses may undergo variable changes during plasticity because of the variability of spike patterns, such as temporal stochasticity and spatial randomness. Here, we refer to the variability of synaptic weight changes during plasticity as efficacy variability. In this paper, we investigate how four aspects of spike pattern statistics (i.e., synchronous firing, burstiness/regularity, heterogeneity of rates, and heterogeneity of cross-correlations) influence the efficacy variability under pair-wise additive spike-timing dependent plasticity (STDP) and synaptic homeostasis (the mean strength of plastic synapses into a neuron is bounded), by applying spike shuffling methods to spike patterns self-organized by a network of excitatory and inhibitory leaky integrate-and-fire (LIF) neurons. With the increase of the decay time scale of the inhibitory synaptic currents, the LIF network undergoes a transition from an asynchronous state to a weakly synchronous state and then to a synchronous bursting state. We first shuffle these spike patterns using a variety of methods, each designed to selectively change a specific pattern statistic, and then investigate the change of efficacy variability of the synapses under STDP and synaptic homeostasis when the neurons in the network fire according to the spike patterns before and after being treated by a shuffling method. In this way, we can understand how the change of pattern statistics may cause the change of efficacy variability. Our results are consistent with those of our previous study, which implements spike-generating models on converging motifs. We also find that burstiness/regularity is important in determining the efficacy variability under asynchronous states, while heterogeneity of cross-correlations is the main factor causing efficacy variability when the network moves into synchronous bursting states (the states observed in epilepsy). PMID:27555816
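
    A minimal sketch of the two plasticity ingredients named above, pair-wise additive STDP and a homeostatic bound on incoming weights (window shapes and all parameter values are illustrative, not taken from the paper):

        import numpy as np

        def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
            """Pair-wise additive STDP (times in ms): potentiation when the
            presynaptic spike precedes the postsynaptic one, depression otherwise."""
            dt = t_post - t_pre
            return a_plus * np.exp(-dt / tau) if dt >= 0 else -a_minus * np.exp(dt / tau)

        def homeostasis(w_in, target_mean):
            """Synaptic homeostasis: rescale the weights converging onto a
            neuron so that their mean stays at a fixed bound."""
            return w_in * target_mean / w_in.mean()

        w = np.full(10, 0.5)
        w[3] += stdp_dw(100.0, 105.0)   # causal pair  -> potentiation
        w[7] += stdp_dw(105.0, 100.0)   # acausal pair -> depression
        w = homeostasis(w, target_mean=0.5)
        print(w.round(4))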

  16. Written object naming, spelling to dictation, and immediate copying: Different tasks, different pathways?

    PubMed

    Bonin, Patrick; Méot, Alain; Lagarrigue, Aurélie; Roux, Sébastien

    2015-01-01

    We report an investigation of cross-task comparisons of handwritten latencies in written object naming, spelling to dictation, and immediate copying. In three separate sessions, adults had to write down a list of concrete nouns from their corresponding pictures (written naming), from their spoken presentation (spelling to dictation), and from their visual presentation (immediate copying). Linear mixed models without random slopes were performed on the latencies in order to study and compare within-task fixed effects. By-participants random slopes were then included to investigate individual differences within and across tasks. Overall, the findings suggest that written naming, spelling to dictation, and copying all involve a lexical pathway, but that written naming relies on this pathway more than the other two tasks do. Only spelling to dictation strongly involves a nonlexical pathway. Finally, the analyses performed at the level of participants indicate that, depending on the type of task, the slower participants are more or less influenced by certain psycholinguistic variables.

  17. Measuring multivariate association and beyond

    PubMed Central

    Josse, Julie; Holmes, Susan

    2017-01-01

    Simple correlation coefficients between two variables have been generalized to measure association between two matrices in many ways. Coefficients such as the RV coefficient, the distance covariance (dCov) coefficient and kernel-based coefficients are being used by different research communities. Scientists use these coefficients to test whether two random vectors are linked. Once it has been ascertained that there is such association through testing, then a next step, often ignored, is to explore and uncover the association's underlying patterns. This article provides a survey of various measures of dependence between random vectors and tests of independence and emphasizes the connections and differences between the various approaches. After providing definitions of the coefficients and associated tests, we present the recent improvements that enhance their statistical properties and ease of interpretation. We summarize multi-table approaches and provide scenarios where the indices can provide useful summaries of heterogeneous multi-block data. We illustrate these different strategies on several examples of real data and suggest directions for future research. PMID:29081877
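
    As a concrete example of one such coefficient, the RV coefficient can be computed in a few lines (a generic textbook implementation, not the authors' code; the test data are synthetic):

        import numpy as np

        def rv_coefficient(x, y):
            """RV coefficient between two data matrices sharing the same rows
            (observations): a matrix generalization of the squared correlation."""
            x = x - x.mean(axis=0)
            y = y - y.mean(axis=0)
            sx, sy = x @ x.T, y @ y.T        # n x n cross-product matrices
            return np.sum(sx * sy) / np.sqrt(np.sum(sx * sx) * np.sum(sy * sy))

        rng = np.random.default_rng(0)
        x = rng.standard_normal((100, 3))
        y = x @ rng.standard_normal((3, 4)) + 0.1 * rng.standard_normal((100, 4))
        print("linked:   ", round(rv_coefficient(x, y), 3))
        print("unrelated:", round(rv_coefficient(x, rng.standard_normal((100, 4))), 3))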

  18. Combining numerical simulations with time-domain random walk for pathogen risk assessment in groundwater

    NASA Astrophysics Data System (ADS)

    Cvetkovic, V.; Molin, S.

    2012-02-01

    We present a methodology that combines numerical simulations of groundwater flow and advective transport in heterogeneous porous media with analytical retention models for computing the infection risk probability from pathogens in aquifers. The methodology is based on the analytical results presented in [1,2] for utilising the colloid filtration theory in a time-domain random walk framework. It is shown that in uniform flow, the numerical simulations of advection yield results comparable to those of the analytical TDRW model for generating advection segments. Spatial variability of the attachment rate may be significant; however, it appears to affect risk differently depending on whether the flow is uniform or radially converging. Although numerous issues remain open regarding pathogen transport in aquifers on the field scale, the methodology presented here may be useful for screening purposes and may also serve as a basis for future studies that would include greater complexity.

  19. Reliability analysis of structures under periodic proof tests in service

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1976-01-01

    A reliability analysis of structures subjected to random service loads and periodic proof tests treats gust loads and maneuver loads as random processes. Crack initiation, crack propagation, and strength degradation are treated as the fatigue process. The time to fatigue crack initiation and ultimate strength are random variables. Residual strength decreases during crack propagation, so that failure rate increases with time. When a structure fails under periodic proof testing, a new structure is built and proof-tested. The probability of structural failure in service is derived from treatment of all the random variables, strength degradations, service loads, proof tests, and the renewal of failed structures. Some numerical examples are worked out.

  20. Smooth conditional distribution function and quantiles under random censorship.

    PubMed

    Leconte, Eve; Poiraud-Casanova, Sandrine; Thomas-Agnan, Christine

    2002-09-01

    We consider a nonparametric random design regression model in which the response variable is possibly right censored. The aim of this paper is to estimate the conditional distribution function and the conditional alpha-quantile of the response variable. We restrict attention to the case where the response variable as well as the explanatory variable are unidimensional and continuous. We propose and discuss two classes of estimators which are smooth with respect to the response variable as well as to the covariate. Some simulations demonstrate that the new methods have better mean square error performances than the generalized Kaplan-Meier estimator introduced by Beran (1981) and considered in the literature by Dabrowska (1989, 1992) and Gonzalez-Manteiga and Cadarso-Suarez (1994).

  1. AUTOCLASSIFICATION OF THE VARIABLE 3XMM SOURCES USING THE RANDOM FOREST MACHINE LEARNING ALGORITHM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, Sean A.; Murphy, Tara; Lo, Kitty K., E-mail: s.farrell@physics.usyd.edu.au

    In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outlier sources by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.

  2. A Probabilistic Design Method Applied to Smart Composite Structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1995-01-01

    A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.

  3. Massage Impact on Pain in Opioid-dependent Patients in Substance Use Treatment

    PubMed Central

    Wiest, Katharina L.; Asphaug, Victoria J.; Carr, Kathryn E.; Gowen, Emily A.; Hartnett, Timothy T.

    2015-01-01

    Background: Chronic pain is a common cause of health care utilization and high levels of pain are pronounced in individuals engaged in methadone maintenance treatment. Although massage has been demonstrated to alleviate chronic pain symptoms, its use as an adjunctive therapy to modify pain during opioid-replacement treatment is absent from the literature. Purpose: To consider the efficacy of Swedish massage in reducing pain in opioid-dependent patients with chronic pain receiving methadone treatment. Setting: Trial was conducted at a nonprofit methadone treatment center serving low-income patients. Research Design: A randomized clinical trial with participants randomized to either 1) massage plus treatment-as-usual (TAU) (n = 27) or 2) TAU (n = 24). Durability of treatment effect was evaluated at Week 12. Intervention: Eight weekly 50-minute Swedish massage sessions plus TAU or TAU alone. Main Outcome Measures: Pain, anxiety, depression, physical functioning, decreased substance use, and improvement in treatment engagement. Results: Randomized participants were comparable at Baseline for demographic, pain, physical, and emotional variables. The massage group reported improved pain scores; worst pain showed a clinically significant 2-point improvement, while the other pain scores did not. Overall improvements were not observed in treatment engagement or levels of anxiety, depression, or physical functioning. A subgroup of the participants, who felt they could be pain-free, consistently reported improvements in pain from Baseline to Week 8, and this was most pronounced and clinically significant in the massage group. Conclusions: These preliminary findings do not support an overall clinically significant positive effect of Swedish massage on reduction in pain ratings or improvement in anxiety, depression, or treatment engagement in a substance-using, opioid-dependent population with chronic pain. Future nonpharmacologic pain research in marginalized substance-using populations may wish to consider some of the challenges and limitations faced in this project. PMID:25780471

  4. Geometrical effects on the electron residence time in semiconductor nano-particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koochi, Hakimeh; Ebrahimi, Fatemeh, E-mail: f-ebrahimi@birjand.ac.ir; Solar Energy Research Group, University of Birjand, Birjand

    2014-09-07

    We have used random walk (RW) numerical simulations to investigate the influence of the geometry on the statistics of the electron residence time τ_r in a trap-limited diffusion process through semiconductor nano-particles. This is an important parameter in coarse-grained modeling of charge carrier transport in nano-structured semiconductor films. The traps have been distributed randomly on the surface (r² model) or through the whole particle (r³ model) with a specified density. The trap energies have been taken from an exponential distribution, and the trap release time is assumed to be a stochastic variable. We have carried out RW simulations to study the effect of the coordination number, the spatial arrangement of the neighbors, and the size of the nano-particles on the statistics of τ_r. It has been observed that by increasing the coordination number n, the average electron residence time τ̄_r rapidly decreases to an asymptotic value. For a fixed coordination number n, the electron's mean residence time does not depend on the neighbors' spatial arrangement. In other words, τ̄_r is a porosity-dependent, local parameter which generally varies remarkably from site to site, unless we are dealing with highly ordered structures. We have also examined the effect of the nano-particle size d on the statistical behavior of τ̄_r. Our simulations indicate that for a volume distribution of traps, τ̄_r scales as d². For a surface distribution of traps, τ̄_r increases almost linearly with d. This leads to the prediction of a linear dependence of the diffusion coefficient D on the particle size d in ordered structures, or in random structures above the critical concentration, which is in accordance with experimental observations.
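
    A toy version of such a residence-time experiment, assuming the r³ (volume-trap) picture on a cubic lattice with exponential trap depths (all parameters are invented for illustration; this is not the authors' simulation):

        import numpy as np

        rng = np.random.default_rng(0)
        moves = np.vstack([np.eye(3, dtype=int), -np.eye(3, dtype=int)])

        def residence_time(d, mean_depth=1.0, kT=2.0):
            """Walk on a cubic lattice inside a sphere of diameter d; every
            site is a trap (r^3 model). Each visit adds a stochastic release
            time with mean exp(E/kT), where the trap depth E is exponential."""
            pos = np.zeros(3, dtype=int)
            t = 0.0
            while pos @ pos < (d / 2) ** 2:
                depth = rng.exponential(mean_depth)
                t += rng.exponential(np.exp(depth / kT))
                pos += moves[rng.integers(6)]
            return t

        for d in (10, 20, 40):
            print(d, round(np.mean([residence_time(d) for _ in range(200)]), 1))
        # the mean residence time grows roughly as d**2 for volume traps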

  5. Information content of MOPITT CO profile retrievals: Temporal and geographical variability

    NASA Astrophysics Data System (ADS)

    Deeter, M. N.; Edwards, D. P.; Gille, J. C.; Worden, H. M.

    2015-12-01

    Satellite measurements of tropospheric carbon monoxide (CO) enable a wide array of applications including studies of air quality and pollution transport. The MOPITT (Measurements of Pollution in the Troposphere) instrument on the Earth Observing System Terra platform has been measuring CO concentrations globally since March 2000. As indicated by the Degrees of Freedom for Signal (DFS), the standard metric for trace-gas retrieval information content, MOPITT retrieval performance varies over a wide range. We show that both instrumental and geophysical effects yield significant geographical and temporal variability in MOPITT DFS values. Instrumental radiance uncertainties, which describe random errors (or "noise") in the calibrated radiances, vary over long time scales (e.g., months to years) and vary between the four detector elements of MOPITT's linear detector array. MOPITT retrieval performance depends on several factors including thermal contrast, fine-scale variability of surface properties, and CO loading. The relative importance of these various effects is highly variable, as demonstrated by analyses of monthly mean DFS values for the United States and the Amazon Basin. An understanding of the geographical and temporal variability of MOPITT retrieval performance is potentially valuable to data users seeking to limit the influence of the a priori through data filtering. To illustrate, it is demonstrated that calculated regional-average CO mixing ratios may be improved by excluding observations from a subset of pixels in MOPITT's linear detector array.

  6. Extended q -Gaussian and q -exponential distributions from gamma random variables

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2015-05-01

    The family of q -Gaussian and q -exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q -Gaussian and q -exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q -Gaussian and modified q -exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
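
    One well-known route to such variables, consistent with the superstatistical picture mentioned above, mixes Gaussians over a gamma-distributed precision; this may differ in detail from the paper's two-gamma construction, and the shape/rate values below are arbitrary:

        import numpy as np

        rng = np.random.default_rng(0)

        def q_gaussian(a, b, size):
            """Gamma mixture of Gaussians: precision lam ~ Gamma(shape=a, rate=b),
            then x ~ Normal(0, 1/sqrt(lam)). The marginal density is proportional
            to (1 + x**2/(2*b))**-(a + 1/2), a q-Gaussian with q = (2a+3)/(2a+1)."""
            lam = rng.gamma(shape=a, scale=1.0 / b, size=size)
            return rng.normal(0.0, 1.0, size) / np.sqrt(lam)

        a = 4.0
        x = q_gaussian(a, b=1.0, size=200_000)
        print("q =", (2 * a + 3) / (2 * a + 1))
        # heavy tails show up as positive excess kurtosis (about 1.5 here)
        print("excess kurtosis:", round(float(np.mean(x**4) / np.mean(x**2)**2 - 3), 2))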

  7. Heart rate variability biofeedback in patients with alcohol dependence: a randomized controlled study

    PubMed Central

    Penzlin, Ana Isabel; Siepmann, Timo; Illigens, Ben Min-Woo; Weidner, Kerstin; Siepmann, Martin

    2015-01-01

    Background and objective In patients with alcohol dependence, ethyl-toxic damage of vasomotor and cardiac autonomic nerve fibers leads to autonomic imbalance with neurovascular and cardiac dysfunction, the latter resulting in reduced heart rate variability (HRV). Autonomic imbalance is linked to increased craving and cardiovascular mortality. In this study, we sought to assess the effects of HRV biofeedback training on HRV, vasomotor function, craving, and anxiety. Methods We conducted a randomized controlled study in 48 patients (14 females, ages 25–59 years) undergoing inpatient rehabilitation treatment. In the treatment group, patients (n=24) attended six sessions of HRV biofeedback over 2 weeks in addition to standard rehabilitative care, whereas, in the control group, subjects received standard care only. Psychometric testing for craving (Obsessive Compulsive Drinking Scale), anxiety (Symptom Checklist-90-Revised), HRV assessment using coefficient of variation of R-R intervals (CVNN) analysis, and vasomotor function assessment using laser Doppler flowmetry were performed at baseline, immediately after completion of treatment or control period, and 3 and 6 weeks afterward (follow-ups 1 and 2). Results Psychometric testing showed decreased craving in the biofeedback group immediately postintervention (OCDS scores: 8.6±7.9 post-biofeedback versus 13.7±11.0 baseline [mean ± standard deviation], P<0.05), whereas craving was unchanged at this time point in the control group. Anxiety was reduced at follow-ups 1 and 2 post-biofeedback, but was unchanged in the control group (P<0.05). Following biofeedback, CVNN tended to be increased (10.3%±2.8% post-biofeedback, 10.1%±3.5% follow-up 1, 10.1%±2.9% follow-up 2 versus 9.7%±3.6% baseline; P=not significant). There was no such trend in the control group. Vasomotor function assessed using the mean duration to 50% vasoconstriction of cutaneous vessels after deep inspiration was improved following biofeedback immediately postintervention and was unchanged in the control group (P<0.05). Conclusion Our data indicate that HRV biofeedback might be useful to decrease anxiety, increase HRV, and improve vasomotor function in patients with alcohol dependence when complementing standard rehabilitative inpatient care. PMID:26557753

  8. Infinite Systems of Interacting Chains with Memory of Variable Length—A Stochastic Model for Biological Neural Nets

    NASA Astrophysics Data System (ADS)

    Galves, A.; Löcherbach, E.

    2013-06-01

    We consider a new class of non-Markovian processes with a countable number of interacting components. At each time unit, each component can take two values, indicating if it has a spike or not at this precise moment. The system evolves as follows. For each component, the probability of having a spike at the next time unit depends on the entire time evolution of the system after the last spike time of the component. This class of systems extends in a nontrivial way both the interacting particle systems, which are Markovian (Spitzer in Adv. Math. 5:246-290, 1970), and the stochastic chains with memory of variable length, which have finite state space (Rissanen in IEEE Trans. Inf. Theory 29(5):656-664, 1983). These features make it suitable to describe the time evolution of biological neural systems. We construct a stationary version of the process by using a probabilistic tool which is a Kalikow-type decomposition either in random environment or in space-time. This construction implies uniqueness of the stationary process. Finally we consider the case where the interactions between components are given by a critical directed Erdős-Rényi-type random graph with a large but finite number of components. In this framework we obtain an explicit upper bound for the correlation between successive inter-spike intervals which is compatible with previous empirical findings.

  9. Bounds for Asian basket options

    NASA Astrophysics Data System (ADS)

    Deelstra, Griselda; Diallo, Ibrahima; Vanmaele, Michèle

    2008-09-01

    In this paper we propose pricing bounds for European-style discrete arithmetic Asian basket options in a Black and Scholes framework. We start from methods used for basket options and Asian options. First, we use the general approach for deriving upper and lower bounds for stop-loss premia of sums of non-independent random variables as in Kaas et al. [Upper and lower bounds for sums of random variables, Insurance Math. Econom. 27 (2000) 151-168] or Dhaene et al. [The concept of comonotonicity in actuarial science and finance: theory, Insurance Math. Econom. 31(1) (2002) 3-33]. We generalize the methods in Deelstra et al. [Pricing of arithmetic basket options by conditioning, Insurance Math. Econom. 34 (2004) 55-57] and Vanmaele et al. [Bounds for the price of discrete sampled arithmetic Asian options, J. Comput. Appl. Math. 185(1) (2006) 51-90]. Afterwards we show how to derive an analytical closed-form expression for a lower bound in the non-comonotonic case. Finally, we derive upper bounds for Asian basket options by applying techniques as in Thompson [Fast narrow bounds on the value of Asian options, Working Paper, University of Cambridge, 1999] and Lord [Partially exact and bounded approximations for arithmetic Asian options, J. Comput. Finance 10 (2) (2006) 1-52]. Numerical results are included and on the basis of our numerical tests, we explain which method we recommend depending on moneyness and time-to-maturity.
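
    For context, here is a plain Monte Carlo estimate of a discrete arithmetic Asian basket price in the Black-Scholes setting, the quantity such analytical bounds are compared against (all market parameters are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative market: 2 assets, 4 averaging dates, Black-Scholes dynamics.
        s0 = np.array([100.0, 95.0]); weights = np.array([0.6, 0.4])
        vol = np.array([0.20, 0.30]); r, rho, strike, T = 0.05, 0.5, 100.0, 1.0
        dates = np.array([0.25, 0.50, 0.75, 1.00])
        chol = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

        n_paths = 200_000
        dt = np.diff(dates, prepend=0.0)
        z = rng.standard_normal((n_paths, dates.size, 2)) @ chol.T  # correlated shocks
        log_s = np.log(s0) + np.cumsum(
            (r - 0.5 * vol**2) * dt[None, :, None]
            + vol * np.sqrt(dt)[None, :, None] * z, axis=1)
        avg = (np.exp(log_s) * weights).sum(axis=2).mean(axis=1)  # basket, then time avg
        price = np.exp(-r * T) * np.maximum(avg - strike, 0.0).mean()
        print("MC price:", round(price, 3))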

  10. The influence of spatially and temporally varying oceanographic conditions on meroplanktonic metapopulations

    NASA Astrophysics Data System (ADS)

    Botsford, L. W.; Moloney, C. L.; Hastings, A.; Largier, J. L.; Powell, T. M.; Higgins, K.; Quinn, J. F.

    We synthesize the results of several modelling studies that address the influence of variability in larval transport and survival on the dynamics of marine metapopulations distributed along a coast. Two important benthic invertebrates in the California Current System (CCS), the Dungeness crab and the red sea urchin, are used as examples of the way in which physical oceanographic conditions can influence stability, synchrony and persistence of meroplanktonic metapopulations. We first explore population dynamics of subpopulations and metapopulations. Even without environmental forcing, isolated local subpopulations with density-dependence can vary on time scales roughly twice the generation time at high adult survival, shifting to annual time scales at low survivals. The high-frequency behavior is not seen in models of the Dungeness crab, because of their high adult survival rates. Metapopulations with density-dependent recruitment and deterministic larval dispersal fluctuate in an asynchronous fashion. Along the coast, abundance varies on spatial scales which increase with dispersal distance. Coastwide, synchronous, random environmental variability tends to synchronize these metapopulations. Climate change could cause a long-term increase or decrease in mean larval survival, which in this model leads to greater synchrony or extinction respectively. Spatially managed metapopulations of red sea urchins go extinct when distances between harvest refugia become greater than the scale of larval dispersal. All assessments of population dynamics indicate that metapopulation behavior in general depends critically on the temporal and spatial nature of larval dispersal, which is largely determined by physical oceanographic conditions. We therefore explore physical influences on larval dispersal patterns. Observed trends in temperature and salinity applied to laboratory-determined responses indicate that natural variability in temperature and salinity can lead to variability in larval development period on interannual (50%), intra-annual (20%) and latitudinal (200%) scales. Variability in development period significantly influences larval survival and, thus, net transport. Larval drifters that undertake diel vertical migration in a primitive equation model of coastal circulation (SPEM) demonstrate the importance of vertical migration in determining horizontal transport. Empirically derived estimates of the effects of wind forcing on larval transport of vertically migrating larvae (wind drift when near the surface and Ekman transport below the surface) match cross-shelf distributions in 4 years of existing larval data. We use a one-dimensional advection-diffusion model, which includes intra-annual timing of cross-shelf flows in the CCS, to explore the combined effects on settlement of: (1) temperature- and salinity-dependent development and survival rates and (2) possible horizontal transport due to vertical migration of crab larvae. Natural variability in temperature, wind forcing, and the timing of the spring transition can cause the observed variability in recruitment. We conclude that understanding the dynamics of coastally distributed metapopulations in response to physically-induced variability in larval dispersal will be a critical step in assessing the effects of climate change on marine populations.

  11. Effectiveness of a primary care-based intervention to reduce sitting time in overweight and obese patients (SEDESTACTIV): a randomized controlled trial; rationale and study design

    PubMed Central

    2014-01-01

    Background There is growing evidence suggesting that prolonged sitting has negative effects on people's weight, chronic diseases and mortality. Interventions to reduce sedentary time can be an effective strategy to increase daily energy expenditure. The purpose of this study is to evaluate the effectiveness of a six-month primary care intervention to reduce daily sitting time in overweight and mildly obese sedentary patients. Method/Design The study is a randomized controlled trial (RCT). Professionals from thirteen primary health care centers (PHC) will randomly invite mildly obese or overweight patients of both genders, aged between 25 and 65 years, who spend at least 6 hours a day sitting, to participate. A total of 232 subjects will be randomly allocated to an intervention (IG) and control group (CG) (116 individuals in each group). In addition, 50 subjects with fibromyalgia will be included. The primary outcome is: (1) sitting time using the activPAL device and the Marshall questionnaire. The following parameters will also be assessed: (2) sitting time in the workplace (Occupational Sitting and Physical Activity Questionnaire), (3) health-related quality of life (EQ-5D), (4) evolution of stage of change (Prochaska and DiClemente's Stages of Change Model), (5) physical inactivity (Catalan version of the Brief Physical Activity Assessment Tool), (6) number of steps walked (pedometer and activPAL), (7) control based on analysis (triglycerides, total cholesterol, HDL, LDL, glycemia and, in diabetic patients, glycated haemoglobin) and (8) blood pressure and anthropometric variables. All parameters will be assessed pre- and post-intervention, with follow-ups three, six and twelve months after the intervention. A descriptive analysis of all variables and a multivariate analysis to assess differences among groups will be undertaken. Multivariate analysis will be carried out to assess time changes of dependent variables. All analyses will be done under the intention-to-treat principle. Discussion If the SEDESTACTIV intervention shows its effectiveness in reducing sitting time, health professionals would have a low-cost intervention tool for the management of sedentary overweight and obese patients. Trial registration ClinicalTrials.gov NCT01729936 PMID:24597534

  12. Effectiveness of a primary care-based intervention to reduce sitting time in overweight and obese patients (SEDESTACTIV): a randomized controlled trial; rationale and study design.

    PubMed

    Martín-Borràs, Carme; Giné-Garriga, Maria; Martínez, Elena; Martín-Cantera, Carlos; Puigdoménech, Elisa; Solà, Mercè; Castillo, Eva; Beltrán, Angela Ma; Puig-Ribera, Anna; Trujillo, José Manuel; Pueyo, Olga; Pueyo, Javier; Rodríguez, Beatriz; Serra-Paya, Noemí

    2014-03-05

    There is growing evidence suggesting that prolonged sitting has negative effects on people's weight, chronic diseases and mortality. Interventions to reduce sedentary time can be an effective strategy to increase daily energy expenditure. The purpose of this study is to evaluate the effectiveness of a six-month primary care intervention to reduce daily sitting time in overweight and mildly obese sedentary patients. The study is a randomized controlled trial (RCT). Professionals from thirteen primary health care centers (PHC) will randomly invite mildly obese or overweight patients of both genders, aged between 25 and 65 years, who spend at least 6 hours a day sitting, to participate. A total of 232 subjects will be randomly allocated to an intervention (IG) and control group (CG) (116 individuals in each group). In addition, 50 subjects with fibromyalgia will be included. The primary outcome is: (1) sitting time using the activPAL device and the Marshall questionnaire. The following parameters will also be assessed: (2) sitting time in the workplace (Occupational Sitting and Physical Activity Questionnaire), (3) health-related quality of life (EQ-5D), (4) evolution of stage of change (Prochaska and DiClemente's Stages of Change Model), (5) physical inactivity (Catalan version of the Brief Physical Activity Assessment Tool), (6) number of steps walked (pedometer and activPAL), (7) control based on analysis (triglycerides, total cholesterol, HDL, LDL, glycemia and, in diabetic patients, glycated haemoglobin) and (8) blood pressure and anthropometric variables. All parameters will be assessed pre- and post-intervention, with follow-ups three, six and twelve months after the intervention. A descriptive analysis of all variables and a multivariate analysis to assess differences among groups will be undertaken. Multivariate analysis will be carried out to assess time changes of dependent variables. All analyses will be done under the intention-to-treat principle. If the SEDESTACTIV intervention shows its effectiveness in reducing sitting time, health professionals would have a low-cost intervention tool for the management of sedentary overweight and obese patients. ClinicalTrials.gov NCT01729936.

  13. Transport of Charged Particles in Turbulent Magnetic Fields

    NASA Astrophysics Data System (ADS)

    Parashar, T.; Subedi, P.; Sonsrettee, W.; Blasi, P.; Ruffolo, D. J.; Matthaeus, W. H.; Montgomery, D.; Chuychai, P.; Dmitruk, P.; Wan, M.; Chhiber, R.

    2017-12-01

    Magnetic fields permeate the Universe. They are found in planets, stars, galaxies, and the intergalactic medium. The magnetic fields found in these astrophysical systems are usually chaotic, disordered, and turbulent. The investigation of the transport of cosmic rays in magnetic turbulence is a subject of considerable interest. One of the important aspects of cosmic ray transport is to understand their diffusive behavior and to calculate the diffusion coefficient in the presence of these turbulent fields. Research has most frequently concentrated on determining the diffusion coefficient in the presence of a mean magnetic field. Here, we will particularly focus on calculating diffusion coefficients of charged particles and magnetic field lines in a fully three-dimensional isotropic turbulent magnetic field with no mean field, which may be pertinent to many astrophysical situations. For charged particles in isotropic turbulence we identify different ranges of particle energy depending upon the ratio of the Larmor radius of the charged particle to the characteristic outer length scale of the turbulence. Different theoretical models are proposed to calculate the diffusion coefficient, each applicable to a distinct range of particle energies. The theoretical ideas are tested against results of detailed numerical experiments using Monte-Carlo simulations of particle propagation in stochastic magnetic fields. We also discuss two different methods of generating random magnetic fields to study charged particle propagation using numerical simulation. One method is the usual way of generating random fields with a specified power law in wavenumber space, using Gaussian random variables. Turbulence, however, is non-Gaussian, with variability that comes in bursts called intermittency. We therefore devise a way to generate synthetic intermittent fields which have many properties of realistic turbulence. Possible applications of such synthetically generated intermittent fields are discussed.
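
    The first ("usual") method is straightforward to sketch in one dimension: Gaussian Fourier coefficients scaled to a target power-law spectrum (the exponent and grid size are arbitrary; the intermittent construction the authors devise requires more machinery):

        import numpy as np

        rng = np.random.default_rng(0)

        def power_law_field(n, gamma=5 / 3):
            """1-D Gaussian random field with spectrum E(k) ~ k**-gamma:
            complex Gaussian Fourier coefficients scaled by the square root
            of the target power spectrum, then inverse-transformed."""
            k = np.fft.rfftfreq(n)
            amp = np.zeros_like(k)
            amp[1:] = k[1:] ** (-gamma / 2)        # sqrt of the power spectrum
            coeff = amp * (rng.standard_normal(k.size)
                           + 1j * rng.standard_normal(k.size))
            return np.fft.irfft(coeff, n)

        b = power_law_field(4096)
        print("mean, std:", round(float(b.mean()), 3), round(float(b.std()), 3))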

  14. An analytic solution of the stochastic storage problem applicable to soil water

    USGS Publications Warehouse

    Milly, P.C.D.

    1993-01-01

    The accumulation of soil water during rainfall events and the subsequent depletion of soil water by evaporation between storms can be described, to first order, by simple accounting models. When the alternating supplies (precipitation) and demands (potential evaporation) are viewed as random variables, it follows that soil-water storage, evaporation, and runoff are also random variables. If the forcing (supply and demand) processes are stationary for a sufficiently long period of time, an asymptotic regime should eventually be reached where the probability distribution functions of storage, evaporation, and runoff are stationary and uniquely determined by the distribution functions of the forcing. Under the assumptions that the potential evaporation rate is constant, storm arrivals are Poisson-distributed, rainfall is instantaneous, and storm depth follows an exponential distribution, it is possible to derive the asymptotic distributions of storage, evaporation, and runoff analytically for a simple balance model. A particular result is that the fraction of rainfall converted to runoff is given by (1 - 1/R)/(exp[α(1 - 1/R)] - 1/R), in which R is the ratio of mean potential evaporation to mean rainfall and α is the ratio of soil water-holding capacity to mean storm depth. The problem considered here is analogous to the well-known problem of storage in a reservoir behind a dam, for which the present work offers a new solution for reservoirs of finite capacity. A simple application of the results of this analysis suggests that random, intraseasonal fluctuations of precipitation cannot by themselves explain the observed dependence of the annual water balance on annual totals of precipitation and potential evaporation.
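
    The quoted runoff fraction is simple to evaluate numerically (a direct transcription of the formula above; the parameter values below are arbitrary):

        import numpy as np

        def runoff_fraction(R, alpha):
            """(1 - 1/R) / (exp(alpha*(1 - 1/R)) - 1/R), where
            R     = mean potential evaporation / mean rainfall,
            alpha = soil water-holding capacity / mean storm depth."""
            x = 1.0 - 1.0 / R
            return x / (np.exp(alpha * x) - 1.0 / R)

        for alpha in (1.0, 5.0):
            # wetter climates (R < 1) shed more rain as runoff;
            # larger storage (alpha) suppresses runoff in dry climates
            print(alpha, [round(runoff_fraction(R, alpha), 3) for R in (0.5, 2.0)])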

  15. Investigation of advanced UQ for CRUD prediction with VIPRE.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldred, Michael Scott

    2011-09-01

    This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely differentiable) in L² (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10⁰)-O(10¹) random variables to O(10²) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.

  16. Measuring Clinical Decision Support Influence on Evidence-Based Nursing Practice.

    PubMed

    Cortez, Susan; Dietrich, Mary S; Wells, Nancy

    2016-07-01

    Objectives. To measure the effect of clinical decision support (CDS) on oncology nurse evidence-based practice (EBP).
    Design. Longitudinal cluster-randomized design.
    Setting. Four distinctly separate oncology clinics associated with an academic medical center.
    Sample. The study sample was comprised of randomly selected data elements from the nursing documentation software. The data elements were patient-reported symptoms and the associated nurse interventions. The total sample observations were 600, derived from a baseline, posteducation, and postintervention sample of 200 each (100 in the intervention group and 100 in the control group for each sample).
    Methods. The cluster design was used to support randomization of the study intervention at the clinic level rather than the individual participant level to reduce possible diffusion of the study intervention. An elongated data collection cycle (11 weeks) controlled for temporary increases in nurse EBP related to the education or CDS intervention.
    Main Research Variables. The dependent variable was the nurse evidence-based documentation rate, calculated from the nurse-documented interventions. The independent variable was the CDS added to the nursing documentation software.
    Findings. The average EBP rate at baseline for the control and intervention groups was 27%. After education, the average EBP rate increased to 37%, and then decreased to 26% in the postintervention sample. Mixed-model linear statistical analysis revealed no significant interaction of group by sample. The CDS intervention did not result in an increase in nurse EBP.
    Conclusions. EBP education increased nurse EBP documentation rates significantly but only temporarily. Nurses may have used evidence in practice but may not have documented their interventions.
    Implications for Nursing. More research is needed to understand the complex relationship between CDS, nursing practice, and nursing EBP intervention documentation. CDS may have a different effect on nurse EBP, physician EBP, and other medical professional EBP.

  17. Validity of a Residualized Dependent Variable after Pretest Covariance Adjustments: Still the Same Variable?

    ERIC Educational Resources Information Center

    Nimon, Kim; Henson, Robin K.

    2015-01-01

    The authors empirically examined whether the validity of a residualized dependent variable after covariance adjustment is comparable to that of the original variable of interest. When variance of a dependent variable is removed as a result of one or more covariates, the residual variance may not reflect the same meaning. Using the pretest-posttest…

  18. Effects of exercise training and a hypocaloric diet on female monozygotic twins in free-living conditions.

    PubMed

    Koenigstorfer, Joerg; Schmidt, Walter F J

    2011-10-24

    This paper aims to examine the similarities in effects of exercise training and a hypocaloric diet within overweight female monozygotic twin pairs and to assess differences in twin partners' responses depending on the timing of exercise bouts and main meals. Six previously untrained twin pairs (aged 20-37 years, body fat 35.8±6.3%) performed an identical exercise program (12 bouts endurance and 8 bouts resistance training) and took part in a nutrition counseling program for a period of 28 days. They pursued one identical goal: to lose body weight and fat. Each twin partner was randomly assigned to one of the two intervention groups: "exercise after dinner" (A) and "exercise before dinner" (B). Subjects followed a hypocaloric diet, supervised by a nutritionist, in free-living conditions. Reductions in body weight, waist and hip circumference, glucose tolerance, mean daily %fat intake, changes in morning resting energy rate and resting metabolic rate showed great variation between twin pairs, but only small variation within twin pairs. Thus, the genetic influence on the changes in most of the examined anthropometric and physiological variables was high. There was no influence of the specific timing on the dependent variables. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Electron transport and noise spectroscopy in organic magnetic tunnel junctions with PTCDA and Alq3 barriers

    NASA Astrophysics Data System (ADS)

    Martinez, Isidoro; Cascales, Juan Pedro; Hong, Jhen-Yong; Lin, Minn-Tsong; Prezioso, Mirko; Riminucci, Alberto; Dediu, Valentin A.; Aliev, Farkhad G.

    2016-10-01

    The possible influence of internal barrier dynamics on spin and charge transport and their fluctuations in organic spintronics remains poorly understood. Here we present an investigation of electron transport and low-frequency noise at temperatures down to 0.3 K in magnetic tunnel junctions with organic PTCDA barriers up to 5 nm thick in the tunneling regime and with a 200 nm thick Alq3 barrier in the hopping regime. We observed high tunneling magnetoresistance at low temperatures (15-40%) and spin-dependent super-Poissonian shot noise in organic magnetic tunnel junctions (OMTJs) with PTCDA. The Fano factor exceeds values of 1.5-2, which could be caused by interfacial states controlled by spin-dependent bunching in the tunneling events through the molecules. The bias dependence of the low-frequency noise in OMTJs with PTCDA barriers, which includes both 1/f and random telegraph noise activated at specific biases, will also be discussed. On the other hand, the organic junctions with ferromagnetic electrodes and thick Alq3 barriers present sub-Poissonian shot noise which depends on the temperature, indicative of variable-range hopping.

  20. Integrating models that depend on variable data

    NASA Astrophysics Data System (ADS)

    Banks, A. T.; Hill, M. C.

    2016-12-01

    Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log transformations can be a black box for typical users. Placing the log transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MATLAB. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log transformation. Greater consistency is obtained by imposing smaller (by up to a factor of 1/35) weights on the smaller dependent-variable values. From an error-based perspective, the small weights are consistent with large standard deviations. This work considers the consequences of these two common ways of addressing variable data.
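
    The trade-off the abstract describes can be reproduced with a deliberately simple one-parameter regression. The model, error level, and four-orders-of-magnitude range below are hypothetical stand-ins for the nitrogen-transport MLR problem.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # One-parameter example, y = b*x, with y spanning ~4 orders of magnitude
    x = np.logspace(0, 4, 200)
    b_true = 2.0
    y = b_true * x * rng.lognormal(0.0, 0.4, x.size)   # multiplicative error

    # Unweighted least squares: dominated by the largest observations
    b_ols = np.sum(x * y) / np.sum(x * x)

    # Error-based weighting for a constant coefficient of variation:
    # sd_i proportional to y_i, so w_i ~ 1/y_i^2 (emphasizes the smallest values)
    w = 1.0 / y**2
    b_wls = np.sum(w * x * y) / np.sum(w * x * x)

    # Log transformation: multiplicative error becomes additive, homoscedastic
    b_log = np.exp(np.mean(np.log(y) - np.log(x)))

    print(b_ols, b_wls, b_log)
    ```

    With multiplicative error, the unweighted fit chases the largest observations, the constant-CV weights overemphasize the smallest ones, and the log transformation treats all scales evenly, which is the pattern the study reports.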

  1. Evaluating physical habitat and water chemistry data from statewide stream monitoring programs to establish least-impacted conditions in Washington State

    USGS Publications Warehouse

    Wilmoth, Siri K.; Irvine, Kathryn M.; Larson, Chad

    2015-01-01

    Various GIS-generated land-use predictor variables, physical habitat metrics, and water chemistry variables from 75 reference streams and 351 randomly sampled sites throughout Washington State were evaluated for effectiveness at discriminating reference from random sites within level III ecoregions. A combination of multivariate clustering and ordination techniques was used. We describe average observed conditions for a subset of predictor variables and propose statistical criteria for establishing reference conditions for stream habitat in Washington. Using these criteria, we determined whether any of the random sites met expectations for reference condition and whether any of the established reference sites failed to meet expectations for reference condition. Establishing these criteria will set a benchmark against which future data can be compared.

  2. Morinda citrifolia (Noni) as an Anti-Inflammatory Treatment in Women with Primary Dysmenorrhoea: A Randomised Double-Blind Placebo-Controlled Trial.

    PubMed

    Fletcher, H M; Dawkins, J; Rattray, C; Wharfe, G; Reid, M; Gordon-Strachan, G

    2013-01-01

    Introduction. Noni (Morinda citrifolia) has been used for many years as an anti-inflammatory agent. We tested the efficacy of Noni in women with dysmenorrhea. Method. We did a prospective randomized double-blind placebo-controlled trial in 100 university students aged 18 years and older over three menstrual cycles. Patients were invited to participate and randomly assigned to receive 400 mg Noni capsules or placebo. They were assessed for baseline demographic variables such as age, parity, and BMI. They were also assessed before and after treatment, for pain, menstrual blood loss, and laboratory variables: ESR, hemoglobin, and packed cell volume. Results. Of the 1027 women screened, 100 eligible women were randomized. Of the women completing the study, 42 women were randomized to Noni and 38 to placebo. There were no significant differences in any of the variables at randomization. There were also no significant differences in mean bleeding score or pain score at randomization. Both bleeding and pain scores gradually improved in both groups as the women were observed over three menstrual cycles; however, the improvement was not significantly different in the Noni group when compared to the controls. Conclusion. Noni did not show a reduction in menstrual pain or bleeding when compared to placebo.

  3. Morinda citrifolia (Noni) as an Anti-Inflammatory Treatment in Women with Primary Dysmenorrhoea: A Randomised Double-Blind Placebo-Controlled Trial

    PubMed Central

    Fletcher, H. M.; Dawkins, J.; Rattray, C.; Wharfe, G.; Reid, M.; Gordon-Strachan, G.

    2013-01-01

    Introduction. Noni (Morinda citrifolia) has been used for many years as an anti-inflammatory agent. We tested the efficacy of Noni in women with dysmenorrhea. Method. We did a prospective randomized double-blind placebo-controlled trial in 100 university students aged 18 years and older over three menstrual cycles. Patients were invited to participate and randomly assigned to receive 400 mg Noni capsules or placebo. They were assessed for baseline demographic variables such as age, parity, and BMI. They were also assessed before and after treatment, for pain, menstrual blood loss, and laboratory variables: ESR, hemoglobin, and packed cell volume. Results. Of the 1027 women screened, 100 eligible women were randomized. Of the women completing the study, 42 women were randomized to Noni and 38 to placebo. There were no significant differences in any of the variables at randomization. There were also no significant differences in mean bleeding score or pain score at randomization. Both bleeding and pain scores gradually improved in both groups as the women were observed over three menstrual cycles; however, the improvement was not significantly different in the Noni group when compared to the controls. Conclusion. Noni did not show a reduction in menstrual pain or bleeding when compared to placebo. PMID:23431314

  4. K-Means Algorithm Performance Analysis With Determining The Value Of Starting Centroid With Random And KD-Tree Method

    NASA Astrophysics Data System (ADS)

    Sirait, Kamson; Tulus; Budhiarti Nababan, Erna

    2017-12-01

    Clustering methods that have high accuracy and time efficiency are necessary for the filtering process. One method that has been known and applied in clustering is K-Means Clustering. In its application, the determination of the beginning value of the cluster center greatly affects the results of the K-Means algorithm. This research discusses the results of K-Means Clustering with starting centroid determination by a random and a KD-Tree method. The initial determination of random centroids on a data set of 1000 student academic records, used to classify potential dropouts, has an SSE value of 952972 for the quality variable and 232.48 for the GPA variable, whereas the initial centroid determination by KD-Tree has an SSE value of 504302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that the results of K-Means Clustering with initial KD-Tree centroid selection have better accuracy than the K-Means Clustering method with random initial centroid selection.
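
    A compact way to see the effect reported above is to run Lloyd's algorithm from the two kinds of starting centroids. The KD-tree seeding below (recursive median splits along the widest dimension, one seed per leaf cell) is one common reading of KD-tree initialization rather than the authors' exact procedure, and the synthetic blobs stand in for the student data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def kmeans(X, C, iters=100):
        """Plain Lloyd's algorithm; returns final centroids and SSE."""
        for _ in range(iters):
            d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
            lab = d.argmin(1)
            C = np.array([X[lab == j].mean(0) if np.any(lab == j) else C[j]
                          for j in range(len(C))])
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        return C, d.min(1).sum()

    def kdtree_seeds(X, depth):
        """KD-tree-style seeding: recursive median splits along the widest
        dimension; the mean of each leaf cell is a starting centroid."""
        if depth == 0:
            return [X.mean(0)]
        j = np.ptp(X, 0).argmax()            # widest dimension
        m = np.median(X[:, j])
        left, right = X[X[:, j] <= m], X[X[:, j] > m]
        return kdtree_seeds(left, depth - 1) + kdtree_seeds(right, depth - 1)

    # Four well-separated blobs as a stand-in for the academic data
    X = np.vstack([rng.normal(mu, 0.5, (250, 2))
                   for mu in [(0, 0), (5, 0), (0, 5), (5, 5)]])

    _, sse_rand = kmeans(X, X[rng.choice(len(X), 4, replace=False)])
    _, sse_kd = kmeans(X, np.array(kdtree_seeds(X, 2)))   # 2^2 = 4 seeds
    print(sse_rand, sse_kd)   # KD-tree seeding typically matches or lowers SSE
    ```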

  5. Designing management strategies for carbon dioxide storage and utilization under uncertainty using inexact modelling

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2017-06-01

    Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which provides convenience for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.
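
    A bi-random variable in the sense used above is straightforward to sample in two layers. The sketch below uses illustrative numbers rather than the paper's CCUS data, and checks the two-layer sample against the closed-form equivalent obtained by integrating out the random mean.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)

    # Bi-random storage capacity X ~ N(M, sigma^2), with the mean itself
    # random: M ~ N(mu0, tau^2). All numbers are illustrative only.
    mu0, tau, sigma = 100.0, 5.0, 8.0
    M = rng.normal(mu0, tau, 100_000)        # outer layer: random mean
    X = rng.normal(M, sigma)                 # inner layer: value given the mean

    demand = 90.0
    # Chance constraint P(X >= demand), Monte Carlo over both layers ...
    print((X >= demand).mean())
    # ... agrees with the closed form after integrating out M,
    # since X is then N(mu0, sigma^2 + tau^2).
    print(1 - norm.cdf(demand, mu0, np.hypot(sigma, tau)))
    ```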

  6. Log-normal distribution from a process that is not multiplicative but is additive.

    PubMed

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
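
    The paper's central claim, that a sum of sufficiently skewed positive summands stays closer to a log-normal than to any Gaussian, can be seen in a few lines of simulation; the summand distribution and sample sizes below are arbitrary choices for the demonstration.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    # Sum of positive, strongly skewed summands (lognormal with sigma = 1.5)
    n_summands, n_samples = 100, 20_000
    s = rng.lognormal(0.0, 1.5, (n_samples, n_summands)).sum(axis=1)

    # Kolmogorov-Smirnov distance of the standardized sum to a normal law,
    # versus the standardized log of the sum (i.e., a lognormal fit)
    z_norm = (s - s.mean()) / s.std()
    z_logn = (np.log(s) - np.log(s).mean()) / np.log(s).std()
    print("KS distance, normal fit:   ", stats.kstest(z_norm, "norm").statistic)
    print("KS distance, lognormal fit:", stats.kstest(z_logn, "norm").statistic)
    # The lognormal fit stays markedly closer even with 100 summands,
    # consistent with the slow convergence described in the abstract.
    ```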

  7. Variable density randomized stack of spirals (VDR-SoS) for compressive sensing MRI.

    PubMed

    Valvano, Giuseppe; Martini, Nicola; Landini, Luigi; Santarelli, Maria Filomena

    2016-07-01

    To develop a 3D sampling strategy based on a stack of variable density spirals for compressive sensing MRI. A random sampling pattern was obtained by rotating each spiral by a random angle and by delaying for a few time steps the gradient waveforms of the different interleaves. A three-dimensional (3D) variable sampling density was obtained by designing different variable density spirals for each slice encoding. The proposed approach was tested with phantom simulations up to a five-fold undersampling factor. Fully sampled 3D datasets of a human knee and of a human brain were obtained from a healthy volunteer. The proposed approach was tested with off-line reconstructions of the knee dataset up to a four-fold acceleration and compared with other noncoherent trajectories. The proposed approach outperformed the standard stack of spirals for various undersampling factors. The level of coherence and the reconstruction quality of the proposed approach were similar to those of other trajectories that, however, require 3D gridding for the reconstruction. The variable density randomized stack of spirals (VDR-SoS) is an easily implementable trajectory that could represent a valid sampling strategy for 3D compressive sensing MRI. It guarantees low levels of coherence without requiring 3D gridding. Magn Reson Med 76:59-69, 2016. © 2015 Wiley Periodicals, Inc.

  8. Floodplain complexity and surface metrics: influences of scale and geomorphology

    USGS Publications Warehouse

    Scown, Murray W.; Thoms, Martin C.; DeJager, Nathan R.

    2015-01-01

    Many studies of fluvial geomorphology and landscape ecology examine a single river or landscape and thus lack generality, making it difficult to develop a general understanding of the linkages between landscape patterns and larger-scale driving variables. We examined the spatial complexity of eight floodplain surfaces in widely different geographic settings and determined how patterns measured at different scales relate to different environmental drivers. Floodplain surface complexity is defined as having highly variable surface conditions that are also highly organised in space. These two components of floodplain surface complexity were measured across multiple sampling scales from LiDAR-derived DEMs. The surface character and variability of each floodplain were measured using four surface metrics, namely standard deviation, skewness, coefficient of variation, and standard deviation of curvature, from a series of moving window analyses ranging from 50 to 1000 m in radius. The spatial organisation of each floodplain surface was measured using spatial correlograms of the four surface metrics. Surface character, variability, and spatial organisation differed among the eight floodplains, and random, fragmented, highly patchy, and simple gradient spatial patterns were exhibited, depending upon the metric and window size. Differences in surface character and variability among the floodplains became statistically stronger with increasing sampling scale (window size), as did their associations with environmental variables. Sediment yield was consistently associated with differences in surface character and variability, as were flow discharge and variability at smaller sampling scales. Floodplain width was associated with differences in the spatial organisation of surface conditions at smaller sampling scales, while valley slope was weakly associated with differences in spatial organisation at larger scales. A comparison of floodplain landscape patterns measured at different scales would improve our understanding of the role that different environmental variables play at different scales and in different geomorphic settings.
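
    Surface metrics of the kind used above are simple to compute from a gridded DEM with moving-window filters. The sketch below is a schematic stand-in for the LiDAR workflow: the synthetic DEM, the window radius, and the Laplacian used as a curvature proxy are all assumptions made for illustration.

    ```python
    import numpy as np
    from scipy import ndimage, stats

    rng = np.random.default_rng(5)

    # Synthetic "floodplain" DEM: a gentle gradient plus correlated noise
    dem = (np.linspace(1, 3, 200)[None, :] +
           ndimage.gaussian_filter(rng.normal(0, 1, (200, 200)), 5))

    def window_metrics(dem, radius):
        """Surface character/variability metrics in a moving window."""
        size = 2 * radius + 1
        sd = ndimage.generic_filter(dem, np.std, size)          # std deviation
        cv = sd / ndimage.uniform_filter(dem, size)             # coeff. of var.
        sk = ndimage.generic_filter(dem, stats.skew, size)      # skewness
        curv = ndimage.laplace(dem)                             # curvature proxy
        sd_curv = ndimage.generic_filter(curv, np.std, size)    # std of curvature
        return sd, cv, sk, sd_curv

    sd, cv, sk, sd_curv = window_metrics(dem, radius=5)
    print(sd.mean(), cv.mean(), sk.mean(), sd_curv.mean())
    ```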

  9. Symmetric co-movement between Malaysia and Japan stock markets

    NASA Astrophysics Data System (ADS)

    Razak, Ruzanna Ab; Ismail, Noriszura

    2017-04-01

    The copula approach is a flexible tool known to capture linear, nonlinear, symmetric, and asymmetric dependence between two or more random variables. It is often used as a co-movement measure between stock market returns. The information obtained from copulas, such as the level of association of financial markets during normal, bullish, and bearish market phases, is useful for investment strategies and risk management. However, studies of the co-movement between the Malaysia and Japan markets are limited, especially using copulas. Hence, we aim to investigate the dependence structure between the Malaysia and Japan capital markets for the period spanning from 2000 to 2012. In this study, we showed that the bivariate normal distribution is not suitable as the bivariate distribution or to represent the dependence between the Malaysia and Japan markets. Instead, the Gaussian (normal) copula was found to be a good fit to represent the dependence. From our findings, it can be concluded that simple distribution fitting such as the bivariate normal distribution does not suit financial time series data, whose characteristics are often leptokurtic. The nature of the data is treated by ARMA-GARCH with heavy-tailed distributions, and these can be associated with copula functions. Regarding the dependence structure between the Malaysia and Japan markets, the findings suggest that both markets co-move concurrently during normal periods.
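
    In the simplest semiparametric treatment, fitting a Gaussian copula reduces to transforming each margin to normal scores through its ranks and estimating their correlation. The sketch below fabricates two heavy-tailed series with a known Gaussian copula in place of the Malaysia and Japan returns, which in the study would first be filtered through ARMA-GARCH.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    # Stand-in for standardized residuals of two markets: a Gaussian copula
    # (true correlation 0.6) with heavy-tailed Student-t margins
    z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], 2000)
    x, y = stats.t.ppf(stats.norm.cdf(z.T), df=4)

    def gaussian_copula_rho(x, y):
        """Ranks -> pseudo-uniforms -> normal scores -> correlation."""
        u = stats.rankdata(x) / (len(x) + 1)
        v = stats.rankdata(y) / (len(y) + 1)
        return np.corrcoef(stats.norm.ppf(u), stats.norm.ppf(v))[0, 1]

    print("estimated copula correlation:", gaussian_copula_rho(x, y))
    ```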

  10. An improved non-Markovian degradation model with long-term dependency and item-to-item uncertainty

    NASA Astrophysics Data System (ADS)

    Xi, Xiaopeng; Chen, Maoyin; Zhang, Hanwen; Zhou, Donghua

    2018-05-01

    It is widely noted in the literature that the degradation process should be simplified into a memoryless Markovian process for the purpose of predicting the remaining useful life (RUL). However, there actually exists long-term dependency in the degradation processes of some industrial systems, including electromechanical equipment, oil tankers, and large blast furnaces. This implies that the new degradation state depends not only on the current state, but also on the historical states. Such dynamic systems cannot be accurately described by traditional Markovian models. Here we present an improved non-Markovian degradation model with both long-term dependency and item-to-item uncertainty. As a typical non-stationary process with dependent increments, fractional Brownian motion (FBM) is utilized to simulate the fractal diffusion of practical degradations. The uncertainty among multiple items can be represented by a random variable of the drift. Based on this model, the unknown parameters are estimated through the maximum likelihood (ML) algorithm, while a closed-form solution to the RUL distribution is further derived using a weak convergence theorem. The practicability of the proposed model is fully verified by two real-world examples. The results demonstrate that the proposed method can effectively reduce the prediction error.
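
    The two model ingredients named above, long-term dependency via FBM and item-to-item uncertainty via a random drift, can be prototyped directly. The sketch below simulates FBM by Cholesky factorization of its covariance; the Hurst exponent, drift distribution, and failure threshold are illustrative choices, and first-passage times are read off empirically rather than from the paper's closed-form RUL distribution.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def fbm_paths(n_paths, n_steps, H, T=1.0):
        """Fractional Brownian motion via Cholesky of its covariance;
        H > 0.5 gives persistent increments (long-term dependency)."""
        t = np.linspace(T / n_steps, T, n_steps)
        cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                     - np.abs(t[:, None] - t[None, :]) ** (2 * H))
        L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))
        return t, rng.normal(size=(n_paths, n_steps)) @ L.T

    # Degradation: random drift (item-to-item uncertainty) + FBM diffusion
    n_items, n_steps, H = 500, 200, 0.7
    t, B = fbm_paths(n_items, n_steps, H)
    drift = rng.normal(2.0, 0.3, (n_items, 1))     # illustrative drift law
    X = drift * t + 0.5 * B                        # degradation paths

    threshold = 1.5
    hit = (X >= threshold).any(1)                  # items that fail within [0,T]
    first_hit = np.argmax(X >= threshold, axis=1)  # index of first crossing
    print("empirical mean failure time:", t[first_hit[hit]].mean())
    ```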

  11. Random Item IRT Models

    ERIC Educational Resources Information Center

    De Boeck, Paul

    2008-01-01

    It is common practice in IRT to consider items as fixed and persons as random. Both continuous and categorical person parameters are most often random variables, whereas for items only continuous parameters are used and they are commonly of the fixed type, although exceptions occur. It is shown in the present article that random item parameters…

  12. Is the Non-Dipole Magnetic Field Random?

    NASA Technical Reports Server (NTRS)

    Walker, Andrew D.; Backus, George E.

    1996-01-01

    Statistical modelling of the Earth's magnetic field B has a long history. In particular, the spherical harmonic coefficients of scalar fields derived from B can be treated as Gaussian random variables. In this paper, we give examples of highly organized fields whose spherical harmonic coefficients pass tests for independent Gaussian random variables. The fact that coefficients at some depth may be usefully summarized as independent samples from a normal distribution need not imply that there really is some physical, random process at that depth. In fact, the field can be extremely structured and still be regarded for some purposes as random. In this paper, we examined the radial magnetic field B(sub r) produced by the core, but the results apply to any scalar field on the core-mantle boundary (CMB) which determines B outside the CMB.

  13. Effects of Assuming Independent Component Failure Times, If They Are Actually Dependent, In a Series System

    DTIC Science & Technology

    1988-05-31

    Non-negative random variables T1, …, Tp are considered, with system life Y = τ(T1, …, Tp) and an associated failure pattern…

  14. Are glucose levels, glucose variability and autonomic control influenced by inspiratory muscle exercise in patients with type 2 diabetes? Study protocol for a randomized controlled trial.

    PubMed

    Schein, Aso; Correa, Aps; Casali, Karina Rabello; Schaan, Beatriz D

    2016-01-20

    Physical exercise reduces glucose levels and glucose variability in patients with type 2 diabetes. Acute inspiratory muscle exercise has been shown to reduce these parameters in a small group of patients with type 2 diabetes, but these results have yet to be confirmed in a well-designed study. The aim of this study is to investigate the effect of acute inspiratory muscle exercise on glucose levels, glucose variability, and cardiovascular autonomic function in patients with type 2 diabetes. This study will use a randomized clinical trial crossover design. A total of 14 subjects will be recruited and randomly allocated to two groups to perform acute inspiratory muscle loading at 2% of maximal inspiratory pressure (PImax, placebo load) or 60% of PImax (experimental load). Inspiratory muscle training could be a novel exercise modality to be used to decrease glucose levels and glucose variability. ClinicalTrials.gov NCT02292810.

  15. From Fractal Trees to Deltaic Networks

    NASA Astrophysics Data System (ADS)

    Cazanacli, D.; Wolinsky, M. A.; Sylvester, Z.; Cantelli, A.; Paola, C.

    2013-12-01

    Geometric networks that capture many aspects of natural deltas can be constructed from simple concepts from graph theory and normal probability distributions. Fractal trees with symmetrical geometries are the result of replicating two simple geometric elements, line segments whose lengths decrease and bifurcation angles that are commonly held constant. Branches can also have a thickness, which in the case of natural distributary systems is the equivalent of channel width. In river- or wave-dominated natural deltas, the channel width is a function of discharge. When normal variations around the mean values for length, bifurcation angles, and discharge are applied, along with either pruning of 'clashing' branches or merging (equivalent to channel confluence), fractal trees start resembling natural deltaic networks, except that the resulting channels are unnaturally straight. Introducing a bifurcation probability yields fewer, naturally curved channels. If there is no bifurcation, the direction of each new segment depends on the direction of the previous segment upstream (correlated random walk) and, to a lesser extent, on a general direction of growth (directional bias). When bifurcation occurs, the resulting two directions also depend on the bifurcation angle and the discharge split proportions, with the dominant branch following the direction of the upstream parent channel closely. The bifurcation probability controls the channel density and, in conjunction with the variability of the directional angles, the overall curvature of the channels. The growth of the network in effect is associated with net delta progradation. The overall shape and shape evolution of the delta depend mainly on the average bifurcation angle and its variability, coupled with the degree of dominant direction dependency (bias). The proposed algorithm demonstrates how, based on only a few simple rules, a wide variety of channel networks resembling natural deltas can be replicated.
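
    The growth rules described above translate almost line for line into a toy generator: a correlated random walk with a directional bias, plus a bifurcation probability that occasionally splits a channel tip. The sketch below omits merging and replaces pruning with a crude cap on the number of active tips; all parameter values are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def grow_delta(n_steps=60, p_bif=0.15, step=1.0,
                   persistence=0.8, bias=0.1, angle_sd=0.2, bif_angle=0.5):
        """Toy deltaic network: correlated random walk with directional bias
        and probabilistic bifurcation (all angles in radians)."""
        general = 0.0                       # overall direction of progradation
        tips = [((0.0, 0.0), 0.0)]          # (position, direction) of tips
        segments = []
        for _ in range(n_steps):
            new_tips = []
            for (x, y), th in tips:
                # correlated random walk pulled weakly toward `general`
                th = persistence * th + bias * general + rng.normal(0, angle_sd)
                if rng.random() < p_bif:    # bifurcate: dominant + side branch
                    dirs = [th, th + np.sign(rng.random() - 0.5) * bif_angle]
                else:
                    dirs = [th]
                for d in dirs:
                    nx, ny = x + step * np.cos(d), y + step * np.sin(d)
                    segments.append(((x, y), (nx, ny)))
                    new_tips.append(((nx, ny), d))
            tips = new_tips[:200]           # crude cap standing in for pruning
        return segments

    print(len(grow_delta()), "channel segments")
    ```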

  16. CSI 2264: CHARACTERIZING YOUNG STARS IN NGC 2264 WITH STOCHASTICALLY VARYING LIGHT CURVES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stauffer, John; Rebull, Luisa; Carey, Sean

    2016-03-15

    We provide CoRoT and Spitzer light curves and other supporting data for 17 classical T Tauri stars in NGC 2264 whose CoRoT light curves exemplify the “stochastic” light curve class as defined in 2014 by Cody et al. The most probable physical mechanism to explain the optical variability within this light curve class is time-dependent mass accretion onto the stellar photosphere, producing transient hot spots. Where we have appropriate spectral data, we show that the veiling variability in these stars is consistent in both amplitude and timescale with the optical light curve morphology. The veiling variability is also well-correlated with the strength of the He I 6678 Å emission line, predicted by models to arise in accretion shocks on or near the stellar photosphere. Stars with accretion burst light curve morphology also have variable mass accretion. The stochastic and accretion burst light curves can both be explained by a simple model of randomly occurring flux bursts, with the stochastic light curve class having a higher frequency of lower amplitude events. Members of the stochastic light curve class have only moderate mass accretion rates. Their Hα profiles usually have blueshifted absorption features, probably originating in a disk wind. The lack of periodic signatures in the light curves suggests that little of the variability is due to long-lived hot spots rotating into or out of our line of sight; instead, the primary driver of the observed photometric variability is likely to be instabilities in the inner disk that lead to variable mass accretion.

  17. Geopathic stress zones: short-term effects on work performance and well-being?

    PubMed

    Augner, Christoph; Hacker, Gerhard W; Jekel, Ilse

    2010-06-01

    The aim of the study was to evaluate whether two different locations in the same room as tested by dowsers ("geopathic stress zone" [GSZ] versus "more neutral zone" [NZ]) would show significant short-term effects on work performance and well-being. It was also tested whether a device reported to "neutralize" GSZ would influence results obtained with the specific setup used in this study. This was a blinded, randomized, short-term laboratory experiment using a within-subject design. The study was conducted in the laboratory of the Research Institute for Frontier Questions of Medicine and Biotechnology at Salzburg Federal Hospital. The subjects were 26 persons, aged 20-57. Participants had to accomplish reaction tasks during three different conditions: GSZ, NZ, and GSZ with a device reported to "neutralize" GSZ. These conditions were counterbalanced into six different sequences and randomized to the subjects. At the end of each condition, a standardized well-being questionnaire had to be completed. Dependent variables were reactive stress tolerance (reaction time, timely right answers, right answers, false answers, left out) and well-being (described by six adjectives). No location-dependent effects on performance during reactive stress tolerance test were seen. For well-being, analysis of variance revealed a trend (p = 0.07) and showed significantly poorer well-being during the GSZ condition compared to NZ (p = 0.01). This study shows that well-being can be location dependent and that this might be caused by a so-called GSZ. However, in our short-term experiment, factors of work performance tested remained unaffected.

  18. A study of the effects of gender and different instructional media (computer-assisted instruction tutorials vs. textbook) on student attitudes and achievement in a team-taught integrated science class

    NASA Astrophysics Data System (ADS)

    Eardley, Julie Anne

    The purpose of this study was to determine the effect of different instructional media (computer assisted instruction (CAI) tutorial vs. traditional textbook) on student attitudes toward science and computers and achievement scores in a team-taught integrated science course, ENS 1001, "The Whole Earth Course," which was offered at Florida Institute of Technology during the Fall 2000 term. The effect of gender on student attitudes toward science and computers and achievement scores was also investigated. This study employed a randomized pretest-posttest control group experimental research design with a sample of 30 students (12 males and 18 females). Students had registered for weekly lab sessions that accompanied the course and had been randomly assigned to the treatment or control group. The treatment group used a CAI tutorial for completing homework assignments and the control group used the required textbook for completing homework assignments. The Attitude toward Science and Computers Questionnaire and Achievement Test were the two instruments administered during this study to measure students' attitudes and achievement score changes. A multivariate analysis of covariance (MANCOVA), using hierarchical multiple regression/correlation (MRC), was employed to determine: (1) treatment versus control group attitude and achievement differences; and (2) male versus female attitude and achievement differences. The differences between the treatment group's and control group's homework averages were determined by t test analyses. The overall MANCOVA model was found to be significant at p < .05. Examining research factor set independent variables separately resulted in gender being the only variable that significantly contributed in explaining the variability in a dependent variable, attitudes toward science and computers. T test analyses of the homework averages showed no significant differences. Contradictory to the findings of this study, anecdotal information from personal communication, course evaluations, and homework assignments indicated favorable attitudes and higher achievement scores for a majority of the students in the treatment group.

  19. A new phenotyping pipeline reveals three types of lateral roots and a random branching pattern in two cereals.

    PubMed

    Passot, Sixtine; Moreno-Ortega, Beatriz; Moukouanga, Daniel; Balsera, Crispulo; Guyomarc'h, Soazig; Lucas, Mikael; Lobet, Guillaume; Laplaze, Laurent; Muller, Bertrand; Guédon, Yann

    2018-05-11

    Recent progress in root phenotyping has focused mainly on increasing throughput for genetic studies while identifying root developmental patterns has been comparatively underexplored. We introduce a new phenotyping pipeline for producing high-quality spatio-temporal root system development data and identifying developmental patterns within these data. The SmartRoot image analysis system and temporal and spatial statistical models were applied to two cereals, pearl millet (Pennisetum glaucum) and maize (Zea mays). Semi-Markov switching linear models were used to cluster lateral roots based on their growth rate profiles. These models revealed three types of lateral roots with similar characteristics in both species. The first type corresponds to fast and accelerating roots, the second to rapidly arrested roots, and the third to an intermediate type where roots cease elongation after a few days. These types of lateral roots were retrieved in different proportions in a maize mutant affected in auxin signaling, while the first most vigorous type was absent in maize plants exposed to severe shading. Moreover, the classification of growth rate profiles was mirrored by a ranking of anatomical traits in pearl millet. Potential dependencies in the succession of lateral root types along the primary root were then analyzed using variable-order Markov chains. The lateral root type was not influenced by the shootward neighbor root type or by the distance from this root. This random branching pattern of primary roots was remarkably conserved, despite the high variability of root systems in both species. Our phenotyping pipeline opens the door to exploring the genetic variability of lateral root developmental patterns. © 2018 American Society of Plant Biologists. All rights reserved.

  20. Allowing for Horizontally Heterogeneous Clouds and Generalized Overlap in an Atmospheric GCM

    NASA Technical Reports Server (NTRS)

    Lee, D.; Oreopoulos, L.; Suarez, M.

    2011-01-01

    While fully accounting for 3D effects in Global Climate Models (GCMs) appears not realistic at the present time for a variety of reasons, such as computational cost and unavailability of 3D cloud structure in the models, incorporation in radiation schemes of subgrid cloud variability described by one-point statistics is now considered feasible and is being actively pursued. This development has gained momentum once it was demonstrated that CPU-intensive spectrally explicit Independent Column Approximation (ICA) calculations can be substituted by stochastic Monte Carlo ICA (McICA) calculations, where spectral integration is accomplished in a manner that produces relatively benign random noise. The McICA approach has been implemented in Goddard's GEOS-5 atmospheric GCM as part of the implementation of the RRTMG radiation package. GEOS-5 with McICA and RRTMG can handle horizontally variable clouds whose overlap can be set via a cloud generator anywhere within the full spectrum between maximum and random, both in terms of cloud fraction and layer condensate distributions. In our presentation we will show radiative and other impacts of the combined horizontal and vertical cloud variability on multi-year simulations of an otherwise untuned GEOS-5 with fixed SSTs. Introducing cloud horizontal heterogeneity without changing the mean amounts of condensate reduces reflected solar and increases thermal radiation to space, but disproportionate changes may increase the radiative imbalance at TOA. The net radiation at TOA can be modulated by allowing the parameters of the generalized overlap and heterogeneity scheme to vary, a dependence whose behavior we will discuss. The sensitivity of the cloud radiative forcing to the parameters of cloud horizontal heterogeneity and comparisons with CERES-derived forcing will be shown.

  1. Soil Sampling Techniques For Alabama Grain Fields

    NASA Technical Reports Server (NTRS)

    Thompson, A. N.; Shaw, J. N.; Mask, P. L.; Touchton, J. T.; Rickman, D.

    2003-01-01

    Characterizing the spatial variability of nutrients facilitates precision soil sampling. Questions exist regarding the best technique for directed soil sampling based on a priori knowledge of soil and crop patterns. The objective of this study was to evaluate zone delineation techniques for Alabama grain fields to determine which method best minimized the soil test variability. Site one (25.8 ha) and site three (20.0 ha) were located in the Tennessee Valley region, and site two (24.2 ha) was located in the Coastal Plain region of Alabama. Tennessee Valley soils ranged from well drained Rhodic and Typic Paleudults to somewhat poorly drained Aquic Paleudults and Fluventic Dystrudepts. Coastal Plain soils ranged from coarse-loamy Rhodic Kandiudults to loamy Arenic Kandiudults. Soils were sampled by grid soil sampling methods (grid sizes of 0.40 ha and 1 ha) consisting of: 1) twenty composited cores collected randomly throughout each grid (grid-cell sampling) and 2) six composited cores collected randomly from a 3x3 m area at the center of each grid (grid-point sampling). Zones were established from 1) an Order 1 Soil Survey, 2) corn (Zea mays L.) yield maps, and 3) airborne remote sensing images. All soil properties were moderately to strongly spatially dependent as per semivariogram analyses. Differences in grid-point and grid-cell soil test values suggested grid-point sampling does not accurately represent grid values. Zones created by soil survey, yield data, and remote sensing images displayed lower coefficients of variation (CV) for soil test values than overall field values, suggesting these techniques group soil test variability. However, few differences were observed between the three zone delineation techniques. Results suggest directed sampling using zone delineation techniques outlined in this paper would result in more efficient soil sampling for these Alabama grain fields.

  2. An Integrated Probabilistic-Fuzzy Assessment of Uncertainty Associated with Human Health Risk to MSW Landfill Leachate Contamination

    NASA Astrophysics Data System (ADS)

    Mishra, H.; Karmakar, S.; Kumar, R.

    2016-12-01

    Risk assessment does not remain simple when it involves multiple uncertain variables. Uncertainties in risk assessment result mainly from (1) the lack of knowledge of input variables (mostly random), and (2) data obtained from expert judgment or subjective interpretation of available information (non-random). An integrated probabilistic-fuzzy health risk approach has been proposed for the simultaneous treatment of random and non-random uncertainties associated with the input parameters of a health risk model. The LandSim 2.5 landfill simulator has been used to simulate the Turbhe landfill (Navi Mumbai, India) activities for various time horizons. The LandSim-simulated concentrations of six heavy metals in groundwater have then been used in the health risk model. Water intake, exposure duration, exposure frequency, bioavailability, and averaging time are treated as fuzzy variables, while the heavy metal concentrations and body weight are considered probabilistic variables. Identical alpha-cut and reliability levels are considered for the fuzzy and probabilistic variables, respectively, and uncertainty in non-carcinogenic human health risk is estimated using ten thousand Monte Carlo simulations (MCS). This is the first effort in which all the health risk variables have been considered as non-deterministic for the estimation of uncertainty in risk output. The non-exceedance probability of the Hazard Index (HI), the summation of hazard quotients, for the heavy metals Co, Cu, Mn, Ni, Zn, and Fe for the male and female populations has been quantified and found to be high (HI > 1) for all the considered time horizons, which evidently shows the possibility of adverse health effects on the population residing near the Turbhe landfill.
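
    The integrated treatment sketched above can be prototyped by carrying an alpha-cut interval for each fuzzy input through an ordinary Monte Carlo loop over the probabilistic inputs. The sketch below collapses the health-risk model to a single simplified hazard quotient (so HI equals one HQ) with invented parameter values; the real study propagates six metals and several more fuzzy variables.

    ```python
    import numpy as np

    rng = np.random.default_rng(13)

    def triangular_alpha_cut(lo, mode, hi, alpha):
        """Interval of a triangular fuzzy number at membership level alpha."""
        return lo + alpha * (mode - lo), hi - alpha * (hi - mode)

    # Simplified hazard quotient HQ = C * intake / (BW * RfD);
    # all numbers below are illustrative, not the study's values.
    n_mc, alpha = 10_000, 0.8
    C = rng.lognormal(np.log(0.5), 0.6, n_mc)   # metal conc. (probabilistic)
    BW = rng.normal(70, 10, n_mc)               # body weight (probabilistic)
    RfD = 0.02                                  # reference dose (fixed here)

    # Fuzzy daily water intake, alpha-cut of a triangular number (L/day)
    lo_i, hi_i = triangular_alpha_cut(1.5, 2.0, 2.5, alpha)
    HI_lo = C * lo_i / (BW * RfD)               # hazard index at the interval
    HI_hi = C * hi_i / (BW * RfD)               # endpoints of the alpha-cut
    print("P(HI > 1) ranges from", (HI_lo > 1).mean(), "to", (HI_hi > 1).mean())
    ```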

  3. Optimizing a Sensor Network with Data from Hazard Mapping Demonstrated in a Heavy-Vehicle Manufacturing Facility.

    PubMed

    Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A

    2018-05-28

    To design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving the temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations to capture the temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than that of RMC, with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with the random-removal order (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than the random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using optimal removal were not statistically different from random removal when averaged over the entire facility. No statistical difference was observed between the optimal- and random-removal methods for RMCs, which were less variable in time and space than PNCs. Optimized removal performed better than random removal in preserving the high temporal variability and accuracy of the hazard map for PNC, but not for the more spatially homogeneous RMC. These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.

  4. Radio Occultation Investigation of the Rings of Saturn and Uranus

    NASA Technical Reports Server (NTRS)

    Marouf, Essam A.

    1997-01-01

    The proposed work addresses two main objectives: (1) to pursue the development of the random diffraction screen model for analytical/computational characterization of the extinction and near-forward scattering by ring models that include particle crowding, uniform clustering, and clustering along preferred orientations (anisotropy). The characterization is crucial for proper interpretation of past (Voyager) and future (Cassini) ring occultation observations in terms of physical ring properties, and is needed to address outstanding puzzles in the interpretation of the Voyager radio occultation data sets; (2) to continue the development of spectral analysis techniques to identify and characterize the power scattered by all features of Saturn's rings that can be resolved in the Voyager radio occultation observations, and to use the results to constrain the maximum particle size and its abundance. Characterization of the variability of surface mass density among the main ring features and within individual features is important for constraining the ring mass and is relevant to investigations of ring dynamics and origin. We completed the development of the stochastic geometry (random screen) model for the interaction of electromagnetic waves with planetary ring models, and used the model to relate the oblique optical depth and the angular spectrum of the near-forward scattered signal to statistical averages of the stochastic geometry of the randomly blocked area. We developed analytical results based on the assumption of Poisson statistics for particle positions, and investigated the dependence of the oblique optical depth and angular spectrum on the fractional area blocked, vertical ring profile, and incidence angle when the volume fraction is small, demonstrating agreement with classical radiative transfer predictions for oblique incidence. We also developed simulation procedures to generate statistical realizations of random screens corresponding to uniformly packed ring models, and used the results to characterize the dependence of the extinction and near-forward scattering on ring thickness, packing fraction, and the ring opening angle.

  5. Variable Selection in the Presence of Missing Data: Imputation-based Methods.

    PubMed

    Zhao, Yize; Long, Qi

    2017-01-01

    Variable selection plays an essential role in regression analysis as it identifies important variables that are associated with outcomes and is known to improve the predictive accuracy of resulting models. Variable selection methods have been widely investigated for fully observed data. However, in the presence of missing data, methods for variable selection need to be carefully designed to account for missing data mechanisms and the statistical techniques used for handling missing data. Since imputation is arguably the most popular method for handling missing data due to its ease of use, statistical methods for variable selection that are combined with imputation are of particular interest. These methods, valid under the assumptions of missing at random (MAR) or missing completely at random (MCAR), largely fall into three general strategies. The first strategy applies existing variable selection methods to each imputed dataset and then combines variable selection results across all imputed datasets. The second strategy applies existing variable selection methods to stacked imputed datasets. The third strategy combines resampling techniques such as the bootstrap with imputation. Despite recent advances, this area remains under-developed and offers fertile ground for further research.
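
    The first strategy above, select on each imputed dataset and then combine, is easy to sketch. The hot-deck imputation and majority-vote combination rule below are one simple instance of that strategy rather than a prescription from the paper; the lasso serves as the base selection method.

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(9)

    # Data with effects only from the first two predictors, MCAR missingness
    n, p = 300, 10
    X = rng.normal(size=(n, p))
    y = 2 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)
    X_miss = X.copy()
    X_miss[rng.random((n, p)) < 0.15] = np.nan

    # Strategy 1: select variables on each imputed dataset, then combine
    m, votes = 20, np.zeros(p)
    for _ in range(m):
        Xi = X_miss.copy()
        for j in range(p):                      # simple hot-deck imputation
            miss = np.isnan(Xi[:, j])
            Xi[miss, j] = rng.choice(Xi[~miss, j], miss.sum())
        coef = LassoCV(cv=5).fit(Xi, y).coef_
        votes += coef != 0                      # record which variables survive
    selected = np.where(votes / m > 0.5)[0]     # majority-vote combination
    print("selected predictors:", selected)
    ```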

  6. Life Predicted in a Probabilistic Design Space for Brittle Materials With Transient Loads

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Palfi, Tamas; Reh, Stefan

    2005-01-01

    Analytical techniques have progressively become more sophisticated, and now we can consider the probabilistic nature of the entire space of random input variables in assessing the lifetime reliability of brittle structures. This was demonstrated with NASA's CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code combined with the commercially available ANSYS Probabilistic Design System (ANSYS/PDS), a probabilistic analysis tool that is an integral part of the ANSYS finite-element analysis program. ANSYS/PDS allows probabilistic loads, component geometry, and material properties to be considered in the finite-element analysis. CARES/Life predicts the time-dependent probability of failure of brittle material structures under generalized thermomechanical loading, such as that found in a turbine engine hot-section. Glenn researchers coupled ANSYS/PDS with CARES/Life to assess the effects of the stochastic variables of component geometry, loading, and material properties on the predicted life of the component for fully transient thermomechanical loading and cyclic loading.

  7. The effect of music reinforcement for non-nutritive sucking on nipple feeding of premature infants.

    PubMed

    Standley, Jayne M; Cassidy, Jane; Grant, Roy; Cevasco, Andrea; Szuch, Catherine; Nguyen, Judy; Walworth, Darcy; Procelli, Danielle; Jarred, Jennifer; Adams, Kristen

    2010-01-01

    In this randomized, controlled multi-site study, the pacifier-activated-lullaby system (PAL) was used with 68 premature infants. Dependent variables were (a) total number of days prior to nipple feeding, (b) days of nipple feeding, (c) discharge weight, and (d) overall weight gain. Independent variables included contingent music reinforcement for non-nutritive sucking for PAL intervention at 32 vs. 34 vs. 36 weeks adjusted gestational age (AGA), with each age group subdivided into three trial conditions: control consisting of no PAL used vs. one 15-minute PAL trial vs. three 15-minute PAL trials. At 34 weeks, PAL trials significantly shortened gavage feeding length, and three trials were significantly better than one trial. At 32 weeks, PAL trials lengthened gavage feeding. Female infants learned to nipple feed significantly faster than male infants. It was noted that PAL babies went home sooner after beginning to nipple feed, a trend that was not statistically significant.

  8. DGR mutagenic transposition occurs via hypermutagenic reverse transcription primed by nicked template RNA

    PubMed Central

    Naorem, Santa S.; Han, Jin; Wang, Shufang; Lee, William R.; Heng, Xiao; Miller, Jeff F.

    2017-01-01

    Diversity-generating retroelements (DGRs) are molecular evolution machines that facilitate microbial adaptation to environmental changes. Hypervariation occurs via a mutagenic retrotransposition process from a template repeat (TR) to a variable repeat (VR) that results in adenine-to-random nucleotide conversions. Here we show that reverse transcription of the Bordetella phage DGR is primed by an adenine residue in TR RNA and is dependent on the DGR-encoded reverse transcriptase (bRT) and accessory variability determinant (Avd), but is VR-independent. We also find that the catalytic center of bRT plays an essential role in site-specific cleavage of TR RNA for cDNA priming. Adenine-specific mutagenesis occurs during reverse transcription and does not involve dUTP incorporation, indicating it results from bRT-catalyzed misincorporation of standard deoxyribonucleotides. In vivo assays show that this hybrid RNA-cDNA molecule is required for mutagenic transposition, revealing a unique mechanism of DNA hypervariation for microbial adaptation. PMID:29109248

  9. Variability in the management of lithium poisoning.

    PubMed

    Roberts, Darren M; Gosselin, Sophie

    2014-01-01

    Three patterns of lithium poisoning are recognized: acute, acute-on-chronic, and chronic. Intravenous fluids with or without an extracorporeal treatment are the mainstay of treatment; their respective roles may differ depending on the mode of poisoning being treated. Recommendations for treatment selection are available but these are based on a small number of observational studies and their uptake by clinicians is not known. Clinician decision-making in the treatment of four cases of lithium poisoning was assessed at a recent clinical toxicology meeting using an audience response system. Variability in treatment decisions was evident in addition to discordance with published recommendations. Participants did not consistently indicate that hemodialysis was the first-line treatment, instead opting for a conservative approach, and continuous modalities were viewed favorably; this is in contrast to recommendations in some references. The development of multidisciplinary consensus guidelines may improve the management of patients with lithium poisoning but prospective randomized controlled trials are required to more clearly define the role of extracorporeal treatments. © 2014 Wiley Periodicals, Inc.

  10. Optimizing Multi-Product Multi-Constraint Inventory Control Systems with Stochastic Replenishments

    NASA Astrophysics Data System (ADS)

    Allah Taleizadeh, Ata; Aryanezhad, Mir-Bahador; Niaki, Seyed Taghi Akhavan

    Multi-periodic inventory control problems are mainly studied employing two assumptions. The first is continuous review, where, depending on the inventory level, orders can happen at any time, and the other is periodic review, where orders can only happen at the beginning of each period. In this study, we relax these assumptions and assume that the periodic replenishments are stochastic in nature. Furthermore, we assume that the periods between two replenishments are independent and identically distributed random variables. For the problem at hand, the decision variables are of integer type and there are two kinds of space and service level constraints for each product. We develop a model of the problem in which a combination of back-orders and lost sales is considered for the shortages. Then, we show that the model is of an integer nonlinear programming type, and in order to solve it, a search algorithm can be utilized. We employ a simulated annealing approach and provide a numerical example to demonstrate the applicability of the proposed methodology.
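
    A minimal version of the simulated annealing approach the authors employ is sketched below for a toy multi-product problem: integer order quantities, Monte Carlo-estimated expected holding and shortage costs, and a penalized shared-space constraint. All cost and demand parameters are invented, and the noisy objective is re-estimated at each comparison, which a production implementation would handle more carefully (e.g., with common random numbers).

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    n_prod = 5
    h = rng.uniform(1, 3, n_prod)          # holding cost per unit
    c = rng.uniform(5, 15, n_prod)         # shortage cost per unit
    vol = rng.uniform(1, 2, n_prod)        # space per unit
    cap = 60.0                             # shared space capacity
    mu = rng.uniform(5, 15, n_prod)        # mean demand per cycle

    def expected_cost(Q, n_mc=1000):
        """Monte Carlo estimate of holding + shortage cost, plus a penalty
        for violating the shared space constraint."""
        D = rng.poisson(mu, (n_mc, n_prod))
        cost = (h * np.maximum(Q - D, 0) + c * np.maximum(D - Q, 0)).sum(1).mean()
        penalty = 1e3 * max(0.0, vol @ Q - cap)
        return cost + penalty

    Q = np.ceil(mu).astype(int)            # initial integer solution
    cur_cost = expected_cost(Q)
    best, best_cost, T = Q.copy(), cur_cost, 10.0
    for it in range(3000):
        cand = Q.copy()
        j = rng.integers(n_prod)
        cand[j] = max(0, cand[j] + rng.choice([-1, 1]))   # integer neighborhood
        cand_cost = expected_cost(cand)
        if cand_cost < cur_cost or rng.random() < np.exp((cur_cost - cand_cost) / T):
            Q, cur_cost = cand, cand_cost                 # Metropolis acceptance
            if cur_cost < best_cost:
                best, best_cost = Q.copy(), cur_cost
        T *= 0.999                                        # geometric cooling
    print(best, best_cost)
    ```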

  11. [Spatial differentiation and impact factors of Yutian Oasis's soil surface salt based on GWR model].

    PubMed

    Yuan, Yu Yun; Wahap, Halik; Guan, Jing Yun; Lu, Long Hui; Zhang, Qin Qin

    2016-10-01

    In this paper, topsoil salinity data gathered from 24 sampling sites in the Yutian Oasis were used, and nine environmental variables closely related to soil salinity were selected as influencing factors. The spatial distribution characteristics of topsoil salinity and the spatial heterogeneity of the influencing factors were then analyzed by combining spatial autocorrelation with traditional regression analysis and a geographically weighted regression (GWR) model. Results showed that the topsoil salinity in the Yutian Oasis was not randomly distributed but showed strong spatial dependence, with a spatial autocorrelation index of 0.479. Groundwater salinity, groundwater depth, elevation, and temperature were the main factors influencing topsoil salt accumulation in arid-land oases, and they were spatially heterogeneous. The nine selected environmental variables, except soil pH, had significant influences on topsoil salinity, with spatial disparity. The GWR model was superior to the OLS model in the interpretation and estimation of spatially non-stationary data and also had a remarkable advantage in the visualization of modeling parameters.
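
    The core of a GWR model is one weighted least-squares fit per location, with kernel weights that decay with distance. The sketch below, on synthetic data with a coefficient that drifts eastward, shows why GWR can recover spatially non-stationary relationships that a single OLS fit averages away; the Gaussian kernel and fixed bandwidth are assumptions made for the demonstration.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Synthetic sites: coordinates, two covariates, spatially varying effect
    n = 200
    coords = rng.uniform(0, 10, (n, 2))
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
    beta1 = 1.0 + 0.3 * coords[:, 0]            # effect strengthens eastward
    y = X[:, 0] + beta1 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.3, n)

    def gwr(X, y, coords, bandwidth=2.0):
        """Geographically weighted regression: one weighted least-squares
        fit per site, with Gaussian kernel weights on distance."""
        betas = np.empty((len(y), X.shape[1]))
        for i in range(len(y)):
            d2 = ((coords - coords[i]) ** 2).sum(1)
            w = np.exp(-d2 / (2 * bandwidth**2))
            XtW = X.T * w
            betas[i] = np.linalg.solve(XtW @ X, XtW @ y)
        return betas

    betas = gwr(X, y, coords)
    # Local coefficients recover the west-to-east trend in beta1
    print(np.corrcoef(coords[:, 0], betas[:, 1])[0, 1])
    ```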

  12. Does wastewater discharge have relations with increase of Turner syndrome and Down syndrome?

    PubMed

    Choi, Intae

    2017-01-01

    The purpose of this study is to examine whether water and air pollutants are related to an increase in Turner syndrome and Down syndrome, genetic disorders caused by congenital chromosomal abnormalities, and to generate a hypothesis about the genetic health effects of environmental pollutants. A random-effects panel regression was conducted on data from Korea's metropolitan councils from 2012 to 2014. The dependent variable was the number of Turner syndrome and Down syndrome cases, and the main independent variables measured water and air pollution. Air pollutants did not have a significant impact on the number of Turner syndrome and Down syndrome cases; however, the number of wastewater discharge companies did have a significant relationship with the number of cases: the greater the number of wastewater discharge companies, the more Turner syndrome and Down syndrome cases were observed. Therefore, further scientific investigation of water and air pollutants in relation to genetic health effects is needed.
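
    The abstract does not include the estimation code. As an illustration only, the sketch below fits an analogous random-intercept model on fabricated panel data using statsmodels; a random-intercept mixed model is one common way to operationalize a random-effects panel regression, but this is not the authors' specification, and every variable name and number here is invented.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Synthetic panel: 16 regions observed over 3 years.
        rng = np.random.default_rng(1)
        regions = np.repeat(np.arange(16), 3)
        wastewater_firms = rng.poisson(30, regions.size)
        air_pollution = rng.normal(50, 10, regions.size)
        # Hypothetical data-generating process: case counts rise with firm counts.
        cases = 2 + 0.1 * wastewater_firms + rng.normal(0, 1, regions.size)

        df = pd.DataFrame({"region": regions, "cases": cases,
                           "wastewater_firms": wastewater_firms,
                           "air_pollution": air_pollution})

        # A random intercept per region approximates a random-effects panel model.
        model = smf.mixedlm("cases ~ wastewater_firms + air_pollution",
                            df, groups=df["region"])
        print(model.fit().summary())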

  13. Do gender gaps in education and health affect economic growth? A cross-country study from 1975 to 2010.

    PubMed

    Mandal, Bidisha; Batina, Raymond G; Chen, Wen

    2018-05-01

    We use the system generalized method of moments (system GMM) to estimate the effect of gender-specific human capital on economic growth in a cross-country panel of 127 countries between 1975 and 2010. There are several benefits of using this methodology. First, a dynamic lagged-dependent-variable econometric model is suitable for addressing persistence in per capita output. Second, the GMM estimator uses the dynamic properties of the data to generate appropriate instrumental variables to address joint endogeneity of the explanatory variables. Third, we allow the measurement error to include an unobserved country-specific effect and random noise. We include two gender-disaggregated measures of human capital: education and health. We find that the gender gap in health plays a critical role in explaining economic growth in developing countries. Our results provide aggregate evidence that returns to investments in health systematically differ across gender and between low-income and high-income countries. Copyright © 2018 John Wiley & Sons, Ltd.
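
    For readers less familiar with the setup, a dynamic lagged-dependent-variable panel model of the kind described takes roughly the following generic form (the notation is ours, not the authors'):

        y_{it} = \alpha \, y_{i,t-1} + \beta' x_{it} + \eta_i + \varepsilon_{it}

    where y_{it} is log per capita output of country i in period t, x_{it} collects the gender-disaggregated education and health measures, \eta_i is an unobserved country-specific effect, and \varepsilon_{it} is random noise. System GMM instruments the first-differenced equation with lagged levels and the levels equation with lagged differences, exploiting moment conditions of the form E[y_{i,t-s} \Delta\varepsilon_{it}] = 0 for s >= 2.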

  14. Observational studies of patients in the emergency department: a comparison of 4 sampling methods.

    PubMed

    Valley, Morgan A; Heard, Kennon J; Ginde, Adit A; Lezotte, Dennis C; Lowenstein, Steven R

    2012-08-01

    We evaluate the ability of 4 sampling methods to generate representative samples of the emergency department (ED) population. We analyzed the electronic records of 21,662 consecutive patient visits at an urban, academic ED. From this population, we simulated different models of study recruitment in the ED by using 2 sample sizes (n=200 and n=400) and 4 sampling methods: true random, random 4-hour time blocks by exact sample size, random 4-hour time blocks by a predetermined number of blocks, and convenience or "business hours." For each method and sample size, we obtained 1,000 samples from the population. Using χ² tests, we measured the number of statistically significant differences between the sample and the population for 8 variables (age, sex, race/ethnicity, language, triage acuity, arrival mode, disposition, and payer source). Then, for each variable, method, and sample size, we compared the proportion of the 1,000 samples that differed from the overall ED population to the expected proportion (5%). Only the true random samples represented the population with respect to sex, race/ethnicity, triage acuity, mode of arrival, language, and payer source in at least 95% of the samples. Patient samples obtained using random 4-hour time blocks and business hours sampling systematically differed from the overall ED patient population for several important demographic and clinical variables. However, the magnitude of these differences was not large. Common sampling strategies selected for ED-based studies may affect parameter estimates for several representative population variables. However, the potential for bias for these variables appears small. Copyright © 2012. Published by Mosby, Inc.
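
    To make the comparison concrete, here is a minimal simulation sketch of the study's idea: draw repeated samples from a synthetic visit log using true random sampling versus random 4-hour time blocks, and count how often a sample's composition differs significantly from the population. The visit log, the acuity-by-hour confound, and all numbers are fabricated assumptions, not the study's data.

        import numpy as np
        from scipy.stats import chisquare

        rng = np.random.default_rng(2)
        N = 20000
        hour = rng.integers(0, 24, N)                   # arrival hour of each visit
        # Fabricated confound: high-acuity visits cluster overnight.
        acuity_high = rng.random(N) < np.where(hour < 8, 0.5, 0.3)
        p_high = acuity_high.mean()

        def differs(idx):
            """Goodness-of-fit test: sample acuity mix vs. the full population."""
            n = len(idx)
            high = int(acuity_high[idx].sum())
            p = chisquare([high, n - high], [n * p_high, n * (1 - p_high)])[1]
            return p < 0.05

        def true_random(n):
            return rng.choice(N, n, replace=False)

        def time_block(n):
            start = int(rng.integers(0, 21))            # random 4-hour window
            pool = np.flatnonzero((hour >= start) & (hour < start + 4))
            return rng.choice(pool, min(n, pool.size), replace=False)

        for name, draw in (("true random", true_random), ("4-hour block", time_block)):
            hits = sum(differs(draw(400)) for _ in range(200))
            print(f"{name}: {hits / 200:.0%} of samples differ from the population")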

  15. Key-Generation Algorithms for Linear Piece In Hand Matrix Method

    NASA Astrophysics Data System (ADS)

    Tadaki, Kohtaro; Tsujii, Shigeo

    The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription applicable to any type of multivariate public-key cryptosystem (MPKC) for the purpose of enhancing its security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of MPKCs. In 1998, Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which was introduced in our previous work to explain the notion of the PH matrix method in an illustrative manner, and not for practical use in enhancing the security of any given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has a substantial advantage over the plus method with respect to security enhancement. In the linear PH matrix method with random variables, three matrices, including the PH matrix, play a central role in the secret key and public key. In this paper, we clarify how to generate these matrices and present two probabilistic polynomial-time algorithms for doing so. In particular, the second one has a concise form and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.

  16. Variable versus conventional lung protective mechanical ventilation during open abdominal surgery: study protocol for a randomized controlled trial.

    PubMed

    Spieth, Peter M; Güldner, Andreas; Uhlig, Christopher; Bluth, Thomas; Kiss, Thomas; Schultz, Marcus J; Pelosi, Paolo; Koch, Thea; Gama de Abreu, Marcelo

    2014-05-02

    General anesthesia usually requires mechanical ventilation, which is traditionally accomplished with constant tidal volumes in volume- or pressure-controlled modes. Experimental studies suggest that the use of variable tidal volumes (variable ventilation) recruits lung tissue, improves pulmonary function and reduces the systemic inflammatory response. However, it is currently not known whether patients undergoing open abdominal surgery might benefit from intraoperative variable ventilation. The PROtective VARiable ventilation trial ('PROVAR') is a single-center, randomized controlled trial enrolling 50 patients scheduled for open abdominal surgery expected to last longer than 3 hours. PROVAR compares conventional (non-variable) lung protective ventilation (CV) with variable lung protective ventilation (VV) regarding pulmonary function and inflammatory response. The primary endpoint of the study is the forced vital capacity on the first postoperative day. Secondary endpoints include further lung function tests, plasma cytokine levels, the spatial distribution of ventilation assessed by means of electrical impedance tomography, and postoperative pulmonary complications. We hypothesize that VV improves lung function and reduces the systemic inflammatory response compared to CV in patients receiving mechanical ventilation during general anesthesia for open abdominal surgery lasting longer than 3 hours. PROVAR is the first randomized controlled trial examining the intra- and postoperative effects of VV on lung function. This study may help to define the role of VV during general anesthesia requiring mechanical ventilation. Clinicaltrials.gov NCT01683578 (registered on September 3, 2012).

  17. A Random Variable Transformation Process.

    ERIC Educational Resources Information Center

    Scheuermann, Larry

    1989-01-01

    Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven distributions are uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
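
    The original BASIC listing is not reproduced in this record. As a stand-in, the sketch below shows the classic inverse-transform approach such programs typically use, generating exponential and triangular variates from a uniform source; the function and parameter names are our own, not RANVAR's.

        import math
        import random

        def exponential_variate(rate):
            """Inverse-transform sampling: F^-1(u) = -ln(1 - u) / rate."""
            u = random.random()
            return -math.log(1.0 - u) / rate

        def triangular_variate(low, mode, high):
            """Inverse-transform sampling for the triangular distribution."""
            u = random.random()
            cut = (mode - low) / (high - low)   # CDF value at the mode
            if u < cut:
                return low + math.sqrt(u * (high - low) * (mode - low))
            return high - math.sqrt((1.0 - u) * (high - low) * (high - mode))

        samples = [exponential_variate(2.0) for _ in range(10_000)]
        print(sum(samples) / len(samples))  # should be near 1 / rate = 0.5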

  18. Extracting random numbers from quantum tunnelling through a single diode.

    PubMed

    Bernardo-Gavito, Ramón; Bagci, Ibrahim Ethem; Roberts, Jonathan; Sexton, James; Astbury, Benjamin; Shokeir, Hamzah; McGrath, Thomas; Noori, Yasir J; Woodhead, Christopher S; Missous, Mohamed; Roedig, Utz; Young, Robert J

    2017-12-19

    Random number generation is crucial in many aspects of everyday life, as online security and privacy depend ultimately on the quality of random numbers. Many current implementations are based on pseudo-random number generators, but information security requires true random numbers for sensitive applications like key generation in banking, defence or even social media. True random number generators are systems whose outputs cannot be determined, even if their internal structure and response history are known. Sources of quantum noise are thus ideal for this application due to their intrinsic uncertainty. In this work, we propose using resonant tunnelling diodes as practical true random number generators based on a quantum mechanical effect. The output of the proposed devices can be directly used as a random stream of bits or can be further distilled using randomness extraction algorithms, depending on the application.
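
    The abstract mentions distilling the raw bit stream with randomness extraction algorithms. A classic, minimal example is the von Neumann extractor sketched below, which removes bias from independent (but possibly biased) bits; this is a generic illustration, not the extraction scheme used by the authors.

        import random

        def von_neumann_extract(bits):
            """Debias independent bits: 01 -> 0, 10 -> 1, 00/11 -> discard."""
            out = []
            for b0, b1 in zip(bits[::2], bits[1::2]):
                if b0 != b1:
                    out.append(b0)
            return out

        # Simulated biased source producing ones 70% of the time.
        raw = [1 if random.random() < 0.7 else 0 for _ in range(100_000)]
        clean = von_neumann_extract(raw)
        print(f"raw bias: {sum(raw) / len(raw):.3f}, "
              f"extracted bias: {sum(clean) / len(clean):.3f}")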

  19. Probe-specific mixed-model approach to detect copy number differences using multiplex ligation-dependent probe amplification (MLPA)

    PubMed Central

    González, Juan R; Carrasco, Josep L; Armengol, Lluís; Villatoro, Sergi; Jover, Lluís; Yasui, Yutaka; Estivill, Xavier

    2008-01-01

    Background: The MLPA method is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random-error variability observed in each test sample. Results: Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions vary in copy number in individuals suffering from disorders such as Prader-Willi, DiGeorge or autism, where it showed the best performance. Conclusion: Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific for each individual and incorporate experimental variability, resulting in improved sensitivity and specificity, as the examples with real data have revealed. PMID:18522760
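
    As a rough illustration of the linear mixed-model idea (not the authors' exact model or thresholding rule), the sketch below fits a random-intercept model to synthetic log-ratio MLPA intensities, with a random effect per sample absorbing sample-level variability, and flags probes whose estimated effect exceeds a simple cutoff. All names, the deletion scenario, and the 0.3 threshold are fabricated.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        n_samples, n_probes = 20, 8
        rows = []
        for s in range(n_samples):
            sample_effect = rng.normal(0, 0.05)         # random per-sample shift
            for p in range(n_probes):
                # Probe 7 is deleted (one copy) in half of the samples.
                deleted = p == 7 and s < n_samples // 2
                mu = np.log2(0.5) if deleted else 0.0
                rows.append({"sample": s, "probe": f"p{p}",
                             "logratio": mu + sample_effect + rng.normal(0, 0.1)})
        df = pd.DataFrame(rows)

        # Per-probe fixed effects plus a random intercept per sample.
        fit = smf.mixedlm("logratio ~ C(probe) - 1", df, groups=df["sample"]).fit()
        for name, coef in fit.fe_params.items():
            flag = "ALTERED" if abs(coef) > 0.3 else "normal"  # illustrative cutoff
            print(f"{name}: {coef:+.2f} {flag}")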

  20. Reliability Based Design for a Raked Wing Tip of an Airframe

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2011-01-01

    A reliability-based optimization methodology has been developed to design the raked wing tip of the Boeing 767-400 extended range airliner, which is made of composite and metallic materials. The design is formulated for an accepted level of risk or reliability, so the design variables, the weight, and the constraints become functions of reliability. Uncertainties in the load, strength, and material properties, as well as in the design variables, were modeled as random parameters with specified distributions, such as the normal, Weibull, or Gumbel distributions. The objective function and each constraint, or failure mode, become derived functions of the risk level. Solving the problem produced the optimum design, with weight, variables, and constraints expressed as functions of the risk level. Optimum weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to a 50 percent probability of success, or one failure in two samples. Under some assumptions, this design would be quite close to the deterministic optimum solution. The weight increased when reliability exceeded 50 percent, and decreased when reliability was compromised. A design can be selected depending on the level of risk acceptable in a given situation. The optimization process achieved up to a 20-percent reduction in weight over the traditional design.
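
    To illustrate the underlying idea of treating loads and strengths as random parameters, here is a minimal Monte Carlo sketch estimating the failure probability of a single stress-strength limit state; the distributions and numbers are invented for the example and have no connection to the 767-400 design data.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 1_000_000

        # Invented limit state: failure when a random load exceeds a random strength.
        load = rng.normal(100.0, 15.0, n)          # normally distributed load, kN
        strength = rng.weibull(8.0, n) * 160.0     # Weibull-distributed strength, kN

        p_fail = np.mean(load > strength)
        print(f"estimated failure probability: {p_fail:.4f}")
        print(f"reliability: {1.0 - p_fail:.4f}")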
