Sample records for zero-mean random variables

  1. The Expected Sample Variance of Uncorrelated Random Variables with a Common Mean and Some Applications in Unbalanced Random Effects Models

    ERIC Educational Resources Information Center

    Vardeman, Stephen B.; Wendelberger, Joanne R.

    2005-01-01

    There is a little-known but very simple generalization of the standard result that for uncorrelated random variables with common mean μ and variance σ², the expected value of the sample variance is σ². The generalization justifies the use of the usual standard error of the sample mean in possibly…
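
    A quick numerical check of this result (my own sketch; the normal distribution and the parameter values are arbitrary choices, since only the common mean and variance matter):

        import numpy as np

        rng = np.random.default_rng(0)
        mu, sigma2, n, reps = 5.0, 4.0, 10, 200_000

        # Draw uncorrelated (here independent) observations with common mean mu and variance sigma2,
        # compute the usual sample variance (n - 1 divisor), and average over many replications.
        x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
        s2 = x.var(axis=1, ddof=1)
        print(s2.mean())  # close to sigma2 = 4.0, illustrating E[s^2] = sigma^2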

  2. Non-zero mean and asymmetry of neuronal oscillations have different implications for evoked responses.

    PubMed

    Nikulin, Vadim V; Linkenkaer-Hansen, Klaus; Nolte, Guido; Curio, Gabriel

    2010-02-01

    The aim of the present study was to show analytically and with simulations that it is the non-zero mean of neuronal oscillations, and not an amplitude asymmetry of peaks and troughs, that is a prerequisite for the generation of evoked responses through a mechanism of amplitude modulation of oscillations. Secondly, we detail the rationale and implementation of the "baseline-shift index" (BSI) for deducing whether empirical oscillations have non-zero mean. Finally, we illustrate with empirical data why the "amplitude fluctuation asymmetry" (AFA) index should be used with caution in research aimed at explaining variability in evoked responses through a mechanism of amplitude modulation of ongoing oscillations. An analytical approach, simulations and empirical MEG data were used to compare the specificity of BSI and AFA index to differentiate between a non-zero mean and a non-sinusoidal shape of neuronal oscillations. Both the BSI and the AFA index were sensitive to the presence of non-zero mean in neuronal oscillations. The AFA index, however, was also sensitive to the shape of oscillations even when they had a zero mean. Our findings indicate that it is the non-zero mean of neuronal oscillations, and not an amplitude asymmetry of peaks and troughs, that is a prerequisite for the generation of evoked responses through a mechanism of amplitude modulation of oscillations. A clear distinction should be made between the shape and non-zero mean properties of neuronal oscillations. This is because only the latter contributes to evoked responses, whereas the former does not. Copyright (c) 2009 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
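
    A minimal simulation of the mechanism described above (my own sketch; the 10 Hz frequency, step-like amplitude modulation, and trial count are illustrative assumptions): amplitude modulation of an oscillation with a non-zero mean leaves a baseline shift in the trial average, whereas a zero-mean oscillation averages out.

        import numpy as np

        rng = np.random.default_rng(1)
        t = np.linspace(0, 1, 1000)              # 1 s epoch
        envelope = 1.0 - 0.5 * (t > 0.5)         # amplitude drops by half after a "stimulus" at 0.5 s
        n_trials = 500

        def trial_average(mean_offset):
            trials = []
            for _ in range(n_trials):
                phase = rng.uniform(0, 2 * np.pi)
                osc = mean_offset + np.sin(2 * np.pi * 10 * t + phase)  # 10 Hz oscillation
                trials.append(envelope * osc)                           # amplitude modulation of the whole signal
            return np.mean(trials, axis=0)

        # Zero-mean oscillation: the post-stimulus average stays near zero despite the modulation.
        # Non-zero mean: the modulated offset survives averaging and appears as a baseline shift.
        print(trial_average(0.0)[600:].mean(), trial_average(0.3)[600:].mean())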

  3. Zero-inflated count models for longitudinal measurements with heterogeneous random effects.

    PubMed

    Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M

    2017-08-01

    Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention and covariate-specific heterogeneity can produce biased covariate and random effect estimates. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States, with 1383 individuals.

  4. Probability distribution for the Gaussian curvature of the zero level surface of a random function

    NASA Astrophysics Data System (ADS)

    Hannay, J. H.

    2018-04-01

    A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z)  =  0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f  =  0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.

  5. Three-part joint modeling methods for complex functional data mixed with zero-and-one-inflated proportions and zero-inflated continuous outcomes with skewness.

    PubMed

    Li, Haocheng; Staudenmayer, John; Wang, Tianying; Keadle, Sarah Kozey; Carroll, Raymond J

    2018-02-20

    We take a functional data approach to longitudinal studies with complex bivariate outcomes. This work is motivated by data from a physical activity study that measured 2 responses over time in 5-minute intervals. One response is the proportion of time active in each interval, a continuous proportion with excess zeros and ones. The other response, energy expenditure rate in the interval, is a continuous variable with excess zeros and skewness. This outcome is complex because there are 3 possible activity patterns in each interval (inactive, partially active, and completely active), and those patterns, which are observed, induce both nonrandom and random associations between the responses. More specifically, the inactive pattern requires a zero value in both the proportion for active behavior and the energy expenditure rate; a partially active pattern means that the proportion of activity is strictly between zero and one and that the energy expenditure rate is greater than zero and likely to be moderate, and the completely active pattern means that the proportion of activity is exactly one, and the energy expenditure rate is greater than zero and likely to be higher. To address these challenges, we propose a 3-part functional data joint modeling approach. The first part is a continuation-ratio model to reorder the 3 ordinal-valued activity patterns. The second part models the proportions when they are in the interval (0,1). The last component specifies the skewed continuous energy expenditure rate with Box-Cox transformations when it is greater than zero. In this 3-part model, the regression structures are specified as smooth curves measured at various time points with random effects that have a correlation structure. The smoothed random curves for each variable are summarized using a few important principal components, and the association of the 3 longitudinal components is modeled through the association of the principal component scores. The difficulties in

  6. On the mean and variance of the writhe of random polygons.

    PubMed

    Portillo, J; Diao, Y; Scharein, R; Arsuaga, J; Vazquez, M

    We here address two problems concerning the writhe of random polygons. First, we study the behavior of the mean writhe as a function of length. Second, we study the variance of the writhe. Suppose that we are dealing with a set of random polygons with the same length and knot type, which could be the model of some circular DNA with the same topological property. In general, a simple way of detecting chirality of this knot type is to compute the mean writhe of the polygons; if the mean writhe is non-zero then the knot is chiral. How accurate is this method? For example, if for a specific knot type K the mean writhe decreased to zero as the length of the polygons increased, then this method would be limited in the case of long polygons. Furthermore, we conjecture that the sign of the mean writhe is a topological invariant of chiral knots. This sign appears to be the same as that of an "ideal" conformation of the knot. We provide numerical evidence to support these claims, and we propose a new nomenclature of knots based on the sign of their expected writhes. This nomenclature can be of particular interest to applied scientists. The second part of our study focuses on the variance of the writhe, a problem that has not received much attention in the past. In this case, we focused on the equilateral random polygons. We give numerical as well as analytical evidence to show that the variance of the writhe of equilateral random polygons (of length n) behaves as a linear function of the length of the equilateral random polygon.

  7. On the mean and variance of the writhe of random polygons

    PubMed Central

    Portillo, J.; Diao, Y.; Scharein, R.; Arsuaga, J.; Vazquez, M.

    2013-01-01

    We here address two problems concerning the writhe of random polygons. First, we study the behavior of the mean writhe as a function of length. Second, we study the variance of the writhe. Suppose that we are dealing with a set of random polygons with the same length and knot type, which could be the model of some circular DNA with the same topological property. In general, a simple way of detecting chirality of this knot type is to compute the mean writhe of the polygons; if the mean writhe is non-zero then the knot is chiral. How accurate is this method? For example, if for a specific knot type K the mean writhe decreased to zero as the length of the polygons increased, then this method would be limited in the case of long polygons. Furthermore, we conjecture that the sign of the mean writhe is a topological invariant of chiral knots. This sign appears to be the same as that of an “ideal” conformation of the knot. We provide numerical evidence to support these claims, and we propose a new nomenclature of knots based on the sign of their expected writhes. This nomenclature can be of particular interest to applied scientists. The second part of our study focuses on the variance of the writhe, a problem that has not received much attention in the past. In this case, we focused on the equilateral random polygons. We give numerical as well as analytical evidence to show that the variance of the writhe of equilateral random polygons (of length n) behaves as a linear function of the length of the equilateral random polygon. PMID:25685182

  8. Variable speed wind turbine generator with zero-sequence filter

    DOEpatents

    Muljadi, Eduard

    1998-01-01

    A variable speed wind turbine generator system to convert mechanical power into electrical power or energy and to recover the electrical power or energy in the form of three phase alternating current and return the power or energy to a utility or other load with single phase sinusoidal waveform at sixty (60) hertz and unity power factor includes an excitation controller for generating three phase commanded current, a generator, and a zero sequence filter. Each commanded current signal includes two components: a positive sequence variable frequency current signal to provide the balanced three phase excitation currents required in the stator windings of the generator to generate the rotating magnetic field needed to recover an optimum level of real power from the generator; and a zero frequency sixty (60) hertz current signal to allow the real power generated by the generator to be supplied to the utility. The positive sequence current signals are balanced three phase signals and are prevented from entering the utility by the zero sequence filter. The zero sequence current signals have zero phase displacement from each other and are prevented from entering the generator by the star connected stator windings. The zero sequence filter allows the zero sequence current signals to pass through to deliver power to the utility.
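
    For readers unfamiliar with the terminology, a generic numerical illustration of the zero-sequence decomposition (not the patented controller; amplitudes and the injected zero-sequence signal are made up): the zero-sequence component of three phase currents is their instantaneous average, which vanishes for a balanced positive-sequence set.

        import numpy as np

        t = np.linspace(0, 1 / 60, 200)                      # one 60 Hz cycle
        w = 2 * np.pi * 60

        # Balanced positive-sequence currents (120 degrees apart) plus an in-phase zero-sequence component.
        i_zero = 3.0 * np.sin(w * t)
        phases = [0, -2 * np.pi / 3, 2 * np.pi / 3]
        i_abc = [10.0 * np.sin(w * t + p) + i_zero for p in phases]

        # Zero-sequence component = average of the three phase currents;
        # the positive-sequence parts cancel, leaving only the common in-phase signal.
        i0 = sum(i_abc) / 3.0
        print(np.allclose(i0, i_zero, atol=1e-9))  # True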

  9. Variable Speed Wind Turbine Generator with Zero-sequence Filter

    DOEpatents

    Muljadi, Eduard

    1998-08-25

    A variable speed wind turbine generator system to convert mechanical power into electrical power or energy and to recover the electrical power or energy in the form of three phase alternating current and return the power or energy to a utility or other load with single phase sinusoidal waveform at sixty (60) hertz and unity power factor includes an excitation controller for generating three phase commanded current, a generator, and a zero sequence filter. Each commanded current signal includes two components: a positive sequence variable frequency current signal to provide the balanced three phase excitation currents required in the stator windings of the generator to generate the rotating magnetic field needed to recover an optimum level of real power from the generator; and a zero frequency sixty (60) hertz current signal to allow the real power generated by the generator to be supplied to the utility. The positive sequence current signals are balanced three phase signals and are prevented from entering the utility by the zero sequence filter. The zero sequence current signals have zero phase displacement from each other and are prevented from entering the generator by the star connected stator windings. The zero sequence filter allows the zero sequence current signals to pass through to deliver power to the utility.

  10. Variable speed wind turbine generator with zero-sequence filter

    DOEpatents

    Muljadi, E.

    1998-08-25

    A variable speed wind turbine generator system to convert mechanical power into electrical power or energy and to recover the electrical power or energy in the form of three phase alternating current and return the power or energy to a utility or other load with single phase sinusoidal waveform at sixty (60) hertz and unity power factor includes an excitation controller for generating three phase commanded current, a generator, and a zero sequence filter. Each commanded current signal includes two components: a positive sequence variable frequency current signal to provide the balanced three phase excitation currents required in the stator windings of the generator to generate the rotating magnetic field needed to recover an optimum level of real power from the generator; and a zero frequency sixty (60) hertz current signal to allow the real power generated by the generator to be supplied to the utility. The positive sequence current signals are balanced three phase signals and are prevented from entering the utility by the zero sequence filter. The zero sequence current signals have zero phase displacement from each other and are prevented from entering the generator by the star connected stator windings. The zero sequence filter allows the zero sequence current signals to pass through to deliver power to the utility. 14 figs.

  11. Structural zeroes and zero-inflated models.

    PubMed

    He, Hua; Tang, Wan; Wang, Wenjuan; Crits-Christoph, Paul

    2014-08-01

    In psychosocial and behavioral studies, count outcomes recording the frequencies of the occurrence of some health or behavior outcomes (such as the number of unprotected sexual behaviors during a period of time) often contain a preponderance of zeroes because of the presence of 'structural zeroes' that occur when some subjects are not at risk for the behavior of interest. Unlike random zeroes (responses that can be greater than zero, but are zero due to sampling variability), structural zeroes are usually very different, both statistically and clinically. False interpretations of results and study findings may result if differences in the two types of zeroes are ignored. However, in practice, the status of the structural zeroes is often not observed and this latent nature complicates the data analysis. In this article, we focus on one model, the zero-inflated Poisson (ZIP) regression model that is commonly used to address zero-inflated data. We first give a brief overview of the issues of structural zeroes and the ZIP model. We then give an illustration of ZIP with data from a study on HIV-risk sexual behaviors among adolescent girls. Sample codes in SAS and Stata are also included to help perform and explain ZIP analyses.
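
    The sample code referenced in the abstract is in SAS and Stata; a rough open-source analogue (an assumption on my part that statsmodels' ZeroInflatedPoisson with a logit inflation part matches the intended two-part structure; the simulated data and variable names are made up) might look like:

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import ZeroInflatedPoisson

        rng = np.random.default_rng(2)
        n = 1000
        x = rng.normal(size=n)
        at_risk = rng.random(n) < 0.6                      # latent "at risk" indicator (structural zeros when False)
        counts = np.where(at_risk, rng.poisson(np.exp(0.5 + 0.8 * x)), 0)

        X = sm.add_constant(x)
        # Count part uses X; the inflation (structural-zero) part here is intercept-only.
        zip_fit = ZeroInflatedPoisson(counts, X, exog_infl=np.ones((n, 1)), inflation='logit').fit(disp=0)
        print(zip_fit.summary())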

  12. Mean convergence theorems and weak laws of large numbers for weighted sums of random variables under a condition of weighted integrability

    NASA Astrophysics Data System (ADS)

    Ordóñez Cabrera, Manuel; Volodin, Andrei I.

    2005-05-01

    From the classical notion of uniform integrability of a sequence of random variables, a new concept of integrability (called h-integrability) is introduced for an array of random variables, concerning an array of constants. We prove that this concept is weaker than other previous related notions of integrability, such as Cesàro uniform integrability [Chandra, Sankhya Ser. A 51 (1989) 309-317], uniform integrability concerning the weights [Ordóñez Cabrera, Collect. Math. 45 (1994) 121-132] and Cesàro α-integrability [Chandra and Goswami, J. Theoret. Probab. 16 (2003) 655-669]. Under this condition of integrability and appropriate conditions on the array of weights, mean convergence theorems and weak laws of large numbers for weighted sums of an array of random variables are obtained when the random variables are subject to some special kinds of dependence: (a) rowwise pairwise negative dependence, (b) rowwise pairwise non-positive correlation, (c) when the sequence of random variables in every row is φ-mixing. Finally, we consider the general weak law of large numbers in the sense of Gut [Statist. Probab. Lett. 14 (1992) 49-52] under this new condition of integrability for a Banach space setting.

  13. Stationary responses of a Rayleigh viscoelastic system with zero barrier impacts under external random excitation.

    PubMed

    Wang, Deli; Xu, Wei; Zhao, Xiangrong

    2016-03-01

    This paper aims to deal with the stationary responses of a Rayleigh viscoelastic system with zero barrier impacts under external random excitation. First, the original stochastic viscoelastic system is converted to an equivalent stochastic system without viscoelastic terms by approximately adding the equivalent stiffness and damping. Relying on the means of non-smooth transformation of state variables, the above system is replaced by a new system without an impact term. Then, the stationary probability density functions of the system are obtained analytically through the stochastic averaging method. By considering the effects of the biquadratic nonlinear damping coefficient and the noise intensity on the system responses, the effectiveness of the theoretical method is tested by comparing the analytical results with those generated from Monte Carlo simulations. Additionally, it deserves attention that some system parameters can induce the occurrence of stochastic P-bifurcation.

  14. Random phase approximation and cluster mean field studies of hard core Bose Hubbard model

    NASA Astrophysics Data System (ADS)

    Alavani, Bhargav K.; Gaude, Pallavi P.; Pai, Ramesh V.

    2018-04-01

    We investigate zero temperature and finite temperature properties of the Bose Hubbard Model in the hard core limit using Random Phase Approximation (RPA) and Cluster Mean Field Theory (CMFT). We show that our RPA calculations are able to capture quantum and thermal fluctuations significantly better than CMFT.

  15. A new mean estimator using auxiliary variables for randomized response models

    NASA Astrophysics Data System (ADS)

    Ozgul, Nilgun; Cingi, Hulya

    2013-10-01

    Randomized response models (RRMs) are commonly used in surveys dealing with sensitive questions such as abortion, alcoholism, sexual orientation, drug taking, annual income, and tax evasion to ensure interviewee anonymity and reduce nonresponse rates and biased responses. Starting from the pioneering work of Warner [7], many versions of RRMs have been developed that can deal with quantitative responses. In this study, a new mean estimator is suggested for RRMs including quantitative responses. The mean square error is derived, and a simulation study is performed to show the efficiency of the proposed estimator relative to other existing estimators in RRMs.

  16. Full Bayes Poisson gamma, Poisson lognormal, and zero inflated random effects models: Comparing the precision of crash frequency estimates.

    PubMed

    Aguero-Valverde, Jonathan

    2013-01-01

    In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero inflated models. This research compares random effects, zero inflated, and zero inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of precision of posterior crash frequency estimates, since the precision of these estimates is vital for ranking of sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset being analyzed, it was found that once the random effects are included in the zero inflated models, the probability of being in the zero state is drastically reduced, and the zero inflated models degenerate to their non-zero-inflated counterparts. Also, by fixing the random effects over time, the fit of the models and the precision of the crash frequency estimates are significantly increased. It was found that the rankings of the fixed-over-time random effects models are very consistent among them. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments on the top of the ranking. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Variable selection for distribution-free models for longitudinal zero-inflated count responses.

    PubMed

    Chen, Tian; Wu, Pan; Tang, Wan; Zhang, Hui; Feng, Changyong; Kowalski, Jeanne; Tu, Xin M

    2016-07-20

    Zero-inflated count outcomes arise quite often in research and practice. Parametric models such as the zero-inflated Poisson and zero-inflated negative binomial are widely used to model such responses. Like most parametric models, they are quite sensitive to departures from assumed distributions. Recently, new approaches have been proposed to provide distribution-free, or semi-parametric, alternatives. These methods extend the generalized estimating equations to provide robust inference for population mixtures defined by zero-inflated count outcomes. In this paper, we propose methods to extend smoothly clipped absolute deviation (SCAD)-based variable selection methods to these new models. Variable selection has been gaining popularity in modern clinical research studies, as determining differential treatment effects of interventions for different subgroups has become the norm, rather than the exception, in the era of patient-centered outcome research. Such moderation analysis in general creates many explanatory variables in regression analysis, and the advantages of SCAD-based methods over their traditional counterparts render them a great choice for addressing this important and timely issue in clinical research. We illustrate the proposed approach with both simulated and real study data. Copyright © 2016 John Wiley & Sons, Ltd.
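
    For reference, the SCAD penalty mentioned here has a standard closed form (Fan and Li's formulation; a = 3.7 is the conventional default, and the evaluation points below are arbitrary):

        import numpy as np

        def scad_penalty(theta, lam, a=3.7):
            """SCAD penalty of Fan & Li (2001), evaluated elementwise."""
            t = np.abs(theta)
            linear = lam * t                                             # |theta| <= lam
            quad = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))     # lam < |theta| <= a*lam
            const = lam**2 * (a + 1) / 2                                 # |theta| > a*lam
            return np.where(t <= lam, linear, np.where(t <= a * lam, quad, const))

        print(scad_penalty(np.array([0.1, 0.5, 2.0, 5.0]), lam=0.5))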

  18. Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment

    NASA Astrophysics Data System (ADS)

    Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit

    2010-10-01

    The purpose of the paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness with predefined maximum risk tolerance and minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables, and the vagueness of these fuzzy random variables is transformed into fuzzy variables which are similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of past data.

  19. Disease Mapping of Zero-excessive Mesothelioma Data in Flanders

    PubMed Central

    Neyens, Thomas; Lawson, Andrew B.; Kirby, Russell S.; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S.; Faes, Christel

    2016-01-01

    Purpose: To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. Methods: The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero-inflation and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in literature. Results: The results indicate that hurdle models with a random effects term accounting for extra-variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra-variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra-variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Conclusions: Models taking into account zero-inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. PMID:27908590

  20. Disease mapping of zero-excessive mesothelioma data in Flanders.

    PubMed

    Neyens, Thomas; Lawson, Andrew B; Kirby, Russell S; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S; Faes, Christel

    2017-01-01

    To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero inflation, and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion, and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in literature. The results indicate that hurdle models with a random effects term accounting for extra variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Models taking into account zero inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Computer simulation of random variables and vectors with arbitrary probability distribution laws

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.

    1981-01-01

    Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n random variables if their joint probability distribution is known.
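
    The one-dimensional special case of this construction is ordinary inverse-transform sampling; a minimal sketch (the exponential target is just an illustrative choice):

        import numpy as np

        rng = np.random.default_rng(3)
        u = rng.random(100_000)          # U ~ Uniform(0, 1)

        # For a target CDF F, x = F^{-1}(U) has distribution F.
        # Example: exponential with rate 2, F(x) = 1 - exp(-2x), so F^{-1}(u) = -ln(1 - u) / 2.
        x = -np.log1p(-u) / 2.0
        print(x.mean())                  # close to 1/2, the exponential mean

        # In n dimensions, the recursive construction applies the same idea step by step to the
        # conditional CDFs F(x_1), F(x_2 | x_1), ..., F(x_n | x_1, ..., x_{n-1}).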

  2. VARIABLE TIME DELAY MEANS

    DOEpatents

    Clemensen, R.E.

    1959-11-01

    An electrically variable time delay line is described which may be readily controlled simultaneously with variable impedance matching means coupled thereto such that reflections are prevented. Broadly, the delay line includes a signal winding about a magnetic core whose permeability is electrically variable. Inasmuch as the inductance of the line varies directly with the permeability, the time delay and characteristic impedance of the line both vary as the square root of the permeability. Consequently, impedance matching means may be varied similarly and simultaneously with the electrically variable permeability to match the line impedance over the entire range of time delay whereby reflections are prevented.
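
    The square-root scaling stated above follows from the standard per-unit-length delay-line relations (a sketch of the reasoning; L and C denote inductance and capacitance per unit length, with only L assumed proportional to the core permeability μ):

        t_d = \sqrt{L C}, \qquad Z_0 = \sqrt{L / C}, \qquad L \propto \mu
        \;\Longrightarrow\; t_d \propto \sqrt{\mu} \quad \text{and} \quad Z_0 \propto \sqrt{\mu}.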

  3. Students' Misconceptions about Random Variables

    ERIC Educational Resources Information Center

    Kachapova, Farida; Kachapov, Ilias

    2012-01-01

    This article describes some misconceptions about random variables and related counter-examples, and makes suggestions about teaching initial topics on random variables in general form instead of doing it separately for discrete and continuous cases. The focus is on post-calculus probability courses. (Contains 2 figures.)

  4. Multilevel covariance regression with correlated random effects in the mean and variance structure.

    PubMed

    Quintero, Adrian; Lesaffre, Emmanuel

    2017-09-01

    Multivariate regression methods generally assume a constant covariance matrix for the observations. When a heteroscedastic model is needed, the parametric and nonparametric covariance regression approaches in the literature can be restrictive. We propose a multilevel regression model for the mean and covariance structure, including random intercepts in both components and allowing for correlation between them. The implied conditional covariance function can be different across clusters as a result of the random effect in the variance structure. In addition, allowing for correlation between the random intercepts in the mean and covariance makes the model convenient for skewedly distributed responses. Furthermore, it permits us to analyse directly the relation between the mean response level and the variability in each cluster. Parameter estimation is carried out via Gibbs sampling. We compare the performance of our model to other covariance modelling approaches in a simulation study. Finally, the proposed model is applied to the RN4CAST dataset to identify the variables that impact burnout of nurses in Belgium. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Marginalized multilevel hurdle and zero-inflated models for overdispersed and correlated count data with excess zeros.

    PubMed

    Kassahun, Wondwosen; Neyens, Thomas; Molenberghs, Geert; Faes, Christel; Verbeke, Geert

    2014-11-10

    Count data are collected repeatedly over time in many applications, such as biology, epidemiology, and public health. Such data are often characterized by the following three features. First, correlation due to the repeated measures is usually accounted for using subject-specific random effects, which are assumed to be normally distributed. Second, the sample variance may exceed the mean, and hence, the theoretical mean-variance relationship is violated, leading to overdispersion. This is usually allowed for based on a hierarchical approach, combining a Poisson model with gamma distributed random effects. Third, an excess of zeros beyond what standard count distributions can predict is often handled by either the hurdle or the zero-inflated model. A zero-inflated model assumes two processes as sources of zeros and combines a count distribution with a discrete point mass as a mixture, while the hurdle model separately handles zero observations and positive counts, where then a truncated-at-zero count distribution is used for the non-zero state. In practice, however, all these three features can appear simultaneously. Hence, a modeling framework that incorporates all three is necessary, and this presents challenges for the data analysis. Such models, when conditionally specified, will naturally have a subject-specific interpretation. However, adopting their purposefully modified marginalized versions leads to a direct marginal or population-averaged interpretation for parameter estimates of covariate effects, which is the primary interest in many applications. In this paper, we present a marginalized hurdle model and a marginalized zero-inflated model for correlated and overdispersed count data with excess zero observations and then illustrate these further with two case studies. The first dataset focuses on the Anopheles mosquito density around a hydroelectric dam, while adolescents' involvement in work, to earn money and support their families or themselves, is

  6. Contextuality in canonical systems of random variables

    NASA Astrophysics Data System (ADS)

    Dzhafarov, Ehtibar N.; Cervantes, Víctor H.; Kujala, Janne V.

    2017-10-01

    Random variables representing measurements, broadly understood to include any responses to any inputs, form a system in which each of them is uniquely identified by its content (that which it measures) and its context (the conditions under which it is recorded). Two random variables are jointly distributed if and only if they share a context. In a canonical representation of a system, all random variables are binary, and every content-sharing pair of random variables has a unique maximal coupling (the joint distribution imposed on them so that they coincide with maximal possible probability). The system is contextual if these maximal couplings are incompatible with the joint distributions of the context-sharing random variables. We propose to represent any system of measurements in a canonical form and to consider the system contextual if and only if its canonical representation is contextual. As an illustration, we establish a criterion for contextuality of the canonical system consisting of all dichotomizations of a single pair of content-sharing categorical random variables. This article is part of the themed issue `Second quantum revolution: foundational questions'.

  7. K-Means Algorithm Performance Analysis With Determining The Value Of Starting Centroid With Random And KD-Tree Method

    NASA Astrophysics Data System (ADS)

    Sirait, Kamson; Tulus; Budhiarti Nababan, Erna

    2017-12-01

    Clustering methods that have high accuracy and time efficiency are necessary for the filtering process. One method that has been known and applied in clustering is K-Means Clustering. In its application, the determination of the beginning value of the cluster center greatly affects the results of the K-Means algorithm. This research discusses the results of K-Means Clustering with starting centroid determination by a random and a KD-Tree method. The initial determination of random centroids on a data set of 1000 student academic records, used to classify potential dropouts, gives an SSE value of 952972 for the quality variable and 232.48 for the GPA variable, whereas the initial centroid determination by KD-Tree gives an SSE value of 504302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that the results of K-Means Clustering with initial KD-Tree centroid selection have better accuracy than the K-Means Clustering method with random initial centroid selection.
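
    A rough analogue of this comparison with scikit-learn (which has no KD-Tree initializer, so k-means++ is used here as a stand-in for the smarter seeding; the synthetic blob data are made up): lower inertia (SSE) indicates a better clustering for the same k.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_blobs

        X, _ = make_blobs(n_samples=1000, centers=5, random_state=0)

        for init in ("random", "k-means++"):          # k-means++ used here in place of the KD-Tree seeding
            km = KMeans(n_clusters=5, init=init, n_init=1, random_state=0).fit(X)
            print(init, km.inertia_)                  # inertia_ is the within-cluster sum of squares (SSE)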

  8. Improved ensemble-mean forecasting of ENSO events by a zero-mean stochastic error model of an intermediate coupled model

    NASA Astrophysics Data System (ADS)

    Zheng, Fei; Zhu, Jiang

    2017-04-01

    How to design a reliable ensemble prediction strategy that considers the major uncertainties of a forecasting system is a crucial issue for performing an ensemble forecast. In this study, a new stochastic perturbation technique is developed to improve the prediction skills of El Niño-Southern Oscillation (ENSO) through using an intermediate coupled model. We first estimate and analyze the model uncertainties from the ensemble Kalman filter analysis results through assimilating the observed sea surface temperatures. Then, based on the pre-analyzed properties of model errors, we develop a zero-mean stochastic model-error model to characterize the model uncertainties mainly induced by the missed physical processes of the original model (e.g., stochastic atmospheric forcing, extra-tropical effects, Indian Ocean Dipole). Finally, we perturb each member of an ensemble forecast at each step by the developed stochastic model-error model during the 12-month forecasting process, and add the zero-mean perturbations into the physical fields to mimic the presence of missing processes and high-frequency stochastic noises. The impacts of stochastic model-error perturbations on ENSO deterministic predictions are examined by performing two sets of 21-yr hindcast experiments, which are initialized from the same initial conditions and differentiated by whether they consider the stochastic perturbations. The comparison results show that the stochastic perturbations have a significant effect on improving the ensemble-mean prediction skills during the entire 12-month forecasting process. This improvement occurs mainly because the nonlinear terms in the model can form a positive ensemble-mean from a series of zero-mean perturbations, which reduces the forecasting biases and then corrects the forecast through this nonlinear heating mechanism.
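
    The closing point, that zero-mean perturbations can shift the ensemble mean once the model is nonlinear, can be illustrated with a toy quadratic term (my own sketch, not the intermediate coupled model):

        import numpy as np

        rng = np.random.default_rng(4)
        x0 = 1.0                                   # unperturbed state
        eps = rng.normal(0.0, 0.3, size=10_000)    # zero-mean stochastic perturbations across ensemble members

        def step(x):
            return x + 0.1 * x**2                  # toy nonlinear (quadratic) tendency term

        # Ensemble mean after one step differs from the unperturbed forecast:
        # E[(x0 + eps)^2] = x0^2 + Var(eps), so the zero-mean noise contributes a positive shift.
        print(step(x0), step(x0 + eps).mean())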

  9. Summary of percentages of zero daily mean streamflow for 712 U.S. Geological Survey streamflow-gaging stations in Texas through 2003

    USGS Publications Warehouse

    Asquith, William H.; Vrabel, Joseph; Roussel, Meghan C.

    2007-01-01

    Analysts and managers of surface-water resources might have interest in the zero-flow potential for U.S. Geological Survey (USGS) streamflow-gaging stations in Texas. The USGS, in cooperation with the Texas Commission on Environmental Quality, initiated a data and reporting process to generate summaries of percentages of zero daily mean streamflow for 712 USGS streamflow-gaging stations in Texas. A summary of the percentages of zero daily mean streamflow for most active and inactive, continuous-record gaging stations in Texas provides valuable information by conveying the historical perspective for zero-flow potential for the watershed. The summaries of percentages of zero daily mean streamflow for each station are graphically depicted using two thematic perspectives: annual and monthly. The annual perspective consists of graphs of annual percentages of zero streamflow by year with the addition of lines depicting the mean and median annual percentage of zero streamflow. Monotonic trends in the percentages of zero streamflow also are identified using Kendall's τ. The monthly perspective consists of graphs of the percentage of zero streamflow by month with lines added to indicate the mean and median monthly percentage of zero streamflow. One or more summaries could be used in a watershed, river basin, or other regional context by analysts and managers of surface-water resources to guide scientific, regulatory, or other inquiries of zero-flow or other low-flow conditions in Texas.
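
    The annual and monthly summaries described here reduce to simple groupby percentages; a hedged pandas sketch (the file name and column names are assumptions, not the USGS data schema):

        import pandas as pd

        # Assumed layout: one row per day with a 'date' column and a 'flow_cfs' daily mean discharge column.
        df = pd.read_csv("daily_mean_flow.csv", parse_dates=["date"])
        is_zero = df["flow_cfs"].eq(0)

        annual = is_zero.groupby(df["date"].dt.year).mean() * 100    # percent of zero-flow days per year
        monthly = is_zero.groupby(df["date"].dt.month).mean() * 100  # percent of zero-flow days per calendar month

        print(annual.mean(), annual.median())   # analogues of the mean/median reference lines in the report
        print(monthly)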

  10. The living Drake equation of the Tau Zero Foundation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-03-01

    The living Drake equation is our statistical generalization of the Drake equation such that it can take into account any number of factors. This new result opens up the possibility to enrich the equation by inserting more new factors as long as the scientific learning increases. The adjective "Living" refers just to this continuous enrichment of the Drake equation and is the goal of a new research project that the Tau Zero Foundation has entrusted to this author as the discoverer of the statistical Drake equation described hereafter. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the lognormal distribution. Then, the mean value, standard deviation, mode, median and all the moments of this lognormal N can be derived from the means and standard deviations of the seven input random variables. In fact, the seven factors in the ordinary Drake equation now become seven independent positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation
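
    The CLT mechanism invoked here is easy to check numerically (the seven factor distributions below are arbitrary placeholders, not the actual Drake inputs): the log of a product of independent positive random variables is a sum of independent terms, so the product is approximately lognormal.

        import numpy as np

        rng = np.random.default_rng(5)
        n_draws = 100_000

        # Seven arbitrary, differently distributed positive factors (placeholders for the Drake factors).
        factors = [
            rng.uniform(1, 10, n_draws),
            rng.gamma(2.0, 3.0, n_draws),
            rng.uniform(0.1, 1.0, n_draws),
            rng.lognormal(0.0, 0.5, n_draws),
            rng.uniform(0.5, 2.0, n_draws),
            rng.gamma(5.0, 0.2, n_draws),
            rng.uniform(100, 10_000, n_draws),
        ]
        N = np.prod(factors, axis=0)

        # log N is a sum of seven independent terms, so it is roughly Gaussian (CLT),
        # i.e. N itself is approximately lognormal.
        logN = np.log(N)
        print(logN.mean(), logN.std())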

  11. Multivariate random-parameters zero-inflated negative binomial regression model: an application to estimate crash frequencies at intersections.

    PubMed

    Dong, Chunjiao; Clarke, David B; Yan, Xuedong; Khattak, Asad; Huang, Baoshan

    2014-09-01

    Crash data are collected through police reports and integrated with road inventory data for further analysis. Integrated police reports and inventory data yield correlated multivariate data for roadway entities (e.g., segments or intersections). Analysis of such data reveals important relationships that can help focus on high-risk situations and come up with safety countermeasures. To understand relationships between crash frequencies and associated variables, while taking full advantage of the available data, multivariate random-parameters models are appropriate since they can simultaneously consider the correlation among the specific crash types and account for unobserved heterogeneity. However, a key issue that arises with correlated multivariate data is that the number of crash-free samples increases, as crash counts have many categories. In this paper, we describe a multivariate random-parameters zero-inflated negative binomial (MRZINB) regression model for jointly modeling crash counts. The full Bayesian method is employed to estimate the model parameters. Crash frequencies at urban signalized intersections in Tennessee are analyzed. The paper investigates the performance of MZINB and MRZINB regression models in establishing the relationship between crash frequencies, pavement conditions, traffic factors, and geometric design features of roadway intersections. Compared to the MZINB model, the MRZINB model identifies additional statistically significant factors and provides better goodness of fit in developing the relationships. The empirical results show that MRZINB model possesses most of the desirable statistical properties in terms of its ability to accommodate unobserved heterogeneity and excess zero counts in correlated data. Notably, in the random-parameters MZINB model, the estimated parameters vary significantly across intersections for different crash types. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. A prospective randomized trial on the use of Coca-Cola Zero® vs water for polyethylene glycol bowel preparation before colonoscopy.

    PubMed

    Seow-En, I; Seow-Choen, F

    2016-07-01

    The study aimed to determine whether Coca-Cola (Coke) Zero is a safe and effective solvent for polyethylene glycol (PEG). Between December 2013 and April 2014, 209 healthy adults (115 men, 95 women) scheduled for elective colonoscopy were randomized to use either Coke Zero (n = 100) or drinking water (n = 109) with PEG as bowel preparation. Each patient received two sachets of PEG to dissolve in 2 l of solvent, to be completed 6 h before colonoscopy. Serum electrolytes were measured before and after preparation. Bowel cleanliness and colonoscopy findings were recorded. Palatability of solution, adverse effects, time taken to complete and willingness to repeat the preparation were documented via questionnaire. Mean palatability scores in the Coke Zero group were significantly better compared with the control group (2.31 ± 0.61 vs 2.51 ± 0.63, P = 0.019), with a higher proportion willing to use the same preparation again (55% vs 43%). The mean time taken to complete the PEG + Coke Zero solution was significantly faster (74 ± 29 min vs 86 ± 31 min, P = 0.0035). The quality of bowel cleansing was also significantly better in the Coke Zero group (P = 0.0297). There was no difference in the frequency of adverse events (P = 0.759) or the polyp detection rate (32% vs 31.2%). Consumption of either preparation did not significantly affect electrolyte levels or hydration status. Coke Zero is a useful alternative solvent for PEG. It is well tolerated, more palatable, leads to quicker consumption of the bowel preparation and results in better quality cleansing. Colorectal Disease © 2015 The Association of Coloproctology of Great Britain and Ireland.

  13. Couette-Poiseuille flow experiment with zero mean advection velocity: Subcritical transition to turbulence

    NASA Astrophysics Data System (ADS)

    Klotz, L.; Lemoult, G.; Frontczak, I.; Tuckerman, L. S.; Wesfreid, J. E.

    2017-04-01

    We present an experimental setup that creates a shear flow with zero mean advection velocity achieved by counterbalancing the nonzero streamwise pressure gradient by moving boundaries, which generates plane Couette-Poiseuille flow. We obtain experimental results in the transitional regime for this flow. Using flow visualization, we characterize the subcritical transition to turbulence in Couette-Poiseuille flow and show the existence of turbulent spots generated by a permanent perturbation. Due to the zero mean advection velocity of the base profile, these turbulent structures are nearly stationary. We distinguish two regions of the turbulent spot: the active turbulent core, which is characterized by waviness of the streaks similar to traveling waves, and the surrounding region, which includes in addition the weak undisturbed streaks and oblique waves at the laminar-turbulent interface. We also study the dependence of the size of these two regions on Reynolds number. Finally, we show that the traveling waves move in the downstream (Poiseuille) direction.

  14. Matching the Statistical Model to the Research Question for Dental Caries Indices with Many Zero Counts.

    PubMed

    Preisser, John S; Long, D Leann; Stamm, John W

    2017-01-01

    Marginalized zero-inflated count regression models have recently been introduced for the statistical analysis of dental caries indices and other zero-inflated count data as alternatives to traditional zero-inflated and hurdle models. Unlike the standard approaches, the marginalized models directly estimate overall exposure or treatment effects by relating covariates to the marginal mean count. This article discusses model interpretation and model class choice according to the research question being addressed in caries research. Two data sets, one consisting of fictional dmft counts in 2 groups and the other on DMFS among schoolchildren from a randomized clinical trial comparing 3 toothpaste formulations to prevent incident dental caries, are analyzed with negative binomial hurdle, zero-inflated negative binomial, and marginalized zero-inflated negative binomial models. In the first example, estimates of treatment effects vary according to the type of incidence rate ratio (IRR) estimated by the model. Estimates of IRRs in the analysis of the randomized clinical trial were similar despite their distinctive interpretations. The choice of statistical model class should match the study's purpose, while accounting for the broad decline in children's caries experience, such that dmft and DMFS indices more frequently generate zero counts. Marginalized (marginal mean) models for zero-inflated count data should be considered for direct assessment of exposure effects on the marginal mean dental caries count in the presence of high frequencies of zero counts. © 2017 S. Karger AG, Basel.

  15. Evaluation of Kurtosis into the product of two normally distributed variables

    NASA Astrophysics Data System (ADS)

    Oliveira, Amílcar; Oliveira, Teresa; Seijas-Macías, Antonio

    2016-06-01

    Kurtosis (κ) is any measure of the "peakedness" of a distribution of a real-valued random variable. We study the evolution of the kurtosis for the product of two normally distributed variables. The product of two normal variables is a very common problem in some areas of study, like physics, economics, psychology, … Normal variables have a constant value for kurtosis (κ = 3), independently of the value of the two parameters: mean and variance. In fact, the excess kurtosis is defined as κ − 3, and the excess kurtosis of the normal distribution is zero. The product of two normally distributed variables is a function of the parameters of the two variables and the correlation between them, and the range for kurtosis is in [0, 6] for independent variables and in [0, 12] when correlation between them is allowed.
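
    A quick check of the independent, zero-mean case (my own simulation, using scipy's excess-kurtosis convention): the product of two independent standard normals has excess kurtosis 6, the upper end of the [0, 6] range quoted above.

        import numpy as np
        from scipy.stats import kurtosis

        rng = np.random.default_rng(6)
        x = rng.standard_normal(2_000_000)
        y = rng.standard_normal(2_000_000)

        # kurtosis() returns excess kurtosis (normal = 0 under Fisher's definition).
        print(kurtosis(x))       # approx 0
        print(kurtosis(x * y))   # approx 6 for independent zero-mean normals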

  16. Leveraging prognostic baseline variables to gain precision in randomized trials

    PubMed Central

    Colantuoni, Elizabeth; Rosenblum, Michael

    2015-01-01

    We focus on estimating the average treatment effect in a randomized trial. If baseline variables are correlated with the outcome, then appropriately adjusting for these variables can improve precision. An example is the analysis of covariance (ANCOVA) estimator, which applies when the outcome is continuous, the quantity of interest is the difference in mean outcomes comparing treatment versus control, and a linear model with only main effects is used. ANCOVA is guaranteed to be at least as precise as the standard unadjusted estimator, asymptotically, under no parametric model assumptions and also is locally semiparametric efficient. Recently, several estimators have been developed that extend these desirable properties to more general settings that allow any real-valued outcome (e.g., binary or count), contrasts other than the difference in mean outcomes (such as the relative risk), and estimators based on a large class of generalized linear models (including logistic regression). To the best of our knowledge, we give the first simulation study in the context of randomized trials that compares these estimators. Furthermore, our simulations are not based on parametric models; instead, our simulations are based on resampling data from completed randomized trials in stroke and HIV in order to assess estimator performance in realistic scenarios. We provide practical guidance on when these estimators are likely to provide substantial precision gains and describe a quick assessment method that allows clinical investigators to determine whether these estimators could be useful in their specific trial contexts. PMID:25872751
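
    A toy version of this comparison (the data-generating model below is mine, not a resample of the stroke or HIV trials): simulate a randomized trial with a prognostic baseline covariate and compare the spread of the unadjusted difference in means with the ANCOVA estimate.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)

        def one_trial(n=200, effect=1.0):
            baseline = rng.normal(size=n)                       # prognostic baseline variable
            arm = rng.integers(0, 2, size=n)                    # 1:1 randomization
            y = effect * arm + 2.0 * baseline + rng.normal(size=n)
            unadjusted = y[arm == 1].mean() - y[arm == 0].mean()
            X = sm.add_constant(np.column_stack([arm, baseline]))
            adjusted = sm.OLS(y, X).fit().params[1]             # ANCOVA: linear model with main effects only
            return unadjusted, adjusted

        est = np.array([one_trial() for _ in range(2000)])
        print(est.std(axis=0))   # the ANCOVA column shows a noticeably smaller standard deviation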

  17. Polynomial chaos expansion with random and fuzzy variables

    NASA Astrophysics Data System (ADS)

    Jacquelin, E.; Friswell, M. I.; Adhikari, S.; Dessombz, O.; Sinou, J.-J.

    2016-06-01

    A dynamical uncertain system is studied in this paper. Two kinds of uncertainties are addressed, where the uncertain parameters are described through random variables and/or fuzzy variables. A general framework is proposed to deal with both kinds of uncertainty using a polynomial chaos expansion (PCE). It is shown that fuzzy variables may be expanded in terms of polynomial chaos when Legendre polynomials are used. The components of the PCE are a solution of an equation that does not depend on the nature of uncertainty. Once this equation is solved, the post-processing of the data gives the moments of the random response when the uncertainties are random or gives the response interval when the variables are fuzzy. With the PCE approach, it is also possible to deal with mixed uncertainty, when some parameters are random and others are fuzzy. The results provide a fuzzy description of the response statistical moments.

  18. Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros

    PubMed Central

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values. PMID:18595286
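
    Outside Excel, equivalent schedule values can be generated in a few lines (a sketch; the uniform convention for variable-interval values and the exponential form of the constant-probability random-interval schedule are my assumptions, not the authors' macros):

        import numpy as np

        rng = np.random.default_rng(8)

        def variable_interval(mean_s, n):
            """Variable-interval values sampled uniformly around a target mean (one simple convention)."""
            return rng.uniform(0, 2 * mean_s, n)

        def random_interval(mean_s, n):
            """Random-interval values: constant probability per unit time, i.e. exponential intervals."""
            return rng.exponential(mean_s, n)

        print(variable_interval(30, 5))   # e.g. five VI 30-s schedule values
        print(random_interval(30, 5))     # e.g. five RI 30-s schedule values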

  19. Generating variable and random schedules of reinforcement using Microsoft Excel macros.

    PubMed

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values.

  20. Uncertainty in Random Forests: What does it mean in a spatial context?

    NASA Astrophysics Data System (ADS)

    Klump, Jens; Fouedjio, Francky

    2017-04-01

    Geochemical surveys are an important part of exploration for mineral resources and in environmental studies. The samples and chemical analyses are often laborious and difficult to obtain and therefore come at a high cost. As a consequence, these surveys are characterised by datasets with large numbers of variables but relatively few data points when compared to conventional big data problems. With more remote sensing platforms and sensor networks being deployed, large volumes of auxiliary data of the surveyed areas are becoming available. The use of these auxiliary data has the potential to improve the prediction of chemical element concentrations over the whole study area. Kriging is a well established geostatistical method for the prediction of spatial data but requires significant pre-processing and makes some basic assumptions about the underlying distribution of the data. Some machine learning algorithms, on the other hand, may require less data pre-processing and are non-parametric. In this study we used a dataset provided by Kirkwood et al. [1] to explore the potential use of Random Forest in geochemical mapping. We chose Random Forest because it is a well understood machine learning method and has the advantage that it provides us with a measure of uncertainty. By comparing Random Forest to Kriging we found that both methods produced comparable maps of estimated values for our variables of interest. Kriging outperformed Random Forest for variables of interest with relatively strong spatial correlation. The measure of uncertainty provided by Random Forest seems to be quite different to the measure of uncertainty provided by Kriging. In particular, the lack of spatial context can give misleading results in areas without ground truth data. In conclusion, our preliminary results show that the model driven approach in geostatistics gives us more reliable estimates for our target variables than Random Forest for variables with relatively strong spatial correlation.
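
    A small scikit-learn sketch of the kind of Random Forest uncertainty the abstract refers to is shown below: the spread of per-tree predictions. The synthetic survey, covariate, and grid are assumptions for illustration (this is not the Kirkwood et al. dataset), and the closing comment makes the point that this spread carries no notion of distance to the nearest observation.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(2)

    # Synthetic "survey": coordinates plus one auxiliary covariate, sparse samples.
    coords = rng.uniform(0, 10, size=(120, 2))
    aux = np.sin(coords[:, 0]) + 0.3 * rng.normal(size=120)
    conc = 2.0 * aux + 0.5 * coords[:, 1] + rng.normal(scale=0.3, size=120)

    X = np.column_stack([coords, aux])
    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, conc)

    # Prediction grid along one transect, including areas far from any sample.
    grid = np.column_stack([np.full(50, 9.5), np.linspace(0, 10, 50)])
    grid = np.column_stack([grid, np.sin(grid[:, 0])])

    per_tree = np.stack([tree.predict(grid) for tree in rf.estimators_])
    pred, spread = per_tree.mean(axis=0), per_tree.std(axis=0)

    # 'spread' reflects disagreement among trees, not distance to the nearest
    # observation, which is why it can look deceptively small far from data.
    print(pred[:5], spread[:5])
    ```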

  1. New variable selection methods for zero-inflated count data with applications to the substance abuse field

    PubMed Central

    Buu, Anne; Johnson, Norman J.; Li, Runze; Tan, Xianming

    2011-01-01

    Zero-inflated count data are very common in health surveys. This study develops new variable selection methods for the zero-inflated Poisson regression model. Our simulations demonstrate the negative consequences that arise from ignoring zero-inflation. Among the competing methods, the one-step SCAD method is recommended because it has the highest specificity, sensitivity, exact fit, and lowest estimation error. The design of the simulations is based on the special features of two large national databases commonly used in the alcoholism and substance abuse field so that our findings can be easily generalized to the real settings. Applications of the methodology are demonstrated by empirical analyses on the data from a well-known alcohol study. PMID:21563207
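
    As a simple illustration of the zero-inflation these methods target, the NumPy sketch below simulates zero-inflated Poisson counts and compares the observed zero fraction with what a plain Poisson fitted to the same mean would predict; the rate and mixing proportion are arbitrary, and the penalized (SCAD) selection itself is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    n, pi_zero, lam = 10_000, 0.4, 2.5          # structural-zero probability, Poisson rate
    structural_zero = rng.random(n) < pi_zero
    y = np.where(structural_zero, 0, rng.poisson(lam, n))

    obs_zero_frac = np.mean(y == 0)
    # A plain Poisson fitted to the same data would have rate = sample mean
    # and predict exp(-rate) zeros -- far fewer than observed.
    poisson_rate = y.mean()
    print("observed zero fraction:", obs_zero_frac)
    print("Poisson-predicted zeros:", np.exp(-poisson_rate))
    ```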

  2. Lean, Mean and Green: An Affordable Net Zero School

    ERIC Educational Resources Information Center

    Stanfield, Kenneth

    2010-01-01

    From its conception, Richardsville Elementary was designed to be an affordable net zero facility. The design team explored numerous energy saving strategies to dramatically reduce energy consumption. By reducing energy use to 19.31 kBtus annually, the net zero goal could be realized through the implementation of a solar array capable of producing…

  3. On the minimum of independent geometrically distributed random variables

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David

    1994-01-01

    The expectations E(X_1), E(Z_1), and E(Y_1) of the minimum of n independent geometric, modified geometric, or exponential random variables with matching expectations differ. We show how this is accounted for by stochastic variability and how E(X_1)/E(Y_1) equals the expected number of ties at the minimum for the geometric random variables. We then introduce the 'shifted geometric distribution' and show that there is a unique value of the shift for which the individual shifted geometric and exponential random variables match expectations both individually and in their minima.
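
    The identity stated in the abstract, E(X_1)/E(Y_1) equalling the expected number of ties at the minimum for geometric variables, is easy to check by simulation. The sketch below uses an arbitrary p and n; it is a numerical check, not the paper's derivation.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    n, p, reps = 5, 0.3, 200_000
    mu = 1.0 / p                                # common expectation of each variable

    geo = rng.geometric(p, size=(reps, n))      # geometric on 1, 2, ...
    expo = rng.exponential(mu, size=(reps, n))  # exponential with the same mean

    e_min_geo = geo.min(axis=1).mean()
    e_min_expo = expo.min(axis=1).mean()        # exactly mu/n in expectation
    ties = (geo == geo.min(axis=1, keepdims=True)).sum(axis=1).mean()

    print("E[min geometric]/E[min exponential]:", e_min_geo / e_min_expo)
    print("expected number of ties at the minimum:", ties)
    ```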

  4. A multi-assets artificial stock market with zero-intelligence traders

    NASA Astrophysics Data System (ADS)

    Ponta, L.; Raberto, M.; Cincotti, S.

    2011-01-01

    In this paper, a multi-assets artificial financial market populated by zero-intelligence traders with finite financial resources is presented. The market is characterized by different types of stocks representing firms operating in different sectors of the economy. Zero-intelligence traders follow a random allocation strategy which is constrained by finite resources, past market volatility and allocation universe. Within this framework, stock price processes exhibit volatility clustering, fat-tailed distribution of returns and reversion to the mean. Moreover, the cross-correlations between returns of different stocks are studied using methods of random matrix theory. The probability distribution of eigenvalues of the cross-correlation matrix shows the presence of outliers, similar to those recently observed on real data for business sectors. It is worth noting that business sectors have been recovered in our framework without dividends, solely as a consequence of random restrictions on the allocation universe of zero-intelligence traders. Furthermore, in the presence of dividend-paying stocks and in the case of cash inflow added to the market, the artificial stock market exhibits the same structural results obtained in the simulation without dividends. These results suggest a significant structural influence on the statistical properties of the multi-asset stock market.

  5. Vector solution for the mean electromagnetic fields in a layer of random particles

    NASA Technical Reports Server (NTRS)

    Lang, R. H.; Seker, S. S.; Levine, D. M.

    1986-01-01

    The mean electromagnetic fields are found in a layer of randomly oriented particles lying over a half space. A matrix-dyadic formulation of Maxwell's equations is employed in conjunction with the Foldy-Lax approximation to obtain equations for the mean fields. A two variable perturbation procedure, valid in the limit of small fractional volume, is then used to derive uncoupled equations for the slowly varying amplitudes of the mean wave. These equations are solved to obtain explicit expressions for the mean electromagnetic fields in the slab region in the general case of arbitrarily oriented particles and arbitrary polarization of the incident radiation. Numerical examples are given for the application to remote sensing of vegetation.

  6. Variable selection for zero-inflated and overdispersed data with application to health care demand in Germany

    PubMed Central

    Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun

    2017-01-01

    In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider maximum likelihood function plus a penalty including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD) and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinated descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only has more accurate or at least comparable estimation, but also is more robust than the traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. PMID:26059498

  7. Benford's law and continuous dependent random variables

    NASA Astrophysics Data System (ADS)

    Becker, Thealexa; Burt, David; Corcoran, Taylor C.; Greaves-Tunnell, Alec; Iafrate, Joseph R.; Jing, Joy; Miller, Steven J.; Porfilio, Jaclyn D.; Ronan, Ryan; Samranvedhya, Jirapat; Strauch, Frederick W.; Talbut, Blaine

    2018-01-01

    Many mathematical, man-made and natural systems exhibit a leading-digit bias, where a first digit (base 10) of 1 occurs not 11% of the time, as one would expect if all digits were equally likely, but rather 30%. This phenomenon is known as Benford's Law. Analyzing which datasets adhere to Benford's Law and how quickly Benford behavior sets in are the two most important problems in the field. Most previous work studied systems of independent random variables, and relied on the independence in their analyses. Inspired by natural processes such as particle decay, we study the dependent random variables that emerge from models of decomposition of conserved quantities. We prove that in many instances the distribution of lengths of the resulting pieces converges to Benford behavior as the number of divisions grows, and give several conjectures for other fragmentation processes. The main difficulty is that the resulting random variables are dependent. We handle this by using tools from Fourier analysis and irrationality exponents to obtain quantified convergence rates as well as introducing and developing techniques to measure and control the dependencies. The construction of these tools is one of the major motivations of this work, as our approach can be applied to many other dependent systems. As an example, we show that the n! entries in the determinant expansions of n × n matrices with entries independently drawn from nice random variables converge to Benford's Law.
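
    One fragmentation process of the type described, a conserved length split repeatedly at uniform points, can be simulated directly and its leading digits compared with Benford's law. The recursion depth and the use of binary splits are illustrative assumptions, not the paper's specific models or proofs.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)


    def fragment(length=1.0, levels=12):
        """Split a conserved quantity in two at a uniform point, recursively."""
        pieces = np.array([length])
        for _ in range(levels):
            cuts = rng.uniform(size=pieces.size)
            pieces = np.concatenate([pieces * cuts, pieces * (1 - cuts)])
        return pieces


    pieces = fragment()
    # Leading (base-10) digit of each piece length.
    lead = (pieces / 10.0 ** np.floor(np.log10(pieces))).astype(int)

    digits = np.arange(1, 10)
    observed = np.array([(lead == d).mean() for d in digits])
    benford = np.log10(1 + 1 / digits)
    for d, o, b in zip(digits, observed, benford):
        print(f"digit {d}: observed {o:.3f}  Benford {b:.3f}")
    ```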

  8. Characterizing multiscale variability of zero intermittency in spatial rainfall

    NASA Technical Reports Server (NTRS)

    Kumar, Praveen; Foufoula-Georgiou, Efi

    1994-01-01

    In this paper the authors study how zero intermittency in spatial rainfall, as described by the fraction of area covered by rainfall, changes with spatial scale of rainfall measurement or representation. A statistical measure of intermittency that describes the size distribution of 'voids' (nonrainy areas imbedded inside rainy areas) as a function of scale is also introduced. Morphological algorithms are proposed for reconstructing rainfall intermittency at fine scales given the intermittency at coarser scales. These algorithms are envisioned to be useful in hydroclimatological studies where the rainfall spatial variability at the subgrid scale needs to be reconstructed from the results of synoptic- or mesoscale meteorological numerical models. The developed methodologies are demonstrated and tested using data from a severe springtime midlatitude squall line and a mild midlatitude winter storm monitored by a meteorological radar in Norman, Oklahoma.

  9. An instrumental variable random-coefficients model for binary outcomes

    PubMed Central

    Chesher, Andrew; Rosen, Adam M

    2014-01-01

    In this paper, we study a random-coefficients model for a binary outcome. We allow for the possibility that some or even all of the explanatory variables are arbitrarily correlated with the random coefficients, thus permitting endogeneity. We assume the existence of observed instrumental variables Z that are jointly independent with the random coefficients, although we place no structure on the joint determination of the endogenous variable X and instruments Z, as would be required for a control function approach. The model fits within the spectrum of generalized instrumental variable models, and we thus apply identification results from our previous studies of such models to the present context, demonstrating their use. Specifically, we characterize the identified set for the distribution of random coefficients in the binary response model with endogeneity via a collection of conditional moment inequalities, and we investigate the structure of these sets by way of numerical illustration. PMID:25798048

  10. Analyzing Propensity Matched Zero-Inflated Count Outcomes in Observational Studies

    PubMed Central

    DeSantis, Stacia M.; Lazaridis, Christos; Ji, Shuang; Spinale, Francis G.

    2013-01-01

    Determining the effectiveness of different treatments from observational data, which are characterized by imbalance between groups due to lack of randomization, is challenging. Propensity matching is often used to rectify imbalances among prognostic variables. However, there are no guidelines on how appropriately to analyze group matched data when the outcome is a zero inflated count. In addition, there is debate over whether to account for correlation of responses induced by matching, and/or whether to adjust for variables used in generating the propensity score in the final analysis. The aim of this research is to compare covariate unadjusted and adjusted zero-inflated Poisson models that do and do not account for the correlation. A simulation study is conducted, demonstrating that it is necessary to adjust for potential residual confounding, but that accounting for correlation is less important. The methods are applied to a biomedical research data set. PMID:24298197

  11. Variable selection for zero-inflated and overdispersed data with application to health care demand in Germany.

    PubMed

    Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun

    2015-09-01

    In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider maximum likelihood function plus a penalty including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinated descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only has more accurate or at least comparable estimation, but also is more robust than the traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Random Variables: Simulations and Surprising Connections.

    ERIC Educational Resources Information Center

    Quinn, Robert J.; Tomlinson, Stephen

    1999-01-01

    Features activities for advanced second-year algebra students in grades 11 and 12. Introduces three random variables and considers an empirical and theoretical probability for each. Uses coins, regular dice, decahedral dice, and calculators. (ASK)

  13. Describing temporal variability of the mean Estonian precipitation series in climate time scale

    NASA Astrophysics Data System (ADS)

    Post, P.; Kärner, O.

    2009-04-01

    Applicability of random walk type models to represent the temporal variability of various atmospheric temperature series has been successfully demonstrated recently (e.g. Kärner, 2002). The main problem in temperature modeling is connected to the scale break in the generally self-similar air temperature anomaly series (Kärner, 2005). The break separates short-range strong non-stationarity from a nearly stationary longer-range variability region. This is an indication of the fact that several geophysical time series show a short-range non-stationary behaviour and a stationary behaviour over longer ranges (Davis et al., 1996). In order to model such series, the choice of time step appears to be crucial. To characterize the long-range variability we can neglect the short-range non-stationary fluctuations, provided that we are able to model properly the long-range tendencies. The structure function (Monin and Yaglom, 1975) was used to determine an approximate segregation line between the short and the long scale in terms of modeling. The longer scale can be called the climate scale, because such models are applicable at scales over some decades. In order to get rid of the short-range fluctuations in daily series, the variability can be examined using a sufficiently long time step. In the present paper, we show that the same philosophy is useful to find a model to represent the climate-scale temporal variability of the Estonian daily mean precipitation amount series over 45 years (1961-2005). Temporal variability of the obtained daily time series is examined by means of an autoregressive integrated moving average (ARIMA) family model of the type (0,1,1). This model is applicable for simulating daily precipitation if an appropriate time step is selected that enables us to neglect the short-range non-stationary fluctuations. A considerably longer time step than one day (30 days) is used in the current paper to model the precipitation time series variability. Each ARIMA (0
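
    A hedged sketch of the modeling step described, aggregating a daily series to a roughly 30-day time step and fitting an ARIMA(0,1,1) model, is given below using statsmodels; the gamma-distributed synthetic series is a stand-in assumption, not the Estonian precipitation record.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(6)

    # Synthetic stand-in for a 45-year daily precipitation record (mm/day).
    t = np.arange(45 * 365)
    daily = rng.gamma(shape=0.7, scale=3.0, size=t.size) * (1.2 + np.sin(2 * np.pi * t / 365.25))

    # Aggregate to a ~30-day time step to suppress short-range fluctuations.
    step = 30
    n_blocks = daily.size // step
    blocked = daily[: n_blocks * step].reshape(n_blocks, step).mean(axis=1)

    # ARIMA(0,1,1): first-difference the aggregated series and model it as MA(1).
    result = ARIMA(blocked, order=(0, 1, 1)).fit()
    print(result.params)      # MA(1) coefficient and innovation variance
    ```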

  14. Continuous-variable phase estimation with unitary and random linear disturbance

    NASA Astrophysics Data System (ADS)

    Delgado de Souza, Douglas; Genoni, Marco G.; Kim, M. S.

    2014-10-01

    We address the problem of continuous-variable quantum phase estimation in the presence of linear disturbance at the Hamiltonian level by means of Gaussian probe states. In particular we discuss both unitary and random disturbance by considering the parameter which characterizes the unwanted linear term present in the Hamiltonian as fixed (unitary disturbance) or random with a given probability distribution (random disturbance). We derive the optimal input Gaussian states at fixed energy, maximizing the quantum Fisher information over the squeezing angle and the squeezing energy fraction, and we discuss the scaling of the quantum Fisher information in terms of the output number of photons, n_out. We observe that, in the case of unitary disturbance, the optimal state is a squeezed vacuum state and the quadratic scaling is conserved. As regards the random disturbance, we observe that the optimal squeezing fraction may not be equal to one and, for any nonzero value of the noise parameter, the quantum Fisher information scales linearly with the average number of photons. Finally, we discuss the performance of homodyne measurement by comparing the achievable precision with the ultimate limit imposed by the quantum Cramér-Rao bound.

  15. Zero-temperature directed polymer in random potential in 4+1 dimensions.

    PubMed

    Kim, Jin Min

    2016-12-01

    Zero-temperature directed polymer in random potential in 4+1 dimensions is described. The fluctuation ΔE(t) of the lowest energy of the polymer varies as t^{β} with β=0.159±0.007 for polymer length t and ΔE follows ΔE(L)∼L^{α} at saturation with α=0.275±0.009, where L is the system size. The dynamic exponent z≈1.73 is obtained from z=α/β. The estimated values of the exponents satisfy the scaling relation α+z=2 very well. We also monitor the end to end distance of the polymer and obtain z independently. Our results show that the upper critical dimension of the Kardar-Parisi-Zhang equation is higher than d=4+1 dimensions.

  16. A Random Variable Related to the Inversion Vector of a Partial Random Permutation

    ERIC Educational Resources Information Center

    Laghate, Kavita; Deshpande, M. N.

    2005-01-01

    In this article, we define the inversion vector of a permutation of the integers 1, 2,..., n. We set up a particular kind of permutation, called a partial random permutation. The sum of the elements of the inversion vector of such a permutation is a random variable of interest.

  17. Calibration Variable Selection and Natural Zero Determination for Semispan and Canard Balances

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert M.

    2013-01-01

    Independent calibration variables for the characterization of semispan and canard wind tunnel balances are discussed. It is shown that the variable selection for a semispan balance is determined by the location of the resultant normal and axial forces that act on the balance. These two forces are the first and second calibration variable. The pitching moment becomes the third calibration variable after the normal and axial forces are shifted to the pitch axis of the balance. Two geometric distances, i.e., the rolling and yawing moment arms, are the fourth and fifth calibration variable. They are traditionally substituted by corresponding moments to simplify the use of calibration data during a wind tunnel test. A canard balance is related to a semispan balance. It also only measures loads on one half of a lifting surface. However, the axial force and yawing moment are of no interest to users of a canard balance. Therefore, its calibration variable set is reduced to the normal force, pitching moment, and rolling moment. The combined load diagrams of the rolling and yawing moment for a semispan balance are discussed. They may be used to illustrate connections between the wind tunnel model geometry, the test section size, and the calibration load schedule. Then, methods are reviewed that may be used to obtain the natural zeros of a semispan or canard balance. In addition, characteristics of three semispan balance calibration rigs are discussed. Finally, basic requirements for a full characterization of a semispan balance are reviewed.

  18. Properties of Zero-Free Transfer Function Matrices

    NASA Astrophysics Data System (ADS)

    Anderson, Brian D. O.; Deistler, Manfred

    Transfer functions of linear, time-invariant finite-dimensional systems with more outputs than inputs, as arise in factor analysis (for example in econometrics), have, for state-variable descriptions with generic entries in the relevant matrices, no finite zeros. This paper gives a number of characterizations of such systems (and indeed square discrete-time systems with no zeros), using state-variable, impulse response, and matrix-fraction descriptions. Key properties include the ability to recover the input values at any time from a bounded interval of output values, without any knowledge of an initial state, and an ability to verify the no-zero property in terms of a property of the impulse response coefficient matrices. Results are particularized to cases where the transfer function matrix in question may or may not have a zero at infinity or a zero at zero.

  19. Even and odd normalized zero modes in random interacting Majorana models respecting the parity P and the time-reversal-symmetry T

    NASA Astrophysics Data System (ADS)

    Monthus, Cécile

    2018-06-01

    For random interacting Majorana models where the only symmetries are the parity P and the time-reversal-symmetry T, various approaches are compared to construct exact even and odd normalized zero modes Γ in finite size, i.e. Hermitian operators that commute with the Hamiltonian, that square to the identity, and that commute (even) or anticommute (odd) with the parity P. Even normalized zero-modes are well known under the name of ‘pseudo-spins’ in the field of many-body-localization or more precisely ‘local integrals of motion’ (LIOMs) in the many-body-localized-phase where the pseudo-spins happen to be spatially localized. Odd normalized zero-modes are popular under the name of ‘Majorana zero modes’ or ‘strong zero modes’. Explicit examples for small systems are described in detail. Applications to real-space renormalization procedures based on blocks containing an odd number of Majorana fermions are also discussed.

  20. Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory

    NASA Astrophysics Data System (ADS)

    Pato, Mauricio P.; Oshanin, Gleb

    2013-03-01

    We study the probability distribution function P_n^(β)(w) of the Schmidt-like random variable w = x_1^2/(∑_{j=1}^n x_j^2/n), where x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^(β)(w) converges to the Marčenko-Pastur form, i.e. is defined as P_n^(β)(w) ∼ √((4 − w)/w) for w ∈ [0, 4] and equals zero outside of the support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for Gaussian unitary ensembles (β = 2) we present exact explicit expressions for P_n^(β=2)(w) which are valid for arbitrary n and analyse their behaviour.
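
    Because w is scale invariant, the limiting form quoted above can be checked numerically without worrying about eigenvalue normalization. The sketch below samples GUE (β = 2) matrices and compares a histogram of w with (1/2π)√((4 − w)/w); the matrix size and number of replicates are arbitrary choices for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)


    def gue_eigenvalues(n):
        """Eigenvalues of one GUE (beta = 2) random matrix."""
        a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        h = (a + a.conj().T) / 2.0
        return np.linalg.eigvalsh(h)


    n, reps = 100, 200
    w_samples = []
    for _ in range(reps):
        x = gue_eigenvalues(n)
        w_samples.append(x**2 / np.mean(x**2))   # w is scale invariant
    w = np.concatenate(w_samples)

    # Compare the histogram with the limiting density (1/2pi) sqrt((4 - w)/w) on [0, 4].
    hist, edges = np.histogram(w, bins=np.linspace(0, 4, 21), density=True)
    mids = (edges[:-1] + edges[1:]) / 2
    limit = np.sqrt((4 - mids) / mids) / (2 * np.pi)
    print(np.round(hist, 3))
    print(np.round(limit, 3))
    ```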

  1. Mass transfer from a sphere in an oscillating flow with zero mean velocity

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.; Lyman, Frederic A.

    1990-01-01

    A pseudospectral numerical method is used for the solution of the Navier-Stokes and mass transport equations for a sphere in a sinusoidally oscillating flow with zero mean velocity. The flow is assumed laminar and axisymmetric about the sphere's polar axis. Oscillating flow results were obtained for Reynolds numbers (based on the free-stream oscillatory flow amplitude) between 1 and 150, and Strouhal numbers between 1 and 1000. Sherwood numbers were computed and their dependency on the flow frequency and amplitude discussed. An assessment of the validity of the quasi-steady assumption for mass transfer is based on these results.

  2. Matching the Statistical Model to the Research Question for Dental Caries Indices with Many Zero Counts

    PubMed Central

    Preisser, John S.; Long, D. Leann; Stamm, John W.

    2017-01-01

    Marginalized zero-inflated count regression models have recently been introduced for the statistical analysis of dental caries indices and other zero-inflated count data as alternatives to traditional zero-inflated and hurdle models. Unlike the standard approaches, the marginalized models directly estimate overall exposure or treatment effects by relating covariates to the marginal mean count. This article discusses model interpretation and model class choice according to the research question being addressed in caries research. Two datasets, one consisting of fictional dmft counts in two groups and the other on DMFS among schoolchildren from a randomized clinical trial (RCT) comparing three toothpaste formulations to prevent incident dental caries, are analysed with negative binomial hurdle (NBH), zero-inflated negative binomial (ZINB), and marginalized zero-inflated negative binomial (MZINB) models. In the first example, estimates of treatment effects vary according to the type of incidence rate ratio (IRR) estimated by the model. Estimates of IRRs in the analysis of the RCT were similar despite their distinctive interpretations. Choice of statistical model class should match the study’s purpose, while accounting for the broad decline in children’s caries experience, such that dmft and DMFS indices more frequently generate zero counts. Marginalized (marginal mean) models for zero-inflated count data should be considered for direct assessment of exposure effects on the marginal mean dental caries count in the presence of high frequencies of zero counts. PMID:28291962

  3. Probabilistic SSME blades structural response under random pulse loading

    NASA Technical Reports Server (NTRS)

    Shiao, Michael; Rubinstein, Robert; Nagpal, Vinod K.

    1987-01-01

    The purpose is to develop models of random impacts on a Space Shuttle Main Engine (SSME) turbopump blade and to predict the probabilistic structural response of the blade to these impacts. The random loading is caused by the impact of debris. The probabilistic structural response is characterized by distribution functions for stress and displacements as functions of the loading parameters which determine the random pulse model. These parameters include pulse arrival, amplitude, and location. The analysis can be extended to predict level crossing rates. This requires knowledge of the joint distribution of the response and its derivative. The model of random impacts chosen allows the pulse arrivals, pulse amplitudes, and pulse locations to be random. Specifically, the pulse arrivals are assumed to be governed by a Poisson process, which is characterized by a mean arrival rate. The pulse intensity is modelled as a normally distributed random variable with a zero mean chosen independently at each arrival. The standard deviation of the distribution is a measure of pulse intensity. Several different models were used for the pulse locations. For example, three points near the blade tip were chosen at which pulses were allowed to arrive with equal probability. Again, the locations were chosen independently at each arrival. The structural response was analyzed both by direct Monte Carlo simulation and by a semi-analytical method.
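
    The random pulse model described here, Poisson arrivals, zero-mean normal amplitudes, and equally likely impact locations, is straightforward to sample. The sketch below generates one load history with arbitrary rate and intensity parameters; it is a stand-in for the input to a structural solver, not the SSME blade model itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    t_end = 1.0            # observation window, s
    arrival_rate = 50.0    # mean pulse arrival rate, 1/s (Poisson process)
    sigma_amp = 2.0        # std dev of the zero-mean normal pulse intensity
    locations = np.array([0, 1, 2])   # three candidate impact points near the tip

    # Poisson process: number of pulses, then uniform arrival times.
    n_pulses = rng.poisson(arrival_rate * t_end)
    times = np.sort(rng.uniform(0.0, t_end, n_pulses))
    amps = rng.normal(0.0, sigma_amp, n_pulses)          # zero-mean amplitudes
    locs = rng.choice(locations, size=n_pulses)          # equally likely locations

    # A Monte Carlo study would feed many such pulse lists into a structural
    # solver; here we just summarize one realization.
    print(n_pulses, "pulses")
    print("amplitude mean/std:", amps.mean(), amps.std())
    print("pulses per location:", np.bincount(locs, minlength=3))
    ```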

  4. Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model

    NASA Technical Reports Server (NTRS)

    Vallejo, Jonathon; Hejduk, Matt; Stamey, James

    2015-01-01

    We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log_10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.

  5. Weighting Mean and Variability during Confidence Judgments

    PubMed Central

    de Gardelle, Vincent; Mamassian, Pascal

    2015-01-01

    Humans can not only perform some visual tasks with great precision, they can also judge how good they are in these tasks. However, it remains unclear how observers produce such metacognitive evaluations, and how these evaluations might be dissociated from the performance in the visual task. Here, we hypothesized that some stimulus variables could affect confidence judgments above and beyond their impact on performance. In a motion categorization task on moving dots, we manipulated the mean and the variance of the motion directions, to obtain a low-mean low-variance condition and a high-mean high-variance condition with matched performances. Critically, in terms of confidence, observers were not indifferent between these two conditions. Observers exhibited marked preferences, which were heterogeneous across individuals, but stable within each observer when assessed one week later. Thus, confidence and performance are dissociable and observers’ confidence judgments put different weights on the stimulus variables that limit performance. PMID:25793275

  6. Corrected Mean-Field Model for Random Sequential Adsorption on Random Geometric Graphs

    NASA Astrophysics Data System (ADS)

    Dhara, Souvik; van Leeuwaarden, Johan S. H.; Mukherjee, Debankur

    2018-03-01

    A notorious problem in mathematics and physics is to create a solvable model for random sequential adsorption of non-overlapping congruent spheres in the d-dimensional Euclidean space with d ≥ 2. Spheres arrive sequentially at uniformly chosen locations in space and are accepted only when there is no overlap with previously deposited spheres. Due to spatial correlations, characterizing the fraction of accepted spheres remains largely intractable. We study this fraction by taking a novel approach that compares random sequential adsorption in Euclidean space to the nearest-neighbor blocking on a sequence of clustered random graphs. This random network model can be thought of as a corrected mean-field model for the interaction graph between the attempted spheres. Using functional limit theorems, we characterize the fraction of accepted spheres and its fluctuations.

  7. A Variable-Selection Heuristic for K-Means Clustering.

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Cradit, J. Dennis

    2001-01-01

    Presents a variable selection heuristic for nonhierarchical (K-means) cluster analysis based on the adjusted Rand index for measuring cluster recovery. Subjected the heuristic to Monte Carlo testing across more than 2,200 datasets. Results indicate that the heuristic is extremely effective at eliminating masking variables. (SLD)

  8. A maximally selected test of symmetry about zero.

    PubMed

    Laska, Eugene; Meisner, Morris; Wanderling, Joseph

    2012-11-20

    The problem of testing symmetry about zero has a long and rich history in the statistical literature. We introduce a new test that sequentially discards observations whose absolute value is below increasing thresholds defined by the data. McNemar's statistic is obtained at each threshold and the largest is used as the test statistic. We obtain the exact distribution of this maximally selected McNemar and provide tables of critical values and a program for computing p-values. Power is compared with the t-test, the Wilcoxon Signed Rank Test and the Sign Test. The new test, MM, is slightly less powerful than the t-test and Wilcoxon Signed Rank Test for symmetric normal distributions with nonzero medians and substantially more powerful than all three tests for asymmetric mixtures of normal random variables with or without zero medians. The motivation for this test derives from the need to appraise the safety profile of new medications. If pre and post safety measures are obtained, then under the null hypothesis, the variables are exchangeable and the distribution of their difference is symmetric about a zero median. Large pre-post differences are the major concern of a safety assessment. The discarded small observations are not particularly relevant to safety and can reduce power to detect important asymmetry. The new test was utilized on data from an on-road driving study performed to determine if a hypnotic, a drug used to promote sleep, has next day residual effects. Copyright © 2012 John Wiley & Sons, Ltd.
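
    A sketch of the maximally selected McNemar statistic as described in this abstract is given below; the thresholds are taken as quantiles of |d| and the p-value comes from sign-flip resampling under symmetry, since the exact null distribution and tables are given in the paper and are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)


    def max_mcnemar(d, thresholds):
        """Maximally selected McNemar statistic for pre-post differences d."""
        stats = []
        for t in thresholds:
            kept = d[np.abs(d) > t]
            n_pos, n_neg = (kept > 0).sum(), (kept < 0).sum()
            if n_pos + n_neg > 0:
                stats.append((n_pos - n_neg) ** 2 / (n_pos + n_neg))
        return max(stats) if stats else 0.0


    # Pre-post safety differences: mostly small noise, a few large positive shifts.
    d = np.concatenate([rng.normal(0, 1, 180), rng.normal(4, 1, 20)])
    thr = np.quantile(np.abs(d), [0.0, 0.25, 0.5, 0.75, 0.9])

    observed = max_mcnemar(d, thr)

    # Sign-flip resampling under the null of symmetry about zero.
    null = [max_mcnemar(d * rng.choice([-1, 1], d.size), thr) for _ in range(2000)]
    print("MM statistic:", observed, " p-value:", np.mean(np.array(null) >= observed))
    ```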

  9. Zero field reversal probability in thermally assisted magnetization reversal

    NASA Astrophysics Data System (ADS)

    Prasetya, E. B.; Utari; Purnama, B.

    2017-11-01

    This paper discusses the zero-field reversal probability in thermally assisted magnetization reversal (TAMR). The appearance of a reversal probability at zero field is investigated through micromagnetic simulation by solving the stochastic Landau-Lifshitz-Gilbert (LLG) equation. A perpendicular-anisotropy magnetic dot of 50×50×20 nm³ is considered as a single-cell storage element of magnetic random access memory (MRAM). Thermally assisted magnetization reversal was performed by cooling the writing process from near the Curie point to room temperature over 20 runs with different randomly magnetized initial states. The results show that the reversal probability under zero magnetic field decreased with increasing energy barrier. The highest zero-field switching probability of 55% was attained for an energy barrier of 60 k_BT, which corresponds to a switching field of 150 Oe, and the reversal probability became zero at an energy barrier of 2348 k_BT.

  10. A New Approach to Extreme Value Estimation Applicable to a Wide Variety of Random Variables

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.

    1997-01-01

    Designing reliable structures requires an estimate of the maximum and minimum values (i.e., strength and load) that may be encountered in service. Yet designs based on very extreme values (to ensure safety) can result in extra material usage and, hence, uneconomic systems. In aerospace applications, severe over-design cannot be tolerated, making it almost mandatory to design closer to the assumed limits of the design random variables. The issue then is predicting extreme values that are practical, i.e., neither too conservative nor non-conservative. Obtaining design values by employing safety factors is well known to often result in overly conservative designs. Safety factor values have historically been selected rather arbitrarily, often lacking a sound rational basis. The question of how safe a design needs to be has led design theorists to probabilistic and statistical methods. The so-called three-sigma approach is one such method and has been described as the first step in utilizing information about the data dispersion. However, this method is based on the assumption that the random variable is dispersed symmetrically about the mean and is essentially limited to normally distributed random variables. Use of this method can therefore result in unsafe or overly conservative design allowables if the common assumption of normality is incorrect.

  11. From Zero Energy Buildings to Zero Energy Districts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polly, Ben; Kutscher, Chuck; Macumber, Dan

    Some U.S. cities are planning advanced districts that have goals for zero energy, water, waste, and/or greenhouse gas emissions. From an energy perspective, zero energy districts present unique opportunities to cost-effectively achieve high levels of energy efficiency and renewable energy penetration across a collection of buildings that may be infeasible at the individual building scale. These high levels of performance are accomplished through district energy systems that harness renewable and wasted energy at large scales and flexible building loads that coordinate with variable renewable energy supply. Unfortunately, stakeholders face a lack of documented processes, tools, and best practices to assist them in achieving zero energy districts. The National Renewable Energy Laboratory (NREL) is partnering on two new district projects in Denver: the National Western Center and the Sun Valley Neighborhood. We are working closely with project stakeholders in their zero energy master planning efforts to develop the resources needed to resolve barriers and create replicable processes to support future zero energy district efforts across the United States. Initial results of these efforts include the identification and description of key zero energy district design principles (maximizing building efficiency, solar potential, renewable thermal energy, and load control), economic drivers, and master planning principles. The work has also resulted in NREL making initial enhancements to the U.S. Department of Energy's open source building energy modeling platform (OpenStudio and EnergyPlus) with the long-term goal of supporting the design and optimization of energy districts.

  12. Raw and Central Moments of Binomial Random Variables via Stirling Numbers

    ERIC Educational Resources Information Center

    Griffiths, Martin

    2013-01-01

    We consider here the problem of calculating the moments of binomial random variables. It is shown how formulae for both the raw and the central moments of such random variables may be obtained in a recursive manner utilizing Stirling numbers of the first kind. Suggestions are also provided as to how students might be encouraged to explore this…

  13. Computing approximate random Delta v magnitude probability densities. [for spacecraft trajectory correction

    NASA Technical Reports Server (NTRS)

    Chadwick, C.

    1984-01-01

    This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three component Cartesian vector each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximates are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
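
    The quantities this algorithm approximates are easy to obtain by brute-force Monte Carlo, which the paper uses as its reference. The NumPy sketch below assumes arbitrary per-axis standard deviations and simply samples the Delta v magnitude; it does not reproduce the paper's closed-form approximations.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    sigma = np.array([1.0, 0.6, 0.25])      # unequal per-axis standard deviations, m/s
    samples = rng.normal(0.0, sigma, size=(200_000, 3))
    dv = np.linalg.norm(samples, axis=1)    # magnitude of the TCM delta-v vector

    print("mean |dv|:", dv.mean())
    print("std  |dv|:", dv.std())
    # Points of the inverse cumulative distribution (e.g. propellant sizing quantiles).
    for q in (0.5, 0.9, 0.99):
        print(f"{int(q * 100)}th percentile:", np.quantile(dv, q))
    ```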

  14. Testing concordance of instrumental variable effects in generalized linear models with application to Mendelian randomization

    PubMed Central

    Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li

    2014-01-01

    Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effect in observational studies. Built on structural mean models, considerable work has recently been developed for consistent estimation of the causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments. This has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provide valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158

  15. Exact solution of mean-field plus an extended T = 1 nuclear pairing Hamiltonian in the seniority-zero symmetric subspace

    NASA Astrophysics Data System (ADS)

    Pan, Feng; Ding, Xiaoxue; Launey, Kristina D.; Dai, Lianrong; Draayer, Jerry P.

    2018-05-01

    An extended pairing Hamiltonian that describes multi-pair interactions among isospin T = 1 and angular momentum J = 0 neutron-neutron, proton-proton, and neutron-proton pairs in a spherical mean field, such as the spherical shell model, is proposed based on the standard T = 1 pairing formalism. The advantage of the model lies in the fact that numerical solutions within the seniority-zero symmetric subspace can be obtained more easily and with less computational time than those calculated from the mean-field plus standard T = 1 pairing model. Thus, large-scale calculations within the seniority-zero symmetric subspace of the model are feasible. As an example of the application, the average neutron-proton interaction in even-even N ∼ Z nuclei that can be suitably described in the f5pg9 shell is estimated in the present model, with a focus on the role of np-pairing correlations.

  16. On the fluctuations of sums of independent random variables.

    PubMed

    Feller, W

    1969-07-01

    If X_1, X_2, … are independent random variables with zero expectation and finite variances, the cumulative sums S_n are, on the average, of the order of magnitude s_n, where s_n^2 = E(S_n^2). The occasional maxima of the ratios S_n/s_n are surprisingly large and the problem is to estimate the extent of their probable fluctuations. Specifically, let S_n^* = (S_n − b_n)/a_n, where {a_n} and {b_n} are two numerical sequences. For any interval I, denote by p(I) the probability that the event S_n^* ∈ I occurs for infinitely many n. Under mild conditions on {a_n} and {b_n}, it is shown that p(I) equals 0 or 1 according as a certain series converges or diverges. To obtain the upper limit of S_n/a_n, one has to set b_n = ±ε a_n, but finer results are obtained with smaller b_n. No assumptions concerning the underlying distributions are made; the criteria explain structurally which features of {X_n} affect the fluctuations, but for concrete results something about P{S_n > a_n} must be known. For example, a complete solution is possible when the X_n are normal, replacing the classical law of the iterated logarithm. Further concrete estimates may be obtained by combining the new criteria with some recently developed limit theorems.

  17. Random field theory to interpret the spatial variability of lacustrine soils

    NASA Astrophysics Data System (ADS)

    Russo, Savino; Vessia, Giovanna

    2015-04-01

    Lacustrine soils are Quaternary soils, dated from the Pleistocene to the Holocene, generated in low-energy depositional environments and characterized by a mixture of clays, sands and silts with alternations of finer and coarser grain-size layers. They are often encountered at shallow depth, filling several tens of meters of tectonic or erosive basins typically located in internal Apennine areas. The lacustrine deposits are often locally interbedded with detritic soils resulting from the failure of surrounding reliefs. Their heterogeneous lithology is associated with high spatial variability of physical and mechanical properties along both horizontal and vertical directions. The deterministic approach is still commonly adopted to accomplish the mechanical characterization of these heterogeneous soils, where undisturbed sampling is practically not feasible (if the incoherent fraction is prevalent) or not spatially representative (if the cohesive fraction prevails). The deterministic approach consists of performing in situ tests, like Standard Penetration Tests (SPT) or Cone Penetration Tests (CPT), and deriving design parameters through "expert judgment" interpretation of the measured profiles. These readings of tip and lateral resistances (Rp and RL, respectively) are almost continuous but highly variable in soil classification according to Schmertmann (1978). Thus, neglecting the spatial variability cannot be the best strategy to estimate spatially representative values of the physical and mechanical parameters of lacustrine soils to be used for engineering applications. Hereafter, a method to describe the spatial variability structure of the aforementioned measured profiles is presented. It is based on the theory of Random Fields (Vanmarcke 1984) applied to vertical readings of Rp measures from mechanical CPTs. The proposed method relies on the application of regression analysis, by which the spatial mean trend and fluctuations about this trend are derived. Moreover, the

  18. Modeling continuous covariates with a "spike" at zero: Bivariate approaches.

    PubMed

    Jenkner, Carolin; Lorenz, Eva; Becher, Heiko; Sauerbrei, Willi

    2016-07-01

    In epidemiology and clinical research, predictors often take value zero for a large amount of observations while the distribution of the remaining observations is continuous. These predictors are called variables with a spike at zero. Examples include smoking or alcohol consumption. Recently, an extension of the fractional polynomial (FP) procedure, a technique for modeling nonlinear relationships, was proposed to deal with such situations. To indicate whether or not a value is zero, a binary variable is added to the model. In a two stage procedure, called FP-spike, the necessity of the binary variable and/or the continuous FP function for the positive part are assessed for a suitable fit. In univariate analyses, the FP-spike procedure usually leads to functional relationships that are easy to interpret. This paper introduces four approaches for dealing with two variables with a spike at zero (SAZ). The methods depend on the bivariate distribution of zero and nonzero values. Bi-Sep is the simplest of the four bivariate approaches. It uses the univariate FP-spike procedure separately for the two SAZ variables. In Bi-D3, Bi-D1, and Bi-Sub, proportions of zeros in both variables are considered simultaneously in the binary indicators. Therefore, these strategies can account for correlated variables. The methods can be used for arbitrary distributions of the covariates. For illustration and comparison of results, data from a case-control study on laryngeal cancer, with smoking and alcohol intake as two SAZ variables, is considered. In addition, a possible extension to three or more SAZ variables is outlined. A combination of log-linear models for the analysis of the correlation in combination with the bivariate approaches is proposed. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. 40 CFR 180.5 - Zero tolerances.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Zero tolerances. 180.5 Section 180.5... EXEMPTIONS FOR PESTICIDE CHEMICAL RESIDUES IN FOOD Definitions and Interpretative Regulations § 180.5 Zero tolerances. A zero tolerance means that no amount of the pesticide chemical may remain on the raw...

  20. 40 CFR 180.5 - Zero tolerances.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Zero tolerances. 180.5 Section 180.5... EXEMPTIONS FOR PESTICIDE CHEMICAL RESIDUES IN FOOD Definitions and Interpretative Regulations § 180.5 Zero tolerances. A zero tolerance means that no amount of the pesticide chemical may remain on the raw...

  1. 40 CFR 180.5 - Zero tolerances.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Zero tolerances. 180.5 Section 180.5... EXEMPTIONS FOR PESTICIDE CHEMICAL RESIDUES IN FOOD Definitions and Interpretative Regulations § 180.5 Zero tolerances. A zero tolerance means that no amount of the pesticide chemical may remain on the raw...

  2. THE EFFECT OF HORMONE THERAPY ON MEAN BLOOD PRESSURE AND VISIT-TO-VISIT BLOOD PRESSURE VARIABILITY IN POSTMENOPAUSAL WOMEN: RESULTS FROM THE WOMEN’S HEALTH INITIATIVE RANDOMIZED CONTROLLED TRIALS

    PubMed Central

    Shimbo, Daichi; Wang, Lu; Lamonte, Michael J.; Allison, Matthew; Wellenius, Gregory A.; Bavry, Anthony A.; Martin, Lisa W.; Aragaki, Aaron; Newman, Jonathan D.; Swica, Yael; Rossouw, Jacques E.; Manson, JoAnn E.; Wassertheil-Smoller, Sylvia

    2014-01-01

    Objectives Mean and visit-to-visit variability (VVV) of blood pressure are associated with an increased cardiovascular disease risk. We examined the effect of hormone therapy on mean and VVV of blood pressure in postmenopausal women from the Women’s Health Initiative (WHI) randomized controlled trials. Methods Blood pressure was measured at baseline and annually in the two WHI hormone therapy trials in which 10,739 and 16,608 postmenopausal women were randomized to conjugated equine estrogens (CEE, 0.625 mg/day) or placebo, and CEE plus medroxyprogesterone acetate (MPA, 2.5 mg/day) or placebo, respectively. Results At the first annual visit (Year 1), mean systolic blood pressure was 1.04 mmHg (95% CI 0.58, 1.50) and 1.35 mmHg (95% CI 0.99, 1.72) higher in the CEE and CEE+MPA arms respectively compared to corresponding placebos. These effects remained stable after Year 1. CEE also increased VVV of systolic blood pressure (ratio of VVV in CEE vs. placebo, 1.03, P<0.001), whereas CEE+MPA did not (ratio of VVV in CEE+MPA vs. placebo, 1.01, P=0.20). After accounting for study drug adherence, the effects of CEE and CEE+MPA on mean systolic blood pressure increased at Year 1, and the differences in the CEE and CEE+MPA arms vs. placebos also continued to increase after Year 1. Further, both CEE and CEE+MPA significantly increased VVV of systolic blood pressure (ratio of VVV in CEE vs. placebo, 1.04, P<0.001; ratio of VVV in CEE+MPA vs. placebo, 1.05, P<0.001). Conclusions Among postmenopausal women, CEE and CEE+MPA at conventional doses increased mean and VVV of systolic blood pressure. PMID:24991872

  3. Quantitation of Bone Growth Rate Variability in Rats Exposed to Micro-(near zero G) and Macrogravity (2G)

    NASA Technical Reports Server (NTRS)

    Bromage, Timothy G.; Doty, Stephen B.; Smolyar, Igor; Holton, Emily

    1997-01-01

    Our stated primary objective is to quantify the growth rate variability of rat lamellar bone exposed to micro- (near zero G: e.g., Cosmos 1887 & 2044; SLS-1 & SLS-2) and macrogravity (2G). The primary significance of the proposed work is that an elegant method will be established that unequivocally characterizes the morphological consequences of gravitational factors on developing bone. The integrity of this objective depends upon our successful preparation of thin sections suitable for imaging individual bone lamellae, and our imaging and quantitation of growth rate variability in populations of lamellae from individual bone samples.

  4. Marginalized zero-inflated negative binomial regression with application to dental caries

    PubMed Central

    Preisser, John S.; Das, Kalyan; Long, D. Leann; Divaris, Kimon

    2015-01-01

    The zero-inflated negative binomial regression model (ZINB) is often employed in diverse fields such as dentistry, health care utilization, highway safety, and medicine to examine relationships between exposures of interest and overdispersed count outcomes exhibiting many zeros. The regression coefficients of ZINB have latent class interpretations for a susceptible subpopulation at risk for the disease/condition under study with counts generated from a negative binomial distribution and for a non-susceptible subpopulation that provides only zero counts. The ZINB parameters, however, are not well-suited for estimating overall exposure effects, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. In this paper, a marginalized zero-inflated negative binomial regression (MZINB) model for independent responses is proposed to model the population marginal mean count directly, providing straightforward inference for overall exposure effects based on maximum likelihood estimation. Through simulation studies, the finite sample performance of MZINB is compared to marginalized zero-inflated Poisson, Poisson, and negative binomial regression. The MZINB model is applied in the evaluation of a school-based fluoride mouthrinse program on dental caries in 677 children. PMID:26568034

  5. Zero-inflated spatio-temporal models for disease mapping.

    PubMed

    Torabi, Mahmoud

    2017-05-01

    In this paper, our aim is to analyze geographical and temporal variability of disease incidence when spatio-temporal count data have excess zeros. To that end, we consider random effects in zero-inflated Poisson models to investigate geographical and temporal patterns of disease incidence. Spatio-temporal models that employ conditionally autoregressive smoothing across the spatial dimension and B-spline smoothing over the temporal dimension are proposed. The analysis of these complex models is computationally difficult from the frequentist perspective. On the other hand, the advent of the Markov chain Monte Carlo algorithm has made the Bayesian analysis of complex models computationally convenient. Recently developed data cloning method provides a frequentist approach to mixed models that is also computationally convenient. We propose to use data cloning, which yields to maximum likelihood estimation, to conduct frequentist analysis of zero-inflated spatio-temporal modeling of disease incidence. One of the advantages of the data cloning approach is that the prediction and corresponding standard errors (or prediction intervals) of smoothing disease incidence over space and time is easily obtained. We illustrate our approach using a real dataset of monthly children asthma visits to hospital in the province of Manitoba, Canada, during the period April 2006 to March 2010. Performance of our approach is also evaluated through a simulation study. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Unidimensional factor models imply weaker partial correlations than zero-order correlations.

    PubMed

    van Bork, Riet; Grasman, Raoul P P P; Waldorp, Lourens J

    2018-06-01

    In this paper we present a new implication of the unidimensional factor model. We prove that the partial correlation between two observed variables that load on one factor given any subset of other observed variables that load on this factor lies between zero and the zero-order correlation between these two observed variables. We implement this result in an empirical bootstrap test that rejects the unidimensional factor model when partial correlations are identified that are either stronger than the zero-order correlation or have a different sign than the zero-order correlation. We demonstrate the use of the test in an empirical data example with data consisting of fourteen items that measure extraversion.
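
    A minimal numpy check of the stated implication on simulated one-factor data: the partial correlation of two indicators given a third should fall between zero and their zero-order correlation. This is not the authors' empirical bootstrap test, only an illustration; sample size and loadings are arbitrary.

```python
# Minimal check of the implication on simulated one-factor data: the partial
# correlation of two indicators given a third should lie between 0 and their
# zero-order correlation. This is not the authors' bootstrap test.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
eta = rng.normal(size=n)                      # common factor
loadings = np.array([0.8, 0.7, 0.6])
X = eta[:, None] * loadings + rng.normal(scale=0.5, size=(n, 3))

def partial_corr(x, y, z):
    """Correlation of x and y after conditioning on a single variable z."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

r12 = np.corrcoef(X[:, 0], X[:, 1])[0, 1]
r12_3 = partial_corr(X[:, 0], X[:, 1], X[:, 2])
print(f"zero-order r12 = {r12:.3f}, partial r12.3 = {r12_3:.3f}")
# Under a unidimensional factor model, 0 <= r12.3 <= r12 (up to sampling error).
```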

  7. Randomized trial of intermittent or continuous amnioinfusion for variable decelerations.

    PubMed

    Rinehart, B K; Terrone, D A; Barrow, J H; Isler, C M; Barrilleaux, P S; Roberts, W E

    2000-10-01

    To determine whether continuous or intermittent bolus amnioinfusion is more effective in relieving variable decelerations. Patients with repetitive variable decelerations were randomized to an intermittent bolus or continuous amnioinfusion. The intermittent bolus infusion group received boluses of 500 mL of normal saline, each over 30 minutes, with boluses repeated if variable decelerations recurred. The continuous infusion group received a bolus infusion of 500 mL of normal saline over 30 minutes and then 3 mL per minute until delivery occurred. The ability of the amnioinfusion to abolish variable decelerations was analyzed, as were maternal demographic and pregnancy outcome variables. Power analysis indicated that 64 patients would be required. Thirty-five patients were randomized to intermittent infusion and 30 to continuous infusion. There were no differences between groups in terms of maternal demographics, gestational age, delivery mode, neonatal outcome, median time to resolution of variable decelerations, or the number of times variable decelerations recurred. The median volume infused in the intermittent infusion group (500 mL) was significantly less than that in the continuous infusion group (905 mL, P =.003). Intermittent bolus amnioinfusion is as effective as continuous infusion in relieving variable decelerations in labor. Further investigation is necessary to determine whether either of these techniques is associated with increased occurrence of rare complications such as cord prolapse or uterine rupture.

  8. What Does It Mean to Do Something Randomly?

    ERIC Educational Resources Information Center

    Liu, Yating; Enderson, Mary C.

    2016-01-01

    A mysterious conflict of solutions emerged when a group of tenth- and eleventh-grade students were studying a seemingly ordinary problem on combination and probability. By investigating the mysterious "conflicts" caused by multiple randomization procedures, students will gain a deeper understanding of what it means to perform a task…

  9. Marginalized zero-altered models for longitudinal count data.

    PubMed

    Tabb, Loni Philip; Tchetgen, Eric J Tchetgen; Wellenius, Greg A; Coull, Brent A

    2016-10-01

    Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias.

  10. Marginalized zero-altered models for longitudinal count data

    PubMed Central

    Tabb, Loni Philip; Tchetgen, Eric J. Tchetgen; Wellenius, Greg A.; Coull, Brent A.

    2015-01-01

    Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias. PMID:27867423

  11. A review on models for count data with extra zeros

    NASA Astrophysics Data System (ADS)

    Zamri, Nik Sarah Nik; Zamzuri, Zamira Hasanah

    2017-04-01

    Zero-inflated models are typically used in modelling count data with excess zeros. The extra zeros may be structural or may occur at random by chance. These types of data are commonly found in various disciplines such as finance, insurance, biomedicine, econometrics, ecology, and the health sciences. As found in the literature, the most popular zero-inflated models are the zero-inflated Poisson and the zero-inflated negative binomial. Recently, more complex models have been developed to account for overdispersion and unobserved heterogeneity, and more extended distributions have also been considered for modelling data with this feature. In this paper, we review the related literature and provide a summary of recent developments in models for count data with extra zeros.

  12. Unbiased split variable selection for random survival forests using maximally selected rank statistics.

    PubMed

    Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas

    2017-04-15

    The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives, if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.
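
    A minimal sketch of the split criterion named in the abstract, for a single candidate variable: scan cutpoints and keep the maximal log-rank statistic. The p-value approximations and the full forest (available, for instance, in the R package ranger) are omitted; data are simulated and the lifelines package is assumed to be installed.

```python
# Minimal sketch of a maximally selected rank statistic for one candidate split
# variable: scan cutpoints, compute the log-rank statistic for each binary split,
# and keep the maximum. The p-value approximations discussed in the paper are
# omitted here; data are simulated.
import numpy as np
from lifelines.statistics import logrank_test  # assumes lifelines is installed

rng = np.random.default_rng(2)
n = 300
x = rng.uniform(size=n)                                       # candidate split variable
time = rng.exponential(scale=np.where(x > 0.5, 5.0, 10.0))    # non-null covariate effect
event = rng.random(n) < 0.8                                   # roughly 20% censoring

best_stat, best_cut = -np.inf, None
for cut in np.quantile(x, np.linspace(0.1, 0.9, 17)):         # candidate cutpoints
    left = x <= cut
    res = logrank_test(time[left], time[~left],
                       event_observed_A=event[left], event_observed_B=event[~left])
    if res.test_statistic > best_stat:
        best_stat, best_cut = res.test_statistic, cut

print(f"maximally selected log-rank statistic {best_stat:.2f} at cutpoint {best_cut:.2f}")
```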

  13. Subharmonic response of a single-degree-of-freedom nonlinear vibro-impact system to a narrow-band random excitation.

    PubMed

    Haiwu, Rong; Wang, Xiangdong; Xu, Wei; Fang, Tong

    2009-08-01

    The subharmonic response of a single-degree-of-freedom nonlinear vibro-impact oscillator with a one-sided barrier to narrow-band random excitation is investigated. The narrow-band random excitation used here is a filtered Gaussian white noise. The analysis is based on a special Zhuravlev transformation, which reduces the system to one without impacts, or velocity jumps, thereby permitting the application of asymptotic averaging over the "fast" variables. The averaged stochastic equations are solved exactly by the method of moments for the mean-square response amplitude in the case of a linear system with zero offset. A perturbation-based moment closure scheme is proposed and an approximate formula for the mean-square amplitude is obtained for the case of a linear system with nonzero offset. The perturbation-based moment closure scheme is used once again to obtain the algebraic equation for the mean-square amplitude of the response in the nonlinear case. The effects of damping, detuning, nonlinear intensity, bandwidth, and the magnitude of the random excitation are analyzed. The theoretical analyses are verified by numerical results. Theoretical analyses and numerical simulations show that the peak amplitudes may be strongly reduced at large detunings or large nonlinear intensity.

  14. Zero Tolerance: Advantages and Disadvantages. Research Brief

    ERIC Educational Resources Information Center

    Walker, Karen

    2009-01-01

    What are the positives and negatives of zero tolerance? What should be considered when examining a school's program? Although there are no definitive definitions of zero tolerance, two commonly used ones are as follows: "Zero tolerance means that a school will automatically and severely punish a student for a variety of infractions" (American Bar…

  15. Interactions of Mean Climate Change and Climate Variability on Food Security Extremes

    NASA Technical Reports Server (NTRS)

    Ruane, Alexander C.; McDermid, Sonali; Mavromatis, Theodoros; Hudson, Nicholas; Morales, Monica; Simmons, John; Prabodha, Agalawatte; Ahmad, Ashfaq; Ahmad, Shakeel; Ahuja, Laj R.

    2015-01-01

    Recognizing that climate change will affect agricultural systems both through mean changes and through shifts in climate variability and associated extreme events, we present preliminary analyses of climate impacts from a network of 1137 crop modeling sites contributed to the AgMIP Coordinated Climate-Crop Modeling Project (C3MP). At each site sensitivity tests were run according to a common protocol, which enables the fitting of crop model emulators across a range of carbon dioxide, temperature, and water (CTW) changes. C3MP can elucidate several aspects of these changes and quantify crop responses across a wide diversity of farming systems. Here we test the hypothesis that climate change and variability interact in three main ways. First, mean climate changes can affect yields across an entire time period. Second, extreme events (when they do occur) may be more sensitive to climate changes than a year with normal climate. Third, mean climate changes can alter the likelihood of climate extremes, leading to more frequent seasons with anomalies outside of the expected conditions for which management was designed. In this way, shifts in climate variability can result in an increase or reduction of mean yield, as extreme climate events tend to have lower yield than years with normal climate. C3MP maize simulations across 126 farms reveal a clear indication and quantification (as response functions) of mean climate impacts on mean yield and clearly show that mean climate changes will directly affect the variability of yield. Yield reductions from increased climate variability are not as clear as crop models tend to be less sensitive to dangers on the cool and wet extremes of climate variability, likely underestimating losses from water-logging, floods, and frosts.

  16. Fuzzy Random λ-Mean SAD Portfolio Selection Problem: An Ant Colony Optimization Approach

    NASA Astrophysics Data System (ADS)

    Thakur, Gour Sundar Mitra; Bhattacharyya, Rupak; Mitra, Swapan Kumar

    2010-10-01

    To reach an investment goal, one has to select a combination of securities from portfolios containing a large number of securities. The past records of each security alone do not guarantee its future return. As there are many uncertain factors that directly or indirectly influence the stock market, and some newer stock markets do not have enough historical data, experts' expectations and experience must be combined with past records to generate an effective portfolio selection model. In this paper the return of a security is assumed to be a Fuzzy Random Variable Set (FRVS), where returns are sets of random numbers which are in turn fuzzy numbers. A new λ-Mean Semi Absolute Deviation (λ-MSAD) portfolio selection model is developed. The subjective opinions of the investors about the rate of return of each security are taken into consideration by introducing a pessimistic-optimistic parameter vector λ. The λ-MSAD model is preferred as it uses the absolute deviation of the portfolio's rate of return, instead of the variance, as the measure of risk. As the model can be reduced to a Linear Programming Problem (LPP), it can be solved much faster than quadratic programming problems. Ant Colony Optimization (ACO), a paradigm for designing meta-heuristic algorithms for combinatorial optimization problems, is used for solving the portfolio selection problem. Data from the BSE are used for illustration.
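
    The sketch below covers only the crisp core of this formulation: a mean semi-absolute deviation portfolio expressed as a linear program and solved with scipy's linprog. The fuzzy random returns, the λ weighting, and the ACO solver of the paper are not reproduced; the returns and the target return are simulated.

```python
# Hedged sketch of the crisp core of an MSAD portfolio model: minimize the mean
# semi-absolute deviation of portfolio return subject to a target mean return,
# formulated as an LP. Fuzzy-random returns, the lambda weighting and the ACO
# solver of the paper are not reproduced; all inputs are simulated/hypothetical.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
T, n = 250, 6                                # scenarios (days) and securities
R = rng.normal(0.0005, 0.01, size=(T, n))    # simulated daily returns
mu = R.mean(axis=0)
target = np.quantile(mu, 0.6)                # hypothetical required mean return

# Decision variables: w (n portfolio weights) and d (T downside deviations).
c = np.concatenate([np.zeros(n), np.ones(T) / T])           # minimize mean deviation
# d_t >= (mu - R_t) @ w  ->  (mu - R_t) @ w - d_t <= 0
A_ub = np.hstack([mu - R, -np.eye(T)])
b_ub = np.zeros(T)
# mean return constraint: -mu @ w <= -target
A_ub = np.vstack([A_ub, np.concatenate([-mu, np.zeros(T)])])
b_ub = np.append(b_ub, -target)
A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]   # weights sum to 1
b_eq = [1.0]
bounds = [(0, 1)] * n + [(0, None)] * T                     # long-only, d >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("optimal weights:", np.round(res.x[:n], 3))
```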

  17. Zero-gravity movement studies

    NASA Technical Reports Server (NTRS)

    Badler, N. I.; Fishwick, P.; Taft, N.; Agrawala, M.

    1985-01-01

    The use of computer graphics to simulate the movement of articulated animals and mechanisms has a number of uses ranging over many fields. Human motion simulation systems can be useful in education, medicine, anatomy, physiology, and dance. In biomechanics, computer displays help to understand and analyze performance. Simulations can be used to help understand the effect of external or internal forces. Similarly, zero-gravity simulation systems should provide a means of designing and exploring the capabilities of hypothetical zero-gravity situations before actually carrying out such actions. The advantage of using a simulation of the motion is that one can experiment with variations of a maneuver before attempting to teach it to an individual. The zero-gravity motion simulation problem can be divided into two broad areas: human movement and behavior in zero-gravity, and simulation of articulated mechanisms.

  18. Monthly means of selected climate variables for 1985 - 1989

    NASA Technical Reports Server (NTRS)

    Schubert, S.; Wu, C.-Y.; Zero, J.; Schemm, J.-K.; Park, C.-K.; Suarez, M.

    1992-01-01

    Meteorologists are accustomed to viewing instantaneous weather maps, since these contain the most relevant information for the task of producing short-range weather forecasts. Climatologists, on the other hand, tend to deal with long-term means, which portray the average climate. The recent emphasis on dynamical extended-range forecasting and, in particular, on measuring and predicting short-term climate change makes it important that we become accustomed to looking at variations on monthly and longer time scales. A convenient tool is provided for researchers to familiarize themselves with the variability which occurs in selected parameters on these time scales. The format of the document was chosen to facilitate the intercomparison of various parameters and highlight the year-to-year variability in monthly means.

  19. Use of allele scores as instrumental variables for Mendelian randomization

    PubMed Central

    Burgess, Stephen; Thompson, Simon G

    2013-01-01

    Background An allele score is a single variable summarizing multiple genetic variants associated with a risk factor. It is calculated as the total number of risk factor-increasing alleles for an individual (unweighted score), or the sum of weights for each allele corresponding to estimated genetic effect sizes (weighted score). An allele score can be used in a Mendelian randomization analysis to estimate the causal effect of the risk factor on an outcome. Methods Data were simulated to investigate the use of allele scores in Mendelian randomization where conventional instrumental variable techniques using multiple genetic variants demonstrate ‘weak instrument’ bias. The robustness of estimates using the allele score to misspecification (for example non-linearity, effect modification) and to violations of the instrumental variable assumptions was assessed. Results Causal estimates using a correctly specified allele score were unbiased with appropriate coverage levels. The estimates were generally robust to misspecification of the allele score, but not to instrumental variable violations, even if the majority of variants in the allele score were valid instruments. Using a weighted rather than an unweighted allele score increased power, but the increase was small when genetic variants had similar effect sizes. Naive use of the data under analysis to choose which variants to include in an allele score, or for deriving weights, resulted in substantial biases. Conclusions Allele scores enable valid causal estimates with large numbers of genetic variants. The stringency of criteria for genetic variants in Mendelian randomization should be maintained for all variants in an allele score. PMID:24062299
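
    A minimal numpy sketch of the idea on simulated data: build an unweighted allele score from several variants and use it as a single instrument via the ratio (Wald) estimator. This is not the authors' simulation design; the variant effects, allele frequencies and confounding structure are hypothetical.

```python
# Hedged sketch on simulated data: an unweighted allele score used as a single
# instrument via the ratio (Wald) estimator. Effect sizes, allele frequencies and
# the confounding structure are hypothetical, not the authors' simulation design.
import numpy as np

rng = np.random.default_rng(4)
n, k = 20000, 15
G = rng.binomial(2, 0.3, size=(n, k))          # genotypes: number of risk alleles (0/1/2)
alpha = rng.uniform(0.02, 0.08, size=k)        # per-allele effects on the risk factor
u = rng.normal(size=n)                         # unobserved confounder
x = G @ alpha + 0.5 * u + rng.normal(size=n)   # risk factor (exposure)
y = 0.3 * x + 0.5 * u + rng.normal(size=n)     # outcome; true causal effect is 0.3

score = G.sum(axis=1)                          # unweighted allele score
wald = np.cov(score, y)[0, 1] / np.cov(score, x)[0, 1]   # reduced form / first stage
print(f"ratio (Wald) IV estimate: {wald:.3f}  (true effect 0.3)")
```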

  20. Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros

    ERIC Educational Resources Information Center

    Bancroft, Stacie L.; Bourret, Jason C.

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time.…
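
    The record describes Excel macros; the sketch below is a Python stand-in (not the authors' implementation) that generates random-ratio requirements with a constant probability per response and random-interval times with a constant probability per unit time.

```python
# Python stand-in for the idea (not the authors' Excel macros): generate the
# response requirements of a random-ratio (RR) schedule and the intervals of a
# random-interval (RI) schedule from a constant probability per response / per tick.
import numpy as np

rng = np.random.default_rng(5)

def random_ratio(mean_ratio, n_reinforcers):
    """Responses required per reinforcer; each response pays off with p = 1/mean_ratio."""
    return rng.geometric(1.0 / mean_ratio, size=n_reinforcers)

def random_interval(mean_interval_s, n_reinforcers, tick_s=1.0):
    """Seconds until reinforcement becomes available; constant probability per tick."""
    return rng.geometric(tick_s / mean_interval_s, size=n_reinforcers) * tick_s

print("RR 10 requirements:", random_ratio(10, 8))
print("RI 30 s intervals: ", random_interval(30, 8))
```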

  1. Variable versus conventional lung protective mechanical ventilation during open abdominal surgery: study protocol for a randomized controlled trial.

    PubMed

    Spieth, Peter M; Güldner, Andreas; Uhlig, Christopher; Bluth, Thomas; Kiss, Thomas; Schultz, Marcus J; Pelosi, Paolo; Koch, Thea; Gama de Abreu, Marcelo

    2014-05-02

    General anesthesia usually requires mechanical ventilation, which is traditionally accomplished with constant tidal volumes in volume- or pressure-controlled modes. Experimental studies suggest that the use of variable tidal volumes (variable ventilation) recruits lung tissue, improves pulmonary function and reduces systemic inflammatory response. However, it is currently not known whether patients undergoing open abdominal surgery might benefit from intraoperative variable ventilation. The PROtective VARiable ventilation trial ('PROVAR') is a single-center, randomized controlled trial enrolling 50 patients scheduled for open abdominal surgery expected to last longer than 3 hours. PROVAR compares conventional (non-variable) lung protective ventilation (CV) with variable lung protective ventilation (VV) regarding pulmonary function and inflammatory response. The primary endpoint of the study is the forced vital capacity on the first postoperative day. Secondary endpoints include further lung function tests, plasma cytokine levels, spatial distribution of ventilation assessed by means of electrical impedance tomography and postoperative pulmonary complications. We hypothesize that VV improves lung function and reduces systemic inflammatory response compared to CV in patients receiving mechanical ventilation during general anesthesia for open abdominal surgery longer than 3 hours. PROVAR is the first randomized controlled trial addressing the intra- and postoperative effects of VV on lung function. This study may help to define the role of VV during general anesthesia requiring mechanical ventilation. Clinicaltrials.gov NCT01683578 (registered on September 3, 2012).

  2. Temporal framing and the hidden-zero effect: rate-dependent outcomes on delay discounting.

    PubMed

    Naudé, Gideon P; Kaplan, Brent A; Reed, Derek D; Henley, Amy J; DiGennaro Reed, Florence D

    2018-05-01

    Recent research suggests that presenting time intervals as units (e.g., days) or as specific dates, can modulate the degree to which humans discount delayed outcomes. Another framing effect involves explicitly stating that choosing a smaller-sooner reward is mutually exclusive to receiving a larger-later reward, thus presenting choices as an extended sequence. In Experiment 1, participants (N = 201) recruited from Amazon Mechanical Turk completed the Monetary Choice Questionnaire in a 2 (delay framing) by 2 (zero framing) design. Regression suggested a main effect of delay, but not zero, framing after accounting for other demographic variables and manipulations. We observed a rate-dependent effect for the date-framing group, such that those with initially steep discounting exhibited greater sensitivity to the manipulation than those with initially shallow discounting. Subsequent analyses suggest these effects cannot be explained by regression to the mean. Experiment 2 addressed the possibility that the null effect of zero framing was due to within-subject exposure to the hidden- and explicit-zero conditions. A new Amazon Mechanical Turk sample completed the Monetary Choice Questionnaire in either hidden- or explicit-zero formats. Analyses revealed a main effect of reward magnitude, but not zero framing, suggesting potential limitations to the generality of the hidden-zero effect. © 2018 Society for the Experimental Analysis of Behavior.

  3. Limits on relief through constrained exchange on random graphs

    NASA Astrophysics Data System (ADS)

    LaViolette, Randall A.; Ellebracht, Lory A.; Gieseler, Charles J.

    2007-09-01

    Agents are represented by nodes on a random graph (e.g., “small world”). Each agent is endowed with a zero-mean random value that may be either positive or negative. All agents attempt to find relief, i.e., to reduce the magnitude of that initial value, to zero if possible, through exchanges. The exchange occurs only between the agents that are linked, a constraint that turns out to dominate the results. The exchange process continues until Pareto equilibrium is achieved. Only 40-90% of the agents achieved relief on small-world graphs with mean degree between 2 and 40. Even fewer agents achieved relief on scale-free-like graphs with a truncated power-law degree distribution. The rate at which relief grew with increasing degree was slow, only at most logarithmic for all of the graphs considered; viewed in reverse, the fraction of nodes that achieve relief is resilient to the removal of links.

  4. GMPR: A robust normalization method for zero-inflated count data with application to microbiome sequencing data.

    PubMed

    Chen, Li; Reeve, James; Zhang, Lujun; Huang, Shengbing; Wang, Xuefeng; Chen, Jun

    2018-01-01

    Normalization is the first critical step in microbiome sequencing data analysis used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose the geometric mean of pairwise ratios (GMPR), a simple but effective normalization method, for zero-inflated sequencing data such as microbiome data. Simulation studies and real dataset analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.
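
    A compact numpy sketch of the pairwise-ratio idea as described in the abstract: for each pair of samples, take the median count ratio over taxa that are non-zero in both, then take the geometric mean of those ratios to obtain each sample's size factor. The authors' GMPR R implementation is the reference; this is only an illustration on simulated zero-inflated counts.

```python
# Compact sketch of the geometric-mean-of-pairwise-ratios idea: for each pair of
# samples take the median count ratio over taxa non-zero in both, then take the
# geometric mean of these ratios across samples to get each size factor. The
# authors' GMPR R package is the reference implementation; this is a sketch only.
import numpy as np

def gmpr_size_factors(counts):
    """counts: (n_samples, n_taxa) integer matrix. Returns one size factor per sample."""
    n = counts.shape[0]
    factors = np.empty(n)
    for i in range(n):
        ratios = []
        for j in range(n):
            if i == j:
                continue
            shared = (counts[i] > 0) & (counts[j] > 0)
            if shared.any():
                ratios.append(np.median(counts[i, shared] / counts[j, shared]))
        factors[i] = np.exp(np.mean(np.log(ratios)))   # geometric mean of pairwise ratios
    return factors

rng = np.random.default_rng(6)
counts = rng.negative_binomial(2, 0.3, size=(5, 200)) * (rng.random((5, 200)) < 0.4)
print(np.round(gmpr_size_factors(counts), 3))
```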

  5. Random matrices and condensation into multiple states

    NASA Astrophysics Data System (ADS)

    Sadeghi, Sina; Engel, Andreas

    2018-03-01

    In the present work, we employ methods from statistical mechanics of disordered systems to investigate static properties of condensation into multiple states in a general framework. We aim at showing how typical properties of random interaction matrices play a vital role in manifesting the statistics of condensate states. In particular, an analytical expression for the fraction of condensate states in the thermodynamic limit is provided that confirms the result of the mean number of coexisting species in a random tournament game. We also study the interplay between the condensation problem and zero-sum games with correlated random payoff matrices.

  6. Spatial vs. individual variability with inheritance in a stochastic Lotka-Volterra system

    NASA Astrophysics Data System (ADS)

    Dobramysl, Ulrich; Tauber, Uwe C.

    2012-02-01

    We investigate a stochastic spatial Lotka-Volterra predator-prey model with randomized interaction rates that are either affixed to the lattice sites and quenched, and / or specific to individuals in either population. In the latter situation, we include rate inheritance with mutations from the particles' progenitors. Thus we arrive at a simple model for competitive evolution with environmental variability and selection pressure. We employ Monte Carlo simulations in zero and two dimensions to study the time evolution of both species' densities and their interaction rate distributions. The predator and prey concentrations in the ensuing steady states depend crucially on the environmental variability, whereas the temporal evolution of the individualized rate distributions leads to largely neutral optimization. Contrary to, e.g., linear gene expression models, this system does not experience fixation at extreme values. An approximate description of the resulting data is achieved by means of an effective master equation approach for the interaction rate distribution.

  7. Zero potential vorticity envelopes for the zonal-mean velocity of the Venus/Titan atmospheres

    NASA Technical Reports Server (NTRS)

    Allison, Michael; Del Genio, Anthony D.; Zhou, Wei

    1994-01-01

    The diagnostic analysis of numerical simulations of the Venus/Titan wind regime reveals an overlooked constraint upon the latitudinal structure of their zonal-mean angular momentum. The numerical experiments, as well as the limited planetary observations, are approximately consistent with the hypothesis that within the latitudes bounded by the wind maxima the total Ertel potential vorticity associated with the zonal-mean motion is approximately well mixed with respect to the neutral equatorial value for a stable circulation. The implied latitudinal profile of angular momentum is of the form M ≤ M_e (cos λ)^(2/Ri), where λ is the latitude and Ri the local Richardson number, generally intermediate between the two extremes of uniform angular momentum (Ri → ∞) and uniform angular velocity (Ri = 1). The full range of angular momentum profile variation appears to be realized within the observed meridional-vertical structure of the Venus atmosphere, at least crudely approaching the implied relationship between stratification and zonal velocity there. While not itself indicative of a particular eddy mechanism or specific to atmospheric superrotation, the zero potential vorticity (ZPV) constraint represents a limiting bound for the eddy-mean flow adjustment of a neutrally stable baroclinic circulation and may be usefully applied to the diagnostic analysis of future remote sounding and in situ measurements from planetary spacecraft.

  8. On the distribution of a product of N Gaussian random variables

    NASA Astrophysics Data System (ADS)

    Stojanac, Željka; Suess, Daniel; Kliesch, Martin

    2017-08-01

    The product of Gaussian random variables appears naturally in many applications in probability theory and statistics. It has been known that the distribution of a product of N such variables can be expressed in terms of a Meijer G-function. Here, we compute a similar representation for the corresponding cumulative distribution function (CDF) and provide a power-log series expansion of the CDF based on the theory of the more general Fox H-functions. Numerical computations show that for small values of the argument the CDF of products of Gaussians is well approximated by the lowest orders of this expansion. Analogous results are also shown for the absolute value as well as the square of such products of N Gaussian random variables. For the latter two settings, we also compute the moment generating functions in terms of Meijer G-functions.
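
    A quick Monte Carlo sanity check of the simplest case, N = 2, where the product of two independent standard Gaussians has density K0(|z|)/π (a special case of the Meijer G representation). The paper's general Fox H and power-log expansions are not coded here.

```python
# Quick Monte Carlo check of the simplest case N = 2: the product of two independent
# standard Gaussians has density K0(|z|)/pi (a special case of the Meijer G
# representation). The paper's general Fox-H / power-log expansions are not coded.
import numpy as np
from scipy.special import k0

rng = np.random.default_rng(7)
z = rng.normal(size=1_000_000) * rng.normal(size=1_000_000)

hist, edges = np.histogram(z, bins=200, range=(-4, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
exact = k0(np.abs(centers)) / np.pi          # diverges logarithmically at z = 0
mask = np.abs(centers) > 0.2                 # compare away from the singularity
print(f"max |empirical - exact| for |z| > 0.2: {np.max(np.abs(hist - exact)[mask]):.4f}")
```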

  9. Hurdle models for multilevel zero-inflated data via h-likelihood.

    PubMed

    Molas, Marek; Lesaffre, Emmanuel

    2010-12-30

    Count data often exhibit overdispersion. One type of overdispersion arises when there is an excess of zeros in comparison with the standard Poisson distribution. Zero-inflated Poisson and hurdle models have been proposed to perform a valid likelihood-based analysis to account for the surplus of zeros. Further, data often arise in clustered, longitudinal or multiple-membership settings. The proper analysis needs to reflect the design of a study. Typically random effects are used to account for dependencies in the data. We examine the h-likelihood estimation and inference framework for hurdle models with random effects for complex designs. We extend the h-likelihood procedures to fit hurdle models, thereby extending h-likelihood to truncated distributions. Two applications of the methodology are presented. Copyright © 2010 John Wiley & Sons, Ltd.
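
    The h-likelihood random-effects machinery of the paper is not sketched here; the example below fits a plain hurdle model for independent counts, exploiting the fact that the likelihood factorizes into a logistic part for y > 0 and a zero-truncated Poisson part for the positive counts. Data and coefficients are simulated.

```python
# Hedged sketch of a hurdle model for independent counts (no random effects or
# h-likelihood): the likelihood factorizes into a logistic model for y > 0 and a
# zero-truncated Poisson model for the positive counts, fit separately here.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 2000
x = rng.normal(size=n)
X = sm.add_constant(x)

# Simulate hurdle data: cross the hurdle with prob. expit(0.2 + 0.8x), then draw
# positives from a Poisson(exp(0.5 + 0.3x)) truncated at zero.
p_pos = 1 / (1 + np.exp(-(0.2 + 0.8 * x)))
lam = np.exp(0.5 + 0.3 * x)
y = np.zeros(n, dtype=int)
pos = rng.random(n) < p_pos
while np.any(y[pos] == 0):                       # crude rejection sampler for truncation
    idx = pos & (y == 0)
    y[idx] = rng.poisson(lam[idx])

# Part 1: logistic regression for the hurdle (zero vs positive).
logit_fit = sm.Logit((y > 0).astype(int), X).fit(disp=False)

# Part 2: zero-truncated Poisson MLE on the positive counts.
def ztp_negloglik(beta, X, y):
    lam = np.exp(X @ beta)
    return -np.sum(y * np.log(lam) - lam - gammaln(y + 1) - np.log1p(-np.exp(-lam)))

opt = minimize(ztp_negloglik, x0=np.zeros(2), args=(X[y > 0], y[y > 0]), method="BFGS")
print("hurdle (logit) coefficients:   ", np.round(logit_fit.params, 3))
print("truncated-Poisson coefficients:", np.round(opt.x, 3))
```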

  10. Global mean first-passage times of random walks on complex networks.

    PubMed

    Tejedor, V; Bénichou, O; Voituriez, R

    2009-12-01

    We present a general framework, applicable to a broad class of random walks on complex networks, which provides a rigorous lower bound for the mean first-passage time of a random walker to a target site averaged over its starting position, the so-called global mean first-passage time (GMFPT). This bound is simply expressed in terms of the equilibrium distribution at the target and implies a minimal scaling of the GMFPT with the network size. We show that this minimal scaling, which can be arbitrarily slow, is realized under the simple condition that the random walk is transient at the target site and independently of the small-world, scale-free, or fractal properties of the network. Last, we put forward that the GMFPT to a specific target is not a representative property of the network since the target averaged GMFPT satisfies much more restrictive bounds.
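
    A small numpy/networkx sketch that computes the GMFPT to a chosen target by solving the linear system for mean first-passage times and averaging over starting nodes; it illustrates how strongly the quantity depends on the target (compared here through its stationary weight), but does not re-derive the paper's lower bound.

```python
# Small sketch: compute the global mean first-passage time (GMFPT) to a target node
# on an undirected graph by solving the linear system for mean first-passage times
# and averaging over starting nodes. The paper's lower bound in terms of the
# equilibrium distribution at the target is not re-derived here.
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(200, 3, seed=9)     # example scale-free-like network
A = nx.to_numpy_array(G)
P = A / A.sum(axis=1, keepdims=True)             # simple random-walk transition matrix

def gmfpt(P, target):
    keep = np.arange(P.shape[0]) != target
    Q = P[np.ix_(keep, keep)]                    # transitions among non-target nodes
    T = np.linalg.solve(np.eye(Q.shape[0]) - Q, np.ones(Q.shape[0]))
    return T.mean()                              # average over starting positions

deg = A.sum(axis=1)
pi = deg / deg.sum()                             # equilibrium (stationary) distribution
hub, low = np.argmax(deg), np.argmin(deg)
print(f"GMFPT to hub node        (pi = {pi[hub]:.4f}): {gmfpt(P, hub):8.1f}")
print(f"GMFPT to low-degree node (pi = {pi[low]:.4f}): {gmfpt(P, low):8.1f}")
```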

  11. Extended q -Gaussian and q -exponential distributions from gamma random variables

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2015-05-01

    The family of q -Gaussian and q -exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q -Gaussian and q -exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q -Gaussian and modified q -exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.

  12. Socio-economic variables influencing mean age at marriage in Karnataka and Kerala.

    PubMed

    Prakasam, C P; Upadhyay, R B

    1985-01-01

    "In this paper an attempt was made to study the influence of certain socio-economic variables on the male and the female age at marriage in Karnataka and Kerala [India] for the year 1971. Step-wise regression method has been used to select the predictor variables influencing mean age at marriage. The results reveal that percent female literate...and percent female in labour force...are found to influence female mean age at marriage in Kerala, while the variables for Karnataka were percent female literate..., percent male literate..., and percent urban male population...." excerpt

  13. Properties of behavior under different random ratio and random interval schedules: A parametric study.

    PubMed

    Dembo, M; De Penfold, J B; Ruiz, R; Casalta, H

    1985-03-01

    Four pigeons were trained to peck a key under different values of a temporally defined independent variable (T) and different probabilities of reinforcement (p). Parameter T is a fixed repeating time cycle and p the probability of reinforcement for the first response of each cycle T. Two dependent variables were used: mean response rate and mean postreinforcement pause. For all values of p a critical value of the independent variable T was found (T = 1 sec) at which marked changes took place in response rate and postreinforcement pauses. Behavior typical of random ratio schedules was obtained at T < 1 sec and behavior typical of random interval schedules at T > 1 sec. Copyright © 1985. Published by Elsevier B.V.

  14. Divided dosing reduces prednisolone-induced hyperglycaemia and glycaemic variability: a randomized trial after kidney transplantation.

    PubMed

    Yates, Christopher J; Fourlanos, Spiros; Colman, Peter G; Cohney, Solomon J

    2014-03-01

    Prednisolone is a major risk factor for hyperglycaemia and new-onset diabetes after transplantation. Uncontrolled observational data suggest that divided dosing may reduce requirements for hypoglycaemic agents. This study aims to compare the glycaemic effects of divided twice daily (BD) and once daily (QD) prednisolone. Twenty-two kidney transplant recipients without diabetes were randomized to BD or QD prednisolone. Three weeks post-transplant, a continuous glucose monitor (iPro2(®) Medtronic) was applied for 5 days with subjects continuing their initial prednisolone regimen (Days 1-2) before crossover to the alternative regimen. Mean glucose, peak glucose, nadir glucose, exposure to hyperglycaemia (glucose ≥7.8 mmol/L) and glycaemic variability were assessed. The mean ± standard deviation (SD) age of subjects was 50 ± 10 years and 77% were male. Median (interquartile range) daily prednisolone dose was 25 (20, 25) mg. BD prednisolone was associated with decreased mean glucose (mean 7.9 ± 1.7 versus 8.1 ± 2.3 mmol/L, P < 0.001), peak glucose [median 10.4 (9.5, 11.4) versus 11.4 (10.3, 13.4) mmol/L, P< 0.001] and exposure to hyperglycaemia [median 25.5 (14.6, 30.3) versus 40.4 (33.2, 51.2) mmol/L/h, P = 0.003]. Median glucose peaked between 14:55-15.05 h with BD and 15:25-15:30 h with QD. Median glycaemic variability scores were decreased with BD: SD (1.1 versus 1.9, P < 0.001), mean amplitude of glycaemic excursion (1.5 versus 2.2, P = 0.001), continuous overlapping net glycaemic action-1 (CONGA-1; 1.0 versus 1.2, P = 0.039), CONGA-2 (1.2 versus 1.4, P = 0.008) and J-index (25 versus 31, P = 0.003). Split prednisolone dosing reduces glycaemic variability and hyperglycaemia early post-kidney transplant.

  15. Mean dyadic Green's function for a two layer random medium

    NASA Technical Reports Server (NTRS)

    Zuniga, M. A.

    1981-01-01

    The mean dyadic Green's function for a two-layer random medium with arbitrary three-dimensional correlation functions has been obtained with the zeroth-order solution to the Dyson equation by applying the nonlinear approximation. The propagation of the coherent wave in the random medium is similar to that in an anisotropic medium with different propagation constants for the characteristic transverse electric and transverse magnetic polarizations. In the limit of a laminar structure, two propagation constants for each polarization are found to exist.

  16. Random variability explains apparent global clustering of large earthquakes

    USGS Publications Warehouse

    Michael, A.J.

    2011-01-01

    The occurrence of 5 Mw ≥ 8.5 earthquakes since 2004 has created a debate over whether or not we are in a global cluster of large earthquakes, temporarily raising risks above long-term levels. I use three classes of statistical tests to determine if the record of M ≥ 7 earthquakes since 1900 can reject a null hypothesis of independent random events with a constant rate plus localized aftershock sequences. The data cannot reject this null hypothesis. Thus, the temporal distribution of large global earthquakes is well-described by a random process, plus localized aftershocks, and apparent clustering is due to random variability. Therefore the risk of future events has not increased, except within ongoing aftershock sequences, and should be estimated from the longest possible record of events.

  17. Random dopant fluctuations and statistical variability in n-channel junctionless FETs

    NASA Astrophysics Data System (ADS)

    Akhavan, N. D.; Umana-Membreno, G. A.; Gu, R.; Antoszewski, J.; Faraone, L.

    2018-01-01

    The influence of random dopant fluctuations on the statistical variability of the electrical characteristics of n-channel silicon junctionless nanowire transistor (JNT) has been studied using three dimensional quantum simulations based on the non-equilibrium Green’s function (NEGF) formalism. Average randomly distributed body doping densities of 2 × 1019, 6 × 1019 and 1 × 1020 cm-3 have been considered employing an atomistic model for JNTs with gate lengths of 5, 10 and 15 nm. We demonstrate that by properly adjusting the doping density in the JNT, a near ideal statistical variability and electrical performance can be achieved, which can pave the way for the continuation of scaling in silicon CMOS technology.

  18. Origin and implications of zero degeneracy in networks spectra.

    PubMed

    Yadav, Alok; Jalan, Sarika

    2015-04-01

    The spectra of many real world networks exhibit properties which are different from those of random networks generated using various models. One such property is the existence of a very high degeneracy at the zero eigenvalue. In this work, we provide all the possible reasons behind the occurrence of the zero degeneracy in network spectra, namely complete and partial duplications, as well as their implications. A power-law degree sequence and preferential attachment are properties which enhance the occurrence of such duplications and hence lead to the zero degeneracy. A comparison of the zero degeneracy in protein-protein interaction networks of six different species and in their corresponding model networks indicates the importance of the degree sequence and the power-law exponent for the occurrence of zero degeneracy.
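
    A short numpy/networkx sketch of the mechanism named in the abstract: a complete duplication (a new node with exactly the same neighbours as an existing one) creates two identical adjacency rows and hence an additional eigenvalue at zero. The graph and parameters are arbitrary.

```python
# Short sketch of the mechanism named in the abstract: a complete node duplication
# (a new node with exactly the same neighbours) creates two identical adjacency
# rows, which adds an eigenvalue at zero.
import numpy as np
import networkx as nx

G = nx.erdos_renyi_graph(50, 0.1, seed=10)
A = nx.to_numpy_array(G)
zeros_before = np.sum(np.isclose(np.linalg.eigvalsh(A), 0.0, atol=1e-8))

# Duplicate node 0: the new node 50 gets exactly the same neighbours as node 0.
G.add_node(50)
G.add_edges_from((50, nbr) for nbr in list(G.neighbors(0)))
A_dup = nx.to_numpy_array(G)
zeros_after = np.sum(np.isclose(np.linalg.eigvalsh(A_dup), 0.0, atol=1e-8))

print(f"zero eigenvalues before: {zeros_before}, after duplication: {zeros_after}")
```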

  19. An Entropy-Based Measure of Dependence between Two Groups of Random Variables. Research Report. ETS RR-07-20

    ERIC Educational Resources Information Center

    Kong, Nan

    2007-01-01

    In multivariate statistics, the linear relationship among random variables has been fully explored in the past. This paper looks into the dependence of one group of random variables on another group of random variables using (conditional) entropy. A new measure, called the K-dependence coefficient or dependence coefficient, is defined using…

  20. The effects of the one-step replica symmetry breaking on the Sherrington-Kirkpatrick spin glass model in the presence of random field with a joint Gaussian probability density function for the exchange interactions and random fields

    NASA Astrophysics Data System (ADS)

    Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.

    2018-07-01

    The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange integral interaction Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams and system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.

  1. [Random Variable Read Me File

    NASA Technical Reports Server (NTRS)

    Teubert, Christopher; Sankararaman, Shankar; Cullo, Aiden

    2017-01-01

    Readme for the Random Variable Toolbox. The toolbox is hosted on GitHub, a web-based Git version control repository hosting service that offers the distributed version control and source code management (SCM) functionality of Git, along with access control and collaboration features such as bug tracking, feature requests, task management, and wikis for every project.

  2. A Hedonic Approach to Estimating Software Cost Using Ordinary Least Squares Regression and Nominal Attribute Variables

    DTIC Science & Technology

    2006-03-01

    included zero, there is insufficient evidence to indicate that the error mean is not zero. The Breusch-Pagan test was used to test the constant... Multicollinearity... Testing OLS Assumptions... programming styles used by developers (Stamelos and others, 2003:733). Kemerer tested to see how models utilizing SLOC as an independent variable
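
    A minimal statsmodels sketch in the spirit of the record: a hedonic-style OLS cost model with a dummy-coded nominal attribute, followed by the Breusch-Pagan test for constant error variance. The data, variable names, and effect sizes are simulated and hypothetical, not the thesis's.

```python
# Hedged sketch of a hedonic-style OLS cost model with a dummy-coded nominal
# attribute, plus the Breusch-Pagan test for constant error variance mentioned in
# the record. Data and variable names are simulated/hypothetical, not the thesis's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(11)
n = 150
df = pd.DataFrame({
    "sloc": rng.lognormal(mean=9, sigma=0.7, size=n),        # source lines of code
    "language": rng.choice(["C", "Java", "Ada"], size=n),    # nominal attribute
})
lang_effect = df["language"].map({"C": 0.0, "Java": 0.15, "Ada": 0.35})
df["cost"] = np.exp(1.0 + 0.8 * np.log(df["sloc"]) + lang_effect + rng.normal(0, 0.3, n))

# Dummy coding of the nominal attribute is handled by the formula interface (C()).
fit = smf.ols("np.log(cost) ~ np.log(sloc) + C(language)", data=df).fit()
print(fit.summary())

bp_stat, bp_pvalue, _, _ = het_breuschpagan(fit.resid, fit.model.exog)
print(f"Breusch-Pagan LM statistic {bp_stat:.2f}, p-value {bp_pvalue:.3f}")
```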

  3. Zero-crossing statistics for non-Markovian time series.

    PubMed

    Nyberg, Markus; Lizana, Ludvig; Ambjörnsson, Tobias

    2018-03-01

    In applications spanning from image analysis and speech recognition to energy dissipation in turbulence and time-to-failure of fatigued materials, researchers and engineers want to calculate how often a stochastic observable crosses a specific level, such as zero. At first glance this problem looks simple, but it is in fact theoretically very challenging, and therefore few exact results exist. One exception is the celebrated Rice formula that gives the mean number of zero crossings in a fixed time interval of a zero-mean Gaussian stationary process. In this study we use the so-called independent interval approximation to go beyond Rice's result and derive analytic expressions for all higher-order zero-crossing cumulants and moments. Our results agree well with simulations for the non-Markovian autoregressive model.

  4. Zero-crossing statistics for non-Markovian time series

    NASA Astrophysics Data System (ADS)

    Nyberg, Markus; Lizana, Ludvig; Ambjörnsson, Tobias

    2018-03-01

    In applications spanning from image analysis and speech recognition to energy dissipation in turbulence and time-to-failure of fatigued materials, researchers and engineers want to calculate how often a stochastic observable crosses a specific level, such as zero. At first glance this problem looks simple, but it is in fact theoretically very challenging, and therefore few exact results exist. One exception is the celebrated Rice formula that gives the mean number of zero crossings in a fixed time interval of a zero-mean Gaussian stationary process. In this study we use the so-called independent interval approximation to go beyond Rice's result and derive analytic expressions for all higher-order zero-crossing cumulants and moments. Our results agree well with simulations for the non-Markovian autoregressive model.
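
    A small numpy check of the discrete-time analogue of Rice's result for a zero-mean stationary Gaussian AR(1) sequence: consecutive samples have correlation ρ, so the expected number of zero crossings in n samples is (n-1)·arccos(ρ)/π. The higher-order cumulants derived in the paper are not reproduced.

```python
# Check of the discrete-time analogue of Rice's result for a zero-mean stationary
# Gaussian AR(1) sequence: the sign-change probability between consecutive samples
# is arccos(rho)/pi, so the expected number of zero crossings in n samples is
# (n - 1) * arccos(rho) / pi. Higher-order cumulants are not reproduced here.
import numpy as np

rng = np.random.default_rng(12)
rho, n, reps = 0.7, 5000, 100
crossings = np.empty(reps)
for r in range(reps):
    x = np.empty(n)
    x[0] = rng.normal()
    eps = rng.normal(scale=np.sqrt(1 - rho**2), size=n - 1)   # keeps unit marginal variance
    for t in range(1, n):
        x[t] = rho * x[t - 1] + eps[t - 1]
    crossings[r] = np.sum(np.diff(np.sign(x)) != 0)

print(f"simulated mean crossings: {crossings.mean():.1f}")
print(f"Rice-type prediction:     {(n - 1) * np.arccos(rho) / np.pi:.1f}")
```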

  5. Applying the zero-inflated Poisson model with random effects to detect abnormal rises in school absenteeism indicating infectious diseases outbreak.

    PubMed

    Song, X X; Zhao, Q; Tao, T; Zhou, C M; Diwan, V K; Xu, B

    2018-05-30

    Records of absenteeism from primary schools are valuable data for infectious disease surveillance. However, the analysis of absenteeism is complicated by data features of clustering at zero, non-independence and overdispersion. This study aimed to generate an appropriate model to handle the absenteeism data collected in a European Commission-granted project for infectious disease surveillance in rural China and to evaluate the validity and timeliness of the resulting model for early warning of infectious disease outbreaks. Four steps were taken: (1) building a 'well-fitting' model by the zero-inflated Poisson model with random effects (ZIP-RE) using the absenteeism data from the first implementation year; (2) applying the resulting model to predict the 'expected' number of absenteeism events in the second implementation year; (3) computing the differences between the observations and the expected values (O-E values) to generate an alternative series of data; (4) evaluating the early warning validity and timeliness of the observational data and model-based O-E values via the EARS-3C algorithms with regard to the detection of real cluster events. The results indicate that ZIP-RE and its corresponding O-E values could improve the detection of aberrations, reduce false-positive signals and are applicable to zero-inflated data.

  6. Convergence in High Probability of the Quantum Diffusion in a Random Band Matrix Model

    NASA Astrophysics Data System (ADS)

    Margarint, Vlad

    2018-06-01

    We consider Hermitian random band matrices H in d ≥ 1 dimensions. The matrix elements H_xy, indexed by x, y ∈ Λ ⊂ Z^d, are independent, uniformly distributed random variables if |x - y| is less than the band width W, and zero otherwise. We update previous results on the convergence of quantum diffusion in a random band matrix model from convergence of the expectation to convergence in high probability. The result is uniform in the size |Λ| of the matrix.

  7. The Effects of Including Observed Means or Latent Means as Covariates in Multilevel Models for Cluster Randomized Trials

    ERIC Educational Resources Information Center

    Aydin, Burak; Leite, Walter L.; Algina, James

    2016-01-01

    We investigated methods of including covariates in two-level models for cluster randomized trials to increase power to detect the treatment effect. We compared multilevel models that included either an observed cluster mean or a latent cluster mean as a covariate, as well as the effect of including Level 1 deviation scores in the model. A Monte…

  8. Projected Changes in Mean and Interannual Variability of Surface Water over Continental China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leng, Guoyong; Tang, Qiuhong; Huang, Maoyi

    Five General Circulation Model (GCM) climate projections under the RCP8.5 emission scenario were used to drive the Variable Infiltration Capacity (VIC) hydrologic model to investigate the impacts of climate change on the hydrologic cycle over continental China in the 21st century. The bias-corrected climatic variables were generated for the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5) by the Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP). Results showed much larger fractional changes of annual mean evapotranspiration (ET) per unit warming than the corresponding fractional changes of precipitation (P) per unit warming across the country, especially for South China, which led to a notable decrease of surface water (P-E). Specifically, negative trends for annual mean runoff up to -0.33%/decade and soil moisture trends varying between -0.02 to -0.13%/decade were found for most river basins across China. Coincidentally, interannual variability for both runoff and soil moisture exhibited significant positive trends for almost all river basins across China, implying an increase in extremes relative to the mean conditions. Noticeably, the largest positive trends for runoff variability and soil moisture variability, which were up to 0.41%/decade and 0.90%/decade, both occurred in Southwest China. In addition to the regional contrast, intra-seasonal variation was also large for the runoff mean and runoff variability changes, but small for the soil moisture mean and variability changes. Our results suggest that future climate change could further exacerbate existing water-related risks (e.g. floods and droughts) across China as indicated by the marked decrease of surface water amounts combined with steady increase of interannual variability throughout the 21st century. This study highlights the regional contrast and intra-seasonal variations for the projected hydrologic changes and could provide muti

  9. Analog model for quantum gravity effects: phonons in random fluids.

    PubMed

    Krein, G; Menezes, G; Svaiter, N F

    2010-09-24

    We describe an analog model for quantum gravity effects in condensed matter physics. The situation discussed is that of phonons propagating in a fluid with a random velocity wave equation. We consider that there are random fluctuations in the reciprocal of the bulk modulus of the system and study free phonons in the presence of Gaussian colored noise with zero mean. We show that, in this model, after performing the random averages over the noise function a free conventional scalar quantum field theory describing free phonons becomes a self-interacting model.

  10. Obtaining orthotropic elasticity tensor using entries zeroing method.

    NASA Astrophysics Data System (ADS)

    Gierlach, Bartosz; Danek, Tomasz

    2017-04-01

    A generally anisotropic elasticity tensor obtained from measurements can be represented by a tensor belonging to one of eight material symmetry classes. Knowledge of the symmetry class and orientation is helpful for describing the physical properties of a medium. For each non-trivial symmetry class except the isotropic one, this problem is nonlinear. A common method of obtaining an effective tensor is to choose its non-trivial symmetry class and minimize the Frobenius norm between the measured and effective tensors in the same coordinate system. A global optimization algorithm has to be used to determine the best rotation of the tensor. In this contribution, we propose a new approach to obtain the optimal tensor, with the assumption that it is orthotropic (or at least has a shape similar to the orthotropic one). In orthotropic form, 24 out of the 36 tensor entries are zero. The idea is to minimize the sum of squared entries which are supposed to be equal to zero through a rotation calculated with an optimization algorithm, in this case the Particle Swarm Optimization (PSO) algorithm. Quaternions were used to parametrize rotations in 3D space to improve computational efficiency. To avoid choosing a local minimum, we apply PSO several times and only when we obtain similar results three times do we consider the value correct and finish the computations. A Monte Carlo method was used to analyze the obtained results. After thousands of single runs of the PSO optimization, we obtained values of the quaternion parts and plotted them. The points concentrate at several locations on the graph, following a regular pattern, which suggests the existence of a more complex symmetry in the analyzed tensor. Thousands of realizations of a generally anisotropic tensor were then generated; each tensor entry was replaced with a random value drawn from a normal distribution with mean equal to the measured tensor entry and standard deviation equal to that of the measurement. Each of these tensors was the subject of PSO-based optimization delivering a quaternion for the optimal

  11. Confidence intervals for a difference between lognormal means in cluster randomization trials.

    PubMed

    Poirier, Julia; Zou, G Y; Koval, John

    2017-04-01

    Cluster randomization trials, in which intact social units are randomized to different interventions, have become popular in the last 25 years. Outcomes from these trials in many cases are positively skewed, following approximately lognormal distributions. When inference is focused on the difference between treatment arm arithmetic means, existing confidence interval procedures either make restrictive assumptions or are complex to implement. We approach this problem by assuming log-transformed outcomes from each treatment arm follow a one-way random effects model. The treatment arm means are functions of multiple parameters for which separate confidence intervals are readily available, suggesting that the method of variance estimates recovery may be applied to obtain closed-form confidence intervals. A simulation study showed that this simple approach performs well in small sample sizes in terms of empirical coverage, relatively balanced tail errors, and interval widths as compared to existing methods. The methods are illustrated using data arising from a cluster randomization trial investigating a critical pathway for the treatment of community acquired pneumonia.
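
    A compact sketch of the variance-recovery step described in the abstract: combine separate confidence limits for each arm's lognormal mean into an interval for their difference. The arm-level limits below use the naive Cox method on log-scale summaries and ignore clustering, so this illustrates only the MOVER combination, not the paper's cluster-adjusted procedure.

```python
# Hedged sketch of the MOVER step: combine separate confidence limits for each
# arm's lognormal mean into a CI for the difference. Arm-level limits use the
# naive Cox method on log-scale summaries; the one-way random-effects (cluster)
# adjustment of the paper is omitted.
import numpy as np
from scipy.stats import norm

def cox_ci(logy, level=0.95):
    """Cox CI for a lognormal mean from log-transformed observations."""
    n, m, s2 = len(logy), np.mean(logy), np.var(logy, ddof=1)
    est = np.exp(m + s2 / 2)
    half = norm.ppf(0.5 + level / 2) * np.sqrt(s2 / n + s2**2 / (2 * (n - 1)))
    return est, np.exp(m + s2 / 2 - half), np.exp(m + s2 / 2 + half)

def mover_diff(est1, l1, u1, est2, l2, u2):
    """MOVER CI for est1 - est2, recovered from the two arm-level CIs."""
    lower = est1 - est2 - np.sqrt((est1 - l1) ** 2 + (u2 - est2) ** 2)
    upper = est1 - est2 + np.sqrt((u1 - est1) ** 2 + (est2 - l2) ** 2)
    return lower, upper

rng = np.random.default_rng(13)
arm1 = rng.lognormal(mean=1.0, sigma=0.8, size=120)   # simulated positively skewed outcomes
arm2 = rng.lognormal(mean=0.8, sigma=0.8, size=120)
e1, l1, u1 = cox_ci(np.log(arm1))
e2, l2, u2 = cox_ci(np.log(arm2))
lo, hi = mover_diff(e1, l1, u1, e2, l2, u2)
print(f"difference in means: {e1 - e2:.3f}, 95% CI: ({lo:.3f}, {hi:.3f})")
```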

  12. Weibull mixture regression for marginal inference in zero-heavy continuous outcomes.

    PubMed

    Gebregziabher, Mulugeta; Voronca, Delia; Teklehaimanot, Abeba; Santa Ana, Elizabeth J

    2017-06-01

    Continuous outcomes with a preponderance of zero values are ubiquitous in data that arise from biomedical studies, for example studies of addictive disorders. This is known to lead to violation of standard assumptions in parametric inference and enhances the risk of misleading conclusions unless managed properly. Two-part models are commonly used to deal with this problem. However, standard two-part models have limitations with respect to obtaining parameter estimates that have a marginal interpretation of covariate effects, which is important in many biomedical applications. Recently, marginalized two-part models have been proposed, but their development is limited to the log-normal and log-skew-normal distributions. Thus, in this paper, we propose a finite mixture approach, with Weibull mixture regression as a special case, to deal with the problem. We use an extensive simulation study to assess the performance of the proposed model in finite samples and to make comparisons with other families of models via statistical information and mean squared error criteria. We demonstrate its application on real data from a randomized controlled trial of addictive disorders. Our results show that a two-component Weibull mixture model is preferred for modeling zero-heavy continuous data when the non-zero part is generated from a Weibull or similar distribution such as the gamma or truncated Gaussian.

  13. Zero-gravity aerosol behavior

    NASA Technical Reports Server (NTRS)

    Edwards, H. W.

    1981-01-01

    The feasibility and scientific benefits of a zero gravity aerosol study in an orbiting laboratory were examined. A macroscopic model was devised to deal with the simultaneous effects of diffusion and coagulation of particles in the confined aerosol. An analytical solution was found by treating the particle coagulation and diffusion constants as ensemble parameters and employing a transformation of variables. The solution was used to carry out simulated zero gravity aerosol decay experiments in a compact cylindrical chamber. The results demonstrate that the limitations of physical space and time imposed by the orbital situation are not prohibitive in terms of observing the history of an aerosol confined under zero gravity conditions. While the absence of convective effects would be a definite benefit for the experiment, the mathematical complexity of the problem is not greatly reduced when the gravitational term drops out of the equation. Since the model does not deal directly with the evolution of the particle size distribution, it may be desirable to develop more detailed models before undertaking an orbital experiment.

  14. A randomized pilot study comparing zero-calorie alternate-day fasting to daily caloric restriction in adults with obesity

    PubMed Central

    Catenacci, Victoria A.; Pan, Zhaoxing; Ostendorf, Danielle; Brannon, Sarah; Gozansky, Wendolyn S.; Mattson, Mark P.; Martin, Bronwen; MacLean, Paul S.; Melanson, Edward L.; Donahoo, William Troy

    2016-01-01

    Objective: To evaluate the safety and tolerability of alternate-day fasting (ADF) and to compare changes in weight, body composition, lipids, and insulin sensitivity index (Si) to those produced by a standard weight loss diet, moderate daily caloric restriction (CR). Methods: Adults with obesity (BMI ≥30 kg/m2, age 18-55) were randomized to either zero-calorie ADF (n=14) or CR (-400 kcal/day, n=12) for 8 weeks. Outcomes were measured at the end of the 8-week intervention and after 24 weeks of unsupervised follow-up. Results: No adverse effects were attributed to ADF, and 93% completed the 8-week ADF protocol. At 8 weeks, ADF achieved a 376 kcal/day greater energy deficit; however, there were no significant between-group differences in change in weight (mean±SE; ADF -8.2±0.9 kg, CR -7.1±1.0 kg), body composition, lipids, or Si. After 24 weeks of unsupervised follow-up, there were no significant differences in weight regain; however, changes from baseline in % fat mass and lean mass were more favorable in ADF. Conclusions: ADF is a safe and tolerable approach to weight loss. ADF produced similar changes in weight, body composition, lipids, and Si at 8 weeks and did not appear to increase risk for weight regain 24 weeks after completing the intervention. PMID:27569118

  15. Random variable transformation for generalized stochastic radiative transfer in finite participating slab media

    NASA Astrophysics Data System (ADS)

    El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.

    2015-10-01

    The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specularly and diffusely reflecting boundaries with linear anisotropic scattering. The random variable transformation (RVT) technique is used to obtain the complete average of the solution functions, which are represented by the probability-density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation is applied to the input stochastic process (the extinction function of the medium). This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). The transport equation is then solved deterministically to obtain a closed form for the solution as a function of x and L. The solution is used to obtain the PDF of the solution functions by applying the RVT technique between the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to derive complete analytical averages for some physical quantities of interest, namely the reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the average of the partial heat fluxes for the generalized problem with an internal source of radiation is obtained and represented graphically.

  16. U. S. goal: zero energy growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCulla, J.

    Commentary: as envisioned by the Ford Foundation's Energy Policy Project, zero energy growth would not mean austerity, but a better living standard for everyone. With sufficient incentive, industry could cut energy demand by 10-15% by 1980. Upgraded Federal Housing Administration standards for new dwellings could require more insulation. Electric heat, an energy waster of growing prominence, should be curbed. The logic in federal support of zero economic growth is defined.

  17. Number statistics for β-ensembles of random matrices: Applications to trapped fermions at zero temperature.

    PubMed

    Marino, Ricardo; Majumdar, Satya N; Schehr, Grégory; Vivo, Pierpaolo

    2016-09-01

    Let P_{β}^{(V)}(N_{I}) be the probability that an N×N β-ensemble of random matrices with confining potential V(x) has N_{I} eigenvalues inside an interval I=[a,b] on the real line. We introduce a general formalism, based on the Coulomb gas technique and the resolvent method, to compute analytically P_{β}^{(V)}(N_{I}) for large N. We show that this probability scales for large N as P_{β}^{(V)}(N_{I})≈exp[-βN^{2}ψ^{(V)}(N_{I}/N)], where β is the Dyson index of the ensemble. The rate function ψ^{(V)}(k_{I}), independent of β, is computed in terms of single integrals that can be easily evaluated numerically. The general formalism is then applied to the classical β-Gaussian (I=[-L,L]), β-Wishart (I=[1,L]), and β-Cauchy (I=[-L,L]) ensembles. Expanding the rate function around its minimum, we find that generically the number variance var(N_{I}) exhibits a nonmonotonic behavior as a function of the size of the interval, with a maximum that can be precisely characterized. These analytical results, corroborated by numerical simulations, provide the full counting statistics of many systems where random matrix models apply. In particular, we present results for the full counting statistics of zero-temperature one-dimensional spinless fermions in a harmonic trap.

  18. Magnetic zero-modes, vortices and Cartan geometry

    NASA Astrophysics Data System (ADS)

    Ross, Calum; Schroers, Bernd J.

    2018-04-01

    We exhibit a close relation between vortex configurations on the 2-sphere and magnetic zero-modes of the Dirac operator on R^3 which obey an additional nonlinear equation. We show that both are best understood in terms of the geometry induced on the 3-sphere via pull-back of the round geometry with bundle maps of the Hopf fibration. We use this viewpoint to deduce a manifestly smooth formula for square-integrable magnetic zero-modes in terms of two homogeneous polynomials in two complex variables.

  19. Variable density randomized stack of spirals (VDR-SoS) for compressive sensing MRI.

    PubMed

    Valvano, Giuseppe; Martini, Nicola; Landini, Luigi; Santarelli, Maria Filomena

    2016-07-01

    To develop a 3D sampling strategy based on a stack of variable density spirals for compressive sensing MRI. A random sampling pattern was obtained by rotating each spiral by a random angle and by delaying the gradient waveforms of the different interleaves by a few time steps. A three-dimensional (3D) variable sampling density was obtained by designing different variable density spirals for each slice encoding. The proposed approach was tested with phantom simulations up to a five-fold undersampling factor. Fully sampled 3D datasets of a human knee and of a human brain were obtained from a healthy volunteer. The proposed approach was tested with off-line reconstructions of the knee dataset up to a four-fold acceleration and compared with other noncoherent trajectories. The proposed approach outperformed the standard stack of spirals for various undersampling factors. The level of coherence and the reconstruction quality of the proposed approach were similar to those of other trajectories that, however, require 3D gridding for the reconstruction. The variable density randomized stack of spirals (VDR-SoS) is an easily implementable trajectory that could represent a valid sampling strategy for 3D compressive sensing MRI. It guarantees low levels of coherence without requiring 3D gridding. Magn Reson Med 76:59-69, 2016. © 2015 Wiley Periodicals, Inc.

  20. Meta-Analysis of Zero or Near-Zero Fluoroscopy Use During Ablation of Cardiac Arrhythmias.

    PubMed

    Yang, Li; Sun, Ge; Chen, Xiaomei; Chen, Guangzhi; Yang, Shanshan; Guo, Ping; Wang, Yan; Wang, Dao Wen

    2016-11-15

    Data regarding the efficacy and safety of zero or near-zero fluoroscopic ablation of cardiac arrhythmias are limited. A literature search was conducted using PubMed and Embase for relevant studies through January 2016. Ten studies involving 2,261 patients were identified. Compared with the conventional radiofrequency ablation method, zero or near-zero fluoroscopy ablation showed significantly reduced fluoroscopic time (standardized mean difference [SMD] -1.62, 95% CI -2.20 to -1.05; p <0.00001), ablation time (SMD -0.16, 95% CI -0.29 to -0.04; p = 0.01), and radiation dose (SMD -1.94, 95% CI -3.37 to -0.51; p = 0.008). In contrast, procedure duration was not significantly different from that of conventional radiofrequency ablation (SMD -0.03, 95% CI -0.16 to 0.09; p = 0.58). There were no significant differences between the two groups in immediate success rate (odds ratio [OR] 0.99, 95% CI 0.49 to 2.01; p = 0.99), long-term success rate (OR 1.13, 95% CI 0.42 to 3.02; p = 0.81), complication rates (OR 0.98, 95% CI 0.49 to 1.96; p = 0.95), and recurrence rates (OR 1.29, 95% CI 0.74 to 2.24; p = 0.37). In conclusion, radiation was significantly reduced in the zero or near-zero fluoroscopy ablation groups without compromising efficacy and safety. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Mean first passage time for random walk on dual structure of dendrimer

    NASA Astrophysics Data System (ADS)

    Li, Ling; Guan, Jihong; Zhou, Shuigeng

    2014-12-01

    The random walk approach has recently been widely employed to study the relations between the underlying structure and dynamics of complex systems. The mean first-passage time (MFPT) for random walks is a key index for evaluating the transport efficiency in a given system. In this paper we study analytically the MFPT in a dual structure of the dendrimer network, the Husimi cactus, which has a different application background and a different structure (it contains loops) from the dendrimer. By making use of the iterative construction, we explicitly determine both the partial mean first-passage time (PMFPT, the average of MFPTs to a given target) and the global mean first-passage time (GMFPT, the average of MFPTs over all pairs of nodes) on the Husimi cactus. The obtained closed-form results show that PMFPT and GMFPT follow different scalings with the network order, suggesting that the target location has an essential influence on the transport efficiency. Finally, the impact that the loop structure can bring is analyzed and discussed.
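    For readers unfamiliar with MFPT calculations, the sketch below computes mean first-passage times for an unbiased random walk on a small generic graph by solving the standard linear system; it does not use the iterative Husimi-cactus construction of the paper, and the example graph and names are assumptions for illustration. PMFPT- and GMFPT-style averages are then formed from those times.

```python
import numpy as np

def mfpt_to_target(A, target):
    """Mean first-passage times of an unbiased random walk to one target node.

    A is the adjacency matrix of a connected undirected graph. For i != target,
    T_i satisfies T_i = 1 + sum_j P_ij T_j with T_target = 0, a linear system."""
    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)          # row-stochastic transition matrix
    idx = [i for i in range(n) if i != target]
    M = np.eye(n - 1) - P[np.ix_(idx, idx)]       # (I - P) restricted to non-target nodes
    T = np.linalg.solve(M, np.ones(n - 1))
    out = np.zeros(n)
    out[idx] = T
    return out

# Small example: a path graph 0-1-2-3.
A = np.zeros((4, 4))
for i in range(3):
    A[i, i + 1] = A[i + 1, i] = 1

T = mfpt_to_target(A, target=0)
pmfpt = T[1:].mean()                              # average MFPT to node 0 (PMFPT-style)
gmfpt = np.mean([mfpt_to_target(A, t)[np.arange(4) != t].mean() for t in range(4)])
print(T, pmfpt, gmfpt)
```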

  2. Neutrophil chemotaxis in sickle cell anaemia, sickle cell beta zero thalassaemia, and after splenectomy.

    PubMed Central

    Donadi, E A; Falcão, R P

    1987-01-01

    Neutrophil chemotaxis was evaluated in 28 patients with sickle cell anaemia, 10 patients with sickle cell beta zero thalassaemia, 25 patients who had undergone splenectomy, and 38 controls. The mean distance migrated by patients' neutrophils was not significantly different from that of neutrophils from controls. Although several immunological variables have been reported to change after loss of splenic function, we were unable to show a defect in neutrophil chemotaxis that could account for the increased susceptibility to infection. PMID:3611395

  3. Zero-field random-field effect in diluted triangular lattice antiferromagnet CuFe1-xAlxO2

    NASA Astrophysics Data System (ADS)

    Nakajima, T.; Mitsuda, S.; Kitagawa, K.; Terada, N.; Komiya, T.; Noda, Y.

    2007-04-01

    We performed neutron scattering experiments on a diluted triangular lattice antiferromagnet (TLA), CuFe1-xAlxO2 with x = 0.10. The detailed analysis of the scattering profiles revealed that the scattering function of magnetic reflection is described as the sum of a Lorentzian term and a Lorentzian-squared term with anisotropic width. The Lorentzian-squared term dominating at low temperature is indicative of the domain state in the prototypical random-field Ising model. Taking account of the sinusoidally amplitude-modulated magnetic structure with incommensurate wavenumber in CuFe1-xAlxO2 with x = 0.10, we conclude that the effective random field arises even at zero field, owing to the combination of site-random magnetic vacancies and the sinusoidal structure that is regarded as a partially disordered (PD) structure in a wide sense, as reported in the typical three-sublattice PD phase of a diluted Ising TLA, CsCo0.83Mg0.17Br3 (van Duijn et al 2004 Phys. Rev. Lett. 92 077202). While the previous study revealed the existence of a domain state in CsCo0.83Mg0.17Br3 by detecting magnetic reflections specific to the spin configuration near the domain walls, our present study revealed the existence of a domain state in CuFe1-xAlxO2 (x = 0.10) by determination of the functional form of the scattering function.

  4. Prediction of endocrine stress reactions by means of personality variables.

    PubMed

    de Leeuwe, J N; Hentschel, U; Tavenier, R; Edelbroek, P

    1992-06-01

    The study examined the predictability of endocrine stress indicators on the basis of personality measures. The subjects were 83 computer operators (63 men, 20 women; mean age 28 years) who were confronted, in an experimental situation, with a mild stressor (a cognitive two-channel task with a high information load). Using scores on personality questionnaires (comprising scales for defense mechanisms, neuroticism, and 2 achievement motivation variables), subjects were classified into extreme groups of stress-resistant (17 subjects) versus nonstress-resistant (13 subjects). Immediately after the experiment, blood samples were taken to assay the norepinephrine metabolites plasma-free 3-methoxy-4-hydroxy-phenylglycol (MHPG) and MHPG sulfate (MHPG.SO4), which formed the dependent variables. Personality measures and endocrine stress indicators were kept apart by a double-blind strategy until the final analysis of the data. A significant difference was noted in the MHPG level between the stress-resistant and the nonstress-resistant group. The value and applicability of these results for stress prevention are discussed.

  5. A zero waste vision for industrial networks in Europe.

    PubMed

    Curran, T; Williams, I D

    2012-03-15

    'ZeroWIN' (Towards Zero Waste in Industrial Networks--www.zerowin.eu) is a five year project running 2009-2014, funded by the EC under the 7th Framework Programme. Project ZeroWIN envisions industrial networks that have eliminated the wasteful consumption of resources. Zero waste is a unifying concept for a range of measures aimed at eliminating waste and challenging old ways of thinking. Aiming for zero waste will mean viewing waste as a potential resource with value to be realised, rather than as a problem to be dealt with. The ZeroWIN project will investigate and demonstrate how existing approaches and tools can be improved and combined to best effect in an industrial network, and how innovative technologies can contribute to achieving the zero waste vision. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Statistical Analysis for Multisite Trials Using Instrumental Variables with Random Coefficients

    ERIC Educational Resources Information Center

    Raudenbush, Stephen W.; Reardon, Sean F.; Nomi, Takako

    2012-01-01

    Multisite trials can clarify the average impact of a new program and the heterogeneity of impacts across sites. Unfortunately, in many applications, compliance with treatment assignment is imperfect. For these applications, we propose an instrumental variable (IV) model with person-specific and site-specific random coefficients. Site-specific IV…

  7. Beyond anti-Muslim sentiment: opposing the Ground Zero mosque as a means to pursuing a stronger America.

    PubMed

    Jia, Lile; Karpen, Samuel C; Hirt, Edward R

    2011-10-01

    Americans' opposition toward building an Islamic community center at Ground Zero has been attributed solely to a general anti-Muslim sentiment. We hypothesized that some Americans' negative reaction was also due to their motivation to symbolically pursue a positive U.S. group identity, which had suffered from a concurrent economic and political downturn. Indeed, when participants perceived that the United States was suffering from lowered international status, those who identified strongly with the country, as evidenced especially by a high respect or deference for group symbols, reported a stronger opposition to the "Ground Zero mosque" than participants who identified weakly with the country did. Furthermore, participants who identified strongly with the country also showed a greater preference for buildings that were symbolically congruent than for buildings that were symbolically incongruent with the significance of Ground Zero, and they represented Ground Zero with a larger symbolic size. These findings suggest that identifying group members' underlying motivations provides unusual insights for understanding intergroup conflict.

  8. Improving Learning in Primary Schools of Developing Countries: A Meta-Analysis of Randomized Experiments

    ERIC Educational Resources Information Center

    McEwan, Patrick J.

    2015-01-01

    I gathered 77 randomized experiments (with 111 treatment arms) that evaluated the effects of school-based interventions on learning in developing-country primary schools. On average, monetary grants and deworming treatments had mean effect sizes that were close to zero and not statistically significant. Nutritional treatments, treatments that…

  9. Sums and Products of Jointly Distributed Random Variables: A Simplified Approach

    ERIC Educational Resources Information Center

    Stein, Sheldon H.

    2005-01-01

    Three basic theorems concerning expected values and variances of sums and products of random variables play an important role in mathematical statistics and its applications in education, business, the social sciences, and the natural sciences. A solid understanding of these theorems requires that students be familiar with the proofs of these…
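    The record is truncated before the theorems are stated; the standard identities presumably being referred to are of the following form (reproduced here as an assumption, not a quotation from the paper):

```latex
\begin{align*}
  E[X+Y] &= E[X] + E[Y], \\
  \operatorname{Var}(X+Y) &= \operatorname{Var}(X) + \operatorname{Var}(Y) + 2\operatorname{Cov}(X,Y), \\
  E[XY] &= E[X]\,E[Y] + \operatorname{Cov}(X,Y), \\
  \operatorname{Var}(XY) &= \operatorname{Var}(X)\operatorname{Var}(Y)
      + \operatorname{Var}(X)\,E[Y]^2 + \operatorname{Var}(Y)\,E[X]^2
      \quad \text{(for independent } X, Y\text{)}.
\end{align*}
```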

  10. Zero-inflated modeling of fish catch per unit area resulting from multiple gears: Application to channel catfish and shovelnose sturgeon in the Missouri River

    USGS Publications Warehouse

    Arab, A.; Wildhaber, M.L.; Wikle, C.K.; Gentry, C.N.

    2008-01-01

    Fisheries studies often employ multiple gears that result in large percentages of zero values. We considered a zero-inflated Poisson (ZIP) model with random effects to address these excessive zeros. By employing a Bayesian ZIP model that simultaneously incorporates data from multiple gears to analyze data from the Missouri River, we were able to compare gears and make more year, segment, and macrohabitat comparisons than did the original data analysis. For channel catfish Ictalurus punctatus, our results rank (highest to lowest) the mean catch per unit area (CPUA) for gears (beach seine, benthic trawl, electrofishing, and drifting trammel net); years (1998 and 1997); macrohabitats (tributary mouth, connected secondary channel, nonconnected secondary channel, and bend); and river segment zones (channelized, inter-reservoir, and least-altered). For shovelnose sturgeon Scaphirhynchus platorynchus, the mean CPUA was significantly higher for benthic trawls and drifting trammel nets; 1998 and 1997; tributary mouths, bends, and connected secondary channels; and some channelized or least-altered inter-reservoir segments. One important advantage of our approach is the ability to reliably infer patterns of relative abundance by means of multiple gears without using gear efficiencies. Copyright by the American Fisheries Society 2008.
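    The excess-zero mechanism behind a ZIP model can be illustrated with a small maximum-likelihood fit. The sketch below is not the Bayesian multi-gear model of the paper; it fits an intercept-only ZIP by direct optimization of the log-likelihood on simulated catch counts, with all names and settings chosen for the example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

rng = np.random.default_rng(42)
n = 1000
# Simulated catch counts: with probability pi the haul is a structural zero,
# otherwise counts are Poisson(lam).
pi_true, lam_true = 0.4, 2.5
structural_zero = rng.random(n) < pi_true
y = np.where(structural_zero, 0, rng.poisson(lam_true, n))

def zip_negloglik(params, y):
    # params = (logit(pi), log(lambda)); unconstrained for the optimizer.
    pi, lam = expit(params[0]), np.exp(params[1])
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))                    # P(Y = 0)
    ll_pos = np.log1p(-pi) + y * np.log(lam) - lam - gammaln(y + 1)   # P(Y = y), y > 0
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

res = minimize(zip_negloglik, x0=np.zeros(2), args=(y,), method="BFGS")
pi_hat, lam_hat = expit(res.x[0]), np.exp(res.x[1])
print(pi_hat, lam_hat)   # should be near 0.4 and 2.5
```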

  11. Atherosclerotic Plaque in Patients with Zero Calcium Score at Coronary Computed Tomography Angiography

    PubMed Central

    Gabriel, Fabíola Santos; Gonçalves, Luiz Flávio Galvão; de Melo, Enaldo Vieira; Sousa, Antônio Carlos Sobral; Pinto, Ibraim Masciarelli Francisco; Santana, Sara Melo Macedo; de Matos, Carlos José Oliveira; Souto, Maria Júlia Silveira; Conceição, Flávio Mateus do Sacramento; Oliveira, Joselina Luzia Menezes

    2018-01-01

    Background: In view of the high mortality from cardiovascular diseases, it has become necessary to stratify the main risk factors and to choose the correct diagnostic modality. Studies have demonstrated that a zero calcium score (CS) is characteristic of a low risk for cardiovascular events. However, the prevalence of individuals with coronary atherosclerotic plaques and zero CS is conflicting in the specialized literature. Objective: To evaluate the frequency of patients with coronary atherosclerotic plaques, their degree of obstruction and associated factors in patients with zero CS and indication for coronary computed tomography angiography (CCTA). Methods: This is a cross-sectional, prospective study with 367 volunteers with zero CS at CCTA in four diagnostic imaging centers in the period from 2011 to 2016. A significance level of 5% and 95% confidence interval were adopted. Results: The frequency of atherosclerotic plaque in the coronary arteries in 367 patients with zero CS was 9.3% (34 individuals). In this subgroup, mean age was 52 ± 10 years, 18 (52.9%) were women and 16 (47%) had significant coronary obstructions (> 50%), with involvement of two or more segments in 4 (25%) patients. The frequency of non-obese individuals (90.6% vs 73.9%, p = 0.037) and alcohol drinkers (55.9% vs 34.8%, p = 0.015) was significantly higher in patients with atherosclerotic plaques, with an odds ratio of 3.4 for each of these variables. Conclusions: The frequency of atherosclerotic plaque with zero CS was relatively high, indicating that the absence of calcification does not exclude the presence of plaques, many of them obstructive, especially in non-obese subjects and alcohol drinkers. PMID:29723329

  12. Atherosclerotic Plaque in Patients with Zero Calcium Score at Coronary Computed Tomography Angiography.

    PubMed

    Gabriel, Fabíola Santos; Gonçalves, Luiz Flávio Galvão; Melo, Enaldo Vieira de; Sousa, Antônio Carlos Sobral; Pinto, Ibraim Masciarelli Francisco; Santana, Sara Melo Macedo; Matos, Carlos José Oliveira de; Souto, Maria Júlia Silveira; Conceição, Flávio Mateus do Sacramento; Oliveira, Joselina Luzia Menezes

    2018-05-03

    In view of the high mortality from cardiovascular diseases, it has become necessary to stratify the main risk factors and to choose the correct diagnostic modality. Studies have demonstrated that a zero calcium score (CS) is characteristic of a low risk for cardiovascular events. However, the prevalence of individuals with coronary atherosclerotic plaques and zero CS is conflicting in the specialized literature. To evaluate the frequency of patients with coronary atherosclerotic plaques, their degree of obstruction and associated factors in patients with zero CS and indication for coronary computed tomography angiography (CCTA). This is a cross-sectional, prospective study with 367 volunteers with zero CS at CCTA in four diagnostic imaging centers in the period from 2011 to 2016. A significance level of 5% and 95% confidence interval were adopted. The frequency of atherosclerotic plaque in the coronary arteries in 367 patients with zero CS was 9.3% (34 individuals). In this subgroup, mean age was 52 ± 10 years, 18 (52.9%) were women and 16 (47%) had significant coronary obstructions (> 50%), with involvement of two or more segments in 4 (25%) patients. The frequency of non-obese individuals (90.6% vs 73.9%, p = 0.037) and alcohol drinkers (55.9% vs 34.8%, p = 0.015) was significantly higher in patients with atherosclerotic plaques, with an odds ratio of 3.4 for each of these variables. The frequency of atherosclerotic plaque with zero CS was relatively high, indicating that the absence of calcification does not exclude the presence of plaques, many of them obstructive, especially in non-obese subjects and alcohol drinkers.

  13. AUTOCLASSIFICATION OF THE VARIABLE 3XMM SOURCES USING THE RANDOM FOREST MACHINE LEARNING ALGORITHM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, Sean A.; Murphy, Tara; Lo, Kitty K., E-mail: s.farrell@physics.usyd.edu.au

    In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outlier sources by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.

  14. Nonlinear system guidance in the presence of transmission zero dynamics

    NASA Technical Reports Server (NTRS)

    Meyer, G.; Hunt, L. R.; Su, R.

    1995-01-01

    An iterative procedure is proposed for computing the commanded state trajectories and controls that guide a possibly multiaxis, time-varying, nonlinear system with transmission zero dynamics through a given arbitrary sequence of control points. The procedure is initialized by the system inverse with the transmission zero effects nulled out. Then the 'steady state' solution of the perturbation model with the transmission zero dynamics intact is computed and used to correct the initial zero-free solution. Both time domain and frequency domain methods are presented for computing the steady state solutions of the possibly nonminimum phase transmission zero dynamics. The procedure is illustrated by means of linear and nonlinear examples.

  15. An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions

    ERIC Educational Resources Information Center

    Radhakrishnan, R.; Choudhury, Askar

    2009-01-01

    Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…

  16. Random one-of-N selector

    DOEpatents

    Kronberg, J.W.

    1993-04-20

    An apparatus for selecting at random one item of N items on the average comprising counter and reset elements for counting repeatedly between zero and N, a number selected by the user, a circuit for activating and deactivating the counter, a comparator to determine if the counter stopped at a count of zero, an output to indicate an item has been selected when the count is zero or not selected if the count is not zero. Randomness is provided by having the counter cycle very often while varying the relatively longer duration between activation and deactivation of the count. The passive circuit components of the activating/deactivating circuit and those of the counter are selected for the sensitivity of their response to variations in temperature and other physical characteristics of the environment so that the response time of the circuitry varies. Additionally, the items themselves, which may be people, may vary in shape or the time they press a pushbutton, so that, for example, an ultrasonic beam broken by the item or person passing through it will add to the duration of the count and thus to the randomness of the selection.
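    A software analogue of the mechanism described in the patent can be sketched as follows: a fast counter cycles from zero to N-1 while the activation interval varies randomly from trial to trial, and an item is selected only when the stopped count is zero. The hold-time range below is an arbitrary stand-in for the physical timing jitter the patent relies on, so this is an illustrative simulation rather than a description of the circuit.

```python
import numpy as np

def one_of_n_select(n, rng):
    """Software analogue of the selector: a counter cycles 0..N-1 very fast while
    the 'button' is held for a jittery, relatively long duration; the item is
    selected only if the counter stops on zero."""
    ticks = rng.integers(10_000, 20_000)   # random hold time, much longer than one counter cycle
    return (ticks % n) == 0                # stopped count; zero means "selected"

rng = np.random.default_rng(7)
n = 5
trials = 200_000
hits = sum(one_of_n_select(n, rng) for _ in range(trials))
print(hits / trials)   # close to 1/N = 0.2 on average
```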

  17. Random one-of-N selector

    DOEpatents

    Kronberg, James W.

    1993-01-01

    An apparatus for selecting at random one item of N items on the average comprising counter and reset elements for counting repeatedly between zero and N, a number selected by the user, a circuit for activating and deactivating the counter, a comparator to determine if the counter stopped at a count of zero, an output to indicate an item has been selected when the count is zero or not selected if the count is not zero. Randomness is provided by having the counter cycle very often while varying the relatively longer duration between activation and deactivation of the count. The passive circuit components of the activating/deactivating circuit and those of the counter are selected for the sensitivity of their response to variations in temperature and other physical characteristics of the environment so that the response time of the circuitry varies. Additionally, the items themselves, which may be people, may vary in shape or the time they press a pushbutton, so that, for example, an ultrasonic beam broken by the item or person passing through it will add to the duration of the count and thus to the randomness of the selection.

  18. The paradoxical zero reflection at zero energy

    NASA Astrophysics Data System (ADS)

    Ahmed, Zafar; Sharma, Vibhu; Sharma, Mayank; Singhal, Ankush; Kaiwart, Rahul; Priyadarshini, Pallavi

    2017-03-01

    Usually, the reflection probability R(E) of a particle of zero energy incident on a potential which converges to zero asymptotically is found to be 1: R(0)=1. But earlier, a paradoxical phenomenon of zero reflection at zero energy (R(0)=0) had been revealed as a threshold anomaly. Extending the concept of the half-bound state (HBS) of 3D, here we show that in 1D, when a symmetric (asymmetric) attractive potential well possesses a zero-energy HBS, R(0)=0 (R(0)≪1). This can happen only at some critical values q_c of an effective parameter q of the potential well in the limit E→0+. We demonstrate this critical phenomenon in two simple analytically solvable models: square and exponential wells. However, in numerical calculations, even for these two models R(0)=0 is observed only as an extrapolation to zero energy from low energies, close to a precise critical value q_c. By numerical investigation of a variety of potential wells, we conclude that for a given potential well (symmetric or asymmetric), we can adjust the effective parameter q to obtain a low reflection at a low energy.

  19. Zero adjusted models with applications to analysing helminths count data.

    PubMed

    Chipeta, Michael G; Ngwira, Bagrey M; Simoonga, Christopher; Kazembe, Lawrence N

    2014-11-27

    It is common in public health and epidemiology that the outcome of interest is a count of event occurrences. Analysing these data using classical linear models is mostly inappropriate, even after transformation of the outcome variables, due to overdispersion. Zero-adjusted mixture count models such as zero-inflated and hurdle count models are applied to count data when over-dispersion and excess zeros exist. The main objective of the current paper is to apply such models to analyse risk factors associated with human helminths (S. haematobium), particularly in a case where there is a high proportion of zero counts. The data were collected during a community-based randomised controlled trial assessing the impact of mass drug administration (MDA) with praziquantel in Malawi, and a school-based cross-sectional epidemiology survey in Zambia. Count data models including traditional (Poisson and negative binomial) models, zero-modified models (zero-inflated Poisson and zero-inflated negative binomial) and hurdle models (Poisson logit hurdle and negative binomial logit hurdle) were fitted and compared. Using the Akaike information criterion (AIC), the negative binomial logit hurdle (NBLH) and zero-inflated negative binomial (ZINB) models showed the best performance in both datasets. With regard to capturing zero counts, these models performed better than the others. This paper showed that the zero-modified NBLH and ZINB models are more appropriate methods for the analysis of data with excess zeros. The choice between the hurdle and zero-inflated models should be based on the aim and endpoints of the study.
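    To make the model comparison concrete, the sketch below fits an intercept-only Poisson model and an intercept-only Poisson-logit hurdle model to simulated zero-heavy counts and compares them by AIC. It is a stripped-down illustration (no covariates, no negative binomial part), not the analysis reported in the paper; the simulated data and parameter counts are assumptions of the example.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

rng = np.random.default_rng(3)
n = 800
# Zero-heavy counts: 60% zeros, positives roughly follow a truncated Poisson(2.0).
y = np.zeros(n, dtype=int)
pos = rng.random(n) > 0.6
y[pos] = np.maximum(rng.poisson(2.0, pos.sum()), 1)   # crude truncation for the illustration

def poisson_aic(y):
    # One parameter (lambda); MLE is the sample mean.
    lam = y.mean()
    ll = np.sum(y * np.log(lam) - lam - gammaln(y + 1))
    return 2 * 1 - 2 * ll

def hurdle_aic(y):
    # Part 1: Bernoulli for zero vs. positive (MLE is the sample proportion).
    p = np.mean(y > 0)
    ll_bin = np.sum(np.where(y > 0, np.log(p), np.log(1 - p)))
    # Part 2: zero-truncated Poisson for the positive counts.
    yp = y[y > 0]
    def negll(lam):
        return -np.sum(yp * np.log(lam) - lam - gammaln(yp + 1)
                       - np.log1p(-np.exp(-lam)))
    lam = minimize_scalar(negll, bounds=(1e-6, 50), method="bounded").x
    return 2 * 2 - 2 * (ll_bin - negll(lam))

print("Poisson AIC:", poisson_aic(y), "Hurdle AIC:", hurdle_aic(y))
```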

  20. A large-scale study of the random variability of a coding sequence: a study on the CFTR gene.

    PubMed

    Modiano, Guido; Bombieri, Cristina; Ciminelli, Bianca Maria; Belpinati, Francesca; Giorgi, Silvia; Georges, Marie des; Scotet, Virginie; Pompei, Fiorenza; Ciccacci, Cinzia; Guittard, Caroline; Audrézet, Marie Pierre; Begnini, Angela; Toepfer, Michael; Macek, Milan; Ferec, Claude; Claustres, Mireille; Pignatti, Pier Franco

    2005-02-01

    Coding single nucleotide substitutions (cSNSs) have been studied on hundreds of genes using small samples (n(g) approximately 100-150 genes). In the present investigation, a large random European population sample (average n(g) approximately 1500) was studied for a single gene, the CFTR (Cystic Fibrosis Transmembrane conductance Regulator). The nonsynonymous (NS) substitutions exhibited, in accordance with previous reports, a mean probability of being polymorphic (q > 0.005), much lower than that of the synonymous (S) substitutions, but they showed a similar rate of subpolymorphic (q < 0.005) variability. This indicates that, in autosomal genes that may have harmful recessive alleles (nonduplicated genes with important functions), genetic drift overwhelms selection in the subpolymorphic range of variability, making disadvantageous alleles behave as neutral. These results imply that the majority of the subpolymorphic nonsynonymous alleles of these genes are selectively negative or even pathogenic.

  1. Dynamic stability of spinning pretwisted beams subjected to axial random forces

    NASA Astrophysics Data System (ADS)

    Young, T. H.; Gau, C. Y.

    2003-11-01

    This paper studies the dynamic stability of a pretwisted cantilever beam spinning along its longitudinal axis and subjected to an axial random force at the free end. The axial force is assumed as the sum of a constant force and a random process with a zero mean. Due to this axial force, the beam may experience parametric random instability. In this work, the finite element method is first applied to yield discretized system equations. The stochastic averaging method is then adopted to obtain Ito's equations for the response amplitudes of the system. Finally the mean-square stability criterion is utilized to determine the stability condition of the system. Numerical results show that the stability boundary of the system converges as the first three modes are taken into calculation. Before the convergence is reached, the stability condition predicted is not conservative enough.

  2. A Simple Game to Derive Lognormal Distribution

    ERIC Educational Resources Information Center

    Omey, E.; Van Gulck, S.

    2007-01-01

    In the paper we present a simple game that students can play in the classroom. The game can be used to show that random variables can behave in an unexpected way: the expected mean can tend to zero or to infinity; the variance can tend to zero or to infinity. The game can also be used to introduce the lognormal distribution. (Contains 1 table and…

  3. Estimate of standard deviation for a log-transformed variable using arithmetic means and standard deviations.

    PubMed

    Quan, Hui; Zhang, Ji

    2003-09-15

    Analyses of study variables are frequently based on log transformations. To calculate the power for detecting the between-treatment difference in the log scale, we need an estimate of the standard deviation of the log-transformed variable. However, in many situations a literature search only provides the arithmetic means and the corresponding standard deviations. Without individual log-transformed data to directly calculate the sample standard deviation, we need alternative methods to estimate it. This paper presents methods for estimating and constructing confidence intervals for the standard deviation of a log-transformed variable given the mean and standard deviation of the untransformed variable. It also presents methods for estimating the standard deviation of change from baseline in the log scale given the means and standard deviations of the untransformed baseline value, on-treatment value and change from baseline. Simulations and examples are provided to assess the performance of these estimates. Copyright 2003 John Wiley & Sons, Ltd.
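    One common conversion of this kind, assuming the untransformed variable is lognormally distributed, estimates the log-scale standard deviation from the arithmetic mean m and standard deviation s as sigma_log = sqrt(ln(1 + s²/m²)). The snippet below sketches that method-of-moments relation and checks it against simulated data; it is offered as an assumption about the type of estimate discussed, not a reproduction of the paper's formulas.

```python
import numpy as np

def log_scale_sd(mean, sd):
    """Method-of-moments estimate of the SD of ln(X) from the arithmetic mean
    and SD of X, assuming X is lognormally distributed:
        sigma_log = sqrt(ln(1 + (sd/mean)^2))."""
    return np.sqrt(np.log1p((sd / mean) ** 2))

# Quick check against simulated lognormal data.
rng = np.random.default_rng(0)
x = rng.lognormal(mean=1.2, sigma=0.5, size=200_000)
print(log_scale_sd(x.mean(), x.std(ddof=1)))   # should be close to 0.5
```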

  4. Two zero-flow pressure intercepts exist in autoregulating isolated skeletal muscle.

    PubMed

    Braakman, R; Sipkema, P; Westerhof, N

    1990-06-01

    The autoregulating vascular bed of the isolated canine extensor digitorum longus muscle was investigated for the possible existence of two positive zero-flow pressure axis intercepts, a tone-dependent one and a tone-independent one. An isolated preparation, perfused with autologous blood, was used to exclude effects of collateral flow and nervous and humoral regulation while autoregulation was left intact [mean autoregulatory gain 0.50 +/- 0.24 (SD)]. In a first series of experiments, the steady-state (zero flow) pressure axis intercept [mean 8.9 +/- 2.6 (SD) mmHg, tone independent] and the instantaneous (zero flow) pressure axis intercept [mean 28.5 +/- 9.9 (SD) mmHg, tone dependent] were determined as a function of venous pressure (range: 0-45 mmHg) and were independent of venous pressure until the venous pressure exceeded their respective values. Beyond this point the relations between the venous pressure and the steady-state and instantaneous pressure axis intercept followed the line of identity. The findings agree with the predictions of the vascular waterfall model. In a second series it was shown by means of administration of vasoactive drugs that the instantaneous pressure axis intercept is tone dependent, whereas the steady-state pressure axis intercept is not. It is concluded that there is a (proximal) tone-dependent zero-flow pressure at the arteriolar level and a (distal) tone-independent zero-flow pressure at the venous level.

  5. The quotient of normal random variables and application to asset price fat tails

    NASA Astrophysics Data System (ADS)

    Caginalp, Carey; Caginalp, Gunduz

    2018-06-01

    The quotient of random variables with normal distributions is examined and proven to have power law decay, with density f(x) ≃ f_0 x^{-2}, with the coefficient depending on the means and variances of the numerator and denominator and their correlation. We also obtain the conditional probability densities for each of the four quadrants given by the signs of the numerator and denominator for arbitrary correlation ρ ∈ [-1, 1). For ρ = -1 we obtain a particularly simple closed-form solution for all x ∈ R. The results are applied to a basic issue in economics and finance, namely the density of relative price changes. Classical finance stipulates a normal distribution of relative price changes, though empirical studies suggest a power law at the tail end. By considering the supply and demand in a basic price change model, we prove that the relative price change has a density that decays with an x^{-2} power law. Various parameter limits are established.
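    The x^{-2} tail is easy to verify numerically. The following sketch simulates the quotient of two correlated normal variables and checks that the survival probability P(|Q| > x) roughly halves when x doubles, as it should if the density decays like x^{-2}; the parameter values are arbitrary and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
# Quotient of two correlated normals with non-zero means.
rho = 0.3
num = rng.normal(1.0, 1.0, n)
den = rho * num + np.sqrt(1 - rho**2) * rng.normal(0.0, 1.0, n) + 0.5
q = num / den

# Empirical tail: P(|Q| > x) should decay like 1/x if the density ~ x^{-2}.
for x in (10, 20, 40, 80):
    print(x, np.mean(np.abs(q) > x))   # roughly halves each time x doubles
```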

  6. Sampling-Based Stochastic Sensitivity Analysis Using Score Functions for RBDO Problems with Correlated Random Variables

    DTIC Science & Technology

    2010-08-01

    This study presents a methodology for computing stochastic sensitivities with respect to the design variables, which are the...

  7. The impact of inter-annual rainfall variability on African savannas changes with mean rainfall.

    PubMed

    Synodinos, Alexis D; Tietjen, Britta; Lohmann, Dirk; Jeltsch, Florian

    2018-01-21

    Savannas are mixed tree-grass ecosystems whose dynamics are predominantly regulated by resource competition and the temporal variability in climatic and environmental factors such as rainfall and fire. Hence, increasing inter-annual rainfall variability due to climate change could have a significant impact on savannas. To investigate this, we used an ecohydrological model of stochastic differential equations and simulated African savanna dynamics along a gradient of mean annual rainfall (520-780 mm/year) for a range of inter-annual rainfall variabilities. Our simulations produced alternative states of grassland and savanna across the mean rainfall gradient. Increasing inter-annual variability had a negative effect on the savanna state under dry conditions (520 mm/year), and a positive effect under moister conditions (580-780 mm/year). The former resulted from the net negative effect of dry and wet extremes on trees. In semi-arid conditions (520 mm/year), dry extremes caused a loss of tree cover, which could not be recovered during wet extremes because of strong resource competition and the increased frequency of fires. At high mean rainfall (780 mm/year), increased variability enhanced savanna resilience. Here, resources were no longer limiting and the slow tree dynamics buffered against variability by maintaining a stable population during 'dry' extremes, providing the basis for growth during wet extremes. Simultaneously, high rainfall years had a weak marginal benefit on grass cover due to density-regulation and grazing. Our results suggest that the effects of the slow tree and fast grass dynamics on tree-grass interactions will become a major determinant of the savanna vegetation composition with increasing rainfall variability. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. The patient-zero problem with noisy observations

    NASA Astrophysics Data System (ADS)

    Altarelli, Fabrizio; Braunstein, Alfredo; Dall'Asta, Luca; Ingrosso, Alessandro; Zecchina, Riccardo

    2014-10-01

    A belief propagation approach has been recently proposed for the patient-zero problem in SIR epidemics. The patient-zero problem consists of finding the initial source of an epidemic outbreak given observations at a later time. In this work, we study a more difficult but related inference problem, in which observations are noisy and there is confusion between observed states. In addition to studying the patient-zero problem, we also tackle the problem of completing and correcting the observations to possibly find undiscovered infected individuals and false test results. Moreover, we devise a set of equations, based on the variational expression of the Bethe free energy, to find the patient-zero along with maximum-likelihood epidemic parameters. We show, by means of simulated epidemics, that this method is able to infer details on the past history of an epidemic outbreak based solely on the topology of the contact network and a single snapshot of partial and noisy observations.

  9. A randomized controlled trial investigating the effects of craniosacral therapy on pain and heart rate variability in fibromyalgia patients.

    PubMed

    Castro-Sánchez, Adelaida María; Matarán-Peñarrocha, Guillermo A; Sánchez-Labraca, Nuria; Quesada-Rubio, José Manuel; Granero-Molina, José; Moreno-Lorenzo, Carmen

    2011-01-01

    Fibromyalgia is a prevalent musculoskeletal disorder associated with widespread mechanical tenderness, fatigue, non-refreshing sleep, depressed mood and pervasive dysfunction of the autonomic nervous system: tachycardia, postural intolerance, Raynaud's phenomenon and diarrhoea. To determine the effects of craniosacral therapy on sensitive tender points and heart rate variability in patients with fibromyalgia. A randomized controlled trial. Ninety-two patients with fibromyalgia were randomly assigned to an intervention group or placebo group. Patients received treatments for 20 weeks. The intervention group underwent a craniosacral therapy protocol and the placebo group received sham treatment with disconnected magnetotherapy equipment. Pain intensity levels were determined by evaluating tender points, and heart rate variability was recorded by 24-hour Holter monitoring. After 20 weeks of treatment, the intervention group showed significant reduction in pain at 13 of the 18 tender points (P < 0.05). Significant differences in temporal standard deviation of RR segments, root mean square deviation of temporal standard deviation of RR segments and clinical global impression of improvement versus baseline values were observed in the intervention group but not in the placebo group. At two months and one year post therapy, the intervention group showed significant differences versus baseline in tender points at left occiput, left-side lower cervical, left epicondyle and left greater trochanter and significant differences in temporal standard deviation of RR segments, root mean square deviation of temporal standard deviation of RR segments and clinical global impression of improvement. Craniosacral therapy improved medium-term pain symptoms in patients with fibromyalgia.

  10. Mean Velocity vs. Mean Propulsive Velocity vs. Peak Velocity: Which Variable Determines Bench Press Relative Load With Higher Reliability?

    PubMed

    García-Ramos, Amador; Pestaña-Melero, Francisco L; Pérez-Castilla, Alejandro; Rojas, Francisco J; Gregory Haff, G

    2018-05-01

    García-Ramos, A, Pestaña-Melero, FL, Pérez-Castilla, A, Rojas, FJ, and Haff, GG. Mean velocity vs. mean propulsive velocity vs. peak velocity: which variable determines bench press relative load with higher reliability? J Strength Cond Res 32(5): 1273-1279, 2018-This study aimed to compare between 3 velocity variables (mean velocity [MV], mean propulsive velocity [MPV], and peak velocity [PV]): (a) the linearity of the load-velocity relationship, (b) the accuracy of general regression equations to predict relative load (%1RM), and (c) the between-session reliability of the velocity attained at each percentage of the 1-repetition maximum (%1RM). The full load-velocity relationship of 30 men was evaluated by means of linear regression models in the concentric-only and eccentric-concentric bench press throw (BPT) variants performed with a Smith machine. The 2 sessions of each BPT variant were performed within the same week separated by 48-72 hours. The main findings were as follows: (a) the MV showed the strongest linearity of the load-velocity relationship (median r = 0.989 for concentric-only BPT and 0.993 for eccentric-concentric BPT), followed by MPV (median r = 0.983 for concentric-only BPT and 0.980 for eccentric-concentric BPT), and finally PV (median r = 0.974 for concentric-only BPT and 0.969 for eccentric-concentric BPT); (b) the accuracy of the general regression equations to predict relative load (%1RM) from movement velocity was higher for MV (SEE = 3.80-4.76%1RM) than for MPV (SEE = 4.91-5.56%1RM) and PV (SEE = 5.36-5.77%1RM); and (c) the PV showed the lowest within-subjects coefficient of variation (3.50%-3.87%), followed by MV (4.05%-4.93%), and finally MPV (5.11%-6.03%). Taken together, these results suggest that the MV could be the most appropriate variable for monitoring the relative load (%1RM) in the BPT exercise performed in a Smith machine.

  11. Variable- and Person-Centered Approaches to the Analysis of Early Adolescent Substance Use: Linking Peer, Family, and Intervention Effects with Developmental Trajectories

    ERIC Educational Resources Information Center

    Connell, Arin M.; Dishion, Thomas J.; Deater-Deckard, Kirby

    2006-01-01

    This 4-year study of 698 young adolescents examined the covariates of early onset substance use from Grade 6 through Grade 9. The youth were randomly assigned to a family-centered Adolescent Transitions Program (ATP) condition. Variable-centered (zero-inflated Poisson growth model) and person-centered (latent growth mixture model) approaches were…

  12. Means and extremes: building variability into community-level climate change experiments.

    PubMed

    Thompson, Ross M; Beardall, John; Beringer, Jason; Grace, Mike; Sardina, Paula

    2013-06-01

    Experimental studies assessing climatic effects on ecological communities have typically applied static warming treatments. Although these studies have been informative, they have usually failed to incorporate either current or predicted future, patterns of variability. Future climates are likely to include extreme events which have greater impacts on ecological systems than changes in means alone. Here, we review the studies which have used experiments to assess impacts of temperature on marine, freshwater and terrestrial communities, and classify them into a set of 'generations' based on how they incorporate variability. The majority of studies have failed to incorporate extreme events. In terrestrial ecosystems in particular, experimental treatments have reduced temperature variability, when most climate models predict increased variability. Marine studies have tended to not concentrate on changes in variability, likely in part because the thermal mass of oceans will moderate variation. In freshwaters, climate change experiments have a much shorter history than in the other ecosystems, and have tended to take a relatively simple approach. We propose a new 'generation' of climate change experiments using down-scaled climate models which incorporate predicted changes in climatic variability, and describe a process for generating data which can be applied as experimental climate change treatments. © 2013 John Wiley & Sons Ltd/CNRS.

  13. Constraints on texture zero and cofactor zero models for neutrino mass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whisnant, K.; Liao, Jiajun; Marfatia, D.

    2014-06-24

    Imposing a texture or cofactor zero on the neutrino mass matrix reduces the number of independent parameters from nine to seven. Since five parameters have been measured, only two independent parameters would remain in such models. We find the allowed regions for single texture zero and single cofactor zero models. We also find strong similarities between single texture zero models with one mass hierarchy and single cofactor zero models with the opposite mass hierarchy. We show that this correspondence can be generalized to texture-zero and cofactor-zero models with the same homogeneous constraints on the elements and cofactors.

  14. Novel Zero-Heat-Flux Deep Body Temperature Measurement in Lower Extremity Vascular and Cardiac Surgery.

    PubMed

    Mäkinen, Marja-Tellervo; Pesonen, Anne; Jousela, Irma; Päivärinta, Janne; Poikajärvi, Satu; Albäck, Anders; Salminen, Ulla-Stina; Pesonen, Eero

    2016-08-01

    The aim of this study was to compare deep body temperature obtained using a novel noninvasive continuous zero-heat-flux temperature measurement system with core temperatures obtained using conventional methods. A prospective, observational study. Operating room of a university hospital. The study comprised 15 patients undergoing vascular surgery of the lower extremities and 15 patients undergoing cardiac surgery with cardiopulmonary bypass (CPB). Zero-heat-flux thermometry on the forehead and standard core temperature measurements. Body temperature was measured using a new thermometry system (SpotOn; 3M, St. Paul, MN) on the forehead and with conventional methods in the esophagus during vascular surgery (n = 15), and in the nasopharynx and pulmonary artery during cardiac surgery (n = 15). The agreement between SpotOn and the conventional methods was assessed using the Bland-Altman random-effects approach for repeated measures. The mean difference between SpotOn and the esophageal temperature during vascular surgery was +0.08°C (95% limits of agreement -0.25 to +0.40°C). During cardiac surgery, while off CPB, the mean difference between SpotOn and the pulmonary arterial temperature was -0.05°C (95% limits of agreement -0.56 to +0.47°C). Throughout cardiac surgery (on and off CPB), the mean difference between SpotOn and the nasopharyngeal temperature was -0.12°C (95% limits of agreement -0.94 to +0.71°C). Poor agreement between the SpotOn and nasopharyngeal temperatures was detected in hypothermia below approximately 32°C. According to this preliminary study, the deep body temperature measured using the zero-heat-flux system was in good agreement with standard core temperatures during lower extremity vascular and cardiac surgery. However, agreement was questionable during hypothermia below 32°C. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Random isotropic one-dimensional XY-model

    NASA Astrophysics Data System (ADS)

    Gonçalves, L. L.; Vieira, A. P.

    1998-01-01

    The 1D isotropic s = ½ XY-model (N sites), with random exchange interaction in a transverse random field, is considered. The random variables satisfy bimodal quenched distributions. The solution is obtained by using the Jordan-Wigner fermionization and a canonical transformation, reducing the problem to diagonalizing an N × N matrix, corresponding to a system of N noninteracting fermions. The calculations are performed numerically for N = 1000, and the field-induced magnetization at T = 0 is obtained by averaging the results for the different samples. For the dilute case, in the uniform field limit, the magnetization exhibits various discontinuities, which are the consequence of the existence of disconnected finite clusters distributed along the chain. Also in this limit, for finite exchange constants J_A and J_B, as the probability of J_A varies from one to zero, the saturation field is seen to vary from Γ_A to Γ_B, where Γ_A (Γ_B) is the value of the saturation field for the pure case with exchange constant equal to J_A (J_B).

  16. Free variable selection QSPR study to predict 19F chemical shifts of some fluorinated organic compounds using Random Forest and RBF-PLS methods

    NASA Astrophysics Data System (ADS)

    Goudarzi, Nasser

    2016-04-01

    In this work, two new and powerful chemometrics methods are applied for the modeling and prediction of the 19F chemical shift values of some fluorinated organic compounds. The radial basis function-partial least squares (RBF-PLS) and random forest (RF) methods are employed to construct models to predict the 19F chemical shifts. In this study, no separate variable selection method was used, since the RF method can serve as both a variable selection and a modeling technique. The effects of the important parameters affecting the RF prediction power, such as the number of trees (nt) and the number of randomly selected variables used to split each node (m), were investigated. The root-mean-square errors of prediction (RMSEP) for the training set and the prediction set for the RBF-PLS and RF models were 44.70, 23.86, 29.77, and 23.69, respectively. Also, the correlation coefficients of the prediction set for the RBF-PLS and RF models were 0.8684 and 0.9313, respectively. The results obtained reveal that the RF model can be used as a powerful chemometrics tool for quantitative structure-property relationship (QSPR) studies.
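    The two RF tuning parameters mentioned here map directly onto scikit-learn's RandomForestRegressor (nt → n_estimators, m → max_features). The sketch below illustrates that mapping on synthetic descriptor data; it is not the 19F dataset or the models fitted in the paper, and all parameter values are assumptions for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic descriptor matrix standing in for molecular descriptors;
# the real study models 19F chemical shifts from computed descriptors.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))
y = X[:, 0] * 3 - X[:, 1] ** 2 + rng.normal(scale=0.5, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# nt (number of trees) -> n_estimators; m (variables tried at each split) -> max_features.
rf = RandomForestRegressor(n_estimators=500, max_features=10, random_state=1)
rf.fit(X_tr, y_tr)

rmsep = mean_squared_error(y_te, rf.predict(X_te)) ** 0.5
print("RMSEP:", rmsep)
# feature_importances_ ranks descriptors, which is why RF can double as variable selection.
print("Top descriptors:", np.argsort(rf.feature_importances_)[::-1][:5])
```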

  17. Zero entropy continuous interval maps and MMLS-MMA property

    NASA Astrophysics Data System (ADS)

    Jiang, Yunping

    2018-06-01

    We prove that the flow generated by any continuous interval map with zero topological entropy is minimally mean-attractable and minimally mean-L-stable. One of the consequences is that any oscillating sequence is linearly disjoint from all flows generated by all continuous interval maps with zero topological entropy. In particular, the Möbius function is linearly disjoint from all flows generated by all continuous interval maps with zero topological entropy (Sarnak’s conjecture for continuous interval maps). Another consequence is a non-trivial example of a flow having discrete spectrum. We also define a log-uniform oscillating sequence and show a result in ergodic theory for comparison. This material is based upon work supported by the National Science Foundation. It is also partially supported by a collaboration grant from the Simons Foundation (grant number 523341) and PSC-CUNY awards and a grant from NSFC (grant number 11571122).

  18. Zero/zero rotorcraft certification issues. Volume 2: Plenary session presentations

    NASA Technical Reports Server (NTRS)

    Adams, Richard J.

    1988-01-01

    This report analyzes the Zero/Zero Rotorcraft Certification Issues from the perspectives of manufacturers, operators, researchers and the FAA. The basic premise behind this analysis is that zero/zero, or at least extremely low visibility, rotorcraft operations are feasible today from both a technological and an operational standpoint. The questions and issues that need to be resolved are: What certification requirements do we need to ensure safety? Can we develop procedures which capitalize on the performance and maneuvering capabilities unique to rotorcraft? Will extremely low visibility operations be economically feasible? This is Volume 2 of three volumes. It presents the operator perspectives (system needs), applicable technology and zero/zero concepts developed in the first 12 months of research on this project.

  19. Heart Rate and Blood Pressure Variability under Moon, Mars and Zero Gravity Conditions During Parabolic Flights

    NASA Astrophysics Data System (ADS)

    Aerts, Wouter; Joosen, Pieter; Widjaja, Devy; Varon, Carolina; Vandeput, Steven; Van Huffel, Sabine; Aubert, Andre E.

    2013-02-01

    Gravity changes during partial-G parabolic flights (0g - 0.16g - 0.38g) lead to changes in modulation of the autonomic nervous system (ANS), studied via heart rate variability (HRV) and blood pressure variability (BPV). HRV and BPV were assessed via classical time and frequency domain measures. Mean systolic and diastolic blood pressure both show increasing trends towards higher gravity levels. The parasympathetic and sympathetic modulation both show an increasing trend with decreasing gravity, although the modulation remains sympathetically predominant during reduced gravity. For the mean heart rate, a non-monotonic relation was found, which can be explained by the increased influence of stress on the heart rate. This study shows that there is a relation between changes in gravity and modulations in the ANS. With this in mind, countermeasures can be developed to reduce postflight orthostatic intolerance.

  20. Nonlinear Estimation of Discrete-Time Signals Under Random Observation Delay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caballero-Aguila, R.; Jimenez-Lopez, J. D.; Hermoso-Carazo, A.

    2008-11-06

    This paper presents an approximation to the nonlinear least-squares estimation problem of discrete-time stochastic signals using nonlinear observations with additive white noise which can be randomly delayed by one sampling time. The observation delay is modelled by a sequence of independent Bernoulli random variables whose values, zero or one, indicate that the real observation arrives on time or it is delayed and, hence, the available measurement to estimate the signal is not up-to-date. Assuming that the state-space model generating the signal is unknown and only the covariance functions of the processes involved in the observation equation are ready for use, a filtering algorithm based on linear approximations of the real observations is proposed.
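
    The observation model sketched above, in which each available measurement is, with some probability, the previous sample rather than the current one, is straightforward to simulate. The snippet below uses an arbitrary toy signal and noise level of our own choosing to generate the Bernoulli delay indicators and the resulting delayed measurement sequence; it does not implement the covariance-based filtering algorithm proposed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    T = 200
    p_delay = 0.3                  # probability that an observation is delayed

    # Toy signal (AR(1)) and nonlinear observation with additive white noise.
    x = np.zeros(T)
    for k in range(1, T):
        x[k] = 0.95 * x[k - 1] + 0.1 * rng.normal()
    z = np.sin(x) + 0.05 * rng.normal(size=T)   # on-time observations

    # Bernoulli delay indicators: 1 means the available measurement is the
    # previous observation (one-sample delay), 0 means it arrives on time.
    gamma = rng.random(T) < p_delay
    y = z.copy()
    y[1:] = np.where(gamma[1:], z[:-1], z[1:])

    print(f"fraction of delayed measurements: {gamma[1:].mean():.2f}")
    ```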

  1. Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach.

    PubMed

    Mohammadi, Tayeb; Kheiri, Soleiman; Sedehi, Morteza

    2016-01-01

    Recognizing the factors affecting the number of blood donations and blood deferrals has a major impact on blood transfusion. There is a positive correlation between the variables "number of blood donations" and "number of blood deferrals": as the number of returns for donation increases, so does the number of blood deferrals. On the other hand, because many donors never return to donate, there is an excess zero frequency for both of the above-mentioned variables. In this study, in order to account for the correlation and to explain the excess frequency of zeros, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donations and the number of blood deferrals. The data were analyzed using the Bayesian approach, applying noninformative priors in the presence and absence of covariates. Estimating the parameters of the model, that is, the correlation, the zero-inflation parameter, and the regression coefficients, was done through MCMC simulation. Eventually, the double-Poisson model, the bivariate Poisson model, and the bivariate zero-inflated Poisson model were fitted to the data and compared using the deviance information criterion (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models.
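
    One common way to generate correlated zero-inflated count pairs of the kind modeled above is to add a shared Poisson component to two independent Poisson counts and then apply a common structural-zero mask. The simulation sketch below uses that construction with arbitrary rates and zero-inflation probability; it only illustrates the data structure and is not the Bayesian bivariate ZIP fit of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    n = 5000
    pi = 0.4                           # zero-inflation probability (donors who never return)
    lam0, lam1, lam2 = 0.8, 1.5, 0.6   # shared and variable-specific Poisson rates

    # Trivariate reduction: a shared component induces positive correlation
    # between "number of donations" and "number of deferrals".
    u0 = rng.poisson(lam0, n)
    donations = rng.poisson(lam1, n) + u0
    deferrals = rng.poisson(lam2, n) + u0

    # Common structural-zero mask produces the excess (0, 0) pairs.
    zero_mask = rng.random(n) < pi
    donations[zero_mask] = 0
    deferrals[zero_mask] = 0

    print("P(both zero):", np.mean((donations == 0) & (deferrals == 0)))
    print("correlation: ", np.corrcoef(donations, deferrals)[0, 1])
    ```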

  2. Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach

    PubMed Central

    Mohammadi, Tayeb; Sedehi, Morteza

    2016-01-01

    Recognizing the factors affecting the number of blood donations and blood deferrals has a major impact on blood transfusion. There is a positive correlation between the variables “number of blood donations” and “number of blood deferrals”: as the number of returns for donation increases, so does the number of blood deferrals. On the other hand, because many donors never return to donate, there is an excess zero frequency for both of the above-mentioned variables. In this study, in order to account for the correlation and to explain the excess frequency of zeros, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donations and the number of blood deferrals. The data were analyzed using the Bayesian approach, applying noninformative priors in the presence and absence of covariates. Estimating the parameters of the model, that is, the correlation, the zero-inflation parameter, and the regression coefficients, was done through MCMC simulation. Eventually, the double-Poisson model, the bivariate Poisson model, and the bivariate zero-inflated Poisson model were fitted to the data and compared using the deviance information criterion (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models. PMID:27703493

  3. Blood pressure variability of two ambulatory blood pressure monitors.

    PubMed

    Kallem, Radhakrishna R; Meyers, Kevin E C; Cucchiara, Andrew J; Sawinski, Deirdre L; Townsend, Raymond R

    2014-04-01

    There are no data on the evaluation of blood pressure (BP) variability comparing two ambulatory BP monitors worn at the same time. Hence, this study was carried out to compare variability of BP in healthy untreated adults using two ambulatory BP monitors worn at the same time over an 8-h period. An Accutorr device was used to measure office BP in the dominant and nondominant arms of 24 participants. Simultaneous 8-h BP and heart rate data were measured in 24 untreated adult volunteers by Mobil-O-Graph (worn for an additional 16 h after removing the Spacelabs monitor) and Spacelabs with both random (N=12) and nonrandom (N=12) assignment of each device to the dominant arm. Average real variability (ARV), SD, coefficient of variation, and variation independent of mean were calculated for systolic blood pressure, diastolic blood pressure, mean arterial pressure, and pulse pressure (PP). Whether the Mobil-O-Graph was applied to the dominant or the nondominant arm, the ARV of mean systolic BP (P=0.003 nonrandomized; P=0.010 randomized) and PP (P=0.009 nonrandomized; P=0.005 randomized) remained significantly higher than that obtained with the Spacelabs device, whereas the ARV of the mean arterial pressure was not significantly different. The average BP readings and ARVs for systolic blood pressure and PP obtained by the Mobil-O-Graph were considerably higher for the daytime than the night-time. Given the emerging interest in the effect of BP variability on health outcomes, the accuracy of its measurement is important. Our study raises concerns about the accuracy of pooling international ambulatory blood pressure monitoring variability data using different devices.
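
    Two of the variability summaries named above are simple to compute directly: average real variability (ARV) is the mean absolute difference between successive readings, and the coefficient of variation (CV) is the SD scaled by the mean. The sketch below applies both to a made-up systolic BP series; variation independent of mean (VIM) additionally requires fitting the SD-mean relationship across subjects and is omitted here.

    ```python
    import numpy as np

    def average_real_variability(x):
        """ARV: mean absolute difference between successive readings."""
        x = np.asarray(x, dtype=float)
        return np.mean(np.abs(np.diff(x)))

    def coefficient_of_variation(x):
        """CV (%): standard deviation scaled by the mean."""
        x = np.asarray(x, dtype=float)
        return 100.0 * x.std(ddof=1) / x.mean()

    # Made-up 8-hour systolic BP readings (mmHg), every 30 minutes.
    sbp = np.array([118, 122, 119, 125, 130, 127, 121, 124,
                    128, 126, 123, 120, 129, 131, 125, 122])

    print(f"ARV = {average_real_variability(sbp):.2f} mmHg")
    print(f"CV  = {coefficient_of_variation(sbp):.2f} %")
    ```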

  4. On the Spike Train Variability Characterized by Variance-to-Mean Power Relationship.

    PubMed

    Koyama, Shinsuke

    2015-07-01

    We propose a statistical method for modeling the non-Poisson variability of spike trains observed in a wide range of brain regions. Central to our approach is the assumption that the variance and the mean of interspike intervals are related by a power function characterized by two parameters: the scale factor and exponent. It is shown that this single assumption allows the variability of spike trains to have an arbitrary scale and various dependencies on the firing rate in the spike count statistics, as well as in the interval statistics, depending on the two parameters of the power function. We also propose a statistical model for spike trains that exhibits the variance-to-mean power relationship. Based on this, a maximum likelihood method is developed for inferring the parameters from rate-modulated spike trains. The proposed method is illustrated on simulated and experimental spike trains.
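
    At its crudest, the variance-to-mean power relationship var = phi * mean^alpha can be estimated by a straight-line fit on log-log axes across groups of interspike intervals. The sketch below does exactly that on synthetic gamma-distributed ISIs generated to follow a chosen power law; it is only this descriptive step, not the likelihood-based inference for rate-modulated spike trains developed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic ISI groups whose variance follows var = phi * mean**alpha.
    true_phi, true_alpha = 0.5, 1.5
    means = np.linspace(0.05, 0.5, 10)           # mean ISI per group (seconds)

    group_mean, group_var = [], []
    for m in means:
        sd = np.sqrt(true_phi * m ** true_alpha)
        # Gamma-distributed ISIs with the requested mean and variance.
        shape = (m / sd) ** 2
        scale = sd ** 2 / m
        isi = rng.gamma(shape, scale, size=2000)
        group_mean.append(isi.mean())
        group_var.append(isi.var(ddof=1))

    # Straight-line fit on log-log axes: log var = log phi + alpha * log mean.
    alpha_hat, log_phi_hat = np.polyfit(np.log(group_mean), np.log(group_var), 1)
    print(f"estimated exponent ~ {alpha_hat:.2f}, scale factor ~ {np.exp(log_phi_hat):.2f}")
    ```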

  5. Motor Variability Arises from a Slow Random Walk in Neural State

    PubMed Central

    Chaisanguanthum, Kris S.; Shen, Helen H.

    2014-01-01

    Even well practiced movements cannot be repeated without variability. This variability is thought to reflect “noise” in movement preparation or execution. However, we show that, for both professional baseball pitchers and macaque monkeys making reaching movements, motor variability can be decomposed into two statistical components, a slowly drifting mean and fast trial-by-trial fluctuations about the mean. The preparatory activity of dorsal premotor cortex/primary motor cortex neurons in monkey exhibits similar statistics. Although the neural and behavioral drifts appear to be correlated, neural activity does not account for trial-by-trial fluctuations in movement, which must arise elsewhere, likely downstream. The statistics of this drift are well modeled by a double-exponential autocorrelation function, with time constants similar across the neural and behavioral drifts in two monkeys, as well as the drifts observed in baseball pitching. These time constants can be explained by an error-corrective learning process and agree with learning rates measured directly in previous experiments. Together, these results suggest that the central contributions to movement variability are not simply trial-by-trial fluctuations but are rather the result of longer-timescale processes that may arise from motor learning. PMID:25186752

  6. Indian Summer Monsoon Rainfall: Implications of Contrasting Trends in the Spatial Variability of Means and Extremes

    PubMed Central

    Ghosh, Subimal; Vittal, H.; Sharma, Tarul; Karmakar, Subhankar; Kasiviswanathan, K. S.; Dhanesh, Y.; Sudheer, K. P.; Gunthe, S. S.

    2016-01-01

    India’s agricultural output, economy, and societal well-being are strappingly dependent on the stability of summer monsoon rainfall, its variability and extremes. Spatial aggregate of intensity and frequency of extreme rainfall events over Central India are significantly increasing, while at local scale they are spatially non-uniform with increasing spatial variability. The reasons behind such increase in spatial variability of extremes are poorly understood and the trends in mean monsoon rainfall have been greatly overlooked. Here, by using multi-decadal gridded daily rainfall data over entire India, we show that the trend in spatial variability of mean monsoon rainfall is decreasing as exactly opposite to that of extremes. The spatial variability of extremes is attributed to the spatial variability of the convective rainfall component. Contrarily, the decrease in spatial variability of the mean rainfall over India poses a pertinent research question on the applicability of large scale inter-basin water transfer by river inter-linking to address the spatial variability of available water in India. We found a significant decrease in the monsoon rainfall over major water surplus river basins in India. Hydrological simulations using a Variable Infiltration Capacity (VIC) model also revealed that the water yield in surplus river basins is decreasing but it is increasing in deficit basins. These findings contradict the traditional notion of dry areas becoming drier and wet areas becoming wetter in response to climate change in India. This result also calls for a re-evaluation of planning for river inter-linking to supply water from surplus to deficit river basins. PMID:27463092

  7. Indian Summer Monsoon Rainfall: Implications of Contrasting Trends in the Spatial Variability of Means and Extremes.

    PubMed

    Ghosh, Subimal; Vittal, H; Sharma, Tarul; Karmakar, Subhankar; Kasiviswanathan, K S; Dhanesh, Y; Sudheer, K P; Gunthe, S S

    2016-01-01

    India's agricultural output, economy, and societal well-being are strappingly dependent on the stability of summer monsoon rainfall, its variability and extremes. Spatial aggregate of intensity and frequency of extreme rainfall events over Central India are significantly increasing, while at local scale they are spatially non-uniform with increasing spatial variability. The reasons behind such increase in spatial variability of extremes are poorly understood and the trends in mean monsoon rainfall have been greatly overlooked. Here, by using multi-decadal gridded daily rainfall data over entire India, we show that the trend in spatial variability of mean monsoon rainfall is decreasing as exactly opposite to that of extremes. The spatial variability of extremes is attributed to the spatial variability of the convective rainfall component. Contrarily, the decrease in spatial variability of the mean rainfall over India poses a pertinent research question on the applicability of large scale inter-basin water transfer by river inter-linking to address the spatial variability of available water in India. We found a significant decrease in the monsoon rainfall over major water surplus river basins in India. Hydrological simulations using a Variable Infiltration Capacity (VIC) model also revealed that the water yield in surplus river basins is decreasing but it is increasing in deficit basins. These findings contradict the traditional notion of dry areas becoming drier and wet areas becoming wetter in response to climate change in India. This result also calls for a re-evaluation of planning for river inter-linking to supply water from surplus to deficit river basins.

  8. On the Asymmetric Zero-Range in the Rarefaction Fan

    NASA Astrophysics Data System (ADS)

    Gonçalves, Patrícia

    2014-02-01

    We consider one-dimensional asymmetric zero-range processes starting from a step decreasing profile leading, in the hydrodynamic limit, to the rarefaction fan of the associated hydrodynamic equation. Under that initial condition, and for totally asymmetric jumps, we show that the weighted sum of joint probabilities for second class particles sharing the same site is convergent and we compute its limit. For partially asymmetric jumps, we derive the Law of Large Numbers for a second class particle, under the initial configuration in which all positive sites are empty, all negative sites are occupied with infinitely many first class particles and there is a single second class particle at the origin. Moreover, we prove that among the infinite characteristics emanating from the position of the second class particle it picks randomly one of them. The randomness is given in terms of the weak solution of the hydrodynamic equation, through some sort of renormalization function. By coupling the constant-rate totally asymmetric zero-range with the totally asymmetric simple exclusion, we derive limiting laws for more general initial conditions.

  9. Modeling health survey data with excessive zero and K responses.

    PubMed

    Lin, Ting Hsiang; Tsai, Min-Hsiao

    2013-04-30

    Zero-inflated Poisson regression is a popular tool used to analyze data with excessive zeros. Although much work has already been performed to fit zero-inflated data, most models heavily depend on special features of the individual data. To be specific, this means that there is a sizable group of respondents who endorse the same answers making the data have peaks. In this paper, we propose a new model with the flexibility to model excessive counts other than zero, and the model is a mixture of multinomial logistic and Poisson regression, in which the multinomial logistic component models the occurrence of excessive counts, including zeros, K (where K is a positive integer) and all other values. The Poisson regression component models the counts that are assumed to follow a Poisson distribution. Two examples are provided to illustrate our models when the data have counts containing many ones and sixes. As a result, the zero-inflated and K-inflated models exhibit a better fit than the zero-inflated Poisson and standard Poisson regressions. Copyright © 2012 John Wiley & Sons, Ltd.
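
    The mixture described above, a multinomial-logistic component deciding between structural zeros, structural Ks, and ordinary Poisson counts, can be simulated directly, which makes the peaks at 0 and K visible in the empirical frequencies. The sketch below uses arbitrary mixture weights and a Poisson rate of our own choosing; it is a data-generating illustration, not the fitting procedure of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    n, K, lam = 10000, 6, 3.0
    p_zero, p_K = 0.20, 0.15          # structural-zero and structural-K weights
    p_poisson = 1.0 - p_zero - p_K

    # Mixture component for each observation: 0 -> structural zero,
    # 1 -> structural K, 2 -> ordinary Poisson count.
    component = rng.choice([0, 1, 2], size=n, p=[p_zero, p_K, p_poisson])

    counts = rng.poisson(lam, size=n)
    counts[component == 0] = 0        # excess zeros
    counts[component == 1] = K        # excess K responses

    values, freqs = np.unique(counts, return_counts=True)
    for v, f in zip(values[:10], freqs[:10]):
        print(f"count {v}: relative frequency {f / n:.3f}")
    ```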

  10. Dirac directional emission in anisotropic zero refractive index photonic crystals.

    PubMed

    He, Xin-Tao; Zhong, Yao-Nan; Zhou, You; Zhong, Zhi-Chao; Dong, Jian-Wen

    2015-08-14

    A certain class of photonic crystals with conical dispersion is known to behave as isotropic zero-refractive-index medium. However, the discrete building blocks in such photonic crystals are limited to construct multidirectional devices, even for high-symmetric photonic crystals. Here, we show multidirectional emission from low-symmetric photonic crystals with semi-Dirac dispersion at the zone center. We demonstrate that such low-symmetric photonic crystal can be considered as an effective anisotropic zero-refractive-index medium, as long as there is only one propagation mode near Dirac frequency. Four kinds of Dirac multidirectional emitters are achieved with the channel numbers of five, seven, eleven, and thirteen, respectively. Spatial power combination for such kind of Dirac directional emitter is also verified even when multiple sources are randomly placed in the anisotropic zero-refractive-index photonic crystal.

  11. Dirac directional emission in anisotropic zero refractive index photonic crystals

    PubMed Central

    He, Xin-Tao; Zhong, Yao-Nan; Zhou, You; Zhong, Zhi-Chao; Dong, Jian-Wen

    2015-01-01

    A certain class of photonic crystals with conical dispersion is known to behave as isotropic zero-refractive-index medium. However, the discrete building blocks in such photonic crystals are limited to construct multidirectional devices, even for high-symmetric photonic crystals. Here, we show multidirectional emission from low-symmetric photonic crystals with semi-Dirac dispersion at the zone center. We demonstrate that such low-symmetric photonic crystal can be considered as an effective anisotropic zero-refractive-index medium, as long as there is only one propagation mode near Dirac frequency. Four kinds of Dirac multidirectional emitters are achieved with the channel numbers of five, seven, eleven, and thirteen, respectively. Spatial power combination for such kind of Dirac directional emitter is also verified even when multiple sources are randomly placed in the anisotropic zero-refractive-index photonic crystal. PMID:26271208

  12. Effect of chiral symmetry on chaotic scattering from Majorana zero modes.

    PubMed

    Schomerus, H; Marciani, M; Beenakker, C W J

    2015-04-24

    In many of the experimental systems that may host Majorana zero modes, a so-called chiral symmetry exists that protects overlapping zero modes from splitting up. This symmetry is operative in a superconducting nanowire that is narrower than the spin-orbit scattering length, and at the Dirac point of a superconductor-topological insulator heterostructure. Here we show that chiral symmetry strongly modifies the dynamical and spectral properties of a chaotic scatterer, even if it binds only a single zero mode. These properties are quantified by the Wigner-Smith time-delay matrix Q = -iℏ S† dS/dE, the Hermitian energy derivative of the scattering matrix, related to the density of states by ρ = (2πℏ)⁻¹ Tr Q. We compute the probability distribution of Q and ρ, dependent on the number ν of Majorana zero modes, in the chiral ensembles of random-matrix theory. Chiral symmetry is essential for a significant ν dependence.

  13. Mean first-passage times of non-Markovian random walkers in confinement.

    PubMed

    Guérin, T; Levernier, N; Bénichou, O; Voituriez, R

    2016-06-16

    The first-passage time, defined as the time a random walker takes to reach a target point in a confining domain, is a key quantity in the theory of stochastic processes. Its importance comes from its crucial role in quantifying the efficiency of processes as varied as diffusion-limited reactions, target search processes or the spread of diseases. Most methods of determining the properties of first-passage time in confined domains have been limited to Markovian (memoryless) processes. However, as soon as the random walker interacts with its environment, memory effects cannot be neglected: that is, the future motion of the random walker does not depend only on its current position, but also on its past trajectory. Examples of non-Markovian dynamics include single-file diffusion in narrow channels, or the motion of a tracer particle either attached to a polymeric chain or diffusing in simple or complex fluids such as nematics, dense soft colloids or viscoelastic solutions. Here we introduce an analytical approach to calculate, in the limit of a large confining volume, the mean first-passage time of a Gaussian non-Markovian random walker to a target. The non-Markovian features of the dynamics are encompassed by determining the statistical properties of the fictitious trajectory that the random walker would follow after the first-passage event takes place, which are shown to govern the first-passage time kinetics. This analysis is applicable to a broad range of stochastic processes, which may be correlated at long times. Our theoretical predictions are confirmed by numerical simulations for several examples of non-Markovian processes, including the case of fractional Brownian motion in one and higher dimensions. These results reveal, on the basis of Gaussian processes, the importance of memory effects in first-passage statistics of non-Markovian random walkers in confinement.

  14. Mean first-passage times of non-Markovian random walkers in confinement

    NASA Astrophysics Data System (ADS)

    Guérin, T.; Levernier, N.; Bénichou, O.; Voituriez, R.

    2016-06-01

    The first-passage time, defined as the time a random walker takes to reach a target point in a confining domain, is a key quantity in the theory of stochastic processes. Its importance comes from its crucial role in quantifying the efficiency of processes as varied as diffusion-limited reactions, target search processes or the spread of diseases. Most methods of determining the properties of first-passage time in confined domains have been limited to Markovian (memoryless) processes. However, as soon as the random walker interacts with its environment, memory effects cannot be neglected: that is, the future motion of the random walker does not depend only on its current position, but also on its past trajectory. Examples of non-Markovian dynamics include single-file diffusion in narrow channels, or the motion of a tracer particle either attached to a polymeric chain or diffusing in simple or complex fluids such as nematics, dense soft colloids or viscoelastic solutions. Here we introduce an analytical approach to calculate, in the limit of a large confining volume, the mean first-passage time of a Gaussian non-Markovian random walker to a target. The non-Markovian features of the dynamics are encompassed by determining the statistical properties of the fictitious trajectory that the random walker would follow after the first-passage event takes place, which are shown to govern the first-passage time kinetics. This analysis is applicable to a broad range of stochastic processes, which may be correlated at long times. Our theoretical predictions are confirmed by numerical simulations for several examples of non-Markovian processes, including the case of fractional Brownian motion in one and higher dimensions. These results reveal, on the basis of Gaussian processes, the importance of memory effects in first-passage statistics of non-Markovian random walkers in confinement.

  15. Quantum Entanglement in Random Physical States

    NASA Astrophysics Data System (ADS)

    Hamma, Alioscia; Santra, Siddhartha; Zanardi, Paolo

    2012-07-01

    Most states in the Hilbert space are maximally entangled. This fact has proven useful to investigate—among other things—the foundations of statistical mechanics. Unfortunately, most states in the Hilbert space of a quantum many-body system are not physically accessible. We define physical ensembles of states acting on random factorized states by a circuit of length k of random and independent unitaries with local support. We study the typicality of entanglement by means of the purity of the reduced state. We find that for a time k=O(1), the typical purity obeys the area law. Thus, the upper bounds for area law are actually saturated, on average, with a variance that goes to zero for large systems. Similarly, we prove that by means of local evolution a subsystem of linear dimensions L is typically entangled with a volume law when the time scales with the size of the subsystem. Moreover, we show that for large values of k the reduced state becomes very close to the completely mixed state.

  16. Sound propagation through a variable area duct - Experiment and theory

    NASA Technical Reports Server (NTRS)

    Silcox, R. J.; Lester, H. C.

    1981-01-01

    A comparison of experiment and theory has been made for the propagation of sound through a variable area axisymmetric duct with zero mean flow. Measurement of the acoustic pressure field on both sides of the constricted test section was resolved on a modal basis for various spinning mode sources. Transmitted and reflected modal amplitudes and phase angles were compared with finite element computations. Good agreement between experiment and computation was obtained over a wide range of frequencies and modal transmission variations. The study suggests that modal transmission through a variable area duct is governed by the throat modal cut-off ratio.

  17. 40 CFR 180.5 - Zero tolerances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... tolerances. A zero tolerance means that no amount of the pesticide chemical may remain on the raw... raw agricultural commodity may be established because, among other reasons: (a) A safe level of the pesticide chemical in the diet of two different species of warm-blooded animals has not been reliably...

  18. 40 CFR 180.5 - Zero tolerances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... tolerances. A zero tolerance means that no amount of the pesticide chemical may remain on the raw... raw agricultural commodity may be established because, among other reasons: (a) A safe level of the pesticide chemical in the diet of two different species of warm-blooded animals has not been reliably...

  19. Marginalized zero-inflated Poisson models with missing covariates.

    PubMed

    Benecha, Habtamu K; Preisser, John S; Divaris, Kimon; Herring, Amy H; Das, Kalyan

    2018-05-11

    Unlike zero-inflated Poisson regression, marginalized zero-inflated Poisson (MZIP) models for counts with excess zeros provide estimates with direct interpretations for the overall effects of covariates on the marginal mean. In the presence of missing covariates, MZIP and many other count data models are ordinarily fitted using complete case analysis methods due to lack of appropriate statistical methods and software. This article presents an estimation method for MZIP models with missing covariates. The method, which is applicable to other missing data problems, is illustrated and compared with complete case analysis by using simulations and dental data on the caries preventive effects of a school-based fluoride mouthrinse program. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Time to rehabilitation in the burn population: incidence of zero onset days in the UDSMR national dataset.

    PubMed

    Schneider, Jeffrey C; Tan, Wei-Han; Goldstein, Richard; Mix, Jacqueline M; Niewczyk, Paulette; Divita, Margaret A; Ryan, Colleen M; Gerrard, Paul B; Kowalske, Karen; Zafonte, Ross

    2013-01-01

    A preliminary investigation of the burn rehabilitation population found large variability in zero onset day frequency between facilities. Onset days is defined as the time from injury to inpatient rehabilitation admission; this variable has not been investigated in burn patients previously. This study explored whether this finding was a facility-based phenomenon or a characteristic of burn inpatient rehabilitation patients. This study was a secondary analysis of Uniform Data System for Medical Rehabilitation (UDSmr) data from 2002 to 2007 examining inpatient rehabilitation characteristics among patients with burn injuries. Exclusion criteria were age less than 18 years and discharge against medical advice. Comparisons of demographic, medical and functional data were made between facilities with a high frequency of zero onset days versus facilities with a low frequency of zero onset days. A total of 4738 patients from 455 inpatient rehabilitation facilities were included. Twenty-three percent of the population exhibited zero onset days (n = 1103). Sixteen facilities contained zero onset patients; two facilities accounted for 97% of the zero onset subgroup. Facilities with a high frequency of zero onset day patients demonstrated significant differences in demographic, medical, and functional variables compared to the remainder of the study population. There were significantly more zero onset day admissions among burn patients (23%) than other diagnostic groups (0.5-3.6%) in the Uniform Data System for Medical Rehabilitation database, but the majority (97%) came from two inpatient rehabilitation facilities. It is unexpected for patients with significant burn injury to be admitted to a rehabilitation facility on the day of injury. Future studies investigating burn rehabilitation outcomes using the Uniform Data System for Medical Rehabilitation database should exclude facilities with a high percentage of zero onset days, which are not representative of the burn inpatient

  1. Optimal Auxiliary-Covariate Based Two-Phase Sampling Design for Semiparametric Efficient Estimation of a Mean or Mean Difference, with Application to Clinical Trials

    PubMed Central

    Gilbert, Peter B.; Yu, Xuesong; Rotnitzky, Andrea

    2014-01-01

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semi-parametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. Simulations are performed to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. Proofs and R code are provided. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean “importance-weighted” breadth (Y) of the T cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y, and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y∣W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method. PMID:24123289

  2. Axial Fatigue Tests at Zero Mean Stress of 24S-T and 75S-T Aluminum-alloy Strips with a Central Circular Hole

    NASA Technical Reports Server (NTRS)

    Brueggeman, W C; Mayer, M JR

    1948-01-01

    Axial fatigue tests at zero mean stress have been made on 0.032- and 0.064-inch 24S-T and 0.032-inch 75S-T sheet-metal specimens 1/4, 1/2, 1, and 2 inches wide without a hole and with central holes giving a range of the ratio of hole diameter D to specimen width W from 0.01 to 0.95. No systematic difference was noted between the results for the 0.032-inch and the 0.064-inch specimens, although the latter seemed the more consistent. In general, the fatigue strength based on the minimum section dropped sharply as the ratio D/W was increased from zero to about 0.25. The plain specimens showed quite a pronounced decrease in fatigue strength with increasing width. The holed specimens showed only slight and rather inconclusive evidence of this size effect. The fatigue stress-concentration factor was higher for 75S-T than for 24S-T alloy. Evidence was found that a very small hole would not cause any reduction in fatigue strength.

  3. Conditional modeling of antibody titers using a zero-inflated poisson random effects model: application to Fabrazyme.

    PubMed

    Bonate, Peter L; Sung, Crystal; Welch, Karen; Richards, Susan

    2009-10-01

    Patients that are exposed to biotechnology-derived therapeutics often develop antibodies to the therapeutic, the magnitude of which is assessed by measuring antibody titers. A statistical approach for analyzing antibody titer data conditional on seroconversion is presented. The proposed method is to first transform the antibody titer data based on a geometric series using a common ratio of 2 and a scale factor of 50 and then analyze the exponent using a zero-inflated or hurdle model assuming a Poisson or negative binomial distribution with random effects to account for patient heterogeneity. Patient specific covariates can be used to model the probability of developing an antibody response, i.e., seroconversion, as well as the magnitude of the antibody titer itself. The method was illustrated using antibody titer data from 87 male seroconverted Fabry patients receiving Fabrazyme. Titers from five clinical trials were collected over 276 weeks of therapy with anti-Fabrazyme IgG titers ranging from 100 to 409,600 after exclusion of seronegative patients. The best model to explain seroconversion was a zero-inflated Poisson (ZIP) model where cumulative dose (under a constant dose regimen of dosing every 2 weeks) influenced the probability of seroconversion. There was an 80% chance of seroconversion when the cumulative dose reached 210 mg (90% confidence interval: 194-226 mg). No difference in antibody titers was noted between Japanese or Western patients. Once seroconverted, antibody titers did not remain constant but decreased in an exponential manner from an initial magnitude to a new lower steady-state value. The expected titer after the new steady-state titer had been achieved was 870 (90% CI: 630-1109). The half-life to the new steady-state value after seroconversion was 44 weeks (90% CI: 17-70 weeks). Time to seroconversion did not appear to be correlated with titer at the time of seroconversion. The method can be adequately used to model antibody titer data.
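
    The titer transform described above maps a titer of the form 50·2^k back to the exponent k, which can then be modeled as a count. The sketch below applies that transform to made-up titers and fits a plain, intercept-only zero-inflated Poisson with statsmodels; the random effects, covariates, hurdle variant, and negative binomial alternative considered in the paper are not included.

    ```python
    import numpy as np
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    rng = np.random.default_rng(5)

    # Made-up titers on the geometric scale 50 * 2**k (0 = no antibody response).
    k_true = np.where(rng.random(300) < 0.3, 0, rng.poisson(4.0, size=300))
    titer = np.where(k_true == 0, 0, 50 * 2 ** k_true)

    # Transform back to the exponent k = log2(titer / 50); keep 0 for non-responders.
    k = np.where(titer > 0, np.log2(np.maximum(titer, 50) / 50.0), 0).astype(int)

    # Intercept-only zero-inflated Poisson fit on the exponents.
    exog = np.ones((len(k), 1))
    model = ZeroInflatedPoisson(k, exog, exog_infl=exog, inflation='logit')
    result = model.fit(maxiter=200, disp=False)
    print(result.params)   # inflation intercept (logit scale), count intercept (log scale)
    ```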

  4. Perceptual disturbances predicted in zero-g through three-dimensional modeling.

    PubMed

    Holly, Jan E

    2003-01-01

    Perceptual disturbances in zero-g and 1-g differ. For example, the vestibular coriolis (or "cross-coupled") effect is weaker in zero-g. In 1-g, blindfolded subjects rotating on-axis experience perceptual disturbances upon head tilt, but the effects diminish in zero-g. Head tilts during centrifugation in zero-g and 1-g are investigated here by means of three-dimensional modeling, using a model that was previously used to explain the zero-g reduction of the on-axis vestibular coriolis effect. The model's foundation comprises the laws of physics, including linear-angular interactions in three dimensions. Addressed is the question: In zero-g, will the vestibular coriolis effect be as weak during centrifugation as during on-axis rotation? Centrifugation in 1-g was simulated first, with the subject supine, head toward center. The most noticeable result concerned direction of head yaw. For clockwise centrifuge rotation, greater perceptual effects arose in simulations during yaw counterclockwise (as viewed from the top of the head) than for yaw clockwise. Centrifugation in zero-g was then simulated with the same "supine" orientation. The result: In zero-g the simulated vestibular coriolis effect was greater during centrifugation than during on-axis rotation. In addition, clockwise-counterclockwise differences did not appear in zero-g, in contrast to the differences that appear in 1-g.

  5. The mean and variability of a floral trait have opposing effects on fitness traits

    PubMed Central

    Dai, Can; Liang, Xijian; Ren, Jie; Liao, Minglin; Li, Jiyang; Galloway, Laura F.

    2016-01-01

    Background and Aims Floral traits are essential for ensuring successful pollination and reproduction in flowering plants. In particular, style and anther positions are key for pollination accuracy and efficiency. Variation in these traits among individuals has been well studied, but less is known about variation within flowers and plants and its effect on pollination and reproductive success. Methods Style deflexion is responsible for herkogamy and important for pollen deposition in Passiflora incarnata. The degree of deflexion may vary among stigmas within flowers as well as among flowers. We measured the variability of style deflexion at both the flower and the plant level. The fitness consequences of the mean and variation of style deflexion were then evaluated under natural pollination by determining their relationship to pollen deposition, seed production and average seed weight using structural equation modelling. In addition, the relationship between style deflexion and self-pollen deposition was estimated in a greenhouse experiment. Key Results We found greater variation in style deflexion within flowers and plants than among plants. Variation of style deflexion at the flower and plant level was positively correlated, suggesting that variability in style deflexion may be a distinct trait in P. incarnata. Lower deflexion and reduced variation in that deflexion increased pollen deposition, which in turn increased seed number. However, lower styles also increased self-pollen deposition. In contrast, higher deflexion and greater variability of that deflexion increased variation in pollen deposition, which resulted in heavier seeds. Conclusions Variability of style deflexion and therefore stigma placement, independent from the mean, appears to be a property of individual P. incarnata plants. The mean and variability of style deflexion in P. incarnata affected seed number and seed weight in contrasting ways, through the quantity and potentially quality of pollen

  6. Nasal Jet-CPAP (variable flow) versus Bubble-CPAP in preterm infants with respiratory distress: an open label, randomized controlled trial.

    PubMed

    Bhatti, A; Khan, J; Murki, S; Sundaram, V; Saini, S S; Kumar, P

    2015-11-01

    To compare the failure rates between Jet continuous positive airway pressure device (J-CPAP-variable flow) and Bubble continuous positive airway device (B-CPAP) in preterm infants with respiratory distress. Preterm newborns <34 weeks gestation with onset of respiratory distress within 6 h of life were randomized to receive J-CPAP (a variable flow device) or B-CPAP (continuous flow device). A standardized protocol was followed for titration, weaning and removal of CPAP. Pressure was monitored close to the nares in both the devices every 6 hours and settings were adjusted to provide desired CPAP. The primary outcome was CPAP failure rate within 72 h of life. Secondary outcomes were CPAP failure within 7 days of life, need for surfactant post-randomization, time to CPAP failure, duration of CPAP and complications of prematurity. An intention to treat analysis was done. One-hundred seventy neonates were randomized, 80 to J-CPAP and 90 to B-CPAP. CPAP failure rates within 72 h were similar in infants who received J-CPAP and in those who received B-CPAP (29 versus 21%; relative risks 1.4 (0.8 to 2.3), P=0.25). Mean (95% confidence intervals) time to CPAP failure was 59 h (54 to 64) in the Jet CPAP group in comparison with 65 h (62 to 68) in the Bubble CPAP group (log rank P=0.19). All other secondary outcomes were similar between the two groups. In preterm infants with respiratory distress starting within 6 h of life, CPAP failure rates were similar with Jet CPAP and Bubble CPAP.

  7. Zero Thermal Noise in Resistors at Zero Temperature

    NASA Astrophysics Data System (ADS)

    Kish, Laszlo B.; Niklasson, Gunnar A.; Granqvist, Claes-Göran

    2016-06-01

    The bandwidth of transistors in logic devices approaches the quantum limit, where Johnson noise and associated error rates are supposed to be strongly enhanced. However, the related theory — asserting a temperature-independent quantum zero-point (ZP) contribution to Johnson noise, which dominates the quantum regime — is controversial and resolution of the controversy is essential to determine the real error rate and fundamental energy dissipation limits of logic gates in the quantum limit. The Callen-Welton formula (fluctuation-dissipation theorem) of voltage and current noise for a resistance is the sum of Nyquist’s classical Johnson noise equation and a quantum ZP term with a power density spectrum proportional to frequency and independent of temperature. The classical Johnson-Nyquist formula vanishes at the approach of zero temperature, but the quantum ZP term still predicts non-zero noise voltage and current. Here, we show that this noise cannot be reconciled with the Fermi-Dirac distribution, which defines the thermodynamics of electrons according to quantum-statistical physics. Consequently, Johnson noise must be nil at zero temperature, and non-zero noise found for certain experimental arrangements may be a measurement artifact, such as the one mentioned in Kleen’s uncertainty relation argument.
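
    For reference, the Callen-Welton (fluctuation-dissipation) spectral density referred to above is commonly written as below: the first term in brackets is the Planck thermal factor, which recovers Nyquist's classical result 4Rk_BT when hf ≪ k_BT, and the 1/2 is the temperature-independent zero-point term whose physical status the paper disputes.

    ```latex
    S_V(f) \;=\; 4R\,hf\left[\frac{1}{e^{hf/k_B T}-1} + \frac{1}{2}\right]
           \;=\; 2R\,hf\,\coth\!\left(\frac{hf}{2k_B T}\right)
           \;\xrightarrow[\;hf\,\ll\,k_B T\;]{}\; 4Rk_B T .
    ```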

  8. Interpreting findings from Mendelian randomization using the MR-Egger method.

    PubMed

    Burgess, Stephen; Thompson, Simon G

    2017-05-01

    Mendelian randomization-Egger (MR-Egger) is an analysis method for Mendelian randomization using summarized genetic data. MR-Egger consists of three parts: (1) a test for directional pleiotropy, (2) a test for a causal effect, and (3) an estimate of the causal effect. While conventional analysis methods for Mendelian randomization assume that all genetic variants satisfy the instrumental variable assumptions, the MR-Egger method is able to assess whether genetic variants have pleiotropic effects on the outcome that differ on average from zero (directional pleiotropy), as well as to provide a consistent estimate of the causal effect, under a weaker assumption-the InSIDE (INstrument Strength Independent of Direct Effect) assumption. In this paper, we provide a critical assessment of the MR-Egger method with regard to its implementation and interpretation. While the MR-Egger method is a worthwhile sensitivity analysis for detecting violations of the instrumental variable assumptions, there are several reasons why causal estimates from the MR-Egger method may be biased and have inflated Type 1 error rates in practice, including violations of the InSIDE assumption and the influence of outlying variants. The issues raised in this paper have potentially serious consequences for causal inferences from the MR-Egger approach. We give examples of scenarios in which the estimates from conventional Mendelian randomization methods and MR-Egger differ, and discuss how to interpret findings in such cases.
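
    In its summarized-data form, MR-Egger amounts to a weighted linear regression of the variant-outcome associations on the variant-exposure associations with an unconstrained intercept: the intercept tests for directional pleiotropy and the slope estimates the causal effect under the InSIDE assumption. The sketch below runs that regression with statsmodels on invented per-variant summary statistics.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(9)

    # Invented summary statistics for 30 genetic variants.
    n_snps = 30
    beta_x = rng.uniform(0.05, 0.3, n_snps)        # variant-exposure associations
    se_y = rng.uniform(0.01, 0.03, n_snps)         # SEs of variant-outcome associations
    causal, pleiotropy = 0.4, 0.02                 # true slope and directional pleiotropy
    beta_y = pleiotropy + causal * beta_x + rng.normal(scale=se_y)

    # MR-Egger: weighted regression of beta_y on beta_x with an intercept,
    # weighting each variant by the inverse variance of beta_y.
    X = sm.add_constant(beta_x)
    egger = sm.WLS(beta_y, X, weights=1.0 / se_y ** 2).fit()
    print(egger.params)     # [intercept (pleiotropy test), slope (causal estimate)]
    print(egger.pvalues)
    ```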

  9. Vulnerability of Karangkates dams area by means of zero crossing analysis of data magnetic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunaryo,, E-mail: sunaryo@ub.ac.id, E-mail: sunaryo.geofis.ub@gmail.com; Susilo, Adi

    2015-04-24

    A study entitled "Vulnerability of Karangkates Dams Area by Means of Zero Crossing Analysis of Data Magnetic" has been carried out. The study aimed to obtain information on the vulnerability of two parts of the Karangkates dams area, i.e. the Lahor dam, inaugurated in 1977, and the Sutami dam, inaugurated in 1981. Three important reasons motivate this study: 1) the dams are 36 years old (Lahor) and 32 years old (Sutami); 2) geologically, the dams are located close to the Pohgajih local shear fault, the Selorejo local fault, and the Selorejo limestone-andesite rock contact plane; and 3) Karangkates is one of the important hydro power plants (PLTA), generating about 400 million KWH per year out of a total of about 29.373 MW installed in Indonesia. Geographically, the magnetic data acquisition was conducted at coordinates (112.4149°E; -8.2028°S) to (112.4839°E; -8.0989°S) using a Proton Precession Magnetometer G-856. Magnetic data were acquired in the radial direction from the dams over a diameter of about 10 km, with a distance between measurements of about 500 m. The acquisition yielded a distribution of total magnetic field values in the range of 44450 nT to 45800 nT. Residual anomalies were obtained by applying several corrections, including the diurnal correction, the International Geomagnetic Reference Field (IGRF) correction, and reductions, yielding residual values in the range of -650 nT to 700 nT. Based on the residual anomalies, 2 of the 5 identified zones of closed dipole-pair closures are located to the west of the Sutami dam and to the northwest of the Lahor dam. Overlaying the residual anomaly contours on the local geological map indicated a lineament of zero crossing patterns aligned with the Pohgajih shear fault, located about 4 km to the west of the Sutami dam, and with the andesite-limestone rock contact where

  10. Statistics of zero crossings in rough interfaces with fractional elasticity

    NASA Astrophysics Data System (ADS)

    Zamorategui, Arturo L.; Lecomte, Vivien; Kolton, Alejandro B.

    2018-04-01

    We study numerically the distribution of zero crossings in one-dimensional elastic interfaces described by an overdamped Langevin dynamics with periodic boundary conditions. We model the elastic forces with a Riesz-Feller fractional Laplacian of order z = 1 + 2ζ, such that the interfaces spontaneously relax, with a dynamical exponent z, to a self-affine geometry with roughness exponent ζ. By continuously increasing from ζ = -1/2 (macroscopically flat interface described by independent Ornstein-Uhlenbeck processes [Phys. Rev. 36, 823 (1930), 10.1103/PhysRev.36.823]) to ζ = 3/2 (super-rough Mullins-Herring interface), three different regimes are identified: (I) -1/2 < ζ < 0, (II) 0 < ζ < 1, and (III) 1 < ζ < 3/2. Starting from a flat initial condition, the mean number of zeros of the discretized interface (I) decays exponentially in time and reaches an extensive value in the system size, or decays as a power-law towards (II) a subextensive or (III) an intensive value. In the steady state, the distribution of intervals between zeros changes from an exponential decay in (I) to a power-law decay P(ℓ) ~ ℓ^(-γ) in (II) and (III). While in (II) γ = 1 - θ, with θ = 1 - ζ the steady-state persistence exponent, in (III) we obtain γ = 3 - 2ζ, different from the exponent γ = 1 expected from the prediction θ = 0 for infinite super-rough interfaces with ζ > 1. The effect on P(ℓ) of short-scale smoothening is also analyzed numerically and analytically. A tight relation between the mean interval, the mean width of the interface, and the density of zeros is also reported. The results drawn from our analysis of rough interfaces subject to particular boundary conditions or constraints, along with discretization effects, are relevant for the practical analysis of zeros in interface imaging experiments or in numerical analysis.

  11. Comment on ‘The paradoxical zero reflection at zero energy’

    NASA Astrophysics Data System (ADS)

    van Dijk, W.; Nogami, Y.

    2017-05-01

    We point out that the anomalous threshold effect in one dimension occurs when the reflection probability at zero energy R(0) has a value other than unity, rather than R(0) = 0 or R(0) ≪ 1 as implied by Ahmed et al in their paper entitled ‘The paradoxical zero reflection at zero energy’ (2017 Eur. J. Phys. 38 025401).

  12. Evaluation of variable selection methods for random forests and omics data sets.

    PubMed

    Degenhardt, Frauke; Seifert, Stephan; Szymczak, Silke

    2017-10-16

    Machine learning methods and in particular random forests are promising approaches for prediction based on high dimensional omics data sets. They provide variable importance measures to rank predictors according to their predictive power. If building a prediction model is the main goal of a study, often a minimal set of variables with good prediction performance is selected. However, if the objective is the identification of involved variables to find active networks and pathways, approaches that aim to select all relevant variables should be preferred. We evaluated several variable selection procedures based on simulated data as well as publicly available experimental methylation and gene expression data. Our comparison included the Boruta algorithm, the Vita method, recurrent relative variable importance, a permutation approach and its parametric variant (Altmann) as well as recursive feature elimination (RFE). In our simulation studies, Boruta was the most powerful approach, followed closely by the Vita method. Both approaches demonstrated similar stability in variable selection, while Vita was the most robust approach under a pure null model without any predictor variables related to the outcome. In the analysis of the different experimental data sets, Vita demonstrated slightly better stability in variable selection and was less computationally intensive than Boruta. In conclusion, we recommend the Boruta and Vita approaches for the analysis of high-dimensional data sets. Vita is considerably faster than Boruta and thus more suitable for large data sets, but only Boruta can also be applied in low-dimensional settings. © The Author 2017. Published by Oxford University Press.
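
    Of the procedures compared above, the plain permutation approach is the simplest to sketch: a variable is flagged as relevant when permuting it degrades held-out performance. The snippet below uses scikit-learn's permutation_importance with a random forest on synthetic data and a simple mean-versus-spread rule of our own; it does not reproduce Boruta, Vita, Altmann, or RFE.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(13)

    # Synthetic "omics-like" data: 200 samples, 50 predictors, only 3 informative.
    X = rng.normal(size=(200, 50))
    logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.8 * X[:, 2]
    y = (rng.random(200) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

    # Permutation importance on held-out data; repeat to stabilise the estimate.
    imp = permutation_importance(rf, X_te, y_te, n_repeats=20, random_state=0)
    selected = np.where(imp.importances_mean > 2 * imp.importances_std)[0]
    print("variables flagged as relevant:", selected)
    ```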

  13. Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.

    PubMed

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

    In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the solutions which help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and the treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors. The results showed that the mixed zero-inflated Poisson model provided the better fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.

  14. A random matrix approach to credit risk.

    PubMed

    Münnix, Michael C; Schäfer, Rudi; Guhr, Thomas

    2014-01-01

    We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.
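
    The qualitative effect described above, that randomly fluctuating correlations with zero average still fatten the loss tail, can be illustrated with a small Monte Carlo in a structural-default toy model in which an obligor defaults when its standardized asset value falls below a threshold. The Wishart-type correlation construction, threshold, and portfolio size below are our own illustrative choices, not the ensemble calculations of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(17)

    N, T, n_draws = 50, 60, 4000      # obligors, series length, Monte Carlo draws
    default_threshold = -2.0          # default if standardized asset value < threshold

    def loss_fraction(corr):
        """Fraction of obligors defaulting for one draw of correlated asset values."""
        L = np.linalg.cholesky(corr + 1e-10 * np.eye(N))
        asset = L @ rng.standard_normal(N)
        return np.mean(asset < default_threshold)

    losses_corr, losses_indep = [], []
    for _ in range(n_draws):
        # Wishart-type random correlation matrix (off-diagonal average ~ 0).
        A = rng.standard_normal((N, T))
        C = A @ A.T
        d = np.sqrt(np.diag(C))
        C = C / np.outer(d, d)
        losses_corr.append(loss_fraction(C))
        losses_indep.append(loss_fraction(np.eye(N)))

    losses_corr, losses_indep = np.array(losses_corr), np.array(losses_indep)
    print("P(loss > 10%) with random correlations:", np.mean(losses_corr > 0.10))
    print("P(loss > 10%) with zero correlations:  ", np.mean(losses_indep > 0.10))
    ```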

  15. A Random Matrix Approach to Credit Risk

    PubMed Central

    Guhr, Thomas

    2014-01-01

    We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided. PMID:24853864

  16. Thermal Simulation of a Zero Energy Glazed Pavilion in Sofia, Bulgaria. New Strategies for Energy Management by Means of Water Flow Glazing

    NASA Astrophysics Data System (ADS)

    del Ama Gonzalo, Fernando; Hernandez Ramos, Juan A.; Moreno, Belen

    2017-10-01

    The building sector is responsible for a major part of total energy consumption. The European Energy Performance of Buildings Directives (EPBD) emphasized the need to reduce the energy consumption in buildings, and put forward the rationale for developing Near to Zero Energy Buildings (NZEB). Passive and active strategies help architects to minimize the use of active HVAC systems, taking advantage of the available natural resources such as solar radiation, thermal variability and daylight. The building envelope plays a decisive role in passive and active design strategies. The ideal transparent façade would be one with optical properties, such as Solar Heat Gain Coefficient (SHGC) and Visible Transmittance (VT), that could readily adapt in response to changing climatic conditions or occupant preferences. The aim of this article is to describe the system used to maintain a small glazed pavilion located in Sofia (Bulgaria) at the desired interior temperature over a whole year. The system comprises i) the use of Water Flow Glazing facades (WFG) and Radiant Interior Walls (RIW), ii) the use of free cooling devices along with a traditional heat pump connected to photovoltaic panels, and iii) the use of a new Energy Management System that collects data and acts accordingly by controlling all components. The effects of these strategies and of active systems such as Water Flow Glazing are analysed by simulating the prototype over one year. Summer and winter energy management strategies are discussed in order to change the SHGC value of the Water Flow Glazing and thus reduce the energy required to maintain comfort conditions.

  17. Optimal auxiliary-covariate-based two-phase sampling design for semiparametric efficient estimation of a mean or mean difference, with application to clinical trials.

    PubMed

    Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea

    2014-03-15

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are used to design an HIV vaccine trial whose objective is to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere from no efficiency gain to a large one (up to 24% in the examples) compared to the approach using the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.

  18. Resonant paramagnetic enhancement of the thermal and zero-point Nyquist noise

    NASA Astrophysics Data System (ADS)

    França, H. M.; Santos, R. B. B.

    1999-01-01

    The interaction between a very thin macroscopic solenoid and a single magnetic particle precessing in an external magnetic field B0 is described by taking into account the thermal and the zero-point fluctuations of stochastic electrodynamics. The inductor belongs to an RLC circuit without batteries, and the random motion of the magnetic dipole generates in the solenoid a fluctuating current I_dip(t) and a fluctuating voltage ε_dip(t), with spectral distributions quite different from the Nyquist noise. We show that the mean square value <I_dip^2> presents an enormous variation when the frequency of precession approaches the frequency of the circuit, but it is still much smaller than the Nyquist current in the circuit. However, we also show that <I_dip^2> can reach measurable values if the inductor is interacting with a macroscopic sample of magnetic particles (atoms or nuclei) which are close enough to its coils.

  19. Network Mendelian randomization: using genetic variants as instrumental variables to investigate mediation in causal pathways

    PubMed Central

    Burgess, Stephen; Daniel, Rhian M; Butterworth, Adam S; Thompson, Simon G

    2015-01-01

    Background: Mendelian randomization uses genetic variants, assumed to be instrumental variables for a particular exposure, to estimate the causal effect of that exposure on an outcome. If the instrumental variable criteria are satisfied, the resulting estimator is consistent even in the presence of unmeasured confounding and reverse causation. Methods: We extend the Mendelian randomization paradigm to investigate more complex networks of relationships between variables, in particular where some of the effect of an exposure on the outcome may operate through an intermediate variable (a mediator). If instrumental variables for the exposure and mediator are available, direct and indirect effects of the exposure on the outcome can be estimated, for example using either a regression-based method or structural equation models. The direction of effect between the exposure and a possible mediator can also be assessed. Methods are illustrated in an applied example considering causal relationships between body mass index, C-reactive protein and uric acid. Results: These estimators are consistent in the presence of unmeasured confounding if, in addition to the instrumental variable assumptions, the effects of both the exposure on the mediator and the mediator on the outcome are homogeneous across individuals and linear without interactions. Nevertheless, a simulation study demonstrates that even considerable heterogeneity in these effects does not lead to bias in the estimates. Conclusions: These methods can be used to estimate direct and indirect causal effects in a mediation setting, and have potential for the investigation of more complex networks between multiple interrelated exposures and disease outcomes. PMID:25150977

  20. Multiple zeros of polynomials

    NASA Technical Reports Server (NTRS)

    Wood, C. A.

    1974-01-01

    For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well known iterative methods which are presented. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros. That is, either they fail to converge to some or all of the zeros, or they converge to very bad approximations of the polynomial's zeros. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior methods for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN 4 and comparisons in time and accuracy are given.
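
    The deflation idea behind the G.C.D. method can be illustrated in a few lines of modern code: dividing a polynomial by gcd(p, p') removes repeated factors, leaving only simple zeros that standard iterations handle well. This is a sketch of the general principle in Python/SymPy, not the FORTRAN implementation described in the report; the example polynomial is an arbitrary choice.

    ```python
    # Hedged sketch: strip multiple zeros by dividing out gcd(p, p'), then locate the
    # remaining simple zeros numerically.
    import sympy as sp

    x = sp.symbols("x")
    p = sp.expand((x - 1)**3 * (x + 2)**2 * (x - 4))   # polynomial with multiple zeros
    g = sp.gcd(p, sp.diff(p, x))                       # gcd(p, p') carries the repeated factors
    q = sp.quo(p, g, x)                                # square-free part: each zero appears once

    print(sp.factor(q))                                # (x - 4)*(x - 1)*(x + 2)
    print(sp.Poly(q, x).nroots())                      # simple zeros, found numerically
    ```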

  1. Zero-mode waveguides

    DOEpatents

    Levene, Michael J.; Korlach, Jonas; Turner, Stephen W.; Craighead, Harold G.; Webb, Watt W.

    2007-02-20

    The present invention is directed to a method and an apparatus for analysis of an analyte. The method involves providing a zero-mode waveguide which includes a cladding surrounding a core where the cladding is configured to preclude propagation of electromagnetic energy of a frequency less than a cutoff frequency longitudinally through the core of the zero-mode waveguide. The analyte is positioned in the core of the zero-mode waveguide and is then subjected, in the core of the zero-mode waveguide, to activating electromagnetic radiation of a frequency less than the cut-off frequency under conditions effective to permit analysis of the analyte in an effective observation volume which is more compact than if the analysis were carried out in the absence of the zero-mode waveguide.

  2. What to use to express the variability of data: Standard deviation or standard error of mean?

    PubMed

    Barde, Mohini P; Barde, Prajakt J

    2012-07-01

    Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and the Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. The SEM quantifies uncertainty in the estimate of the mean, whereas the SD indicates the dispersion of the data from the mean. As readers are generally interested in knowing the variability within the sample, descriptive data should be precisely summarized with the SD. Use of the SEM should be limited to computing confidence intervals, which measure the precision of the population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
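
    As a small numerical companion to the point above, the sketch below computes both quantities for an illustrative sample; the data and sample size are invented for the example, and the SEM is simply the SD divided by the square root of n.

    ```python
    # Hedged illustration of the SD/SEM distinction: SD describes the spread of the
    # observations, SEM the precision of the sample mean (and hence the basis of a CI).
    import numpy as np

    rng = np.random.default_rng(1)
    sample = rng.normal(loc=120.0, scale=15.0, size=50)   # illustrative measurements

    n = sample.size
    sd = sample.std(ddof=1)              # dispersion of individual observations
    sem = sd / np.sqrt(n)                # uncertainty in the estimate of the mean
    ci_95 = (sample.mean() - 1.96 * sem, sample.mean() + 1.96 * sem)

    print(f"mean = {sample.mean():.1f}, SD = {sd:.1f}, SEM = {sem:.1f}")
    print(f"approximate 95% CI for the mean: {ci_95[0]:.1f} to {ci_95[1]:.1f}")
    ```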

  3. Nano-scale zero valent iron transport in a variable aperture dolomite fracture and a glass fracture

    NASA Astrophysics Data System (ADS)

    Mondal, P.; Sleep, B. E.; Cui, Z.; Zhou, Z.

    2014-12-01

    Experiments and numerical simulations are being performed to understand the transport behavior of carboxymethyl cellulose polymer stabilized nano-scale zero valent iron (nZVI) in a variable aperture dolomite rock fracture and a variable aperture glass replica of a fractured slate. The rock fracture was prepared by artificially inducing a fracture in a dolomite block along a stylolite, and the glass fracture was prepared by creating molds with melted glass on two opposing sides of a fractured slate rock block. Both of the fractures were 0.28 m in length and 0.21 m in width. Equivalent hydraulic apertures are about 110 microns for the rock fracture and 250 microns for the glass replica fracture. Sodium bromide and lissamine green B (LGB) serve as conservative tracers in the rock fracture and glass replica fracture, respectively. A dark box set-up with a light source and digital camera is being used to visualize the LGB and CMC-nZVI movement in the glass fracture. Experiments are being performed to determine the effects of water specific discharge and CMC concentration on nZVI transport in the fractures. Transmission electron microscopy, dynamic light scattering, and UV-visual spectrophotometry were performed to determine the stability and characteristics of the CMC-nZVI mixture. The transport of bromide, LGB, CMC, and CMC-nZVI in both fractures is being evaluated through analysis of the effluent concentrations. Time-lapse images are also being captured for the glass fracture. Bromide, LGB, and CMC recoveries have exceeded 95% in both fractures. Significant channeling has been observed in the fractures for CMC transport due to viscous effects.

  4. On Digital Simulation of Multicorrelated Random Processes and Its Applications. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Sinha, A. K.

    1973-01-01

    Two methods are described to simulate, on a digital computer, a set of correlated, stationary, and Gaussian time series with zero mean from the given matrix of power spectral densities and cross spectral densities. The first method is based upon trigonometric series with random amplitudes and deterministic phase angles. The random amplitudes are generated by using a standard random number generator subroutine. An example is given which corresponds to three components of wind velocities at two different spatial locations for a total of six correlated time series. In the second method, the whole process is carried out using the Fast Fourier Transform approach. This method gives more accurate results and works about twenty times faster for a set of six correlated time series.
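
    The thesis factorizes a matrix of power and cross spectral densities frequency by frequency and applies the FFT; as a much simplified illustration of the same factorization idea, the sketch below colors white noise with the Cholesky factor of a target covariance matrix to obtain zero-mean, mutually correlated Gaussian series. The covariance matrix, series length, and three-component interpretation are assumptions for the example, not values from the thesis.

    ```python
    # Hedged sketch: zero-mean correlated Gaussian series via Cholesky factorization.
    import numpy as np

    rng = np.random.default_rng(2)
    n_steps = 100_000

    # Illustrative target covariance between three wind-velocity components
    cov = np.array([[1.0, 0.6, 0.3],
                    [0.6, 1.0, 0.5],
                    [0.3, 0.5, 1.0]])
    L = np.linalg.cholesky(cov)

    white = rng.standard_normal((3, n_steps))   # independent zero-mean, unit-variance noise
    series = L @ white                          # correlated series, still zero mean

    print(np.round(np.cov(series), 2))          # should approximate the target covariance
    ```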

  5. Zero-Bounded Limits as a Special Case of the Squeeze Theorem for Evaluating Single-Variable and Multivariable Limits

    ERIC Educational Resources Information Center

    Gkioulekas, Eleftherios

    2013-01-01

    Many limits, typically taught as examples of applying the "squeeze" theorem, can be evaluated more easily using the proposed zero-bounded limit theorem. The theorem applies to functions defined as a product of a factor going to zero and a factor that remains bounded in some neighborhood of the limit. This technique is immensely useful…
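
    Two illustrative limits of the kind the theorem covers (these examples are not taken from the article): each is a product of a factor tending to zero and a factor that stays bounded near the limit point.

    ```latex
    % Hedged illustrative examples: a factor tending to zero times a bounded factor.
    \[
      \lim_{x \to 0} x^{2} \sin\!\left(\tfrac{1}{x}\right) = 0,
      \qquad \text{since } x^{2} \to 0 \text{ and } \left|\sin\!\left(\tfrac{1}{x}\right)\right| \le 1,
    \]
    \[
      \lim_{(x,y) \to (0,0)} \frac{x^{2} y}{x^{2} + y^{2}} = 0,
      \qquad \text{since } y \to 0 \text{ and } \left|\frac{x^{2}}{x^{2}+y^{2}}\right| \le 1 .
    \]
    ```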

  6. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    EPA Science Inventory

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  7. Variable selection with random forest: Balancing stability, performance, and interpretation in ecological and environmental modeling

    EPA Science Inventory

    Random forest (RF) is popular in ecological and environmental modeling, in part, because of its insensitivity to correlated predictors and resistance to overfitting. Although variable selection has been proposed to improve both performance and interpretation of RF models, it is u...

  8. Mean Comparison: Manifest Variable versus Latent Variable

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Bentler, Peter M.

    2006-01-01

    An extension of multiple correspondence analysis is proposed that takes into account cluster-level heterogeneity in respondents' preferences/choices. The method involves combining multiple correspondence analysis and k-means in a unified framework. The former is used for uncovering a low-dimensional space of multivariate categorical variables…

  9. Density of states, Potts zeros, and Fisher zeros of the Q

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Seung-Yeon; Creswick, Richard J.

    2001-06-01

    The Q-state Potts model can be extended to noninteger and even complex Q by expressing the partition function in the Fortuin-Kasteleyn (F-K) representation. In the F-K representation the partition function Z(Q,a) is a polynomial in Q and v = a - 1 (a = e^{βJ}), and the coefficients of this polynomial, Φ(b,c), are the number of graphs on the lattice consisting of b bonds and c connected clusters. We introduce the random-cluster transfer matrix to compute Φ(b,c) exactly on finite square lattices with several types of boundary conditions. Given the F-K representation of the partition function, we begin by studying the critical Potts model Z_CP = Z(Q, a_c(Q)), where a_c(Q) = 1 + √Q. We find a set of zeros in the complex w = √Q plane that map to (or close to) the Beraha numbers for real positive Q. We also identify Q̃_c(L), the value of Q for a lattice of width L above which the locus of zeros in the complex p = v/√Q plane lies on the unit circle. By finite-size scaling we find that 1/Q̃_c(L) → 0 as L → ∞. We then study zeros of the antiferromagnetic (AF) Potts model in the complex Q plane and determine Q_c(a), the largest value of Q for a fixed value of a below which there is AF order. We find excellent agreement with Baxter's conjecture Q_c^AF(a) = (1 - a)(a + 3). We also investigate the locus of zeros of the ferromagnetic Potts model in the complex Q plane and confirm that Q_c^FM(a) = (a - 1)^2. We show that the edge singularity in the complex Q plane approaches Q_c as Q_c(L) ~ Q_c + A L^{-y_q}, and determine the scaling exponent y_q for several values of Q. Finally, by finite-size scaling of the Fisher zeros near the antiferromagnetic critical point we determine the thermal exponent y_t as a function of Q in the range 2 ≤ Q ≤ 3. Using data for lattices of size 3 ≤ L ≤ 8 we find

  10. The Trouble with Zero

    ERIC Educational Resources Information Center

    Lewis, Robert

    2015-01-01

    The history of the number zero is an interesting one. In early times, zero was not used as a number at all, but instead was used as a place holder to indicate the position of hundreds and tens. This article briefly discusses the history of zero and challenges the thinking where divisions using zero are used.

  11. Zero/zero rotorcraft certification issues. Volume 3: Working group results

    NASA Technical Reports Server (NTRS)

    Adams, Richard J.

    1988-01-01

    This report analyzes the Zero/Zero Rotorcraft Certification Issues from the perspectives of manufacturers, operators, researchers and the FAA. The basic premise behind this analysis is that zero/zero, or at least extremely low visibility, rotorcraft operations are feasible today from both a technological and an operational standpoint. The questions and issues that need to be resolved are: What certification requirements do we need to ensure safety? Can we develop procedures which capitalize on the performance and maneuvering capabilities unique to rotorcraft? Will extremely low visibility operations be economically feasible? This is Volume 3 of three. It provides the issue-by-issue deliberations of the experts involved in the Working Groups assigned to deal with them in the Issues Forum.

  12. Nonlinear zero-sum differential game analysis by singular perturbation methods

    NASA Technical Reports Server (NTRS)

    Sinar, J.; Farber, N.

    1982-01-01

    A class of nonlinear, zero-sum differential games, exhibiting time-scale separation properties, can be analyzed by singular-perturbation techniques. The merits of such an analysis, leading to an approximate game solution, as well as the 'well-posedness' of the formulation, are discussed. This approach is shown to be attractive for investigating pursuit-evasion problems; the original multidimensional differential game is decomposed to a 'simple pursuit' (free-stream) game and two independent (boundary-layer) optimal-control problems. Using multiple time-scale boundary-layer models results in a pair of uniformly valid zero-order composite feedback strategies. The dependence of suboptimal strategies on relative geometry and own-state measurements is demonstrated by a three dimensional, constant-speed example. For game analysis with realistic vehicle dynamics, the technique of forced singular perturbations and a variable modeling approach is proposed. Accuracy of the analysis is evaluated by comparison with the numerical solution of a time-optimal, variable-speed 'game of two cars' in the horizontal plane.

  13. SEMIPARAMETRIC ZERO-INFLATED MODELING IN MULTI-ETHNIC STUDY OF ATHEROSCLEROSIS (MESA)

    PubMed Central

    Liu, Hai; Ma, Shuangge; Kronmal, Richard; Chan, Kung-Sik

    2013-01-01

    We analyze the Agatston score of coronary artery calcium (CAC) from the Multi-Ethnic Study of Atherosclerosis (MESA) using a semiparametric zero-inflated modeling approach, where the observed CAC scores from this cohort consist of a high frequency of zeroes and continuously distributed positive values. Both partially constrained and unconstrained models are considered to investigate the underlying biological processes of CAC development from zero to positive, and from small amounts to large amounts. Different from existing studies, a model selection procedure based on likelihood cross-validation is adopted to identify the optimal model, which is justified by comparative Monte Carlo studies. A shrinkage version of the cubic regression spline is used for model estimation and variable selection simultaneously. When applying the proposed methods to the MESA data analysis, we show that the two biological mechanisms influencing the initiation of CAC and the magnitude of CAC when it is positive are better characterized by an unconstrained zero-inflated normal model. Our results are significantly different from those in published studies and may provide further insights into the biological mechanisms underlying CAC development in humans. This highly flexible statistical framework can be applied to zero-inflated data analyses in other areas. PMID:23805172

  14. Random Effects: Variance Is the Spice of Life.

    PubMed

    Jupiter, Daniel C

    Covariates in regression analyses allow us to understand how independent variables of interest impact our dependent outcome variable. Often, we consider fixed effects covariates (e.g., gender or diabetes status) for which we examine subjects at each value of the covariate. We examine both men and women and, within each gender, examine both diabetic and nondiabetic patients. Occasionally, however, we consider random effects covariates for which we do not examine subjects at every value. For example, we examine patients from only a sample of hospitals and, within each hospital, examine both diabetic and nondiabetic patients. The random sampling of hospitals is in contrast to the complete coverage of all genders. In this column I explore the differences in meaning and analysis when thinking about fixed and random effects variables. Copyright © 2016 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  15. Zero energy resonance and the logarithmically slow decay of unstable multilevel systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyamoto, Manabu

    2006-08-15

    The long time behavior of the reduced time evolution operator for unstable multilevel systems is studied based on the N-level Friedrichs model in the presence of a zero energy resonance. The latter means the divergence of the resolvent at zero energy. Resorting to the technique developed by Jensen and Kato [Duke Math. J. 46, 583 (1979)], the zero energy resonance of this model is characterized by the zero energy eigenstate that does not belong to the Hilbert space. It is then shown that for some kinds of the rational form factors the logarithmically slow decay, proportional to (log t)^{-1}, of the reduced time evolution operator can be realized.

  16. Mean-level change and intraindividual variability in self-esteem and depression among high-risk children

    PubMed Central

    Kim, Jungmeen; Cicchetti, Dante

    2012-01-01

    This study investigated mean-level changes and intraindividual variability of self-esteem among maltreated (n=142) and nonmaltreated (n=109) school-aged children from low-income families. Longitudinal factor analysis revealed higher temporal stability of self-esteem among maltreated children compared to nonmaltreated children. Cross-domain latent growth curve models indicated that nonmaltreated children showed higher initial levels and greater increases in self-esteem than maltreated children, and that the initial levels of self-esteem were significantly associated with depressive symptoms among maltreated and nonmaltreated children. The average level (mean of repeated measurements) of self-esteem was predictive of depression at the final occasion for both maltreated and nonmaltreated children. For nonmaltreated children intraindividual variability of self-esteem had a direct contribution to prediction of depression. The findings enhance our understanding of developmental changes in self-esteem and the role of the average level and within-person variability of self-esteem in predicting depressive symptoms among high-risk children. PMID:22822280

  17. New approach for identifying the zero-order fringe in variable wavelength interferometry

    NASA Astrophysics Data System (ADS)

    Galas, Jacek; Litwin, Dariusz; Daszkiewicz, Marek

    2016-12-01

    The family of VAWI techniques (for transmitted and reflected light) is especially efficient for characterizing objects when the optical path difference in the interference system exceeds a few wavelengths. The classical approach, which consists of measuring the deflection of interference fringes, fails because of strong edge effects. Broken continuity of the interference fringes prevents correct identification of the zero-order fringe, which leads to significant errors. The family of these methods was originally proposed by Professor Pluta in the 1980s, but at that time image processing facilities and computers were hardly available. Automated devices open up a completely new approach to the classical measurement procedures. The Institute team has taken this opportunity and transformed the technique into fully automated measurement devices of commercial, industry-grade quality. The method itself has been modified, and new solutions and algorithms have extended its field of application. This has concerned both construction aspects of the systems and software development in the context of creating computerized instruments. The VAWI collection of instruments now constitutes the core of the Institute's commercial offer. It is practically applicable in industrial environments for measuring textile and optical fibers and strips of thin films, and for testing wave plates and nonlinear effects in different materials. This paper describes new algorithms for identifying the zero-order fringe, which increase the performance of the system as a whole, and presents some examples of measurements of optical elements.

  18. A density-functional study of the phase diagram of cementite-type (Fe,Mn)3C at absolute zero temperature.

    PubMed

    Von Appen, Jörg; Eck, Bernhard; Dronskowski, Richard

    2010-11-15

    The phase diagram of (Fe_{1-x}Mn_x)_3C has been investigated by means of density-functional theory (DFT) calculations at absolute zero temperature. The atomic distributions of the metal atoms are not random-like, as previously proposed; instead, we find three different ordered regions within the phase range. The key role is played by the 8d metal site, which forms, as a function of composition, differing magnetic layers, and these dominate the physical properties. We calculated the magnetic moments, the volumes, and the enthalpies of mixing and formation of 13 different compositions, and we explain the changes in the macroscopic properties with changes in the electronic and magnetic structures by means of bonding analyses using the Crystal Orbital Hamilton Population (COHP) technique. 2010 Wiley Periodicals, Inc.

  19. Experimental benchmarking of quantum control in zero-field nuclear magnetic resonance.

    PubMed

    Jiang, Min; Wu, Teng; Blanchard, John W; Feng, Guanru; Peng, Xinhua; Budker, Dmitry

    2018-06-01

    Demonstration of coherent control and characterization of the control fidelity is important for the development of quantum architectures such as nuclear magnetic resonance (NMR). We introduce an experimental approach to realize universal quantum control, and benchmarking thereof, in zero-field NMR, an analog of conventional high-field NMR that features less-constrained spin dynamics. We design a composite pulse technique for both arbitrary one-spin rotations and a two-spin controlled-not (CNOT) gate in a heteronuclear two-spin system at zero field, which experimentally demonstrates universal quantum control in such a system. Moreover, using quantum information-inspired randomized benchmarking and partial quantum process tomography, we evaluate the quality of the control, achieving single-spin control for 13 C with an average fidelity of 0.9960(2) and two-spin control via a CNOT gate with a fidelity of 0.9877(2). Our method can also be extended to more general multispin heteronuclear systems at zero field. The realization of universal quantum control in zero-field NMR is important for quantum state/coherence preparation, pulse sequence design, and is an essential step toward applications to materials science, chemical analysis, and fundamental physics.

  20. Experimental benchmarking of quantum control in zero-field nuclear magnetic resonance

    PubMed Central

    Feng, Guanru

    2018-01-01

    Demonstration of coherent control and characterization of the control fidelity is important for the development of quantum architectures such as nuclear magnetic resonance (NMR). We introduce an experimental approach to realize universal quantum control, and benchmarking thereof, in zero-field NMR, an analog of conventional high-field NMR that features less-constrained spin dynamics. We design a composite pulse technique for both arbitrary one-spin rotations and a two-spin controlled-not (CNOT) gate in a heteronuclear two-spin system at zero field, which experimentally demonstrates universal quantum control in such a system. Moreover, using quantum information–inspired randomized benchmarking and partial quantum process tomography, we evaluate the quality of the control, achieving single-spin control for 13C with an average fidelity of 0.9960(2) and two-spin control via a CNOT gate with a fidelity of 0.9877(2). Our method can also be extended to more general multispin heteronuclear systems at zero field. The realization of universal quantum control in zero-field NMR is important for quantum state/coherence preparation, pulse sequence design, and is an essential step toward applications to materials science, chemical analysis, and fundamental physics. PMID:29922714

  1. A new multivariate zero-adjusted Poisson model with applications to biomedicine.

    PubMed

    Liu, Yin; Tian, Guo-Liang; Tang, Man-Lai; Yuen, Kam Chuen

    2018-05-25

    Although advances have recently been made in modeling multivariate count data, existing models have several limitations: (i) the multivariate Poisson log-normal model (Aitchison and Ho) cannot be used to fit multivariate count data with excess zero-vectors; (ii) the multivariate zero-inflated Poisson (ZIP) distribution (Li et al., 1999) cannot be used to model zero-truncated/deflated count data and is difficult to apply to high-dimensional cases; (iii) the Type I multivariate zero-adjusted Poisson (ZAP) distribution (Tian et al., 2017) can only model multivariate count data with a special correlation structure, in which the correlations between components are all positive or all negative. In this paper, we first introduce a new multivariate ZAP distribution, based on a multivariate Poisson distribution, which allows a more flexible dependency structure between components; that is, some of the correlation coefficients can be positive while others are negative. We then develop its important distributional properties and provide efficient statistical inference methods for the multivariate ZAP model with or without covariates. Two real data examples in biomedicine are used to illustrate the proposed methods. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Zero: A "None" Number?

    ERIC Educational Resources Information Center

    Anthony, Glenda J.; Walshaw, Margaret A.

    2004-01-01

    This article discusses the challenges students face in making sense of zero as a number. A range of different student responses to a computation problem involving zero reveal students' different understandings of zero.

  3. Variable stiffness colonoscope versus regular adult colonoscope: meta-analysis of randomized controlled trials.

    PubMed

    Othman, M O; Bradley, A G; Choudhary, A; Hoffman, R M; Roy, P K

    2009-01-01

    The variable stiffness colonoscope (VSC) may have theoretical advantages over standard adult colonoscopes (SACs), though data are conflicting. We conducted a meta-analysis to compare the efficacies of the VSC and SAC. We searched Medline (1966 - 2008) and abstracts of gastroenterology scientific meetings in the 5 years to February 2008, only for randomized clinical trials (RCTs) of adult patients. Trial quality was assessed using the Delphi list. In a meta-analysis with a fixed effects model, cecal intubation rates, cecal intubation times, abdominal pain scores, sedation used, and use of ancillary maneuvers were compared in separate analyses, using weighted mean differences (WMDs), standardized mean differences (SMDs), or odds ratios (ORs). Seven RCTs satisfied the inclusion criteria (1923 patients), four comparing VSC with SAC procedures in adults, and three evaluating the pediatric VSC. There was no significant heterogeneity among the studies. The overall trial quality was adequate. Cecal intubation rate was higher with the use of VSC (OR = 2.08, 95% confidence interval [CI] 1.29 to 3.36). The VSC was associated with lower abdominal pain scores and a decreased need for sedation during colonoscopy. Cecal intubation time was similar for the two colonoscope types (WMD = -0.21 minutes, 95% CI -0.85 to 0.43). Because of the nature of the intervention no studies were blinded. There was no universal method for using the VSC. Compared with the SAC, VSC use was associated with a higher cecal intubation rate, less abdominal pain, and decreased need for sedation. However, cecal intubation times were similar for the two colonoscope types.

  4. Modeling Zero-Inflated and Overdispersed Count Data: An Empirical Study of School Suspensions

    ERIC Educational Resources Information Center

    Desjardins, Christopher David

    2016-01-01

    The purpose of this article is to develop a statistical model that best explains variability in the number of school days suspended. Number of school days suspended is a count variable that may be zero-inflated and overdispersed relative to a Poisson model. Four models were examined: Poisson, negative binomial, Poisson hurdle, and negative…

  5. Quantized vortices in the ideal bose gas: a physical realization of random polynomials.

    PubMed

    Castin, Yvan; Hadzibabic, Zoran; Stock, Sabine; Dalibard, Jean; Stringari, Sandro

    2006-02-03

    We propose a physical system allowing one to experimentally observe the distribution of the complex zeros of a random polynomial. We consider a degenerate, rotating, quasi-ideal atomic Bose gas prepared in the lowest Landau level. Thermal fluctuations provide the randomness of the bosonic field and of the locations of the vortex cores. These vortices can be mapped to zeros of random polynomials, and observed in the density profile of the gas.
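
    The mapping invoked above can be mimicked numerically: the zeros of a polynomial with random coefficients are easy to compute directly. The degree and the complex-Gaussian coefficient distribution in this sketch are illustrative assumptions, not the ensemble of the paper.

    ```python
    # Hedged sketch: compute the complex zeros of a random polynomial.
    import numpy as np

    rng = np.random.default_rng(3)
    degree = 30
    coeffs = rng.standard_normal(degree + 1) + 1j * rng.standard_normal(degree + 1)

    zeros = np.roots(coeffs)                      # complex zeros of the random polynomial
    print(f"{len(zeros)} zeros; mean |z| = {np.abs(zeros).mean():.3f}")
    ```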

  6. Interannual variability of mean sea level and its sensitivity to wind climate in an inter-tidal basin

    NASA Astrophysics Data System (ADS)

    Gerkema, Theo; Duran-Matute, Matias

    2017-12-01

    The relationship between the annual wind records from a weather station and annual mean sea level in an inter-tidal basin, the Dutch Wadden Sea, is examined. Recent, homogeneous wind records are used, covering the past 2 decades. It is demonstrated that even such a relatively short record is sufficient for finding a convincing relationship. The interannual variability of mean sea level is largely explained by the west-east component of the net wind energy, with some further improvement if one also includes the south-north component and the annual mean atmospheric pressure. Using measured data from a weather station is found to give a slight improvement over reanalysis data, but for both the correlation between annual mean sea level and wind energy in the west-east direction is high. For different tide gauge stations in the Dutch Wadden Sea and along the coast, we find the same qualitative characteristics, but even within this small region, different locations show a different sensitivity of annual mean sea level to wind direction. Correcting observed values of annual mean level for meteorological factors reduces the margin of error (expressed as 95 % confidence interval) by more than a factor of 4 in the trends of the 20-year sea level record. Supplementary data from a numerical hydrodynamical model are used to illustrate the regional variability in annual mean sea level and its interannual variability at a high spatial resolution. This study implies that climatic changes in the strength of winds from a specific direction may affect local annual mean sea level quite significantly.

  7. Image discrimination models predict detection in fixed but not random noise

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J. Jr; Beard, B. L.; Watson, A. B. (Principal Investigator)

    1997-01-01

    By means of a two-interval forced-choice procedure, contrast detection thresholds for an aircraft positioned on a simulated airport runway scene were measured with fixed and random white-noise masks. The term fixed noise refers to a constant, or unchanging, noise pattern for each stimulus presentation. The random noise was either the same or different in the two intervals. Contrary to simple image discrimination model predictions, the same random noise condition produced greater masking than the fixed noise. This suggests that observers seem unable to hold a new noisy image for comparison. Also, performance appeared limited by internal process variability rather than by external noise variability, since similar masking was obtained for both random noise types.

  8. Multivariate non-normally distributed random variables in climate research - introduction to the copula approach

    NASA Astrophysics Data System (ADS)

    Schölzel, C.; Friederichs, P.

    2008-10-01

    Probability distributions of multivariate random variables are generally more complex than their univariate counterparts, owing to possible nonlinear dependence between the random variables. One approach to this problem is the use of copulas, which have become popular over recent years, especially in fields like econometrics, finance, risk management, and insurance. Since this newly emerging field includes various practices, a controversial discussion, and a vast field of literature, it is difficult to get an overview. The aim of this paper is therefore to provide a brief overview of copulas for application in meteorology and climate research. We examine the advantages and disadvantages compared to alternative approaches such as mixture models, summarize the current problem of goodness-of-fit (GOF) tests for copulas, and discuss the connection with multivariate extremes. An application to station data shows the simplicity and the capabilities as well as the limitations of this approach. Observations of daily precipitation and temperature are fitted to a bivariate model and demonstrate that copulas are a valuable complement to the commonly used methods.
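
    A minimal sketch of the copula construction discussed above: a Gaussian copula couples two arbitrary margins through a latent bivariate normal. The gamma/normal margins, their parameters, and the correlation value are illustrative assumptions, not the station-data fit of the paper.

    ```python
    # Hedged sketch: sample from a bivariate model built with a Gaussian copula.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n, rho = 10_000, 0.5

    # 1. Latent bivariate normal with the desired dependence
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]], size=n)

    # 2. Transform each margin to a uniform (this is the copula itself) ...
    u = stats.norm.cdf(z)

    # 3. ... and then to the chosen margins
    precip = stats.gamma.ppf(u[:, 0], a=2.0, scale=3.0)   # skewed "precipitation"
    temp = stats.norm.ppf(u[:, 1], loc=10.0, scale=5.0)   # symmetric "temperature"

    print(f"rank correlation of the sample: {stats.spearmanr(precip, temp)[0]:.2f}")
    ```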

  9. Generating random numbers by means of nonlinear dynamic systems

    NASA Astrophysics Data System (ADS)

    Zang, Jiaqi; Hu, Haojie; Zhong, Juhua; Luo, Duanbin; Fang, Yi

    2018-07-01

    To introduce the randomness of a physical process to students, a chaotic pendulum experiment was opened at East China University of Science and Technology (ECUST) at the undergraduate level in the physics department. It was shown that chaotic motion could be initiated by adjusting the operation of the chaotic pendulum. By using the data of the angular displacements of the chaotic motion, random binary numerical arrays can be generated. To check the randomness of the generated numerical arrays, the NIST Special Publication 800-20 method was adopted. As a result, it was found that all the random arrays generated by the chaotic motion passed the validity criteria, and some were even better in quality than pseudo-random numbers generated by a computer. Through the experiments, it is demonstrated that a chaotic pendulum can be used as an efficient mechanical facility for generating random numbers and can be applied in teaching random motion to students.

  10. Predicting the random drift of MEMS gyroscope based on K-means clustering and OLS RBF Neural Network

    NASA Astrophysics Data System (ADS)

    Wang, Zhen-yu; Zhang, Li-jie

    2017-10-01

    Measurement error of a sensor can be effectively compensated for by prediction. To address the large random drift error of MEMS (Micro-Electro-Mechanical System) gyroscopes, an improved learning algorithm for a Radial Basis Function (RBF) Neural Network (NN) based on K-means clustering and Orthogonal Least Squares (OLS) is proposed in this paper. The algorithm first selects typical samples as the initial cluster centers of the RBF NN, then determines candidate centers with the K-means algorithm, and finally optimizes the candidate centers with the OLS algorithm, which makes the network structure simpler and the prediction performance better. Experimental results show that the proposed K-means clustering OLS learning algorithm can predict the random drift of a MEMS gyroscope effectively, with a prediction error of 9.8019e-7 °/s and a prediction time of 2.4169e-6 s.
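
    A rough sketch of the two-stage idea: K-means chooses the RBF centers, and the output weights are then fitted by least squares. The toy drift signal, the number of centers, and the basis width are invented for the example, and ordinary least squares stands in for the orthogonal least-squares (OLS) center-selection step of the paper.

    ```python
    # Hedged sketch: K-means centers + least-squares output layer for an RBF network.
    import numpy as np
    from scipy.cluster.vq import kmeans2

    rng = np.random.default_rng(5)
    t = np.linspace(0.0, 10.0, 400)
    drift = 0.05 * np.sin(1.3 * t) + 0.02 * rng.standard_normal(t.size)  # toy "drift" signal

    x = t.reshape(-1, 1)
    centers, _ = kmeans2(x, 12, minit="points")               # K-means picks the RBF centers
    width = 1.0
    phi = np.exp(-((x - centers.T) ** 2) / (2.0 * width**2))  # Gaussian basis matrix
    weights, *_ = np.linalg.lstsq(phi, drift, rcond=None)     # least-squares output weights

    pred = phi @ weights
    print(f"RMS training error: {np.sqrt(np.mean((pred - drift) ** 2)):.4f}")
    ```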

  11. Factors associated with attrition from a randomized controlled trial of meaning-centered group psychotherapy for patients with advanced cancer

    PubMed Central

    Applebaum, Allison J.; Lichtenthal, Wendy G.; Pessin, Hayley A.; Radomski, Julia N.; Gökbayrak, N. Simay; Katz, Aviva M.; Rosenfeld, Barry; Breitbart, William

    2013-01-01

    Objective: The generalizability of palliative care intervention research is often limited by high rates of study attrition. This study examined factors associated with attrition from a randomized controlled trial comparing meaning-centered group psychotherapy (MCGP), an intervention designed to help advanced cancer patients sustain or enhance their sense of meaning, with supportive group psychotherapy (SGP), a standardized support group. Methods: Patients with advanced solid tumor cancers (n = 153) were randomized to eight sessions of either MCGP or SGP. They completed assessments of psychosocial, spiritual, and physical well-being pretreatment, midtreatment, and 2 months post-treatment. Attrition was assessed in terms of the percentage of participants who failed to complete these assessments, and demographic, psychiatric, medical, and study-related correlates of attrition were examined for the participants in each of these categories. Results: The rates of attrition at these time points were 28.1%, 17.7%, and 11.1%, respectively; 43.1% of the participants (66 of 153) completed the entire study. The most common reason for dropout was patients feeling too ill. Attrition rates did not vary significantly between study arms. The participants who dropped out pretreatment reported fewer financial concerns than post-treatment dropouts, and the participants who dropped out of the study midtreatment had poorer physical health than treatment completers. There were no other significant associations between attrition and any demographic, medical, psychiatric, or study-related variables. Conclusions: These findings highlight the challenge of maintaining advanced cancer patients in longitudinal research and suggest the need to consider alternative approaches (e.g., telemedicine) for patients who might benefit from group interventions but are too ill to travel. PMID:21751295

  12. Comparison of ventilator-associated pneumonia (VAP) rates between different ICUs: Implications of a zero VAP rate.

    PubMed

    Sundar, Krishna M; Nielsen, David; Sperry, Paul

    2012-02-01

    Ventilator-associated pneumonia (VAP) is associated with significant morbidity and mortality. Measures to reduce the incidence of VAP have resulted in institutions reporting a zero or near-zero VAP rates. The implications of zero VAP rates are unclear. This study was done to compare outcomes between two intensive care units (ICU) with one of them reporting a zero VAP rate. This study retrospectively compared VAP rates between two ICUs: Utah Valley Regional Medical Center (UVRMC) with 25 ICU beds and American Fork Hospital (AFH) with 9 ICU beds. Both facilities are under the same management and attended by a single group of intensivists. Both ICUs have similar nursing and respiratory staffing patterns. Both ICUs use the same intensive care program for reduction of VAP rates. ICU outcomes between AFH (reporting zero VAP rate) and UVRMC (VAP rate of 2.41/1000 ventilator days) were compared for the years 2007-2008. UVRMC VAP rates during 2007 and 2008 were 2.31/1000 ventilator days and 2.5/1000 ventilator days respectively compared to a zero VAP rate at AFH. The total days of ventilation, mean days of ventilation per patient and mean duration of ICU stay per patient was higher in the UVRMC group as compared to AFH ICU group. There was no significant difference in mean age and APACHE II score between ICU patients at UVRMC and AFH. There was no statistical difference in rates of VAP and mortality between UVRMC and AFH. During comparisons of VAP rate between institutions, a zero VAP rate needs to be considered in the context of overall ventilator days, mean durations of ventilator stay and ICU mortality. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. Heritability of Intraindividual Mean and Variability of Positive and Negative Affect.

    PubMed

    Zheng, Yao; Plomin, Robert; von Stumm, Sophie

    2016-12-01

    Positive affect (e.g., attentiveness) and negative affect (e.g., upset) fluctuate over time. We examined genetic influences on interindividual differences in the day-to-day variability of affect (i.e., ups and downs) and in average affect over the duration of a month. Once a day, 17-year-old twins in the United Kingdom ( N = 447) rated their positive and negative affect online. The mean and standard deviation of each individual's daily ratings across the month were used as the measures of that individual's average affect and variability of affect. Analyses revealed that the average of negative affect was significantly heritable (.53), but the average of positive affect was not; instead, the latter showed significant shared environmental influences (.42). Fluctuations across the month were significantly heritable for both negative affect (.54) and positive affect (.34). The findings support the two-factor theory of affect, which posits that positive affect is more situational and negative affect is more dispositional.

  14. Heritability of Intraindividual Mean and Variability of Positive and Negative Affect

    PubMed Central

    Zheng, Yao; Plomin, Robert; von Stumm, Sophie

    2016-01-01

    Positive affect (e.g., attentiveness) and negative affect (e.g., upset) fluctuate over time. We examined genetic influences on interindividual differences in the day-to-day variability of affect (i.e., ups and downs) and in average affect over the duration of a month. Once a day, 17-year-old twins in the United Kingdom (N = 447) rated their positive and negative affect online. The mean and standard deviation of each individual’s daily ratings across the month were used as the measures of that individual’s average affect and variability of affect. Analyses revealed that the average of negative affect was significantly heritable (.53), but the average of positive affect was not; instead, the latter showed significant shared environmental influences (.42). Fluctuations across the month were significantly heritable for both negative affect (.54) and positive affect (.34). The findings support the two-factor theory of affect, which posits that positive affect is more situational and negative affect is more dispositional. PMID:27729566

  15. Test-retest reliability of jump execution variables using mechanography: a comparison of jump protocols.

    PubMed

    Fitzgerald, John S; Johnson, LuAnn; Tomkinson, Grant; Stein, Jesse; Roemmich, James N

    2018-05-01

    Mechanography during the vertical jump may enhance screening and determining mechanistic causes underlying physical performance changes. Utility of jump mechanography for evaluation is limited by scant test-retest reliability data on force-time variables. This study examined the test-retest reliability of eight jump execution variables assessed from mechanography. Thirty-two women (mean±SD: age 20.8 ± 1.3 yr) and 16 men (age 22.1 ± 1.9 yr) attended a familiarization session and two testing sessions, all one week apart. Participants performed two variations of the squat jump with squat depth self-selected and controlled using a goniometer to 80º knee flexion. Test-retest reliability was quantified as the systematic error (using effect size between jumps), random error (using coefficients of variation), and test-retest correlations (using intra-class correlation coefficients). Overall, jump execution variables demonstrated acceptable reliability, evidenced by small systematic errors (mean±95%CI: 0.2 ± 0.07), moderate random errors (mean±95%CI: 17.8 ± 3.7%), and very strong test-retest correlations (range: 0.73-0.97). Differences in random errors between controlled and self-selected protocols were negligible (mean±95%CI: 1.3 ± 2.3%). Jump execution variables demonstrated acceptable reliability, with no meaningful differences between the controlled and self-selected jump protocols. To simplify testing, a self-selected jump protocol can be used to assess force-time variables with negligible impact on measurement error.

  16. A Comparison of Zero Mean Strain Rotating Beam Fatigue Test Methods for Nitinol Wire

    NASA Astrophysics Data System (ADS)

    Norwich, Dennis W.

    2014-07-01

    Zero mean strain rotating beam fatigue testing has become the standard for comparing the fatigue properties of Nitinol wire. Most commercially available equipment consists of either a two-chuck or a chuck and bushing system, where the wire length and center-to-center axis distance determine the maximum strain on the wire. For the two-chuck system, the samples are constrained at either end of the wire, and both chucks are driven at the same speed. For the chuck and bushing system, the sample is constrained at one end in a chuck and rides freely in a bushing at the other end. These equivalent systems will both be herein referred to as Chuck-to-Chuck systems. An alternate system uses a machined test block with a specific radius to guide the wire at a known strain during testing. In either system, the test parts can be immersed in a temperature-controlled fluid bath to eliminate any heating effect created in the specimen due to dissipative processes during cyclic loading (cyclic stress-induced formation of martensite; Wagner et al., Mater. Sci. Eng. A, 378, p 105-109, 1). This study will compare the results of the same starting material tested with each system to determine if the test system differences affect the final results. The advantages and disadvantages of each system will be highlighted and compared. The factors compared will include ease of setup, operator skill level required, consistency of strain measurement, equipment test limits, and data recovery and analysis. Also, the effect of test speed on the test results for each system will be investigated.

  17. Mechanisms of long-term mean sea level variability in the North Sea

    NASA Astrophysics Data System (ADS)

    Dangendorf, Sönke; Calafat, Francisco; Øie Nilsen, Jan Even; Richter, Kristin; Jensen, Jürgen

    2015-04-01

    We examine mean sea level (MSL) variations in the North Sea on timescales ranging from months to decades under the consideration of different forcing factors since the late 19th century. We use multiple linear regression models, which are validated for the second half of the 20th century against the output of a state-of-the-art tide+surge model (HAMSOM), to determine the barotropic response of the ocean to fluctuations in atmospheric forcing. We demonstrate that local atmospheric forcing mainly triggers MSL variability on timescales up to a few years, with the inverted barometric effect dominating the variability along the UK and Norwegian coastlines and wind (piling up the water along the coast) controlling the MSL variability in the south from Belgium up to Denmark. However, in addition to the large inter-annual sea level variability there is also a considerable fraction of decadal scale variability. We show that on decadal timescales MSL variability in the North Sea mainly reflects steric changes, which are mostly remotely forced. A spatial correlation analysis of altimetry observations and baroclinic ocean model outputs suggests evidence for a coherent signal extending from the Norwegian shelf down to the Canary Islands. This supports the theory of longshore wind forcing along the eastern boundary of the North Atlantic causing coastally trapped waves to propagate along the continental slope. With a combination of oceanographic and meteorological measurements we demonstrate that ~80% of the decadal sea level variability in the North Sea can be explained as response of the ocean to longshore wind forcing, including boundary wave propagation in the Northeast Atlantic. These findings have important implications for (i) detecting significant accelerations in North Sea MSL, (ii) the conceptual set up of regional ocean models in terms of resolution and boundary conditions, and (iii) the development of adequate and realistic regional climate change projections.

  18. ZERO-G - Crippen, Robert L.

    NASA Image and Video Library

    1979-04-03

    Zero-gravity experiments in KC-135 conducted by John Young, Robert L. Crippen, Joseph Kerwin, and Margaret Seddon. 1. Kerwin, Joseph - Zero-G 2. Seddon, Margaret - Zero-G 3. Young, John - Zero-G 4. Aircraft - KC-135

  19. A Random Variable Transformation Process.

    ERIC Educational Resources Information Center

    Scheuermann, Larry

    1989-01-01

    Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
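
    In the same spirit as the BASIC program described above, the sketch below turns uniform random numbers into variates from two of the listed distributions; it is written in Python rather than BASIC, and the parameter values are arbitrary.

    ```python
    # Hedged sketch: exponential variates by the inverse-transform method and Poisson
    # variates by the classic multiply-uniforms-until-below-exp(-mean) recipe.
    import math
    import random

    random.seed(42)

    def exponential_variate(rate: float) -> float:
        """X = -ln(U) / rate is exponentially distributed with the given rate."""
        return -math.log(1.0 - random.random()) / rate

    def poisson_variate(mean: float) -> int:
        """Count uniform draws until their running product falls below exp(-mean)."""
        limit, product, count = math.exp(-mean), 1.0, 0
        while True:
            product *= random.random()
            if product <= limit:
                return count
            count += 1

    sample = [exponential_variate(rate=2.0) for _ in range(10_000)]
    print(f"exponential sample mean ~ {sum(sample) / len(sample):.3f} (theory: 0.5)")
    print("Poisson draws:", [poisson_variate(mean=3.0) for _ in range(10)])
    ```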

  20. New approach application of data transformation in mean centering of ratio spectra method

    NASA Astrophysics Data System (ADS)

    Issa, Mahmoud M.; Nejem, R.'afat M.; Van Staden, Raluca Ioana Stefan; Aboul-Enein, Hassan Y.

    2015-05-01

    Most mean centering of ratio spectra (MCR) methods are designed to be used with data sets whose values have a normal or nearly normal distribution. The errors associated with the values are also assumed to be independent and random. If the data are skewed, the results obtained may be doubtful. Most of the time, a normal distribution is assumed, and if a confidence interval includes a negative value, it is cut off at zero. However, it is possible to transform the data so that at least an approximately normal distribution is attained. Taking the logarithm of each data point is one frequently used transformation. As a result, the geometric mean is considered a better measure of central tendency than the arithmetic mean. The developed MCR method using the geometric mean has been successfully applied to the analysis of a ternary mixture of aspirin (ASP), atorvastatin (ATOR) and clopidogrel (CLOP) as a model. The results obtained were statistically compared with a reported HPLC method.
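
    A small numerical illustration of the log-transformation argument above; the data values are invented and simply skewed by one large observation.

    ```python
    # Hedged illustration: averaging on the log scale is the geometric mean, which is
    # pulled less by extreme values than the arithmetic mean.
    import numpy as np

    signal = np.array([0.8, 1.1, 0.9, 1.3, 1.0, 6.5])      # skewed by one large value

    arithmetic_mean = signal.mean()
    geometric_mean = np.exp(np.log(signal).mean())          # mean on the log scale, back-transformed

    print(f"arithmetic mean = {arithmetic_mean:.2f}")       # ~1.93, dragged up by 6.5
    print(f"geometric mean  = {geometric_mean:.2f}")        # ~1.37, closer to the bulk of the data
    ```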

  1. Generating and controlling homogeneous air turbulence using random jet arrays

    NASA Astrophysics Data System (ADS)

    Carter, Douglas; Petersen, Alec; Amili, Omid; Coletti, Filippo

    2016-12-01

    The use of random jet arrays, already employed in water tank facilities to generate zero-mean-flow homogeneous turbulence, is extended to air as a working fluid. A novel facility is introduced that uses two facing arrays of individually controlled jets (256 in total) to force steady homogeneous turbulence with negligible mean flow, shear, and strain. Quasi-synthetic jet pumps are created by expanding pressurized air through small straight nozzles and are actuated by fast-response low-voltage solenoid valves. Velocity fields, two-point correlations, energy spectra, and second-order structure functions are obtained from 2D PIV and are used to characterize the turbulence from the integral-to-the Kolmogorov scales. Several metrics are defined to quantify how well zero-mean-flow homogeneous turbulence is approximated for a wide range of forcing and geometric parameters. With increasing jet firing time duration, both the velocity fluctuations and the integral length scales are augmented and therefore the Reynolds number is increased. We reach a Taylor-microscale Reynolds number of 470, a large-scale Reynolds number of 74,000, and an integral-to-Kolmogorov length scale ratio of 680. The volume of the present homogeneous turbulence, the largest reported to date in a zero-mean-flow facility, is much larger than the integral length scale, allowing for the natural development of the energy cascade. The turbulence is found to be anisotropic irrespective of the distance between the jet arrays. Fine grids placed in front of the jets are effective at modulating the turbulence, reducing both velocity fluctuations and integral scales. Varying the jet-to-jet spacing within each array has no effect on the integral length scale, suggesting that this is dictated by the length scale of the jets.

  2. Global mean-field phase diagram of the spin-1 Ising ferromagnet in a random crystal field

    NASA Astrophysics Data System (ADS)

    Borelli, M. E. S.; Carneiro, C. E. I.

    1996-02-01

    We study the phase diagram of the mean-field spin-1 Ising ferromagnet in a uniform magnetic field H and a random crystal field Δ_i, with probability distribution P(Δ_i) = p δ(Δ_i - Δ) + (1 - p) δ(Δ_i). We analyse the effects of randomness on the first-order surfaces of the Δ-T-H phase diagram for different values of the concentration p and show how these surfaces are affected by the dilution of the crystal field.

  3. Simulation Analysis of Zero Mean Flow Edge Turbulence in LAPD

    NASA Astrophysics Data System (ADS)

    Friedman, Brett Cory

    I model, simulate, and analyze the turbulence in a particular experiment on the Large Plasma Device (LAPD) at UCLA. The experiment, conducted by Schaffner et al. [D. Schaffner et al., Phys. Rev. Lett. 109, 135002 (2012)], nulls out the intrinsic mean flow in LAPD by limiter biasing. The model that I use in the simulation is an electrostatic reduced Braginskii two-fluid model that describes the time evolution of density, electron temperature, electrostatic potential, and parallel electron velocity fluctuations in the edge region of LAPD. The spatial domain is annular, encompassing the radial coordinates over which a significant equilibrium density gradient exists. My model breaks the independent variables in the equations into time-independent equilibrium parts and time-dependent fluctuating parts, and I use experimentally obtained values as input for the equilibrium parts. After an initial exponential growth period due to a linear drift wave instability, the fluctuations saturate and the frequency and azimuthal wavenumber spectra become broadband with no visible coherent peaks, at which point the fluctuations become turbulent. The turbulence develops intermittent pressure and flow filamentary structures that grow and dissipate, but look much different than the unstable linear drift waves, primarily in the extremely long axial wavelengths that the filaments possess. An energy dynamics analysis that I derive reveals the mechanism that drives these structures. The long k_∥ ~ 0 intermittent potential filaments convect equilibrium density across the equilibrium density gradient, setting up local density filaments. These density filaments, also with k_∥ ~ 0, produce azimuthal density gradients, which drive radially propagating secondary drift waves. These finite-k_∥ drift waves nonlinearly couple to one another and reinforce the original convective filament, allowing the process to bootstrap itself. The growth of these structures is by nonlinear instability because

  4. Robust inference in summary data Mendelian randomization via the zero modal pleiotropy assumption.

    PubMed

    Hartwig, Fernando Pires; Davey Smith, George; Bowden, Jack

    2017-12-01

    Mendelian randomization (MR) is being increasingly used to strengthen causal inference in observational studies. Availability of summary data of genetic associations for a variety of phenotypes from large genome-wide association studies (GWAS) allows straightforward application of MR using summary data methods, typically in a two-sample design. In addition to the conventional inverse variance weighting (IVW) method, recently developed summary data MR methods, such as the MR-Egger and weighted median approaches, allow a relaxation of the instrumental variable assumptions. Here, a new method - the mode-based estimate (MBE) - is proposed to obtain a single causal effect estimate from multiple genetic instruments. The MBE is consistent when the largest number of similar (identical in infinite samples) individual-instrument causal effect estimates comes from valid instruments, even if the majority of instruments are invalid. We evaluate the performance of the method in simulations designed to mimic the two-sample summary data setting, and demonstrate its use by investigating the causal effect of plasma lipid fractions and urate levels on coronary heart disease risk. The MBE presented less bias and lower type-I error rates than other methods under the null in many situations. Its power to detect a causal effect was smaller compared with the IVW and weighted median methods, but was larger than that of MR-Egger regression, with sample size requirements typically smaller than those available from GWAS consortia. The MBE relaxes the instrumental variable assumptions, and should be used in combination with other approaches in sensitivity analyses. © The Author 2017. Published by Oxford University Press on behalf of the International Epidemiological Association
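
    As a rough sketch of the mode-based idea (simulated data, with simplified weighting and bandwidth choices rather than the paper's exact rules): compute the per-instrument Wald ratio of SNP-outcome to SNP-exposure associations, smooth the ratios with a kernel density, and take the mode.

```python
import numpy as np
from scipy.stats import gaussian_kde

def mode_based_estimate(beta_exp, beta_out, se_out):
    """Mode of the smoothed distribution of per-instrument Wald ratios."""
    ratios = beta_out / beta_exp                # causal estimate implied by each instrument
    weights = (beta_exp / se_out) ** 2          # crude inverse-variance-style weights
    kde = gaussian_kde(ratios, weights=weights)
    grid = np.linspace(ratios.min(), ratios.max(), 2000)
    return grid[np.argmax(kde(grid))]

# Toy two-sample summary data: 40 valid instruments (true effect 0.3), 10 pleiotropic ones
rng = np.random.default_rng(1)
bx = rng.normal(0.10, 0.02, 50)
by = 0.3 * bx + rng.normal(0.0, 0.005, 50)
by[:10] += 0.05                                 # invalid instruments biased upward
print(mode_based_estimate(bx, by, np.full(50, 0.005)))
```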

  5. Regression Analysis with Dummy Variables: Use and Interpretation.

    ERIC Educational Resources Information Center

    Hinkle, Dennis E.; Oliver, J. Dale

    1986-01-01

    Multiple regression analysis (MRA) may be used when both continuous and categorical variables are included as independent research variables. The use of MRA with categorical variables involves dummy coding, that is, assigning zeros and ones to levels of categorical variables. Caution is urged in results interpretation. (Author/CH)
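
    A minimal illustration of dummy coding with hypothetical data, using pandas and statsmodels: a k-level categorical predictor becomes k−1 zero/one columns, and each coefficient is interpreted relative to the omitted reference level.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one continuous and one categorical independent variable
df = pd.DataFrame({
    "score":  [72, 85, 90, 65, 78, 88, 70, 95],
    "hours":  [5, 9, 11, 4, 7, 10, 6, 12],
    "school": ["A", "B", "C", "A", "B", "C", "A", "C"],
})

# Dummy coding: drop_first keeps k-1 columns and avoids perfect collinearity
X = pd.get_dummies(df[["hours", "school"]], columns=["school"], drop_first=True)
model = sm.OLS(df["score"], sm.add_constant(X.astype(float))).fit()
print(model.params)   # school_B and school_C are contrasts against the omitted level A
```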

  6. Do Zero-Cost Workers’ Compensation Medical Claims Really Have Zero Costs?

    PubMed Central

    Asfaw, Abay; Rosa, Roger; Mao, Rebecca

    2015-01-01

    Objective Previous research suggests that non–workers’ compensation (WC) insurance systems, such as group health insurance (GHI), Medicare, or Medicaid, at least partially cover work-related injury and illness costs. This study further examined GHI utilization and costs. Methods Using a two-part model, we compared these outcomes immediately after injuries for which accepted WC medical claims made zero or positive medical payments. Results Controlling for pre-injury GHI utilization and costs and other covariates, our results indicated that post-injury GHI utilization and costs increased regardless of whether a WC medical claim was zero or positive. The increases were highest for zero-cost WC medical claims. Conclusion Our national estimates showed that zero-cost WC medical claims alone could cost the GHI $212 million per year. PMID:24316724

  7. The turbulent mean-flow, Reynolds-stress, and heat flux equations in mass-averaged dependent variables

    NASA Technical Reports Server (NTRS)

    Rubesin, M. W.; Rose, W. C.

    1973-01-01

    The time-dependent, turbulent mean-flow, Reynolds stress, and heat flux equations in mass-averaged dependent variables are presented. These equations are given in conservative form for both generalized orthogonal and axisymmetric coordinates. For the case of small viscosity and thermal conductivity fluctuations, these equations are considerably simpler than the general Reynolds system of dependent variables for a compressible fluid and permit a more direct extension of low speed turbulence modeling to computer codes describing high speed turbulence fields.

  8. A Geostatistical Scaling Approach for the Generation of Non Gaussian Random Variables and Increments

    NASA Astrophysics Data System (ADS)

    Guadagnini, Alberto; Neuman, Shlomo P.; Riva, Monica; Panzeri, Marco

    2016-04-01

    We address manifestations of non-Gaussian statistical scaling displayed by many variables, Y, and their (spatial or temporal) increments. Evidence of such behavior includes symmetry of increment distributions at all separation distances (or lags) with sharp peaks and heavy tails which tend to decay asymptotically as lag increases. Variables reported to exhibit such distributions include quantities of direct relevance to hydrogeological sciences, e.g. porosity, log permeability, electrical resistivity, soil and sediment texture, sediment transport rate, rainfall, measured and simulated turbulent fluid velocity, and others. No model known to us captures all of the documented statistical scaling behaviors in a unique and consistent manner. We recently proposed a generalized sub-Gaussian model (GSG) which reconciles within a unique theoretical framework the probability distributions of a target variable and its increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. In this context, we demonstrated the feasibility of estimating all key parameters of a GSG model underlying a single realization of Y by analyzing jointly spatial moments of Y data and corresponding increments. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of the corresponding isotropic or anisotropic random field, and explore them on one- and two-dimensional synthetic test cases.

  9. Evaluation of the Use of Zero-Augmented Regression Techniques to Model Incidence of Campylobacter Infections in FoodNet.

    PubMed

    Tremblay, Marlène; Crim, Stacy M; Cole, Dana J; Hoekstra, Robert M; Henao, Olga L; Döpfer, Dörte

    2017-10-01

    The Foodborne Diseases Active Surveillance Network (FoodNet) is currently using a negative binomial (NB) regression model to estimate temporal changes in the incidence of Campylobacter infection. FoodNet active surveillance in 483 counties collected data on 40,212 Campylobacter cases between years 2004 and 2011. We explored models that disaggregated these data to allow us to account for demographic, geographic, and seasonal factors when examining changes in incidence of Campylobacter infection. We hypothesized that modeling structural zeros and including demographic variables would increase the fit of FoodNet's Campylobacter incidence regression models. Five different models were compared: NB without demographic covariates, NB with demographic covariates, hurdle NB with covariates in the count component only, hurdle NB with covariates in both zero and count components, and zero-inflated NB with covariates in the count component only. Of the models evaluated, the nonzero-augmented NB model with demographic variables provided the best fit. Results suggest that even though zero inflation was not present at this level, individualizing the level of aggregation and using different model structures and predictors per site might be required to correctly distinguish between structural and observational zeros and account for risk factors that vary geographically.
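
    The kind of comparison described (plain versus zero-augmented negative binomial, judged by fit) can be sketched with statsmodels, assuming a version that provides ZeroInflatedNegativeBinomialP; the covariates, sample size, and data-generating process below are simulated and only illustrate the model-selection step, not the FoodNet analysis, and hurdle variants are omitted for brevity.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

# Hypothetical disaggregated counts with two covariates (not FoodNet data)
rng = np.random.default_rng(0)
n = 500
x = sm.add_constant(rng.normal(size=(n, 2)))
mu = np.exp(0.5 + 0.4 * x[:, 1] - 0.2 * x[:, 2])
y = rng.negative_binomial(n=2, p=2 / (2 + mu))        # overdispersed counts with mean mu

nb = sm.NegativeBinomial(y, x).fit(disp=0)
zinb = ZeroInflatedNegativeBinomialP(y, x, exog_infl=x[:, :1]).fit(disp=0, maxiter=200)

# Lower AIC indicates a better fit/complexity trade-off, mirroring the comparison above
print("NB AIC:", round(nb.aic, 1), " ZINB AIC:", round(zinb.aic, 1))
```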

  10. A Comparison of Zero-Profile Devices and Artificial Cervical Disks in Patients With 2 Noncontiguous Levels of Cervical Spondylosis.

    PubMed

    Qizhi, Sun; Lei, Sun; Peijia, Li; Hanping, Zhao; Hongwei, Hu; Junsheng, Chen; Jianmin, Li

    2016-03-01

    A prospective randomized and controlled study of 30 patients with 2 noncontiguous levels of cervical spondylosis. To compare the clinical outcome between zero-profile devices and artificial cervical disks for noncontiguous cervical spondylosis. Noncontiguous cervical spondylosis is a particular degenerative disease of the cervical spine. Some controversy exists over the choice of surgical procedure and fusion levels for it because of the viewpoint that the stress at levels adjacent to a fusion mass will increase. The increased stress will lead to adjacent segment degeneration (ASD). According to this viewpoint, the intermediate segment will bear more stress after fusion of both the superior and inferior segments. Cervical disk arthroplasty is an alternative to fusion because it preserves motion. Few comparative studies have been conducted on arthrodesis with zero-profile devices and arthroplasty with artificial cervical disks for noncontiguous cervical spondylosis. Thirty patients with 2 noncontiguous levels of cervical spondylosis were enrolled and assigned to either group A (receiving arthroplasty using artificial cervical disks) or group Z (receiving arthrodesis using zero-profile devices). The clinical outcomes were assessed by the mean operative time, blood loss, Japanese Orthopedic Association (JOA) score, Neck Dysfunction Index (NDI), cervical lordosis, fusion rate, and complications. The mean follow-up was 32.4 months. There were no significant differences between the 2 groups in blood loss, JOA score, NDI score, or cervical lordosis, except for operative time. The mean operative time of group A was shorter than that of group Z. Both groups demonstrated a significant increase in JOA score, NDI score, and cervical lordosis. The fusion rate was 100% at 12 months postoperatively in group Z. There was no significant difference between the 2 groups in complications except the ASD. Three patients had radiologic ASD at the final follow-up in group Z, and

  11. Energy density and variability in abundance of pigeon guillemot prey: Support for the quality-variability trade-off hypothesis

    USGS Publications Warehouse

    Litzow, Michael A.; Piatt, John F.; Abookire, Alisa A.; Robards, Martin D.

    2004-01-01

    1. The quality-variability trade-off hypothesis predicts that (i) energy density (kJ g⁻¹) and spatial-temporal variability in abundance are positively correlated in nearshore marine fishes; and (ii) prey selection by a nearshore piscivore, the pigeon guillemot (Cepphus columba Pallas), is negatively affected by variability in abundance. 2. We tested these predictions with data from a 4-year study that measured fish abundance with beach seines and pigeon guillemot prey utilization with visual identification of chick meals. 3. The first prediction was supported. Pearson's correlation showed that fishes with higher energy density were more variable on seasonal (r = 0.71) and annual (r = 0.66) time scales. Higher energy density fishes were also more abundant overall (r = 0.85) and more patchy at a scale of tens of km (r = 0.77). 4. Prey utilization by pigeon guillemots was strongly non-random. Relative preference, defined as the difference between log-ratio transformed proportions of individual prey taxa in chick diets and beach seine catches, was significantly different from zero for seven of the eight main prey categories. 5. The second prediction was also supported. We used principal component analysis (PCA) to summarize variability in correlated prey characteristics (energy density, availability and variability in abundance). Two PCA scores explained 32% of observed variability in pigeon guillemot prey utilization. Seasonal variability in abundance was negatively weighted by these PCA scores, providing evidence of risk-averse selection. Prey availability, energy density and km-scale variability in abundance were positively weighted. 6. Trophic interactions are known to create variability in resource distribution in other systems. We propose that links between resource quality and the strength of trophic interactions may produce resource quality-variability trade-offs.
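
    One common reading of "difference between log-ratio transformed proportions" (point 4) is a centred log-ratio contrast between diet and availability; the sketch below applies that reading to made-up proportions, not the study's data, so treat the exact transform as an assumption.

```python
import numpy as np

def relative_preference(diet_prop, seine_prop):
    """Difference of centred log-ratio transformed proportions (diet minus availability)."""
    clr = lambda p: np.log(p) - np.mean(np.log(p))
    return clr(np.asarray(diet_prop)) - clr(np.asarray(seine_prop))

diet  = [0.40, 0.25, 0.20, 0.15]   # hypothetical proportions of prey taxa in chick meals
seine = [0.10, 0.30, 0.35, 0.25]   # hypothetical proportions in beach seine catches
print(np.round(relative_preference(diet, seine), 2))   # positive = preferred, negative = avoided
```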

  12. Time-variant random interval natural frequency analysis of structures

    NASA Astrophysics Data System (ADS)

    Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin

    2018-02-01

    This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random-interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The combined analysis framework exploits the strengths of both methods so that the computational cost is dramatically reduced. The presented method is thus capable of accurately and efficiently investigating the day-to-day, time-variant natural frequency of structures under concrete creep, with both probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through an optimization strategy embedded within the analysis procedure. Three numerical examples, progressively more complex in both structure type and uncertainty variables, are presented to demonstrate the applicability, accuracy and efficiency of the proposed method.

  13. Effect of platykurtic and leptokurtic distributions in the random-field Ising model: mean-field approach.

    PubMed

    Duarte Queirós, Sílvio M; Crokidakis, Nuno; Soares-Pinto, Diogo O

    2009-07-01

    The influence of the tail features of the local magnetic field probability density function (PDF) on the ferromagnetic Ising model is studied in the limit of infinite-range interactions. Specifically, we assign to each site a quenched random field whose value is drawn from a generic distribution that covers platykurtic and leptokurtic shapes depending on a single parameter tau < 3. For tau < 5/3, such distributions, which are essentially Student-t and r distributions extended to all plausible real degrees of freedom, have a finite standard deviation; otherwise, the distribution has the same asymptotic power-law behavior as an alpha-stable Lévy distribution with alpha = (3 - tau)/(tau - 1). For every value of tau, at a specific temperature and width of the distribution, the system undergoes a continuous phase transition. Strikingly, we report the emergence of an inflection point in the temperature versus PDF-width phase diagrams for distributions broader than the Cauchy-Lorentz (tau = 2), which is accompanied by a divergent free energy per spin (at zero temperature).

  14. Synthesis, Characterization and Reactivity of Nanostructured Zero-Valent Iron Particles for Degradation of Azo Dyes

    NASA Astrophysics Data System (ADS)

    Mikhailov, Ivan; Levina, Vera; Leybo, Denis; Masov, Vsevolod; Tagirov, Marat; Kuznetsov, Denis

    Nanostructured zero-valent iron (NSZVI) particles were synthesized by ferric ion reduction with sodium borohydride, with subsequent drying and passivation at room temperature in technical-grade nitrogen. The obtained sample was characterized by X-ray powder diffraction, scanning electron microscopy, transmission electron microscopy and dynamic light scattering. The prepared NSZVI particles form 100-200 nm aggregates consisting of 20-30 nm iron nanoparticles in the zero-valent oxidation state covered by a thin oxide shell. The reactivity of the NSZVI sample, expressed as the removal efficiency of refractory azo dyes, was investigated in this study. Two azo dye compounds, orange G and methyl orange, commonly detected in textile production wastewater, were used. Experimental variables such as NSZVI dosage, initial dye concentration and solution pH were investigated. The kinetic rates of degradation of both dyes by NSZVI increased as the solution pH decreased from 10 to 3 and as the NSZVI dosage increased, but decreased with increasing initial dye concentration. The removal efficiencies achieved for both orange G and methyl orange were higher than 90% after 80 min of treatment.

  15. Zero ischemia anatomical partial nephrectomy: a novel approach.

    PubMed

    Gill, Inderbir S; Patil, Mukul B; Abreu, Andre Luis de Castro; Ng, Casey; Cai, Jie; Berger, Andre; Eisenberg, Manuel S; Nakamoto, Masahiko; Ukimura, Osamu; Goh, Alvin C; Thangathurai, Duraiyah; Aron, Monish; Desai, Mihir M

    2012-03-01

    We present a novel concept of zero ischemia anatomical robotic and laparoscopic partial nephrectomy. Our technique primarily involves anatomical vascular microdissection and preemptive control of tumor specific, tertiary or higher order renal arterial branch(es) using neurosurgical aneurysm micro-bulldog clamps. In 58 consecutive patients the majority (70%) had anatomically complex tumors including central (67%), hilar (26%), completely intrarenal (23%), pT1b (18%) and solitary kidney (7%). Data were prospectively collected and analyzed from an institutional review board approved database. Of 58 cases undergoing zero ischemia robotic (15) or laparoscopic (43) partial nephrectomy, 57 (98%) were completed without hilar clamping. Mean tumor size was 3.2 cm, mean ± SD R.E.N.A.L. score 7.0 ± 1.9, C-index 2.9 ± 2.4, operative time 4.4 hours, blood loss 206 cc and hospital stay 3.9 days. There were no intraoperative complications. Postoperative complications (22.8%) were low grade (Clavien grade 1 to 2) in 19.3% and high grade (Clavien grade 3 to 5) in 3.5%. All patients had negative cancer surgical margins (100%). Mean absolute and percent change in preoperative vs 4-month postoperative serum creatinine (0.2 mg/dl, 18%), estimated glomerular filtration rate (-11.4 ml/minute/1.73 m(2), 13%), and ipsilateral kidney function on radionuclide scanning at 6 months (-10%) correlated with mean percent kidney excised intraoperatively (18%). Although 21% of patients received a perioperative blood transfusion, no patient had acute or delayed renal hemorrhage, or lost a kidney. The concept of zero ischemia robotic and laparoscopic partial nephrectomy is presented. This anatomical vascular microdissection of the artery first and then tumor allows even complex tumors to be excised without hilar clamping. Global surgical renal ischemia is unnecessary for the majority of patients undergoing robotic and laparoscopic partial nephrectomy at our institution. Copyright © 2012 American

  16. Mechanisms of Zero-Lag Synchronization in Cortical Motifs

    PubMed Central

    Gollo, Leonardo L.; Mirasso, Claudio; Sporns, Olaf; Breakspear, Michael

    2014-01-01

    Zero-lag synchronization between distant cortical areas has been observed in a diversity of experimental data sets and between many different regions of the brain. Several computational mechanisms have been proposed to account for such isochronous synchronization in the presence of long conduction delays: Of these, the phenomenon of “dynamical relaying” – a mechanism that relies on a specific network motif – has proven to be the most robust with respect to parameter mismatch and system noise. Surprisingly, despite a contrary belief in the community, the common driving motif is an unreliable means of establishing zero-lag synchrony. Although dynamical relaying has been validated in empirical and computational studies, the deeper dynamical mechanisms and comparison to dynamics on other motifs is lacking. By systematically comparing synchronization on a variety of small motifs, we establish that the presence of a single reciprocally connected pair – a “resonance pair” – plays a crucial role in disambiguating those motifs that foster zero-lag synchrony in the presence of conduction delays (such as dynamical relaying) from those that do not (such as the common driving triad). Remarkably, minor structural changes to the common driving motif that incorporate a reciprocal pair recover robust zero-lag synchrony. The findings are observed in computational models of spiking neurons, populations of spiking neurons and neural mass models, and arise whether the oscillatory systems are periodic, chaotic, noise-free or driven by stochastic inputs. The influence of the resonance pair is also robust to parameter mismatch and asymmetrical time delays amongst the elements of the motif. We call this manner of facilitating zero-lag synchrony resonance-induced synchronization, outline the conditions for its occurrence, and propose that it may be a general mechanism to promote zero-lag synchrony in the brain. PMID:24763382

  17. Numerical analysis of nonminimum phase zero for nonuniform link design

    NASA Technical Reports Server (NTRS)

    Girvin, Douglas L.; Book, Wayne J.

    1991-01-01

    As the demand for light-weight robots that can operate in a large workspace increases, the structural flexibility of the links becomes more of an issue in control. When the objective is to accurately position the tip while the robot is actuated at the base, the system is nonminimum phase. One important characteristic of nonminimum phase systems is system zeros in the right half of the Laplace plane. The ability to pick the location of these nonminimum phase zeros would give the designer a new freedom similar to pole placement. This research targets a single-link manipulator operating in the horizontal plane and modeled as a Euler-Bernoulli beam with pinned-free end conditions. Using transfer matrix theory, one can consider link designs that have variable cross-sections along the length of the beam. A FORTRAN program was developed to determine the location of poles and zeros given the system model. The program was used to confirm previous research on nonminimum phase systems, and develop a relationship for designing linearly tapered links. The method allows the designer to choose the location of the first pole and zero and then defines the appropriate taper to match the desired locations. With the pole and zero location fixed, the designer can independently change the link's moment of inertia about its axis of rotation by adjusting the height of the beam. These results can be applied to the inverse dynamic algorithms that are currently under development.

  18. Numerical analysis of nonminimum phase zero for nonuniform link design

    NASA Astrophysics Data System (ADS)

    Girvin, Douglas L.; Book, Wayne J.

    1991-11-01

    As the demand for light-weight robots that can operate in a large workspace increases, the structural flexibility of the links becomes more of an issue in control. When the objective is to accurately position the tip while the robot is actuated at the base, the system is nonminimum phase. One important characteristic of nonminimum phase systems is system zeros in the right half of the Laplace plane. The ability to pick the location of these nonminimum phase zeros would give the designer a new freedom similar to pole placement. This research targets a single-link manipulator operating in the horizontal plane and modeled as a Euler-Bernoulli beam with pinned-free end conditions. Using transfer matrix theory, one can consider link designs that have variable cross-sections along the length of the beam. A FORTRAN program was developed to determine the location of poles and zeros given the system model. The program was used to confirm previous research on nonminimum phase systems, and develop a relationship for designing linearly tapered links. The method allows the designer to choose the location of the first pole and zero and then defines the appropriate taper to match the desired locations. With the pole and zero location fixed, the designer can independently change the link's moment of inertia about its axis of rotation by adjusting the height of the beam. These results can be applied to the inverse dynamic algorithms that are currently under development.

  19. A time to search: finding the meaning of variable activation energy.

    PubMed

    Vyazovkin, Sergey

    2016-07-28

    This review deals with the phenomenon of variable activation energy frequently observed when studying the kinetics in the liquid or solid phase. This phenomenon commonly manifests itself through nonlinear Arrhenius plots or dependencies of the activation energy on conversion computed by isoconversional methods. Variable activation energy signifies a multi-step process and has a meaning of a collective parameter linked to the activation energies of individual steps. It is demonstrated that by using appropriate models of the processes, the link can be established in algebraic form. This allows one to analyze experimentally observed dependencies of the activation energy in a quantitative fashion and, as a result, to obtain activation energies of individual steps, to evaluate and predict other important parameters of the process, and generally to gain deeper kinetic and mechanistic insights. This review provides multiple examples of such analysis as applied to the processes of crosslinking polymerization, crystallization and melting of polymers, gelation, and solid-solid morphological and glass transitions. The use of appropriate computational techniques is discussed as well.

  20. Phenomenological picture of fluctuations in branching random walks

    NASA Astrophysics Data System (ADS)

    Mueller, A. H.; Munier, S.

    2014-10-01

    We propose a picture of the fluctuations in branching random walks, which leads to predictions for the distribution of a random variable that characterizes the position of the bulk of the particles. We also interpret the 1/√t correction to the average position of the rightmost particle of a branching random walk for large times t ≫ 1, computed by Ebert and Van Saarloos, as fluctuations on top of the mean-field approximation of this process with a Brunet-Derrida cutoff at the tip that simulates discreteness. Our analytical formulas successfully compare to numerical simulations of a particular model of a branching random walk.
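
    A toy version of the N-particle picture with a selection cutoff (in the spirit of Brunet-Derrida, though not the authors' specific model) can be simulated in a few lines; N, the step distribution, and the run length below are arbitrary choices.

```python
import numpy as np

# Brunet-Derrida style N-particle branching random walk: every particle splits in two,
# offspring take Gaussian steps, and only the N rightmost particles survive (a cutoff
# at the tip that mimics discreteness).
rng = np.random.default_rng(2)
N, T = 1000, 200
pos = np.zeros(N)
front = np.empty(T)
for t in range(T):
    children = np.repeat(pos, 2) + rng.normal(0.0, 1.0, 2 * N)
    pos = np.sort(children)[-N:]          # selection: keep the N rightmost particles
    front[t] = pos.max()

speed = np.polyfit(np.arange(T), front, 1)[0]
print(f"estimated front speed: {speed:.3f}")   # approaches sqrt(2*ln 2) ~ 1.18 for large N
```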

  1. Welfare analysis of a zero-smoking policy - A case study in Japan.

    PubMed

    Nakamura, Yuuki; Takahashi, Kenzo; Nomura, Marika; Kamei, Miwako

    2018-03-19

    Smoking cessation efforts in Japan reduce smoking rates. A future zero-smoking policy would completely prohibit smoking (0% rate). We therefore analyzed the social welfare of smokers and non-smokers under a hypothetical zero-smoking policy. The demand curve for smoking from 1990 to 2014 was estimated by defining quantity as the number of cigarettes smoked and price as total tobacco sales divided by total cigarettes smoked, using the two-stage least squares method with the tax on tobacco as the instrumental variable. In the estimation equation (calculated using the ordinary least squares method), the price of tobacco was the dependent variable and tobacco quantity the explanatory variable. The estimated constant was 31.90, the estimated coefficient of quantity was −0.0061 (both p < 0.0004), and the coefficient of determination was 0.9187. Thus, the 2015 consumer surplus was 1.08 trillion yen (US$ 9.82 billion) (95% confidence interval (CI), 889 billion yen (US$ 8.08 billion) - 1.27 trillion yen (US$ 11.6 billion)). Because tax revenue from tobacco in 2011 was 2.38 trillion yen (US$ 21.6 billion), the estimated deadweight loss if smoking were prohibited in 2014 was 3.31 trillion yen (US$ 30.2 billion) (95% CI, 3.13 trillion yen (US$ 28.5 billion) - 3.50 trillion yen (US$ 31.8 billion)), representing a deadweight loss about 0.6 trillion yen (US$ 5.45 billion) below the 2014 disease burden (4.10-4.12 trillion yen (US$ 37.3-37.5 billion)). We conclude that a zero-smoking policy would improve social welfare in Japan.
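
    With a linear inverse demand curve P = a − b·Q, the consumer surplus at consumption Q is the triangle ½·b·Q². The sketch below plugs in the slope reported above; the quantity value is a placeholder chosen only to show the order-of-magnitude arithmetic, not a figure from the study, and the quantity units are left unstated as in the abstract.

```python
def consumer_surplus_linear(b, Q):
    """Consumer surplus under inverse demand P = a - b*Q: the triangle 0.5*b*Q^2."""
    return 0.5 * b * Q ** 2

b = 0.0061        # slope of the estimated inverse demand curve (from the abstract)
Q = 1.9e7         # HYPOTHETICAL quantity in the (unstated) estimation units
print(f"consumer surplus ~ {consumer_surplus_linear(b, Q):.2e} yen")
```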

  2. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using a t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure
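
    One way to make the efficiency comparison concrete (a sketch under a compound-symmetry/ICC working model with inverse-variance weighting, not necessarily the exact noncentrality-based definition used in the paper) is to compare the information contributed by the observed cluster sizes against equal-sized clusters with the same totals; the cluster sizes and ICC below are invented.

```python
import numpy as np

def cluster_information(sizes, icc):
    """Effective information contributed by clusters of the given sizes
    under a compound-symmetry (intraclass correlation) working model."""
    sizes = np.asarray(sizes, dtype=float)
    return np.sum(sizes / (1.0 + (sizes - 1.0) * icc))

def relative_efficiency(sizes, icc):
    """Information with the observed unequal sizes relative to equal clusters,
    holding the number of clusters and the total sample size fixed."""
    k, n = len(sizes), np.sum(sizes)
    equal = cluster_information(np.full(k, n / k), icc)
    return cluster_information(sizes, icc) / equal

sizes = [10, 12, 25, 40, 8, 15, 30, 20]   # hypothetical cluster sizes
print(round(relative_efficiency(sizes, icc=0.05), 3))
```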

  3. Residual Defect Density in Random Disks Deposits.

    PubMed

    Topic, Nikola; Pöschel, Thorsten; Gallas, Jason A C

    2015-08-03

    We investigate the residual distribution of structural defects in very tall packings of disks deposited randomly in large channels. By performing simulations involving the sedimentation of up to 50 × 10⁹ particles we find all deposits to consistently show a non-zero residual density of defects obeying a characteristic power-law as a function of the channel width. This remarkable finding corrects the widespread belief that the density of defects should vanish algebraically with growing height. A non-zero residual density of defects implies a type of long-range spatial order in the packing, as opposed to only local ordering. In addition, we find deposits of particles to involve considerably less randomness than generally presumed.

  4. Tropical Pacific Mean State and ENSO Variability across Marine Isotope Stage 3

    NASA Astrophysics Data System (ADS)

    Hertzberg, J. E.; Schmidt, M. W.; Marcantonio, F.; Bianchi, T. S.

    2017-12-01

    The El Niño/Southern Oscillation (ENSO) phenomenon is the largest natural interannual signal in the Earth's climate system and has widespread effects on global climate that impact millions of people worldwide. A series of recent research studies predict an increase in the frequency of extreme El Niño and La Niña events as Earth's climate continues to warm. In order for climate scientists to forecast how ENSO will evolve in response to global warming, it is necessary to have accurate, comprehensive records of how the system has naturally changed in the past, especially across past abrupt warming events. Nevertheless, there remains significant uncertainty about past changes in tropical Pacific climate and how ENSO variability relates to the millennial-scale warming events of the last ice age. This study aims to reconstruct changes in the tropical Pacific mean state and ENSO variability across Marine Isotope Stage 3 from a sediment core recovered from the Eastern Equatorial Pacific cold tongue (MV1014-02-17JC, 0°10.8' S, 85°52.0' W, 2846 m water depth). In this region, thermocline temperatures are significantly correlated to ENSO variability - thus, we analyzed Mg/Ca ratios in the thermocline-dwelling foraminifera Neogloboquadrina dutertrei as a proxy for thermocline temperatures in the past. Bulk (~50 tests/sample) foraminifera Mg/Ca temperatures are used to reconstruct long-term variability in the mean state, while single shell (1 test/sample, ~60 samples) Mg/Ca analyses are used to assess thermocline temperature variance. Based on our refined age model, we find that thermocline temperature increases of up to 3.5°C occur in step with interstadial warming events recorded in Greenland ice cores. Cooler thermocline temperatures prevail during stadial intervals and Heinrich Events. This suggests that interstadials were more El Niño-like, while stadials and Heinrich Events were more La Niña-like. These temperature changes are compared to new records of dust flux

  5. Characteristics of buoyancy force on stagnation point flow with magneto-nanoparticles and zero mass flux condition

    NASA Astrophysics Data System (ADS)

    Uddin, Iftikhar; Khan, Muhammad Altaf; Ullah, Saif; Islam, Saeed; Israr, Muhammad; Hussain, Fawad

    2018-03-01

    This work is dedicated to the solution of the buoyancy effect over a stretching sheet in the presence of MHD stagnation-point flow with convective boundary conditions. Thermophoresis and Brownian motion aspects are included. The incompressible fluid is electrically conducting in the presence of a varying magnetic field. Boundary layer analysis is used to develop the mathematical formulation. A zero mass flux condition is imposed at the boundary. A non-linear system of ordinary differential equations is constructed by means of proper transformations. Intervals of convergence are established via numerical data and plots. The effects of the involved variables on the velocity, temperature and concentration distributions are sketched and discussed. The effects of the relevant parameters on Cf and Nu are examined by means of tables. It is found that the buoyancy ratio and magnetic parameters respectively increase and reduce the velocity field. Furthermore, opposite effects on the concentration distribution are noticed for higher values of the thermophoresis and Brownian motion parameters.

  6. Zero refractive index in time-Floquet acoustic metamaterials

    NASA Astrophysics Data System (ADS)

    Koutserimpas, Theodoros T.; Fleury, Romain

    2018-03-01

    New scientific investigations of artificially structured materials and experiments have exhibited wave manipulation to the extreme. In particular, zero refractive index metamaterials have been on the front line of wave physics research for their unique wave manipulation properties and application potentials. Remarkably, in such exotic materials, time-harmonic fields have an infinite wavelength and do not exhibit any spatial variations in their phase distribution. This unique feature can be achieved by forcing a Dirac cone to the center of the Brillouin zone ( Γ point), as previously predicted and experimentally demonstrated in time-invariant metamaterials by means of accidental degeneracy between three different modes. In this article, we propose a different approach that enables true conical dispersion at Γ with twofold degeneracy and generates zero index properties. We break time-reversal symmetry and exploit a time-Floquet modulation scheme to demonstrate a time-Floquet acoustic metamaterial with zero refractive index. This behavior, predicted using stroboscopic analysis, is confirmed by full-wave finite element simulations. Our results establish the relevance of time-Floquet metamaterials as a novel reconfigurable platform for wave control.

  7. ZERO SUPPRESSION FOR RECORDERS

    DOEpatents

    Fort, W.G.S.

    1958-12-30

    A zero-suppression circuit for self-balancing recorder instruments is presented. The essential elements of the circuit include a converter-amplifier having two inputs, one for a reference voltage and the other for the signal voltage under analysis, and a servomotor with two control windings, one coupled to the a-c output of the converter-amplifier and the other receiving a reference input. Each input circuit to the converter-amplifier has a variable potentiometer, and the sliders of the potentiometers are ganged together for movement by the servomotor. The particular novelty of the circuit resides in the selection of resistance values for the potentiometer and a resistor in series with the potentiometer of the signal circuit to ensure the full value of signal voltage variation is impressed on a recorder mechanism driven by the servomotor.

  8. Net Zero Water Update

    DTIC Science & Technology

    2011-05-12

    • www.epa.gov/nrmrl/pubs/600r09048/600r09048.pdf
    • http://www.epa.gov/awi/res_rehabilitation.html
    Net Zero Waste
    • http://www.army.mil/-news/2011/02...24/52403-net-zero-waste-goal-becoming-a-reality-at-jblm/
    • http://www.operationfree.net/2011/04/11/u-s-army-looks-to-net-zero-waste/

  9. Stochastic effects in EUV lithography: random, local CD variability, and printing failures

    NASA Astrophysics Data System (ADS)

    De Bisschop, Peter

    2017-10-01

    Stochastic effects in lithography are usually quantified through local CD variability metrics, such as line-width roughness or local CD uniformity (LCDU), and these quantities have been measured and studied intensively, both in EUV and optical lithography. Next to the CD-variability, stochastic effects can also give rise to local, random printing failures, such as missing contacts or microbridges in spaces. When these occur, there often is no (reliable) CD to be measured locally, and then such failures cannot be quantified with the usual CD-measuring techniques. We have developed algorithms to detect such stochastic printing failures in regular line/space (L/S) or contact- or dot-arrays from SEM images, leading to a stochastic failure metric that we call NOK (not OK), which we consider a complementary metric to the CD-variability metrics. This paper will show how both types of metrics can be used to experimentally quantify dependencies of stochastic effects to, e.g., CD, pitch, resist, exposure dose, etc. As it is also important to be able to predict upfront (in the OPC verification stage of a production-mask tape-out) whether certain structures in the layout are likely to have a high sensitivity to stochastic effects, we look into the feasibility of constructing simple predictors, for both stochastic CD-variability and printing failure, that can be calibrated for the process and exposure conditions used and integrated into the standard OPC verification flow. Finally, we briefly discuss the options to reduce stochastic variability and failure, considering the entire patterning ecosystem.

  10. Evidence for Large Decadal Variability in the Tropical Mean Radiative Energy Budget

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Wong, Takmeng; Allan, Richard; Slingo, Anthony; Kiehl, Jeffrey T.; Soden, Brian J.; Gordon, C. T.; Miller, Alvin J.; Yang, Shi-Keng; Randall, David R.; hide

    2001-01-01

    It is widely assumed that variations in the radiative energy budget at large time and space scales are very small. We present new evidence from a compilation of over two decades of accurate satellite data that the top-of-atmosphere (TOA) tropical radiative energy budget is much more dynamic and variable than previously thought. We demonstrate that the radiation budget changes are caused by changes in tropical mean cloudiness. The results of several current climate model simulations fail to predict this large observed variation in the tropical energy budget. The missing variability in the models highlights the critical need to improve cloud modeling in the tropics to support improved prediction of tropical climate on inter-annual and decadal time scales. We believe that these data are the first rigorous demonstration of decadal time scale changes in the Earth's tropical cloudiness, and that they represent a new and necessary test of climate models.

  11. Decadal variability in core surface flows deduced from geomagnetic observatory monthly means

    NASA Astrophysics Data System (ADS)

    Whaler, K. A.; Olsen, N.; Finlay, C. C.

    2016-10-01

    Monthly means of the magnetic field measurements at ground observatories are a key data source for studying temporal changes of the core magnetic field. However, when they are calculated in the usual way, contributions of external (magnetospheric and ionospheric) origin may remain, which make them less favourable for studying the field generated by dynamo action in the core. We remove external field predictions, including a new way of characterizing the magnetospheric ring current, from the data and then calculate revised monthly means using robust methods. The geomagnetic secular variation (SV) is calculated as the first annual differences of these monthly means, which also removes the static crustal field. SV time-series based on revised monthly means are much less scattered than those calculated from ordinary monthly means, and their variances and correlations between components are smaller. On the annual to decadal timescale, the SV is generated primarily by advection in the fluid outer core. We demonstrate the utility of the revised monthly means by calculating models of the core surface advective flow between 1997 and 2013 directly from the SV data. One set of models assumes flow that is constant over three months; such models exhibit large and rapid temporal variations. For models of this type, less complex flows achieve the same fit to the SV derived from revised monthly means than those from ordinary monthly means. However, those obtained from ordinary monthly means are able to follow excursions in SV that are likely to be external field contamination rather than core signals. Having established that we can find models that fit the data adequately, we then assess how much temporal variability is required. Previous studies have suggested that the flow is consistent with torsional oscillations (TO), solid body-like oscillations of fluid on concentric cylinders with axes aligned along the Earth's rotation axis. TO have been proposed to explain decadal

  12. Comparing statistical methods for analyzing skewed longitudinal count data with many zeros: an example of smoking cessation.

    PubMed

    Xie, Haiyi; Tao, Jill; McHugo, Gregory J; Drake, Robert E

    2013-07-01

    Count data with skewness and many zeros are common in substance abuse and addiction research. Zero-adjusting models, especially zero-inflated models, have become increasingly popular in analyzing this type of data. This paper reviews and compares five mixed-effects Poisson family models commonly used to analyze count data with a high proportion of zeros by analyzing a longitudinal outcome: number of smoking quit attempts from the New Hampshire Dual Disorders Study. The findings of our study indicated that count data with many zeros do not necessarily require zero-inflated or other zero-adjusting models. For rare event counts or count data with small means, a simpler model such as the negative binomial model may provide a better fit. Copyright © 2013 Elsevier Inc. All rights reserved.
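
    The paper's point that many zeros do not automatically call for zero-inflation can be illustrated by checking how much zero mass a plain Poisson versus a negative binomial already implies for overdispersed data; the counts below are simulated stand-ins, not the New Hampshire study data.

```python
import numpy as np
from scipy import stats

# Hypothetical skewed counts with many zeros (e.g., number of quit attempts)
rng = np.random.default_rng(3)
y = rng.negative_binomial(n=1, p=1 / 1.6, size=400)      # mean ~0.6, overdispersed

lam = y.mean()
var = y.var(ddof=1)
size = lam ** 2 / max(var - lam, 1e-9)                   # method-of-moments NB dispersion

obs_zero = np.mean(y == 0)
pois_zero = np.exp(-lam)                                 # P(Y=0) under a fitted Poisson
nb_zero = stats.nbinom.pmf(0, size, size / (size + lam)) # P(Y=0) under the moment-matched NB

print(f"observed P(0)={obs_zero:.2f}  Poisson={pois_zero:.2f}  NB={nb_zero:.2f}")
# A plain NB often reproduces the zero mass already; zero-inflation is not automatic.
```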

  13. Experimental phase diagram of zero-bias conductance peaks in superconductor/semiconductor nanowire devices

    PubMed Central

    Chen, Jun; Yu, Peng; Stenger, John; Hocevar, Moïra; Car, Diana; Plissard, Sébastien R.; Bakkers, Erik P. A. M.; Stanescu, Tudor D.; Frolov, Sergey M.

    2017-01-01

    Topological superconductivity is an exotic state of matter characterized by spinless p-wave Cooper pairing of electrons and by Majorana zero modes at the edges. The first signature of topological superconductivity is a robust zero-bias peak in tunneling conductance. We perform tunneling experiments on semiconductor nanowires (InSb) coupled to superconductors (NbTiN) and establish the zero-bias peak phase in the space of gate voltage and external magnetic field. Our findings are consistent with calculations for a finite-length topological nanowire and provide means for Majorana manipulation as required for braiding and topological quantum bits. PMID:28913432

  14. [Predictors of mean blood glucose control and its variability in diabetic hospitalized patients].

    PubMed

    Sáenz-Abad, Daniel; Gimeno-Orna, José Antonio; Sierra-Bergua, Beatriz; Pérez-Calvo, Juan Ignacio

    2015-01-01

    This study was intended to assess the effectiveness and predictive factors of inpatient blood glucose control in diabetic patients admitted to medical departments. A retrospective, analytical cohort study was conducted on patients discharged from internal medicine with a diagnosis related to diabetes. Variables collected included demographic characteristics, clinical data and laboratory parameters related to blood glucose control (HbA1c, basal plasma glucose, point-of-care capillary glucose). The cumulative probability of receiving scheduled insulin regimens was evaluated using Kaplan-Meier analysis. Multivariate regression models were used to select predictors of mean inpatient glucose (MIG) and glucose variability (standard deviation [GV]). The study sample consisted of 228 patients (mean age 78.4 (SD 10.1) years, 51% women). Of these, 96 patients (42.1%) were treated with sliding-scale regular insulin only. Median time to start of scheduled insulin therapy was 4 (95% CI, 2-6) days. Blood glucose control measures were: MIG 181.4 (SD 41.7) mg/dL, GV 56.3 (SD 22.6). The best model to predict MIG (R² = 0.376; P<.0001) included HbA1c (b=4.96; P=.011), baseline plasma glucose (b=.056; P=.084), mean capillary blood glucose in the first 24 hours (b=.154; P<.0001), home treatment (versus oral agents) with basal insulin only (b=13.1; P=.016) or more complex (pre-mixed insulin or basal-bolus) regimens (b=19.1; P=.004), corticoid therapy (b=14.9; P=.002), and fasting on admission (b=10.4; P=.098). Predictors of inpatient blood glucose control which should be considered in the design of DM management protocols include home treatment, HbA1c, basal plasma glucose, mean blood glucose in the first 24 hours, fasting, and corticoid therapy. Copyright © 2014 SEEN. Published by Elsevier España, S.L.U. All rights reserved.

  15. A Random Variable Approach to Nuclear Targeting and Survivability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Undem, Halvor A.

    We demonstrate a common mathematical formalism for analyzing problems in nuclear survivability and targeting. This formalism, beginning with a random variable approach, can be used to interpret past efforts in nuclear-effects analysis, including targeting analysis. It can also be used to analyze new problems brought about by the post Cold War Era, such as the potential effects of yield degradation in a permanently untested nuclear stockpile. In particular, we illustrate the formalism through four natural case studies or illustrative problems, linking these to actual past data, modeling, and simulation, and suggesting future uses. In the first problem, we illustrate the case of a deterministically modeled weapon used against a deterministically responding target. Classic "Cookie Cutter" damage functions result. In the second problem, we illustrate, with actual target test data, the case of a deterministically modeled weapon used against a statistically responding target. This case matches many of the results of current nuclear targeting modeling and simulation tools, including the result of distance damage functions as complementary cumulative lognormal functions in the range variable. In the third problem, we illustrate the case of a statistically behaving weapon used against a deterministically responding target. In particular, we show the dependence of target damage on weapon yield for an untested nuclear stockpile experiencing yield degradation. Finally, and using actual unclassified weapon test data, we illustrate in the fourth problem the case of a statistically behaving weapon used against a statistically responding target.
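
    The "complementary cumulative lognormal" distance-damage function mentioned in the second problem can be written down directly; the median damage radius and log-range spread below are invented purely for illustration, not values from the report.

```python
import numpy as np
from scipy.stats import norm

def damage_probability(r, r50, sigma):
    """Complementary cumulative lognormal damage function: probability of target
    damage at ground range r, with median damage radius r50 and log-range spread sigma."""
    return 1.0 - norm.cdf((np.log(r) - np.log(r50)) / sigma)

# Hypothetical parameters: 50% damage probability at 1.2 km, sigma = 0.3
for r in (0.5, 1.0, 1.5, 2.0):
    print(r, round(damage_probability(r, r50=1.2, sigma=0.3), 3))
```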

  16. VARIABLE CHARGE SOILS: MINERALOGY AND CHEMISTRY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Ranst, Eric; Qafoku, Nikolla; Noble, Andrew

    2016-09-19

    Soils rich in particles with amphoteric surface properties in the Oxisols, Ultisols, Alfisols, Spodosols and Andisols orders (1) are considered to be variable charge soils (2) (Table 1). The term “variable charge” is used to describe organic and inorganic soil constituents with reactive surface groups whose charge varies with pH and ionic concentration and composition of the soil solution. Such groups are the surface carboxyl, phenolic and amino functional groups of organic materials in soils, and surface hydroxyl groups of Fe and Al oxides, allophane and imogolite. The hydroxyl surface groups are also present on edges of some phyllosilicate minerals such as kaolinite, mica, and hydroxyl-interlayered vermiculite. The variable charge is developed on the surface groups as a result of adsorption or desorption of ions that are constituents of the solid phase, i.e., H+, and the adsorption or desorption of solid-unlike ions that are not constituents of the solid phase. Highly weathered soils and subsoils (e.g., Oxisols and some Ultisols, Alfisols and Andisols) may undergo isoelectric weathering and reach a “zero net charge” stage during their development. They usually have a slightly acidic to acidic soil solution pH, which is close to either the point of zero net charge (PZNC) (3) or the point of zero salt effect (PZSE) (3). They are characterized by high abundances of minerals with a point of zero net proton charge (PZNPC) (3) at neutral and slightly basic pHs; the most important being Fe and Al oxides and allophane. Under acidic conditions, the surfaces of these minerals are net positively charged. In contrast, the surfaces of permanent charge phyllosilicates are negatively charged regardless of ambient conditions. Variable charge soils, therefore, are heterogeneous charge systems.

  17. Selecting a distributional assumption for modelling relative densities of benthic macroinvertebrates

    USGS Publications Warehouse

    Gray, B.R.

    2005-01-01

    The selection of a distributional assumption suitable for modelling macroinvertebrate density data is typically challenging. Macroinvertebrate data often exhibit substantially larger variances than expected under a standard count assumption, that of the Poisson distribution. Such overdispersion may derive from multiple sources, including heterogeneity of habitat (historically and spatially), differing life histories for organisms collected within a single collection in space and time, and autocorrelation. Taken to extreme, heterogeneity of habitat may be argued to explain the frequent large proportions of zero observations in macroinvertebrate data. Sampling locations may consist of habitats defined qualitatively as either suitable or unsuitable. The former category may yield random or stochastic zeroes and the latter structural zeroes. Heterogeneity among counts may be accommodated by treating the count mean itself as a random variable, while extra zeroes may be accommodated using zero-modified count assumptions, including zero-inflated and two-stage (or hurdle) approaches. These and linear assumptions (following log- and square root-transformations) were evaluated using 9 years of mayfly density data from a 52 km, ninth-order reach of the Upper Mississippi River (n = 959). The data exhibited substantial overdispersion relative to that expected under a Poisson assumption (i.e. variance:mean ratio = 23 ≫ 1), and 43% of the sampling locations yielded zero mayflies. Based on the Akaike Information Criterion (AIC), count models were improved most by treating the count mean as a random variable (via a Poisson-gamma distributional assumption) and secondarily by zero modification (i.e. improvements in AIC values = 9184 units and 47-48 units, respectively). Zeroes were underestimated by the Poisson, log-transform and square root-transform models, slightly by the standard negative binomial model but not by the zero-modified models (61%, 24%, 32%, 7%, and 0%, respectively

  18. Periodicity in the autocorrelation function as a mechanism for regularly occurring zero crossings or extreme values of a Gaussian process.

    PubMed

    Wilson, Lorna R M; Hopcraft, Keith I

    2017-12-01

    The problem of zero crossings is of great historical prevalence and promises extensive application. The challenge is to establish precisely how the autocorrelation function or power spectrum of a one-dimensional continuous random process determines the density function of the intervals between the zero crossings of that process. This paper investigates the case where periodicities are incorporated into the autocorrelation function of a smooth process. Numerical simulations, and statistics about the number of crossings in a fixed interval, reveal that in this case the zero crossings segue between a random and deterministic point process depending on the relative time scales of the periodic and nonperiodic components of the autocorrelation function. By considering the Laplace transform of the density function, we show that incorporating correlation between successive intervals is essential to obtaining accurate results for the interval variance. The same method enables prediction of the density function tail in some regions, and we suggest approaches for extending this to cover all regions. In an ever-more complex world, the potential applications for this scale of regularity in a random process are far reaching and powerful.
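
    A quick numerical experiment in this spirit: simulate a stationary Gaussian process whose autocorrelation contains a damped periodic component, then collect the intervals between zero crossings. The correlation time, period, sampling step, and record length below are arbitrary choices, not parameters from the paper.

```python
import numpy as np
from scipy.linalg import toeplitz, cholesky

# Stationary Gaussian process with a damped-cosine autocorrelation
# rho(tau) = exp(-tau / T_c) * cos(2*pi*tau / P)   (illustrative T_c and P)
dt, n = 0.05, 3000
tau = dt * np.arange(n)
rho = np.exp(-tau / 2.0) * np.cos(2 * np.pi * tau / 1.0)

L = cholesky(toeplitz(rho) + 1e-8 * np.eye(n), lower=True)   # sample via Cholesky factor
x = L @ np.random.default_rng(0).standard_normal(n)

# Intervals between successive zero crossings
cross = np.nonzero(np.diff(np.sign(x)) != 0)[0]
intervals = np.diff(cross) * dt
print(f"{len(intervals)} intervals, mean {intervals.mean():.3f}, "
      f"CV {intervals.std() / intervals.mean():.2f}")
```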

  19. Periodicity in the autocorrelation function as a mechanism for regularly occurring zero crossings or extreme values of a Gaussian process

    NASA Astrophysics Data System (ADS)

    Wilson, Lorna R. M.; Hopcraft, Keith I.

    2017-12-01

    The problem of zero crossings is of great historical prevalence and promises extensive application. The challenge is to establish precisely how the autocorrelation function or power spectrum of a one-dimensional continuous random process determines the density function of the intervals between the zero crossings of that process. This paper investigates the case where periodicities are incorporated into the autocorrelation function of a smooth process. Numerical simulations, and statistics about the number of crossings in a fixed interval, reveal that in this case the zero crossings segue between a random and deterministic point process depending on the relative time scales of the periodic and nonperiodic components of the autocorrelation function. By considering the Laplace transform of the density function, we show that incorporating correlation between successive intervals is essential to obtaining accurate results for the interval variance. The same method enables prediction of the density function tail in some regions, and we suggest approaches for extending this to cover all regions. In an ever-more complex world, the potential applications for this scale of regularity in a random process are far reaching and powerful.

  20. Zero-Based Budgeting.

    ERIC Educational Resources Information Center

    Wichowski, Chester

    1979-01-01

    The zero-based budgeting approach is designed to achieve the greatest benefit with the fewest undesirable consequences. Seven basic steps make up the zero-based decision-making process: (1) identifying program goals, (2) classifying goals, (3) identifying resources, (4) reviewing consequences, (5) developing decision packages, (6) implementing a…

  1. Estimating degradation in real time and accelerated stability tests with random lot-to-lot variation: a simulation study.

    PubMed

    Magari, Robert T

    2002-03-01

    The effect of different lot-to-lot variability levels on the prediction of stability is studied based on two statistical models for estimating degradation in real time and accelerated stability tests. Lot-to-lot variability is considered as random in both models, and is attributed to two sources: variability at time zero and variability of the degradation rate. Real-time stability tests are modeled as a function of time, while accelerated stability tests are modeled as a function of time and temperature. Several data sets were simulated, and a maximum likelihood approach was used for estimation. The 95% confidence intervals for the degradation rate depend on the amount of lot-to-lot variability. When lot-to-lot degradation rate variability is relatively large (CV ≥ 8%) the estimated confidence intervals do not represent the trend for individual lots. In such cases it is recommended to analyze each lot individually. Copyright 2002 Wiley-Liss, Inc. and the American Pharmaceutical Association J Pharm Sci 91: 893-899, 2002
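
    The two-source structure (random level at time zero plus random degradation rate) corresponds to a linear mixed model with a random intercept and a random slope per lot. A minimal simulation-and-fit sketch with statsmodels follows; the number of lots, time points, and variance components are invented, and this is not the paper's exact likelihood setup.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate lots with a random offset at time zero and a random degradation rate
rng = np.random.default_rng(7)
times = np.arange(0, 13, 3)                       # months (illustrative design)
rows = []
for lot in range(10):
    a_i = rng.normal(0, 1.0)                      # lot-specific level at time zero
    b_i = rng.normal(0, 0.05)                     # lot-specific deviation in slope
    for t in times:
        rows.append({"lot": lot, "time": t,
                     "y": 100 + a_i + (-0.8 + b_i) * t + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)

# Random intercept + random slope per lot: the two variability sources in the abstract
fit = smf.mixedlm("y ~ time", df, groups=df["lot"], re_formula="~time").fit()
print(fit.params["time"])                         # estimated mean degradation rate
print(fit.cov_re)                                 # lot-to-lot (co)variance of level and slope
```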

  2. Steady performance of a zero valent iron packed anaerobic reactor for azo dye wastewater treatment under variable influent quality.

    PubMed

    Zhang, Yaobin; Liu, Yiwen; Jing, Yanwen; Zhao, Zhiqiang; Quan, Xie

    2012-01-01

    Zero valent iron (ZVI) is expected to help create an enhanced anaerobic environment that might improve the performance of anaerobic treatment. Based on this idea, a novel ZVI packed upflow anaerobic sludge blanket (ZVI-UASB) reactor was developed to treat azo dye wastewater with variable influent quality. The results showed that the reactor was less influenced by increases of Reactive Brilliant Red X-3B concentration from 50 to 1000 mg/L and chemical oxygen demand (COD) from 1000 to 7000 mg/L in the feed than a reference UASB reactor without the ZVI. The ZVI decreased oxidation-reduction potential in the reactor by about 80 mV. Iron ion dissolution from the ZVI could buffer acidity in the reactor, the amount of which was related to the COD concentration. Fluorescence in situ hybridization test showed the abundance of methanogens in the sludge of the ZVI-UASB reactor was significantly greater than that of the reference one. Denaturing gradient gel electrophoresis showed that the ZVI increased the diversity of microbial strains responsible for high efficiency.

  3. Temporal changes in randomness of bird communities across Central Europe.

    PubMed

    Renner, Swen C; Gossner, Martin M; Kahl, Tiemo; Kalko, Elisabeth K V; Weisser, Wolfgang W; Fischer, Markus; Allan, Eric

    2014-01-01

    Many studies have examined whether communities are structured by random or deterministic processes, and both are likely to play a role, but relatively few studies have attempted to quantify the degree of randomness in species composition. We quantified, for the first time, the degree of randomness in forest bird communities based on an analysis of spatial autocorrelation in three regions of Germany. The compositional dissimilarity between pairs of forest patches was regressed against the distance between them. We then calculated the y-intercept of the curve, i.e. the 'nugget', which represents the compositional dissimilarity at zero spatial distance. We therefore assume, following similar work on plant communities, that this represents the degree of randomness in species composition. We then analysed how the degree of randomness in community composition varied over time and with forest management intensity, which we expected to reduce the importance of random processes by increasing the strength of environmental drivers. We found that a high portion of the bird community composition could be explained by chance (overall mean of 0.63), implying that most of the variation in local bird community composition is driven by stochastic processes. Forest management intensity did not consistently affect the mean degree of randomness in community composition, perhaps because the bird communities were relatively insensitive to management intensity. We found a high temporal variation in the degree of randomness, which may indicate temporal variation in assembly processes and in the importance of key environmental drivers. We conclude that the degree of randomness in community composition should be considered in bird community studies, and the high values we find may indicate that bird community composition is relatively hard to predict at the regional scale.

  4. EM Adaptive LASSO—A Multilocus Modeling Strategy for Detecting SNPs Associated with Zero-inflated Count Phenotypes

    PubMed Central

    Mallick, Himel; Tiwari, Hemant K.

    2016-01-01

    Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain an enormous number of zeroes due to the presence of excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely encountered in practice.

  5. EM Adaptive LASSO-A Multilocus Modeling Strategy for Detecting SNPs Associated with Zero-inflated Count Phenotypes.

    PubMed

    Mallick, Himel; Tiwari, Hemant K

    2016-01-01

    Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain an enormous number of zeroes due to the presence of excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely encountered in practice.
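
    A minimal sketch of the zero-inflated Poisson building block referred to in both records above. The zero-inflation weight and Poisson mean are assumed values, and the EM/adaptive-LASSO machinery itself is not reproduced here.

    ```python
    # Zero-inflated Poisson pmf (assumed parameters; illustration only):
    # P(0) = pi + (1 - pi) * exp(-lam),  P(k) = (1 - pi) * Poisson(k; lam) for k >= 1.
    from math import exp, factorial

    pi_zero, lam = 0.30, 2.5   # assumed zero-inflation weight and Poisson mean

    def zip_pmf(k: int) -> float:
        pois = exp(-lam) * lam**k / factorial(k)
        return pi_zero + (1.0 - pi_zero) * pois if k == 0 else (1.0 - pi_zero) * pois

    print({k: round(zip_pmf(k), 4) for k in range(6)})
    print("pmf sums to", round(sum(zip_pmf(k) for k in range(60)), 4))
    ```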

  6. A root-mean-square approach for predicting fatigue crack growth under random loading

    NASA Technical Reports Server (NTRS)

    Hudson, C. M.

    1981-01-01

    A method for predicting fatigue crack growth under random loading which employs the concept of Barsom (1976) is presented. In accordance with this method, the loading history for each specimen is analyzed to determine the root-mean-square maximum and minimum stresses, and the predictions are made by assuming the tests have been conducted under constant-amplitude loading at the root-mean-square maximum and minimum levels. The procedure requires a simple computer program and a desk-top computer. For the eleven predictions made, the ratios of the predicted lives to the test lives ranged from 2.13 to 0.82, which is a good result, considering that the normal scatter in the fatigue-crack-growth rates may range from a factor of two to four under identical loading conditions.
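
    The root-mean-square reduction described above is simple to state in code. The load history below is synthetic and the stress values are assumed; the sketch only shows how the RMS maximum and minimum levels would be extracted before applying a constant-amplitude crack-growth law.

    ```python
    # Minimal sketch of the RMS reduction (illustrative load history, not the report's data).
    import numpy as np

    rng = np.random.default_rng(2)
    n_cycles = 1000
    s_max = rng.uniform(80.0, 160.0, n_cycles)       # per-cycle maximum stress, MPa (assumed)
    s_min = s_max * rng.uniform(0.0, 0.3, n_cycles)  # per-cycle minimum stress, MPa (assumed)

    s_max_rms = np.sqrt(np.mean(s_max**2))
    s_min_rms = np.sqrt(np.mean(s_min**2))
    print(f"RMS max = {s_max_rms:.1f} MPa, RMS min = {s_min_rms:.1f} MPa, "
          f"equivalent stress ratio R = {s_min_rms / s_max_rms:.2f}")
    # These two levels would then feed a constant-amplitude crack-growth relation
    # (e.g. a Paris-type law) to predict life under the random loading history.
    ```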

  7. Continuously-Variable Positive-Mesh Power Transmission

    NASA Technical Reports Server (NTRS)

    Johnson, J. L.

    1982-01-01

    Proposed transmission with continuously-variable speed ratio couples two mechanical trigonometric-function generators. Transmission is expected to handle higher loads than conventional variable-pulley drives; and, unlike a variable pulley, provides positive traction through the entire drive train with no reliance on friction to transmit power. Able to vary speed continuously through zero and into reverse. Possible applications in instrumentation where drive-train slippage cannot be tolerated.

  8. Zero-truncated negative binomial - Erlang distribution

    NASA Astrophysics Data System (ADS)

    Bodhisuwan, Winai; Pudprommarat, Chookait; Bodhisuwan, Rujira; Saothayanun, Luckhana

    2017-11-01

    The zero-truncated negative binomial-Erlang distribution is introduced. It is developed from the negative binomial-Erlang distribution. In this work, the probability mass function is derived and some properties are included. The parameters of the zero-truncated negative binomial-Erlang distribution are estimated by maximum likelihood estimation. Finally, the proposed distribution is applied to real data on methamphetamine counts in Bangkok, Thailand. The results show that the zero-truncated negative binomial-Erlang distribution provides a better fit than the zero-truncated Poisson, zero-truncated negative binomial, zero-truncated generalized negative-binomial and zero-truncated Poisson-Lindley distributions for these data.
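
    Zero truncation itself is a one-line rescaling of any base count pmf, P_ZT(k) = P(k) / (1 - P(0)) for k >= 1. The sketch below illustrates it with a plain negative binomial base and assumed parameters; the NB-Erlang mixture of the paper is not implemented here.

    ```python
    # Zero-truncated pmf from a plain negative binomial base (assumed parameters).
    from scipy.stats import nbinom

    r, p = 2.0, 0.4                      # assumed negative binomial parameters
    p0 = nbinom.pmf(0, r, p)             # probability mass at zero, removed by truncation

    def zt_pmf(k):
        """Zero-truncated pmf: valid only for k >= 1."""
        return nbinom.pmf(k, r, p) / (1.0 - p0)

    print({k: round(float(zt_pmf(k)), 4) for k in range(1, 6)})
    print("truncated pmf sums to", round(float(sum(zt_pmf(k) for k in range(1, 200))), 4))
    ```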

  9. Crash Frequency Analysis Using Hurdle Models with Random Effects Considering Short-Term Panel Data

    PubMed Central

    Chen, Feng; Ma, Xiaoxiang; Chen, Suren; Yang, Lin

    2016-01-01

    Random effect panel data hurdle models are established to research the daily crash frequency on a mountainous section of highway I-70 in Colorado. Road Weather Information System (RWIS) real-time traffic and weather and road surface conditions are merged into the models incorporating road characteristics. The random effect hurdle negative binomial (REHNB) model is developed to study the daily crash frequency along with three other competing models. The proposed model considers the serial correlation of observations, the unbalanced panel-data structure, and dominating zeroes. Based on several statistical tests, the REHNB model is identified as the most appropriate one among four candidate models for a typical mountainous highway. The results show that: (1) the presence of over-dispersion in the short-term crash frequency data is due to both excess zeros and unobserved heterogeneity in the crash data; and (2) the REHNB model is suitable for this type of data. Moreover, time-varying variables including weather conditions, road surface conditions and traffic conditions are found to play important roles in crash frequency. Besides the methodological advancements, the proposed technology bears great potential for engineering applications to develop short-term crash frequency models by utilizing detailed field monitoring data such as RWIS, which is becoming more accessible around the world. PMID:27792209
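
    A hurdle model separates the zero/non-zero decision from the size of the positive counts. The sketch below simulates that two-part structure with assumed probabilities; the paper's random effects and covariates are omitted.

    ```python
    # Minimal sketch of the hurdle structure (assumed parameters, simulated data only).
    import numpy as np

    rng = np.random.default_rng(3)
    n_days = 365
    p_any = 0.15                       # probability of at least one crash on a given day (assumed)
    lam = 1.8                          # mean of the positive-count part (assumed)

    any_crash = rng.random(n_days) < p_any
    counts = np.zeros(n_days, dtype=int)
    for i in np.flatnonzero(any_crash):
        k = 0
        while k == 0:                  # draw from a zero-truncated Poisson by rejection
            k = rng.poisson(lam)
        counts[i] = k

    print(f"days with zero crashes: {(counts == 0).mean():.2%}, "
          f"mean count on crash days: {counts[counts > 0].mean():.2f}")
    ```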

  10. Staggered chiral random matrix theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, James C.

    2011-02-01

    We present a random matrix theory for the staggered lattice QCD Dirac operator. The staggered random matrix theory is equivalent to the zero-momentum limit of the staggered chiral Lagrangian and includes all taste breaking terms at their leading order. This is an extension of previous work which only included some of the taste breaking terms. We will also present some results for the taste breaking contributions to the partition function and the Dirac eigenvalues.

  11. Hydroclimatic Controls on the Means and Variability of Vegetation Phenology and Carbon Uptake

    NASA Technical Reports Server (NTRS)

    Koster, Randal Dean; Walker, Gregory K.; Collatz, George J.; Thornton, Peter E.

    2013-01-01

    Long-term, global offline (land-only) simulations with a dynamic vegetation phenology model are used to examine the control of hydroclimate over vegetation-related quantities. First, with a control simulation, the model is shown to capture successfully (though with some bias) key observed relationships between hydroclimate and the spatial and temporal variations of phenological expression. In subsequent simulations, the model shows that: (i) the global spatial variation of seasonal phenological maxima is controlled mostly by hydroclimate, irrespective of distributions in vegetation type, (ii) the occurrence of high interannual moisture-related phenological variability in grassland areas is determined by hydroclimate rather than by the specific properties of grassland, and (iii) hydroclimatic means and variability have a corresponding impact on the spatial and temporal distributions of gross primary productivity (GPP).

  12. The Zero Program

    ERIC Educational Resources Information Center

    Roland, Erling; Midthassel, Unni Vere

    2012-01-01

    Zero is a schoolwide antibullying program developed by the Centre for Behavioural Research at the University of Stavanger, Norway. It is based on three main principles: a zero vision of bullying, collective commitment among all employees at the school using the program, and continuing work. Based on these principles, the program aims to reduce…

  13. Mean-Level Change and Intraindividual Variability in Self-Esteem and Depression among High-Risk Children

    ERIC Educational Resources Information Center

    Kim, Jungmeen; Cicchetti, Dante

    2009-01-01

    This study investigated mean-level changes and intraindividual variability of self-esteem among maltreated (N = 142) and nonmaltreated (N = 109) school-aged children from low-income families. Longitudinal factor analysis revealed higher temporal stability of self-esteem among maltreated children compared to nonmaltreated children. Cross-domain…

  14. Coupling the Gaussian Free Fields with Free and with Zero Boundary Conditions via Common Level Lines

    NASA Astrophysics Data System (ADS)

    Qian, Wei; Werner, Wendelin

    2018-06-01

    We point out a new simple way to couple the Gaussian Free Field (GFF) with free boundary conditions in a two-dimensional domain with the GFF with zero boundary conditions in the same domain: Starting from the latter, one just has to sample at random all the signs of the height gaps on its boundary-touching zero-level lines (these signs are alternating for the zero-boundary GFF) in order to obtain a free boundary GFF. Constructions and couplings of the free boundary GFF and its level lines via soups of reflected Brownian loops and their clusters are also discussed. Such considerations show for instance that in a domain with an axis of symmetry, if one looks at the overlay of a single usual Conformal Loop Ensemble CLE3 with its own symmetric image, one obtains the CLE4-type collection of level lines of a GFF with mixed zero/free boundary conditions in the half-domain.

  15. Instrumental variables and Mendelian randomization with invalid instruments

    NASA Astrophysics Data System (ADS)

    Kang, Hyunseung

    Instrumental variables (IV) methods have been widely used to determine the causal effect of a treatment, exposure, policy, or an intervention on an outcome of interest. The IV method relies on having a valid instrument, a variable that is (A1) associated with the exposure, (A2) has no direct effect on the outcome, and (A3) is unrelated to the unmeasured confounders associated with the exposure and the outcome. However, in practice, finding a valid instrument, especially those that satisfy (A2) and (A3), can be challenging. For example, in Mendelian randomization studies where genetic markers are used as instruments, complete knowledge about instruments' validity is equivalent to complete knowledge about the involved genes' functions. The dissertation explores the theory, methods, and application of IV methods when invalid instruments are present. First, when we have multiple candidate instruments, we establish a theoretical bound whereby causal effects are only identified as long as less than 50% of instruments are invalid, without knowing which of the instruments are invalid. We also propose a fast penalized method, called sisVIVE, to estimate the causal effect. We find that sisVIVE outperforms traditional IV methods when invalid instruments are present both in simulation studies as well as in real data analysis. Second, we propose a robust confidence interval under the multiple invalid IV setting. This work is an extension of our work on sisVIVE. However, unlike sisVIVE which is robust to violations of (A2) and (A3), our confidence interval procedure provides honest coverage even if all three assumptions, (A1)-(A3), are violated. Third, we study the single IV setting where the one IV we have may actually be invalid. We propose a nonparametric IV estimation method based on full matching, a technique popular in causal inference for observational data, that leverages observed covariates to make the instrument more valid. We propose an estimator along with
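
    For orientation, the sketch below shows plain two-stage least squares with several instruments (all valid here) on simulated data with assumed coefficients; sisVIVE's penalized selection of invalid instruments is not implemented.

    ```python
    # Minimal 2SLS sketch (simulated data, assumed coefficients; not sisVIVE).
    import numpy as np

    rng = np.random.default_rng(4)
    n, beta = 5000, 1.5                      # true causal effect (assumed)
    Z = rng.standard_normal((n, 3))          # three candidate instruments
    u = rng.standard_normal(n)               # unmeasured confounder
    d = Z @ np.array([1.0, 0.8, 0.6]) + u + rng.standard_normal(n)   # exposure
    y = beta * d + u + rng.standard_normal(n)                        # outcome

    # Stage 1: project the exposure onto the instruments; Stage 2: regress y on the fit.
    d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
    beta_2sls = np.linalg.lstsq(d_hat[:, None], y, rcond=None)[0][0]
    beta_ols = np.linalg.lstsq(d[:, None], y, rcond=None)[0][0]
    print(f"2SLS estimate {beta_2sls:.3f} vs confounded OLS {beta_ols:.3f} (truth {beta})")
    ```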

  16. The environmental zero-point problem in evolutionary reaction norm modeling.

    PubMed

    Ergon, Rolf

    2018-04-01

    There is a potential problem in present quantitative genetics evolutionary modeling based on reaction norms. Such models are state-space models, where the multivariate breeder's equation in some form is used as the state equation that propagates the population state forward in time. These models use the implicit assumption of a constant reference environment, in many cases set to zero. This zero-point is often the environment a population is adapted to, that is, where the expected geometric mean fitness is maximized. Such environmental reference values follow from the state of the population system, and they are thus population properties. The environment the population is adapted to, is, in other words, an internal population property, independent of the external environment. It is only when the external environment coincides with the internal reference environment, or vice versa, that the population is adapted to the current environment. This is formally a result of state-space modeling theory, which is an important theoretical basis for evolutionary modeling. The potential zero-point problem is present in all types of reaction norm models, parametrized as well as function-valued, and the problem does not disappear when the reference environment is set to zero. As the environmental reference values are population characteristics, they ought to be modeled as such. Whether such characteristics are evolvable is an open question, but considering the complexity of evolutionary processes, such evolvability cannot be excluded without good arguments. As a straightforward solution, I propose to model the reference values as evolvable mean traits in their own right, in addition to other reaction norm traits. However, solutions based on an evolvable G matrix are also possible.

  17. Intrinsic Cellular Properties and Connectivity Density Determine Variable Clustering Patterns in Randomly Connected Inhibitory Neural Networks

    PubMed Central

    Rich, Scott; Booth, Victoria; Zochowski, Michal

    2016-01-01

    The plethora of inhibitory interneurons in the hippocampus and cortex play a pivotal role in generating rhythmic activity by clustering and synchronizing cell firing. Results of our simulations demonstrate that both the intrinsic cellular properties of neurons and the degree of network connectivity affect the characteristics of clustered dynamics exhibited in randomly connected, heterogeneous inhibitory networks. We quantify intrinsic cellular properties by the neuron's current-frequency relation (IF curve) and Phase Response Curve (PRC), a measure of how perturbations given at various phases of a neuron's firing cycle affect subsequent spike timing. We analyze network bursting properties of networks of neurons with Type I or Type II properties in both excitability and PRC profile; Type I PRCs strictly show phase advances and IF curves that exhibit frequencies arbitrarily close to zero at firing threshold while Type II PRCs display both phase advances and delays and IF curves that have a non-zero frequency at threshold. Type II neurons whose properties arise with or without an M-type adaptation current are considered. We analyze network dynamics under different levels of cellular heterogeneity and as intrinsic cellular firing frequency and the time scale of decay of synaptic inhibition are varied. Many of the dynamics exhibited by these networks diverge from the predictions of the interneuron network gamma (ING) mechanism, as well as from results in all-to-all connected networks. Our results show that randomly connected networks of Type I neurons synchronize into a single cluster of active neurons while networks of Type II neurons organize into two mutually exclusive clusters segregated by the cells' intrinsic firing frequencies. Networks of Type II neurons containing the adaptation current behave similarly to networks of either Type I or Type II neurons depending on network parameters; however, the adaptation current creates differences in the cluster dynamics

  18. Predicting the Effects of Longitudinal Variables on Cost and Schedule Performance

    DTIC Science & Technology

    2007-03-01

    budget so that as cost growth occurs, it can be absorbed (Moore, 2003:2). This number padding is very tempting since it relieves the program...presence of a value, zero was entered for the missing variables because without any value assigned, the analysis software would ignore all data for the...program in question, reducing the already small dataset. Second, if we considered the variable in isolation, we removed the zero and left the field

  19. Tolerating Zero Tolerance?

    ERIC Educational Resources Information Center

    Moore, Brian N.

    2010-01-01

    The concept of zero tolerance dates back to the mid-1990s when New Jersey was creating laws to address nuisance crimes in communities. The main goal of these neighborhood crime policies was to have zero tolerance for petty crime such as graffiti or littering so as to keep more serious crimes from occurring. Next came the war on drugs. In federal…

  20. Evaluation of Heart Rate Variability by means of Laser Doppler Vibrometry measurements

    NASA Astrophysics Data System (ADS)

    Cosoli, G.; Casacanditella, L.; Tomasini, EP; Scalise, L.

    2015-11-01

    Heart Rate Variability (HRV) analysis aims to study the physiological variability of the Heart Rate (HR), which is related to the health conditions of the subject. HRV is assessed by measuring heart periods (HP) over a time window of >5 minutes (1)-(2). HPs are determined from signals of different nature: electrocardiogram (ECG), photoplethysmogram (PPG), phonocardiogram (PCG) or vibrocardiogram (VCG) (3)-(4)-(5). The fundamental aspect is the identification of a feature in each heartbeat that allows cardiac periods to be computed accurately (such as R peaks in ECG), in order to make possible the measurement of all the typical HRV evaluations on those intervals. VCG is a non-contact technique (4), very favourable in medicine, which detects the vibrations on the skin surface (e.g. on the carotid artery) resulting from vascular blood motion consequent to electrical signal (ECG). In this paper, we propose the use of VCG for the measurement of a signal related to HRV and the use of a novel algorithm based on signal geometry (7) to detect signal peaks, in order to accurately determine cardiac periods and the Poincare plot (9)-(10). The results reported are comparable to the ones reached with the gold standard (ECG) and in the literature (3)-(5). We report mean values of HP of 832±54 ms and 832±55 ms by means of ECG and VCG, respectively. Moreover, this algorithm allows us to identify particular features of ECG and VCG signals, so that in the future we will be able to evaluate specific correlations between the two.
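
    Once beat times are available from any of the signals listed above, the basic HRV quantities follow directly from the inter-beat intervals. The sketch below uses synthetic beat times whose mean and spread are loosely inspired by the values reported; the geometric peak-detection algorithm itself is not reproduced.

    ```python
    # Minimal sketch: time-domain HRV summaries and Poincaré pairs from beat times
    # (synthetic heart periods, not the paper's data or peak detector).
    import numpy as np

    rng = np.random.default_rng(5)
    hp = rng.normal(0.832, 0.055, 300)             # heart periods in seconds (assumed)
    peak_times = np.cumsum(hp)                     # what a peak detector would return

    rr = np.diff(peak_times) * 1000.0              # inter-beat intervals in ms
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    poincare = np.column_stack([rr[:-1], rr[1:]])  # (RR_n, RR_n+1) pairs for the Poincaré plot
    print(f"mean HP {rr.mean():.0f} ms, SDNN {sdnn:.0f} ms, RMSSD {rmssd:.0f} ms, "
          f"{len(poincare)} Poincaré points")
    ```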

  1. Comparison of Curvature Between the Zero-P Spacer and Traditional Cage and Plate After 3-Level Anterior Cervical Discectomy and Fusion: Mid-term Results.

    PubMed

    Chen, Yuanyuan; Liu, Yang; Chen, Huajiang; Cao, Peng; Yuan, Wen

    2017-10-01

    A retrospective study. To compare clinical and radiologic outcomes of 3-level anterior cervical discectomy and fusion between a zero-profile (Zero-P) spacer and a traditional plate in cases of symptomatic cervical spine spondylosis. Anterior cervical decompression and fusion is indicated for patients with anterior compression or stenosis of the spinal cord. The Zero-P spacers have been used for anterior cervical interbody fusion of 1 or 2 segments. However, there is a paucity of published clinical data regarding the exact impact of the device on cervical curvature of 3-level fixation. Clinical and radiologic data of 71 patients undergoing 3-level anterior cervical discectomy and fusion from January 2010 to January 2012 were collected. A Zero-P spacer was implanted in 33 patients, and in 38 cases stabilization was accomplished using an anterior cervical plate and intervertebral cage. Patients were followed for a mean of 30.8 months (range, 24-36 mo) after surgery. Fusion rates, changes in cervical lordosis, and degeneration of adjacent segments were analyzed. Dysphagia was assessed using the Bazaz score, and clinical outcomes were analyzed using the Neck Disability Index and Japanese Orthopedic Association scoring system. Neurological outcomes did not differ significantly between groups. Significantly less dysphagia was seen at 2- and 6-month follow-up in patients with the Zero-P implant (P<0.05); however, there was significantly less cervical lordosis and less lordosis across the fusion in patients with the Zero-P implant (both P<0.05). Degenerative changes in the adjacent segments occurred in 4 patients in the Zero-P group and 6 patients in the standard-plate group (P=0.742); however, no revision surgery was done. Clinical results for the Zero-P spacer were satisfactory. The device is superior to the traditional plate in preventing postoperative dysphagia; however, it is inferior at restoring cervical lordosis. It may not provide better sagittal cervical alignment

  2. Asymmetric transmission and optical low-pass filtering in a stack of random media with graded transport mean free path

    NASA Astrophysics Data System (ADS)

    Bingi, J.; Hemalatha, M.; Anita, R. W.; Vijayan, C.; Murukeshan, V. M.

    2015-11-01

    Light transport and the physical phenomena related to light propagation in random media are very intriguing; they also provide scope for new paradigms of device functionality, most of which remain unexplored. Here we demonstrate, experimentally and by simulation, a novel kind of asymmetric light transmission (diffusion) in a stack of random media (SRM) with graded transport mean free path. The structure is studied in terms of the transmission of photons propagated through, and photons generated within, the SRM. It is observed that the SRM exhibits an asymmetric transmission property with a transmission contrast of 0.25. In addition, it is shown that the SRM works as a perfect optical low-pass filter with a well-defined cutoff wavelength at 580 nm. Further, the photons generated within the SRM are found to exhibit functionality similar to an optical diode with a transmission contrast of 0.62. The basis of this functionality is explained in terms of wavelength-dependent photon randomization and the graded transport mean free path of the SRM.

  3. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    PubMed Central

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
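
    The claim above can be checked numerically. The sketch below simulates a randomized trial whose true outcome model contains a treatment-covariate interaction, then compares the unadjusted marginal log rate ratio with the treatment coefficient from a main-terms Poisson working model; the data-generating values are assumed, not taken from the paper.

    ```python
    # Minimal sketch (simulated trial, assumed data-generating model): the treatment
    # coefficient of a misspecified main-terms Poisson working model tracks the
    # marginal log rate ratio in a randomized trial.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    n = 20000
    a = rng.integers(0, 2, n)                             # randomized treatment
    w = rng.standard_normal(n)                            # baseline covariate
    mu = np.exp(0.2 + 0.5 * a + 0.4 * w + 0.3 * a * w)    # true model has an interaction
    y = rng.poisson(mu)

    marginal_log_rr = np.log(y[a == 1].mean() / y[a == 0].mean())
    X = sm.add_constant(np.column_stack([a, w]))          # working model: main terms only
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    print(f"marginal log RR {marginal_log_rr:.3f}, working-model coefficient {fit.params[1]:.3f}")
    ```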

  4. Beyond Zero Based Budgeting.

    ERIC Educational Resources Information Center

    Ogden, Daniel M., Jr.

    1978-01-01

    Suggests that the most practical budgeting system for most managers is a formalized combination of incremental and zero-based analysis because little can be learned about most programs from an annual zero-based budget. (Author/IRT)

  5. Pulse-excited, auto-zeroing multiple channel data transmission system

    NASA Astrophysics Data System (ADS)

    Fasching, G. E.

    1985-02-01

    A multiple channel data transmission system is provided in which signals from a plurality of pulse operated transducers and a corresponding plurality of pulse operated signal processor channels are multiplexed for single channel FM transmission to a receiving station. The transducers and corresponding channel amplifiers are powered by pulsing the dc battery power to these devices to conserve energy and battery size for long-term data transmission from remote or inaccessible locations. Auto zeroing of the signal channel amplifiers to compensate for drift associated with temperature changes, battery decay, component aging, etc., in each channel is accomplished by means of a unique auto zero feature which between signal pulses holds a zero correction voltage on an integrating capacitor coupled to the corresponding channel amplifier output. Pseudo-continuous outputs for each channel are achieved by pulsed sample-and-hold circuits which are updated at the pulsed operation rate. The sample-and-hold outputs are multiplexed into an FM/FM transmitter for transmission to an FM receiver station for demultiplexing and storage in separate channel recorders.

  6. Pulse-excited, auto-zeroing multiple channel data transmission system

    DOEpatents

    Fasching, G.E.

    1985-02-22

    A multiple channel data transmission system is provided in which signals from a plurality of pulse operated transducers and a corresponding plurality of pulse operated signal processor channels are multiplexed for single channel FM transmission to a receiving station. The transducers and corresponding channel amplifiers are powered by pulsing the dc battery power to these devices to conserve energy and battery size for long-term data transmission from remote or inaccessible locations. Auto zeroing of the signal channel amplifiers to compensate for drift associated with temperature changes, battery decay, component aging, etc., in each channel is accomplished by means of a unique auto zero feature which between signal pulses holds a zero correction voltage on an integrating capacitor coupled to the corresponding channel amplifier output. Pseudo-continuous outputs for each channel are achieved by pulsed sample-and-hold circuits which are updated at the pulsed operation rate. The sample-and-hold outputs are multiplexed into an FM/FM transmitter for transmission to an FM receiver station for demultiplexing and storage in separate channel recorders.

  7. Pulse-excited, auto-zeroing multiple channel data transmission system

    DOEpatents

    Fasching, George E.

    1987-01-01

    A multiple channel data transmission system is provided in which signals from a plurality of pulse operated transducers and a corresponding plurality of pulse operated signal processor channels are multiplexed for single channel FM transmission to a receiving station. The transducers and corresponding channel amplifiers are powered by pulsing the dc battery power to these devices to conserve energy and battery size for long-term data transmission from remote or inaccessible locations. Auto zeroing of the signal channel amplifiers to compensate for drift associated with temperature changes, battery decay, component aging, etc., in each channel is accomplished by means of a unique auto zero feature which between signal pulses holds a zero correction voltage on an integrating capacitor coupled to the corresponding channel amplifier output. Pseudo-continuous outputs for each channel are achieved by pulsed sample-and-hold circuits which are updated at the pulsed operation rate. The sample-and-hold outputs are multiplexed into an FM/FM transmitter for transmission to an FM receiver station for demultiplexing and storage in separate channel recorders.

  8. Zero-stress states of human pulmonary arteries and veins.

    PubMed

    Huang, W; Yen, R T

    1998-09-01

    The zero-stress states of the pulmonary arteries and veins from order 3 to order 9 were determined in six normal human lungs within 15 h postmortem. The zero-stress state of each vessel was obtained by cutting the vessel transversely into a series of short rings, then cutting each ring radially, which caused the ring to spring open into a sector. Each sector was characterized by its opening angle. The mean opening angle varied between 92 and 163 degrees in the arterial tree and between 89 and 128 degrees in the venous tree. There was a tendency for opening angles to increase as the sizes of the arteries and veins increased. We computed the residual strains based on the experimental measurements and estimated the residual stresses according to Hooke's law. We found that the inner wall of a vessel at the state in which the internal pressure, external pressure, and longitudinal stress are all zero was under compression and the outer wall was in tension, and that the magnitude of compressive stress was greater than the magnitude of tensile stress.

  9. Broken symmetries, zero-energy modes, and quantum transport in disordered graphene: from supermetallic to insulating regimes.

    PubMed

    Cresti, Alessandro; Ortmann, Frank; Louvet, Thibaud; Van Tuan, Dinh; Roche, Stephan

    2013-05-10

    The role of defect-induced zero-energy modes on charge transport in graphene is investigated using Kubo and Landauer transport calculations. By tuning the density of random distributions of monovacancies either equally populating the two sublattices or exclusively located on a single sublattice, all conduction regimes are covered from direct tunneling through evanescent modes to mesoscopic transport in bulk disordered graphene. Depending on the transport measurement geometry, defect density, and broken sublattice symmetry, the Dirac-point conductivity is either exceptionally robust against disorder (supermetallic state) or suppressed through a gap opening or by algebraic localization of zero-energy modes, whereas weak localization and the Anderson insulating regime are obtained for higher energies. These findings clarify the contribution of zero-energy modes to transport at the Dirac point, hitherto controversial.

  10. Axial Fatigue Tests at Zero Mean Stress of 24S-T Aluminum-alloy Sheet with and Without a Circular Hole

    NASA Technical Reports Server (NTRS)

    Brueggeman, W C; Mayer, M JR; Smith, W H

    1944-01-01

    Axial fatigue tests were made on 189 coupon specimens of 0.032-inch 24S-T aluminum-alloy sheet and a few supplementary specimens of 0.004-inch sheet. The mean load was zero. The specimens were restrained against lateral buckling by lubricated solid guides described in a previous report on this project. About two-thirds of the 0.032-inch specimens were plain coupons nominally free from stress raisers. The remainder contained a 0.1285-inch drilled hole at the center where the reduced section was 0.5 inch wide. S-N diagrams were obtained for cycles to failure between about 1000 and 10^7 cycles for the plain specimens and 17 and 10^7 cycles for the drilled specimens. The fatigue stress concentration factor increased from about 1.08 for a stress amplitude causing failure at 0.25 cycles (static) to a maximum of 1.83 at 15,000 cycles and then decreased gradually. The graph for the drilled specimens showed less scatter than that for the plain specimens.

  11. Multinomial model and zero-inflated gamma model to study time spent on leisure time physical activity: an example of ELSA-Brasil.

    PubMed

    Nobre, Aline Araújo; Carvalho, Marilia Sá; Griep, Rosane Härter; Fonseca, Maria de Jesus Mendes da; Melo, Enirtes Caetano Prates; Santos, Itamar de Souza; Chor, Dora

    2017-08-17

    To compare two methodological approaches: the multinomial model and the zero-inflated gamma model, evaluating the factors associated with the practice and amount of time spent on leisure time physical activity. Data collected from 14,823 baseline participants in the Longitudinal Study of Adult Health (ELSA-Brasil - Estudo Longitudinal de Saúde do Adulto ) have been analysed. Regular leisure time physical activity has been measured using the leisure time physical activity module of the International Physical Activity Questionnaire. The explanatory variables considered were gender, age, education level, and annual per capita family income. The main advantage of the zero-inflated gamma model over the multinomial model is that it estimates mean time (minutes per week) spent on leisure time physical activity. For example, on average, men spent 28 minutes/week longer on leisure time physical activity than women did. The most sedentary groups were young women with low education level and income. The zero-inflated gamma model, which is rarely used in epidemiological studies, can give more appropriate answers in several situations. In our case, we have obtained important information on the main determinants of the duration of leisure time physical activity. This information can help guide efforts towards the most vulnerable groups since physical inactivity is associated with different diseases and even premature death.
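
    The zero-inflated (two-part) gamma idea can be illustrated with simulated minutes per week: a probability of any activity and a gamma distribution for the positive amounts, whose product gives the overall mean time. All parameter values below are assumed, not ELSA-Brasil estimates.

    ```python
    # Minimal two-part ("zero-inflated gamma") sketch with assumed parameters.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 10000
    active = rng.random(n) < 0.45                 # probability of any leisure activity (assumed)
    minutes = np.where(active, rng.gamma(shape=2.0, scale=75.0, size=n), 0.0)

    p_active = active.mean()
    mean_positive = minutes[active].mean()
    print(f"P(any activity) = {p_active:.2f}, mean positive time = {mean_positive:.0f} min/week, "
          f"overall mean = {p_active * mean_positive:.0f} min/week "
          f"(check against sample mean: {minutes.mean():.0f})")
    ```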

  12. NASA Net Zero Energy Buildings Roadmap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pless, S.; Scheib, J.; Torcellini, P.

    In preparation for the time-phased net zero energy requirement for new federal buildings starting in 2020, set forth in Executive Order 13514, NASA requested that the National Renewable Energy Laboratory (NREL) develop a roadmap for NASA's compliance. NASA detailed a Statement of Work that requested information on strategic, organizational, and tactical aspects of net zero energy buildings. In response, this document presents a high-level approach to net zero energy planning, design, construction, and operations, based on NREL's first-hand experience procuring net zero energy construction, and based on NREL and other industry research on net zero energy feasibility. The strategic approach to net zero energy starts with an interpretation of the executive order language relating to net zero energy. Specifically, this roadmap defines a net zero energy acquisition process as one that sets an aggressive energy use intensity goal for the building in project planning, meets the reduced demand goal through energy efficiency strategies and technologies, then adds renewable energy in a prioritized manner, using building-associated, emission-free sources first, to offset the annual energy use required at the building; the net zero energy process extends through the life of the building, requiring a balance of energy use and production in each calendar year.

  13. An evaluation of a zero-heat-flux cutaneous thermometer in cardiac surgical patients.

    PubMed

    Eshraghi, Yashar; Nasr, Vivian; Parra-Sanchez, Ivan; Van Duren, Albert; Botham, Mark; Santoscoy, Thomas; Sessler, Daniel I

    2014-09-01

    Although core temperature can be measured invasively, there are currently no widely available, reliable, noninvasive thermometers for its measurement. We thus compared a prototype zero-heat-flux thermometer with simultaneous measurements from a pulmonary artery catheter. Specifically, we tested the hypothesis that zero-heat-flux temperatures are sufficiently accurate for routine clinical use. Core temperature was measured from the thermistor of a standard pulmonary artery catheter and with a prototype zero-heat-flux deep-tissue thermometer in 105 patients having nonemergent cardiac surgery. Zero-heat-flux probes were positioned on the lateral forehead and lateral neck. Skin surface temperature probes were attached to the forehead just adjacent to the zero-heat-flux probe. Temperatures were recorded at 1-minute intervals, excluding the period of cardiopulmonary bypass, and for the first 4 postoperative hours. Zero-heat-flux and pulmonary artery temperatures were compared with bias analysis; differences exceeding 0.5°C were considered to be potentially clinically important. The mean duration in the operating room was 279 ± 75 minutes, and the mean cross-clamp time was 118 ± 50 minutes. All subjects were monitored for an additional 4 hours in the intensive care unit. The average overall difference between forehead zero-heat-flux and pulmonary artery temperatures (i.e., forehead minus pulmonary artery) was -0.23°C (95% limits of agreement of ±0.82); 78% of the differences were ≤0.5°C. The average intraoperative temperature difference was -0.08°C (95% limits of agreement of ±0.88); 84% of the differences were ≤0.5°C. The average postoperative difference was -0.32°C (95% limits of agreement of ±0.75); 84% of the differences were ≤0.5°C. Bias and precision values for neck site were similar to the forehead values. Uncorrected forehead skin temperature showed an increasing negative bias as core temperature decreased. Core temperature can be noninvasively
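
    The bias analysis used above is the standard Bland-Altman computation on paired readings. The sketch below runs it on simulated temperatures whose bias and spread are assumptions chosen only to resemble the magnitudes reported.

    ```python
    # Minimal Bland-Altman sketch (simulated paired temperatures, assumed bias/spread).
    import numpy as np

    rng = np.random.default_rng(8)
    n = 1000
    core = rng.normal(36.0, 0.8, n)                  # reference (pulmonary artery) temperature, °C
    zhf = core - 0.23 + rng.normal(0.0, 0.42, n)     # zero-heat-flux reading (assumed bias and noise)

    diff = zhf - core
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)                    # half-width of the 95% limits of agreement
    within = np.mean(np.abs(diff) <= 0.5)
    print(f"bias {bias:+.2f} °C, 95% limits of agreement ±{loa:.2f} °C, "
          f"{within:.0%} of differences within 0.5 °C")
    ```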

  14. Modelling wildland fire propagation by tracking random fronts

    NASA Astrophysics Data System (ADS)

    Pagnini, G.; Mentrelli, A.

    2014-08-01

    Wildland fire propagation is studied in the literature by two alternative approaches, namely the reaction-diffusion equation and the level-set method. These two approaches are considered alternatives to each other because the solution of the reaction-diffusion equation is generally a continuous smooth function that has an exponential decay, and it is not zero in an infinite domain, while the level-set method, which is a front tracking technique, generates a sharp function that is not zero inside a compact domain. However, these two approaches can indeed be considered complementary and reconciled. Turbulent hot-air transport and fire spotting are phenomena with a random nature and they are extremely important in wildland fire propagation. Consequently, the fire front gets a random character, too; hence, a tracking method for random fronts is needed. In particular, the level-set contour is randomised here according to the probability density function of the interface particle displacement. Actually, when the level-set method is developed for tracking a front interface with a random motion, the resulting averaged process turns out to be governed by an evolution equation of the reaction-diffusion type. In this reconciled approach, the rate of spread of the fire keeps the same key and characterising role that is typical of the level-set approach. The resulting model turns out to be suitable for simulating effects due to turbulent convection, such as fire flank and backing fire, the faster fire spread being due to the actions of hot-air pre-heating and ember landing, and also due to the fire overcoming a fire-break zone, which is a case not resolved by models based on the level-set method. Moreover, from the proposed formulation, a correction follows for the formula of the rate of spread which is due to the mean jump length of firebrands in the downwind direction for the leeward sector of the fireline contour. The presented study constitutes a proof of concept, and it

  15. Comparing Zero Ischemia Laparoscopic Radio Frequency Ablation Assisted Tumor Enucleation and Laparoscopic Partial Nephrectomy for Clinical T1a Renal Tumor: A Randomized Clinical Trial.

    PubMed

    Huang, Jiwei; Zhang, Jin; Wang, Yanqing; Kong, Wen; Xue, Wei; Liu, Dongming; Chen, YongHui; Huang, Yiran

    2016-06-01

    We evaluated the functional outcome, safety and efficacy of zero ischemia laparoscopic radio frequency ablation assisted tumor enucleation compared with conventional laparoscopic partial nephrectomy. A prospective randomized controlled trial was conducted from April 2013 to March 2015 in patients with cT1a renal tumor scheduled for laparoscopic nephron sparing surgery. All patients were followed for at least 12 months. Patients in the laparoscopic radio frequency ablation assisted tumor enucleation group underwent tumor enucleation after radio frequency ablation without hilar clamping. The primary outcome was the change in glomerular filtration rate of the affected kidney by renal scintigraphy at 12 months. Secondary outcomes included changes in estimated glomerular filtration rate, estimated blood loss, operative time, hospital stay, postoperative complications and oncologic outcomes. The Pearson chi-square or Fisher exact, Student t-test and Wilcoxon rank sum tests were used. The trial ultimately enrolled 89 patients, of whom 44 were randomized to the laparoscopic radio frequency ablation assisted tumor enucleation group and 45 to the laparoscopic partial nephrectomy group. In the laparoscopic partial nephrectomy group 1 case was converted to radical nephrectomy. Compared with the laparoscopic partial nephrectomy group, patients in the laparoscopic radio frequency ablation assisted tumor enucleation group had a smaller decrease in glomerular filtration rate of the affected kidney at 3 months (10.2% vs 20.5%, p=0.001) and 12 months (7.6% vs 16.2%, p=0.002). Patients in the laparoscopic radio frequency ablation assisted tumor enucleation group had a shorter operative time (p=0.002), lower estimated blood loss (p <0.001) and a shorter hospital stay (p=0.029) but similar postoperative complications (p=1.000). There were no positive margins or local recurrence in this study. Zero ischemia laparoscopic radio frequency ablation assisted tumor enucleation enables tumor

  16. Role of the Indonesian Throughflow in controlling regional mean climate and rainfall variability

    NASA Astrophysics Data System (ADS)

    England, Matthew H.; Santoso, Agus; Phipps, Steven; Ummenhofer, Caroline

    2017-04-01

    The role of the Indonesian Throughflow (ITF) in controlling regional mean climate and rainfall is examined using a coupled ocean-atmosphere general circulation model. Experiments employing both a closed and open ITF are equilibrated to steady state and then 200 years of natural climatic variability is assessed within each model run, with a particular focus on the Indian Ocean region. Opening of the ITF results in a mean Pacific-to-Indian throughflow of 21 Sv (1 Sv = 10^6 m^3 s^-1), which advects warm west Pacific waters into the east Indian Ocean. This warm signature is propagated westward by the mean ocean flow, however it never reaches the west Indian Ocean, as an ocean-atmosphere feedback in the tropics generates a weakened trade wind field that is reminiscent of the negative phase of the Indian Ocean Dipole (IOD). This is in marked contrast to the Indian Ocean response to an open ITF when examined in ocean-only model experiments, which sees a strengthening of both the Indian Ocean South Equatorial Current and the Agulhas Current. The coupled feedback in contrast leads to cooler conditions over the west Indian Ocean, and an anomalous zonal atmospheric pressure gradient that enhances the advection of warm moist air toward south Asia and Australia. This leaves the African continent significantly drier, and much of Australia and southern Asia significantly wetter, in response to the opening of the ITF. Given the substantial interannual variability that the ITF exhibits in the present-day climate system, and the restriction of the ITF gateway in past climate eras, this could have important implications for understanding past and present regional rainfall patterns around the Indian Ocean and over neighbouring land-masses.

  17. Meta-Analysis Comparing Zero-Profile Spacer and Anterior Plate in Anterior Cervical Fusion.

    PubMed

    Dong, Jun; Lu, Meng; Lu, Teng; Liang, Baobao; Xu, Junkui; Zhou, Jun; Lv, Hongjun; Qin, Jie; Cai, Xuan; Huang, Sihua; Li, Haopeng; Wang, Dong; He, Xijing

    2015-01-01

    Anterior plate fusion is an effective procedure for the treatment of cervical spinal diseases but is accompanied by a high incidence of postoperative dysphagia. A zero profile (Zero-P) spacer is increasingly being used to reduce postoperative dysphagia and other potential complications associated with surgical intervention. Studies comparing the Zero-P spacer and anterior plate have reported conflicting results. A meta-analysis was conducted to compare the safety, efficacy, radiological outcomes and complications associated with the use of a Zero-P spacer versus an anterior plate in anterior cervical spine fusion for the treatment of cervical spinal disease. We comprehensively searched PubMed, Embase, the Cochrane Library and other databases and performed a meta-analysis of all randomized controlled trials (RCTs) and prospective or retrospective comparative studies assessing the two techniques. Ten studies enrolling 719 cervical spondylosis patients were included. The pooled data showed significant differences in the operation time [SMD = -0.58 (95% CI = -0.77 to 0.40, p < 0.01)] and blood loss [SMD = -0.40, 95% CI (-0.59 to -0.21), p < 0.01] between the two groups. Compared to the anterior plate group, the Zero-P group exhibited a significantly improved JOA score and reduced NDI and VAS. However, anterior plate fusion had greater postoperative segmental and cervical Cobb's angles than the Zero-P group at the last follow-up. The fusion rate in the two groups was similar. More importantly, the Zero-P group had a lower incidence of earlier and later postoperative dysphagia. Compared to anterior plate fusion, Zero-P is a safer and effective procedure, with a similar fusion rate and lower incidence of earlier and later postoperative dysphagia. However, the results of this meta-analysis should be accepted with caution due to the limitations of the study. Further evaluation and large-sample RCTs are required to confirm and update the results of this study.
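
    The pooled estimates reported above come from inverse-variance weighting of the study-level effects. The sketch below shows the fixed-effect version of that computation on made-up standardized mean differences and standard errors, not the meta-analysis data.

    ```python
    # Minimal fixed-effect inverse-variance pooling sketch (hypothetical study values).
    import numpy as np

    smd = np.array([-0.45, -0.70, -0.52, -0.61])    # hypothetical per-study SMDs
    se  = np.array([0.15, 0.20, 0.12, 0.18])        # hypothetical standard errors

    w = 1.0 / se**2                                 # inverse-variance weights
    pooled = np.sum(w * smd) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    print(f"pooled SMD {pooled:.2f}, 95% CI "
          f"({pooled - 1.96 * pooled_se:.2f}, {pooled + 1.96 * pooled_se:.2f})")
    ```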

  18. Spin zero Hawking radiation for non-zero-angular momentum mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ngampitipan, Tritos; Bonserm, Petarpa; Visser, Matt

    2015-05-15

    Black hole greybody factors carry some quantum black hole information. Studying greybody factors may lead to understanding the quantum nature of black holes. However, solving for exact greybody factors in many black hole systems is impossible. One way to deal with this problem is to place some rigorous analytic bounds on the greybody factors. In this paper, we calculate rigorous bounds on the greybody factors for spin zero Hawking radiation in the non-zero-angular momentum modes of Kerr-Newman black holes.

  19. The mean field theory in EM procedures for blind Markov random field image restoration.

    PubMed

    Zhang, J

    1993-01-01

    A Markov random field (MRF) model-based EM (expectation-maximization) procedure for simultaneously estimating the degradation model and restoring the image is described. The MRF is a coupled one which provides continuity (inside regions of smooth gray tones) and discontinuity (at region boundaries) constraints for the restoration problem which is, in general, ill posed. The computational difficulty associated with the EM procedure for MRFs is resolved by using the mean field theory from statistical mechanics. An orthonormal blur decomposition is used to reduce the chances of undesirable locally optimal estimates. Experimental results on synthetic and real-world images show that this approach provides good blur estimates and restored images. The restored images are comparable to those obtained by a Wiener filter in mean-square error, but are most visually pleasing.

  20. Generated effect modifiers (GEM’s) in randomized clinical trials

    PubMed Central

    Petkova, Eva; Tarpey, Thaddeus; Su, Zhe; Ogden, R. Todd

    2017-01-01

    In a randomized clinical trial (RCT), it is often of interest not only to estimate the effect of various treatments on the outcome, but also to determine whether any patient characteristic has a different relationship with the outcome, depending on treatment. In regression models for the outcome, if there is a non-zero interaction between treatment and a predictor, that predictor is called an “effect modifier”. Identification of such effect modifiers is crucial as we move towards precision medicine, that is, optimizing individual treatment assignment based on patient measurements assessed when presenting for treatment. In most settings, there will be several baseline predictor variables that could potentially modify the treatment effects. This article proposes optimal methods of constructing a composite variable (defined as a linear combination of pre-treatment patient characteristics) in order to generate an effect modifier in an RCT setting. Several criteria are considered for generating effect modifiers and their performance is studied via simulations. An example from an RCT is provided for illustration. PMID:27465235
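
    One simple way to obtain a composite moderator is to take the treatment-by-covariate interaction coefficients from a linear working model as the weights of the linear combination. The sketch below illustrates that general idea on simulated data with assumed coefficients; it is not the specific criteria studied in the paper.

    ```python
    # Minimal composite-moderator sketch (simulated data, assumed coefficients).
    import numpy as np

    rng = np.random.default_rng(11)
    n, p = 2000, 5
    X = rng.standard_normal((n, p))                  # baseline characteristics
    A = rng.integers(0, 2, n)                        # randomized treatment indicator
    alpha_true = np.array([1.0, -0.5, 0.0, 0.0, 0.0])
    Y = X @ np.full(p, 0.3) + A * (0.5 + X @ alpha_true) + rng.standard_normal(n)

    # Working model Y ~ 1 + X + A + A*X; the fitted A*X coefficients give the
    # weights of the composite moderator Z = X @ alpha_hat.
    D = np.column_stack([np.ones(n), X, A, X * A[:, None]])
    coef, *_ = np.linalg.lstsq(D, Y, rcond=None)
    alpha_hat = coef[-p:]
    Z = X @ alpha_hat                                # generated effect modifier
    print("estimated composite weights:", np.round(alpha_hat, 2))
    print("first composite values:", np.round(Z[:3], 2))
    ```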

  1. Precision zero-home locator

    DOEpatents

    Stone, William J.

    1986-01-01

    A zero-home locator includes a fixed phototransistor switch and a moveable actuator including two symmetrical, opposed wedges, each wedge defining a point at which switching occurs. The zero-home location is the average of the positions of the points defined by the wedges.

  2. Precision zero-home locator

    DOEpatents

    Stone, W.J.

    1983-10-31

    A zero-home locator includes a fixed phototransistor switch and a moveable actuator including two symmetrical, opposed wedges, each wedge defining a point at which switching occurs. The zero-home location is the average of the positions of the points defined by the wedges.

  3. Habit Reversal versus Object Manipulation Training for Treating Nail Biting: A Randomized Controlled Clinical Trial

    PubMed Central

    Ghanizadeh, Ahmad; Bazrafshan, Amir; Dehbozorgi, Gholamreza

    2013-01-01

    Objective This is a parallel, three group, randomized, controlled clinical trial, with outcomes evaluated up to three months after randomization for children and adolescents with chronic nail biting. The current study investigates the efficacy of habit reversal training (HRT) and compares its effect with object manipulation training (OMT) considering the limitations of the current literature. Method Ninety-one children and adolescents with nail biting were randomly allocated to one of the three groups. The three groups were HRT (n = 30), OMT (n = 30), and wait-list or control group (n = 31). The mean length of nail was considered the main outcome. Results The mean length of the nails after one month in the HRT and OMT groups increased compared to the waiting list group (P < 0.001, P < 0.001, respectively). In the long term, both OMT and HRT increased the mean length of nails (P < 0.01), but HRT was more effective than OMT (P < 0.021). The parent-reported frequency of nail biting showed results similar to those of the mean nail length assessment in the long term. The number of children who completely stopped nail biting in the HRT and OMT groups during three months was 8 and 7, respectively. This number was zero during one month for the wait-list group. Conclusion This trial showed that HRT is more effective than wait-list and OMT in increasing the mean length of nails of children and adolescents in the long term. PMID:24130603

  4. New Quasar Surveys with WIRO: Data and Calibration for Studies of Variability

    NASA Astrophysics Data System (ADS)

    Lyke, Bradley; Bassett, Neil; Deam, Sophie; Dixon, Don; Griffith, Emily; Harvey, William; Lee, Daniel; Haze Nunez, Evan; Parziale, Ryan; Witherspoon, Catherine; Myers, Adam D.; Findlay, Joseph; Kobulnicky, Henry A.; Dale, Daniel A.

    2017-01-01

    Measurements of quasar variability offer the potential for understanding the physics of accretion processes around supermassive black holes. However, generating structure functions in order to characterize quasar variability can be observationally taxing as it requires imaging of quasars over a large variety of date ranges. To begin to address this problem, we have conducted an imaging survey of sections of Sloan Digital Sky Survey (SDSS) Stripe 82 at the Wyoming Infrared Observatory (WIRO). We used standard stars to calculate zero-point offsets between WIRO and SDSS observations in the ugriz magnitude system. After finding the zero-point offset, we accounted for further offsets by comparing standard star magnitudes in each WIRO frame to coadded magnitudes from Stripe 82 and applying a linear correction. Known (i.e. spectroscopically confirmed) quasars at the epoch we conducted WIRO observations (Summer, 2016) and at every epoch in SDSS Stripe 82 (~80 total dates) were hence calibrated to a similar magnitude system. The algorithm for this calibration compared 1500 randomly selected standard stars with an MJD within 0.07 of the MJD of each quasar of interest, for each of the five ugriz filters. Ultimately ~1000 known quasars in Stripe 82 were identified by WIRO and their SDSS-WIRO magnitudes were calibrated to a similar scale in order to generate ensemble structure functions. This work is supported by the National Science Foundation under REU grant AST 1560461.
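
    The calibration step described above lends itself to a compact illustration. Below is a minimal sketch, on synthetic standard-star data, of estimating a per-frame zero-point offset and a residual linear correction; the function and array names are hypothetical and this is not the authors' pipeline.

    ```python
    # Minimal sketch of frame-by-frame zero-point calibration against catalog
    # standard stars (array names are hypothetical; not the authors' pipeline).
    import numpy as np

    def calibrate_frame(inst_mag, catalog_mag):
        """Return a calibration function for one frame and one filter.

        inst_mag    -- instrumental magnitudes of standard stars in the frame
        catalog_mag -- coadded catalog magnitudes of the same stars
        """
        # Zero-point offset: robust median of (catalog - instrumental).
        zp = np.median(catalog_mag - inst_mag)

        # Residual trend with magnitude, removed with a linear correction.
        resid = catalog_mag - (inst_mag + zp)
        slope, intercept = np.polyfit(catalog_mag, resid, 1)

        def apply(mag):
            first_pass = mag + zp
            return first_pass + (slope * first_pass + intercept)

        return apply

    # Example with synthetic standards: true zero point of 2.5 mag plus noise.
    rng = np.random.default_rng(0)
    cat = rng.uniform(16, 20, 200)
    inst = cat - 2.5 + rng.normal(0, 0.02, cat.size)
    calibrate = calibrate_frame(inst, cat)
    print(np.round(np.mean(calibrate(inst) - cat), 3))  # residual offset ~0.0
    ```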

  5. Vibrational zero point energy for H-doped silicon

    NASA Astrophysics Data System (ADS)

    Karazhanov, S. Zh.; Ganchenkova, M.; Marstein, E. S.

    2014-05-01

    Most of the studies addressed to computations of hydrogen parameters in semiconductor systems, such as silicon, are performed at zero temperature T = 0 K and do not account for the contribution of vibrational zero point energy (ZPE). For light-weight atoms such as hydrogen (H), however, the magnitude of this parameter might not be negligible. This Letter is devoted to clarifying the importance of accounting for zero-point vibrations when analyzing hydrogen behavior in silicon and its effect on silicon electronic properties. For this, we estimate the ZPE for different locations and charge states of H in Si. We show that the main contribution to the ZPE comes from vibrations along the Si-H bonds, whereas contributions from other Si atoms apart from the direct Si-H bonds play no role. It is demonstrated that accounting for the ZPE reduces the hydrogen formation energy by ˜0.17 eV, meaning that by neglecting the ZPE at low temperatures one can underestimate hydrogen solubility by a few orders of magnitude. In contrast, the effect of the ZPE on the ionization energy of H in Si is negligible. The results can have important implications for characterization of vibrational properties of Si by inelastic neutron scattering, as well as for theoretical estimations of H concentration in Si.
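
    To see why a 0.17 eV shift matters at low temperature, the short sketch below evaluates a harmonic zero-point energy (sum of hbar*omega/2 over modes) and the Boltzmann factor exp(0.17 eV / kT) that rescales a solubility estimate; the vibrational frequency used is a placeholder, not the paper's computed Si-H value.

    ```python
    # Back-of-envelope sketch: harmonic zero-point energy and its effect on a
    # Boltzmann-type solubility estimate (the frequency below is a placeholder,
    # not the paper's computed Si-H value).
    import numpy as np

    HBAR_EV_S = 6.582e-16      # reduced Planck constant, eV*s
    K_B_EV = 8.617e-5          # Boltzmann constant, eV/K

    def zpe_ev(freqs_thz):
        """Harmonic ZPE = sum(hbar*omega/2) for mode frequencies given in THz."""
        omega = 2 * np.pi * np.array(freqs_thz) * 1e12  # rad/s
        return np.sum(0.5 * HBAR_EV_S * omega)

    print(f"ZPE of a ~60 THz stretch mode: {zpe_ev([60.0]):.3f} eV")

    # A 0.17 eV drop in formation energy (the value quoted in the abstract)
    # rescales an exp(-E_f / k_B T) concentration estimate by this factor:
    for T in (300.0, 500.0, 800.0):
        print(f"{T:.0f} K: factor {np.exp(0.17 / (K_B_EV * T)):.1f}")
    ```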

  6. Errors of five-day mean surface wind and temperature conditions due to inadequate sampling

    NASA Technical Reports Server (NTRS)

    Legler, David M.

    1991-01-01

    Surface meteorological reports of wind components, wind speed, air temperature, and sea-surface temperature from buoys located in equatorial and midlatitude regions are used in a simulation of random sampling to determine errors of the calculated means due to inadequate sampling. Subsampling the data with several different sample sizes leads to estimates of the accuracy of the subsampled means. The number N of random observations needed to compute mean winds with chosen accuracies of 0.5 (N sub 0.5) and 1.0 (N sub 1.0) m/s and mean air and sea surface temperatures with chosen accuracies of 0.1 (N sub 0.1) and 0.2 (N sub 0.2) C was calculated for each 5-day and 30-day period in the buoy datasets. Mean values of N for the various accuracies and datasets are given. A second-order polynomial relation is established between N and the variability of the data record. This relationship demonstrates that for the same accuracy, N increases as the variability of the data record increases. The relationship is also independent of the data source. Volunteer-observing ship data do not satisfy the recommended minimum number of observations for obtaining 0.5 m/s and 0.2 C accuracy for most locations. The effect of having remotely sensed data is discussed.
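
    The subsampling procedure described here can be mimicked in a few lines: for increasing sample sizes, draw many random subsamples and report the smallest N whose subsample-mean error stays within the chosen accuracy. The sketch below uses synthetic wind-speed records rather than the buoy data, and the coverage level and trial count are arbitrary choices.

    ```python
    # Sketch of the subsampling idea: smallest N whose random-subsample mean
    # stays within a chosen accuracy of the full-record mean (synthetic data,
    # not buoy observations).
    import numpy as np

    rng = np.random.default_rng(1)

    def required_n(record, accuracy, coverage=0.95, trials=500):
        full_mean = record.mean()
        for n in range(2, record.size + 1):
            errs = np.array([
                abs(rng.choice(record, size=n, replace=False).mean() - full_mean)
                for _ in range(trials)
            ])
            if np.quantile(errs, coverage) <= accuracy:
                return n
        return record.size

    # Two 5-day records of hourly wind speed (m/s) with different variability.
    calm = 5.0 + 1.5 * rng.standard_normal(120)
    windy = 5.0 + 3.0 * rng.standard_normal(120)
    print("N for 0.5 m/s accuracy, low variability :", required_n(calm, 0.5))
    print("N for 0.5 m/s accuracy, high variability:", required_n(windy, 0.5))
    ```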

  7. Zero-ischaemia robotic partial nephrectomy (RPN) for hilar tumours.

    PubMed

    Abreu, André L C; Gill, Inderbir S; Desai, Mihir M

    2011-09-01

    • Robotic partial nephrectomy (RPN) has emerged as an attractive minimally invasive nephron-sparing surgical option. However, on-going concerns about RPN include: (i) prolonged ischaemia time with potential implications on renal functional outcomes, and (ii) questions about the ability of RPN to address technically challenging hilar tumours. • Herein, we detail the technique and present initial perioperative outcomes of our novel technique of zero-ischaemia RPN for complex hilar tumours. • Since May 2010, >100 patients underwent minimally invasive zero-ischaemia PN. Of these, 21 had the procedure done robotically. Of these, seven patients had hilar tumours. RPN was offered to all patients irrespective of tumour or reno-vascular anatomy, contralateral kidney characteristics or renal function. • Data were prospectively collected and recorded in an Institutional Review Board-approved database. • We detail our zero-ischaemia RPN technique and present early perioperative outcomes. • Zero-ischaemia RPN was successful in all cases without any hilar clamping. The median (range) tumour size was 4.1 (2.6-6.4) cm and the median RENAL score was 10 (8-10). • The warm ischaemia time was zero in all cases. • The median (range) operative time was 222 (150-330) min, estimated blood loss was 150 (100-500) mL, and the percentage kidney spared was 75 (50-90)%. The median hospital stay was 4 (3-6) days. • There were no intraoperative complications; two patients had postoperative complications (Clavien grade I and II). No patient had a postoperative haemorrhage, urological/renal complication or lost a kidney. All tumour specimens had negative surgical margins on pathology. • The median absolute decrease in serum creatinine and estimated glomerular filtration rate at discharge was 0 (0.2-0.7) mg/dL (P = 0.4) and 5 (-16 to 29) mL/min per 1.73 m^2 (P = 0.8), respectively. • Zero-ischaemia RPN for hilar tumours is safe and feasible and to our knowledge the first report in

  8. Random attractor of non-autonomous stochastic Boussinesq lattice system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Min, E-mail: zhaomin1223@126.com; Zhou, Shengfan, E-mail: zhoushengfan@yahoo.com

    2015-09-15

    In this paper, we first consider the existence of a tempered random attractor for a second-order non-autonomous stochastic lattice dynamical system of nonlinear Boussinesq equations affected by time-dependent coupled coefficients, deterministic forces, and multiplicative white noise. Then, we establish the upper semicontinuity of random attractors as the intensity of noise approaches zero.

  9. Validation of zero-order feedback strategies for medium range air-to-air interception in a horizontal plane

    NASA Technical Reports Server (NTRS)

    Shinar, J.

    1982-01-01

    A zero order feedback solution of a variable speed interception game between two aircraft in the horizontal plane, obtained by using the method of forced singular perturbation (FSP), is compared with the exact open loop solution. The comparison indicates that for initial distances of separation larger than eight turning radii of the evader, the accuracy of the feedback approximation is better than one percent. The result validates the zero order FSP approximation for medium range air combat analysis.

  10. Estimation of population mean in the presence of measurement error and non response under stratified random sampling

    PubMed Central

    Shabbir, Javid

    2018-01-01

    In the present paper we propose an improved class of estimators in the presence of measurement error and non-response under stratified random sampling for estimating the finite population mean. The theoretical and numerical studies reveal that the proposed class of estimators performs better than other existing estimators. PMID:29401519

  11. Mean Levels and Variability in Affect, Diabetes Self-Care Behaviors, and Continuously Monitored Glucose: A Daily Study of Latinos With Type 2 Diabetes.

    PubMed

    Wagner, Julie; Armeli, Stephen; Tennen, Howard; Bermudez-Millan, Angela; Wolpert, Howard; Pérez-Escamilla, Rafael

    2017-09-01

    This study investigated between- and within-person associations among mean levels and variability in affect, diabetes self-care behaviors, and continuously monitored glucose in Latinos with type 2 diabetes. Fifty participants (M [SD] age = 57.8 [11.7] years, 74% women, mean [SD] glycosylated hemoglobin A1c = 8.3% [1.5%]) wore a "blinded" continuous glucose monitor for 7 days, and they responded to twice daily automated phone surveys regarding positive affect, negative affect, and self-care behaviors. Higher mean levels of negative affect (NA) were associated with higher mean glucose (r = .30), greater percent hyperglycemia (r = .34) and greater percentage of out-of-range glucose (r = .34). Higher NA variability was also related to higher mean glucose (r = .34), greater percent of hyperglycemia (r = .44) and greater percentage of out-of-range glucose (r = .43). Higher positive affect variability was related to lower percentage of hypoglycemia (r = -.33). Higher mean levels of self-care behaviors were related to lower glucose variability (r = -.35). Finally, higher self-care behavior variability was related to greater percentage of hyperglycemia (r = .31) and greater percentage of out-of-range glucose (r = -.28). In multilevel regression models, within-person increases from mean levels of self-care were associated with lower mean levels of glucose (b = -7.4, 95% confidence interval [CI] = -12.8 to -1.9), lower percentage of hyperglycemia (b = -0.04, 95% CI = -0.07 to -0.01), and higher percentage of hypoglycemia (b = 0.02, 95% CI = 0.01 to 0.03) in the subsequent 10-hour period. Near-to-real time sampling documented associations of glucose with affect and diabetes self-care that are not detectable with traditional measures.

  12. The GISS global climate-middle atmosphere model. II - Model variability due to interactions between planetary waves, the mean circulation and gravity wave drag

    NASA Technical Reports Server (NTRS)

    Rind, D.; Suozzo, R.; Balachandran, N. K.

    1988-01-01

    The variability which arises in the GISS Global Climate-Middle Atmosphere Model on two time scales is reviewed: interannual standard deviations, derived from the five-year control run, and intraseasonal variability as exemplified by stratospheric warmings. The model's extratropical variability for both mean fields and eddy statistics appears reasonable when compared with observations, while the tropical wind variability near the stratopause may be excessive, possibly due to inertial oscillations. Both wave 1 and wave 2 warmings develop, with connections to tropospheric forcing. Variability on both time scales results from a complex set of interactions among planetary waves, the mean circulation, and gravity wave drag. Specific examples of these interactions are presented, which imply that variability in gravity wave forcing and drag may be an important component of the variability of the middle atmosphere.

  13. Zero Energy Use School.

    ERIC Educational Resources Information Center

    Nelson, Brian, Ed.; And Others

    The economic and physical realities of an energy shortage have caused many educators to consider alternative sources of energy when constructing their schools. This book contains studies and designs by fifth-year architecture students concerning the proposed construction of a zero energy-use elementary school in Albany, Oregon. "Zero energy…

  14. Relative contributions of mean-state shifts and ENSO-driven variability to precipitation changes in a warming climate

    DOE PAGES

    Bonfils, Celine J. W.; Santer, Benjamin D.; Phillips, Thomas J.; ...

    2015-12-18

    The El Niño–Southern Oscillation (ENSO) is an important driver of regional hydroclimate variability through far-reaching teleconnections. This study uses simulations performed with coupled general circulation models (CGCMs) to investigate how regional precipitation in the twenty-first century may be affected by changes in both ENSO-driven precipitation variability and slowly evolving mean rainfall. First, a dominant, time-invariant pattern of canonical ENSO variability (cENSO) is identified in observed SST data. Next, the fidelity with which 33 state-of-the-art CGCMs represent the spatial structure and temporal variability of this pattern (as well as its associated precipitation responses) is evaluated in simulations of twentieth-century climate change. Possible changes in both the temporal variability of this pattern and its associated precipitation teleconnections are investigated in twenty-first-century climate projections. Models with better representation of the observed structure of the cENSO pattern produce winter rainfall teleconnection patterns that are in better accord with twentieth-century observations and more stationary during the twenty-first century. Finally, the model-predicted twenty-first-century rainfall response to cENSO is decomposed into the sum of three terms: 1) the twenty-first-century change in the mean state of precipitation, 2) the historical precipitation response to the cENSO pattern, and 3) a future enhancement in the rainfall response to cENSO, which amplifies rainfall extremes. Lastly, by examining the three terms jointly, this conceptual framework allows the identification of regions likely to experience future rainfall anomalies that are without precedent in the current climate.

  15. Relative Contributions of Mean-State Shifts and ENSO-Driven Variability to Precipitation Changes in a Warming Climate

    NASA Technical Reports Server (NTRS)

    Bonfils, Celine J. W.; Santer, Benjamin D.; Phillips, Thomas J.; Marvel, Kate; Leung, L. Ruby; Doutriaux, Charles; Capotondi, Antonietta

    2015-01-01

    El Niño-Southern Oscillation (ENSO) is an important driver of regional hydroclimate variability through far-reaching teleconnections. This study uses simulations performed with coupled general circulation models (CGCMs) to investigate how regional precipitation in the twenty-first century may be affected by changes in both ENSO-driven precipitation variability and slowly evolving mean rainfall. First, a dominant, time-invariant pattern of canonical ENSO variability (cENSO) is identified in observed SST data. Next, the fidelity with which 33 state-of-the-art CGCMs represent the spatial structure and temporal variability of this pattern (as well as its associated precipitation responses) is evaluated in simulations of twentieth-century climate change. Possible changes in both the temporal variability of this pattern and its associated precipitation teleconnections are investigated in twenty-first-century climate projections. Models with better representation of the observed structure of the cENSO pattern produce winter rainfall teleconnection patterns that are in better accord with twentieth-century observations and more stationary during the twenty-first century. Finally, the model-predicted twenty-first-century rainfall response to cENSO is decomposed into the sum of three terms: 1) the twenty-first-century change in the mean state of precipitation, 2) the historical precipitation response to the cENSO pattern, and 3) a future enhancement in the rainfall response to cENSO, which amplifies rainfall extremes. By examining the three terms jointly, this conceptual framework allows the identification of regions likely to experience future rainfall anomalies that are without precedent in the current climate.

  16. Relative Contributions of Mean-State Shifts and ENSO-Driven Variability to Precipitation Changes in a Warming Climate

    NASA Technical Reports Server (NTRS)

    Bonfils, Celine J. W.; Santer, Benjamin D.; Phillips, Thomas J.; Marvel, Kate; Leung, L. Ruby; Doutriaux, Charles; Capotondi, Antonietta

    2015-01-01

    The El Nino-Southern Oscillation (ENSO) is an important driver of regional hydroclimate variability through far-reaching teleconnections. This study uses simulations performed with Coupled General Circulation Models (CGCMs) to investigate how regional precipitation in the 21st century may be affected by changes in both ENSO-driven precipitation variability and slowly-evolving mean rainfall. First, a dominant, time-invariant pattern of canonical ENSO variability (cENSO) is identified in observed SST data. Next, the fidelity with which 33 state-of-the-art CGCMs represent the spatial structure and temporal variability of this pattern (as well as its associated precipitation responses) is evaluated in simulations of 20th century climate change. Possible changes in both the temporal variability of this pattern and its associated precipitation teleconnections are investigated in 21st century climate projections. Models with better representation of the observed structure of the cENSO pattern produce winter rainfall teleconnection patterns that are in better accord with 20th century observations and more stationary during the 21st century. Finally, the model-predicted 21st century rainfall response to cENSO is decomposed into the sum of three terms: 1) the 21st century change in the mean state of precipitation; 2) the historical precipitation response to the cENSO pattern; and 3) a future enhancement in the rainfall response to cENSO, which amplifies rainfall extremes. By examining the three terms jointly, this conceptual framework allows the identification of regions likely to experience future rainfall anomalies that are without precedent in the current climate.

  17. Beyond the mean: the role of variability in predicting ecological effects of stream temperature on salmon

    Treesearch

    E. Ashley Steel; Abby Tillotson; Donald A. Larson; Aimee H. Fullerton; Keith P. Denton; Brian R. Beckman

    2012-01-01

    Alterations in variance of riverine thermal regimes have been observed and are predicted with climate change and human development. We tested whether changes in daily or seasonal thermal variability, aside from changes in mean temperature, could have biological consequences by exposing Chinook salmon (Oncorhynchus tshawytscha) eggs to eight...

  18. Soil variability in engineering applications

    NASA Astrophysics Data System (ADS)

    Vessia, Giovanna

    2014-05-01

    Natural geomaterials, such as soils and rocks, show spatial variability and heterogeneity of physical and mechanical properties. These can be measured by in-field and laboratory testing. The heterogeneity concerns different values of litho-technical parameters pertaining to similar lithological units placed close to each other. On the contrary, the variability is inherent to the formation and evolution processes experienced by each geological unit (homogeneous geomaterials on average) and is captured as a spatial structure of fluctuation of physical property values about their mean trend, e.g. the unit weight, the hydraulic permeability, the friction angle, the cohesion, among others. These spatial variations must be managed by engineering models to accomplish reliable design of structures and infrastructures. Matheron (1962) introduced geostatistics as the most comprehensive tool to manage spatial correlation of parameter measures used in a wide range of earth science applications. In the field of engineering geology, Vanmarcke (1977) developed the first pioneering attempts to describe and manage the inherent variability in geomaterials, although Terzaghi (1943) had already highlighted that spatial fluctuations of physical and mechanical parameters used in geotechnical design cannot be neglected. A few years later, Mandelbrot (1983) and Turcotte (1986) interpreted the internal arrangement of geomaterials according to fractal theory. In the same years, Vanmarcke (1983) proposed random field theory, providing mathematical tools to deal with the inherent variability of each geological unit or stratigraphic succession that can be regarded as one material. In this approach, measurement fluctuations of physical parameters are interpreted through a spatial variability structure consisting of the correlation function and the scale of fluctuation. Fenton and Griffiths (1992) combined random field simulation with the finite element method to produce the Random
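
    As a concrete companion to the random field concepts cited above, the sketch below generates one realization of a soil property fluctuating about a mean trend with an exponential correlation structure and a prescribed scale of fluctuation, in the spirit of Vanmarcke (1983); all parameter values are illustrative only.

    ```python
    # Minimal random-field sketch: a soil property fluctuating about its mean
    # trend with an exponential covariance. Parameter values are illustrative.
    import numpy as np

    rng = np.random.default_rng(42)

    depth = np.linspace(0.0, 10.0, 201)        # m
    mean_trend = 30.0 + 1.2 * depth            # e.g. a friction-angle-like trend
    sigma = 3.0                                # point standard deviation
    theta = 1.5                                # scale of fluctuation (m)

    # Exponential (Markov) correlation: rho(h) = exp(-2|h|/theta), whose
    # scale of fluctuation equals theta.
    lags = np.abs(depth[:, None] - depth[None, :])
    cov = sigma**2 * np.exp(-2.0 * lags / theta)

    # One realization via Cholesky factorization of the covariance matrix
    # (small jitter added for numerical stability).
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(depth.size))
    field = mean_trend + L @ rng.standard_normal(depth.size)

    # Single realization, so sample statistics are noisy estimates of sigma.
    print("sample std about the trend:", np.round(np.std(field - mean_trend), 2))
    ```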

  19. Estimating the efficacy of Alcoholics Anonymous without self-selection bias: An instrumental variables re-analysis of randomized clinical trials

    PubMed Central

    Humphreys, Keith; Blodgett, Janet C.; Wagner, Todd H.

    2014-01-01

    Background Observational studies of Alcoholics Anonymous’ (AA) effectiveness are vulnerable to self-selection bias because individuals choose whether or not to attend AA. The present study therefore employed an innovative statistical technique to derive a selection bias-free estimate of AA’s impact. Methods Six datasets from 5 National Institutes of Health-funded randomized trials (one with two independent parallel arms) of AA facilitation interventions were analyzed using instrumental variables models. Alcohol dependent individuals in one of the datasets (n = 774) were analyzed separately from the rest of sample (n = 1582 individuals pooled from 5 datasets) because of heterogeneity in sample parameters. Randomization itself was used as the instrumental variable. Results Randomization was a good instrument in both samples, effectively predicting increased AA attendance that could not be attributed to self-selection. In five of the six data sets, which were pooled for analysis, increased AA attendance that was attributable to randomization (i.e., free of self-selection bias) was effective at increasing days of abstinence at 3-month (B = .38, p = .001) and 15-month (B = 0.42, p = .04) follow-up. However, in the remaining dataset, in which pre-existing AA attendance was much higher, further increases in AA involvement caused by the randomly assigned facilitation intervention did not affect drinking outcome. Conclusions For most individuals seeking help for alcohol problems, increasing AA attendance leads to short and long term decreases in alcohol consumption that cannot be attributed to self-selection. However, for populations with high pre-existing AA involvement, further increases in AA attendance may have little impact. PMID:25421504
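
    The core two-stage least squares idea, with randomization as the instrument, can be illustrated on synthetic data as below; the data-generating numbers are invented and the analysis is far simpler than the study's models, but it shows how instrumenting attendance removes the self-selection bias that inflates the naive regression.

    ```python
    # Sketch of the instrumental-variables idea with synthetic data: random
    # assignment (Z) instruments AA attendance (A) when estimating its effect
    # on abstinent days (Y). Numbers are made up for illustration.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 2000

    motivation = rng.standard_normal(n)                       # unobserved confounder
    Z = rng.integers(0, 2, n)                                  # randomized facilitation arm
    A = 5 + 4 * Z + 3 * motivation + rng.normal(0, 2, n)       # AA attendance
    Y = 20 + 1.5 * A + 6 * motivation + rng.normal(0, 5, n)    # days abstinent

    def ols(X, y):
        return np.linalg.lstsq(X, y, rcond=None)[0]

    ones = np.ones(n)

    # Naive OLS of Y on A is biased upward by self-selection (motivation).
    print("naive OLS slope :", round(ols(np.column_stack([ones, A]), Y)[1], 2))

    # Two-stage least squares: first stage A ~ Z, second stage Y ~ A_hat.
    A_hat = np.column_stack([ones, Z]) @ ols(np.column_stack([ones, Z]), A)
    print("2SLS (IV) slope :", round(ols(np.column_stack([ones, A_hat]), Y)[1], 2))
    # The IV slope recovers the true causal effect (~1.5) despite confounding.
    ```

    In practice a purpose-built IV routine would also be used to obtain correct standard errors; the sketch only recovers the point estimate.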

  20. A zero-augmented generalized gamma regression calibration to adjust for covariate measurement error: A case of an episodically consumed dietary intake

    PubMed Central

    Agogo, George O.

    2017-01-01

    Measurement error in exposure variables is a serious impediment in epidemiological studies that relate exposures to health outcomes. In nutritional studies, interest could be in the association between long-term dietary intake and disease occurrence. Long-term intake is usually assessed with a food frequency questionnaire (FFQ), which is prone to recall bias. Measurement error in FFQ-reported intakes leads to bias in the parameter estimate that quantifies the association. To adjust for bias in the association, a calibration study is required to obtain unbiased intake measurements using a short-term instrument such as a 24-hour recall (24HR). The 24HR intakes are used as the response in regression calibration to adjust for bias in the association. For foods not consumed daily, 24HR-reported intakes are usually characterized by excess zeroes, right skewness, and heteroscedasticity, posing a serious challenge in regression calibration modeling. We proposed a zero-augmented calibration model to adjust for measurement error in reported intake, while handling excess zeroes, skewness, and heteroscedasticity simultaneously without transforming 24HR intake values. We compared the proposed calibration method with the standard method and with methods that ignore measurement error by estimating long-term intake with 24HR- and FFQ-reported intakes. The comparison was done in real and simulated datasets. With the 24HR, the mean increase in mercury level per ounce of fish intake was about 0.4; with the FFQ intake, the increase was about 1.2. With both calibration methods, the mean increase was about 2.0. A similar trend was observed in the simulation study. In conclusion, the proposed calibration method performs at least as well as the standard method. PMID:27704599
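
    A stripped-down version of the regression-calibration step, ignoring the zero-augmentation, skewness, and heteroscedasticity handled in the paper, is sketched below on simulated data: the 24HR serves as the calibration response, and its prediction from the FFQ replaces the error-prone FFQ in the outcome model.

    ```python
    # Simplified regression-calibration sketch (ignores the zero-augmentation
    # and skewness handled in the paper): predict usual intake from the FFQ
    # using 24HR as the calibration response, then use that prediction in the
    # outcome model. All data are simulated.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 1500

    true_intake = rng.gamma(shape=2.0, scale=1.0, size=n)        # usual intake
    ffq = 0.5 + 0.6 * true_intake + rng.normal(0, 1.5, n)        # biased, noisy
    hr24 = true_intake + rng.normal(0, 1.0, n)                   # unbiased, noisy
    outcome = 1.0 + 2.0 * true_intake + rng.normal(0, 1.0, n)    # e.g. a biomarker

    def slope(x, y):
        X = np.column_stack([np.ones_like(x), x])
        return np.linalg.lstsq(X, y, rcond=None)[0][1]

    # Naive analyses: regress the outcome directly on the error-prone measures.
    print("FFQ slope (biased)      :", round(slope(ffq, outcome), 2))
    print("24HR slope (attenuated) :", round(slope(hr24, outcome), 2))

    # Calibration: E[intake | FFQ] estimated by regressing 24HR on FFQ,
    # then the predicted intake replaces the FFQ in the outcome model.
    a = np.linalg.lstsq(np.column_stack([np.ones(n), ffq]), hr24, rcond=None)[0]
    calibrated = a[0] + a[1] * ffq
    print("calibrated slope (~2.0) :", round(slope(calibrated, outcome), 2))
    ```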

  1. Estimating overall exposure effects for the clustered and censored outcome using random effect Tobit regression models.

    PubMed

    Wang, Wei; Griswold, Michael E

    2016-11-30

    The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. The marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at the population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects and then use the calculated difference to assess the overall exposure effect. The maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration of the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.
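
    The Gauss-Hermite step mentioned above is easy to demonstrate in isolation. The sketch below approximates the marginal mean E[g(eta + b)] for a normal random effect b and compares it with the conditional mean at b = 0 and a Monte Carlo check; the inverse-link g and the parameter values are illustrative, not the paper's model.

    ```python
    # Sketch of the Gauss-Hermite step used when marginalizing over a normal
    # random effect b ~ N(0, sigma^2): E[g(eta + b)] is approximated by a
    # weighted sum over quadrature nodes. Model pieces here are illustrative.
    import numpy as np
    from numpy.polynomial.hermite import hermgauss

    def marginal_mean(eta, sigma, g, n_nodes=20):
        """Approximate E[g(eta + b)] with b ~ N(0, sigma^2)."""
        nodes, weights = hermgauss(n_nodes)
        # Change of variables b = sqrt(2)*sigma*t gives the 1/sqrt(pi) factor.
        b = np.sqrt(2.0) * sigma * nodes
        return np.sum(weights * g(eta + b)) / np.sqrt(np.pi)

    expit = lambda x: 1.0 / (1.0 + np.exp(-x))

    eta, sigma = 0.5, 1.2
    print("conditional mean at b = 0  :", round(expit(eta), 4))
    print("marginal mean (quadrature) :", round(marginal_mean(eta, sigma, expit), 4))

    # Monte Carlo check of the same integral.
    b = np.random.default_rng(0).normal(0, sigma, 200_000)
    print("marginal mean (Monte Carlo):", round(expit(eta + b).mean(), 4))
    ```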

  2. Singularly perturbed zero dynamics of nonlinear systems

    NASA Technical Reports Server (NTRS)

    Isidori, A.; Sastry, S. S.; Kokotovic, P. V.; Byrnes, C. I.

    1992-01-01

    Stability properties of zero dynamics are among the crucial input-output properties of both linear and nonlinear systems. Unstable, or 'nonminimum phase', zero dynamics are a major obstacle to input-output linearization and high-gain designs. An analysis of the effects of regular perturbations in system equations on zero dynamics shows that whenever a perturbation decreases the system's relative degree, it manifests itself as a singular perturbation of zero dynamics. Conditions are given under which the zero dynamics evolve in two timescales characteristic of a standard singular perturbation form that allows a separate analysis of slow and fast parts of the zero dynamics.

  3. BOILING HEAT TRANSFER IN ZERO GRAVITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zara, E.A.

    1964-01-01

    The preliminary results of a research program to determine the effects of zero and near zero gravity on boiling heat transfer are presented. Zero gravity conditions were obtained on the ASD KC-135 zero gravity test aircraft, capable of providing 30-seconds of zero gravity. Results of the program to date indicate that nucleate (bubble) boiling heat transfer rates are not greatly affected by the absence of gravity forces. However, radical pressure increases were observed that will dictate special design considerations to space vehicle systems utilizing pool boiling processes, such as cryogenic or other fluid storage vessels where thermal input to the fluid is used for vessel pressurization. (auth)

  4. Rates of profit as correlated sums of random variables

    NASA Astrophysics Data System (ADS)

    Greenblatt, R. E.

    2013-10-01

    Profit realization is the dominant feature of market-based economic systems, determining their dynamics to a large extent. Rather than attaining an equilibrium, profit rates vary widely across firms, and the variation persists over time. Differing definitions of profit result in differing empirical distributions. To study the statistical properties of profit rates, I used data from a publicly available database for the US Economy for 2009-2010 (Risk Management Association). For each of three profit rate measures, the sample space consists of 771 points. Each point represents aggregate data from a small number of US manufacturing firms of similar size and type (NAICS code of principal product). When comparing the empirical distributions of profit rates, significant ‘heavy tails’ were observed, corresponding principally to a number of firms with larger profit rates than would be expected from simple models. An apparently novel correlated sum of random variables statistical model was used to model the data. In the case of operating and net profit rates, a number of firms show negative profits (losses), ruling out simple gamma or lognormal distributions as complete models for these data.

  5. Zero point and zero suffix methods with robust ranking for solving fully fuzzy transportation problems

    NASA Astrophysics Data System (ADS)

    Ngastiti, P. T. B.; Surarso, Bayu; Sutimin

    2018-05-01

    The transportation issue in distribution problems, such as moving a commodity or goods from supply to demand, is to minimize the transportation costs. A fuzzy transportation problem is one in which the transport costs, supply, and demand are fuzzy quantities. In the case study at CV. Bintang Anugerah Elektrik, a company engaged in the manufacture of gensets with more than one distributor, we use the zero point and zero suffix methods to investigate the minimum transportation cost. In implementing both methods, we use robust ranking techniques for the defuzzification process. The study results show that the zero suffix method requires fewer iterations than the zero point method.

  6. The venetian-blind effect: a preference for zero disparity or zero slant?

    PubMed Central

    Vlaskamp, Björn N. S.; Guan, Phillip; Banks, Martin S.

    2013-01-01

    When periodic stimuli such as vertical sinewave gratings are presented to the two eyes, the initial stage of disparity estimation yields multiple solutions at multiple depths. The solutions are all frontoparallel when the sinewaves have the same spatial frequency; they are all slanted when the sinewaves have quite different frequencies. Despite multiple solutions, humans perceive only one depth in each visual direction: a single frontoparallel plane when the frequencies are the same and a series of small slanted planes—Venetian blinds—when the frequencies are quite different. These percepts are consistent with a preference for solutions that minimize absolute disparity or overall slant. The preference for minimum disparity and minimum slant are identical for gaze at zero eccentricity; we dissociated the predictions of the two by measuring the occurrence of Venetian blinds when the stimuli were viewed in eccentric gaze. The results were generally quite consistent with a zero-disparity preference (Experiment 1), but we also observed a shift toward a zero-slant preference when the edges of the stimulus had zero slant (Experiment 2). These observations provide useful insights into how the visual system constructs depth percepts from a multitude of possible depths. PMID:24273523

  7. The venetian-blind effect: a preference for zero disparity or zero slant?

    PubMed

    Vlaskamp, Björn N S; Guan, Phillip; Banks, Martin S

    2013-01-01

    When periodic stimuli such as vertical sinewave gratings are presented to the two eyes, the initial stage of disparity estimation yields multiple solutions at multiple depths. The solutions are all frontoparallel when the sinewaves have the same spatial frequency; they are all slanted when the sinewaves have quite different frequencies. Despite multiple solutions, humans perceive only one depth in each visual direction: a single frontoparallel plane when the frequencies are the same and a series of small slanted planes-Venetian blinds-when the frequencies are quite different. These percepts are consistent with a preference for solutions that minimize absolute disparity or overall slant. The preference for minimum disparity and minimum slant are identical for gaze at zero eccentricity; we dissociated the predictions of the two by measuring the occurrence of Venetian blinds when the stimuli were viewed in eccentric gaze. The results were generally quite consistent with a zero-disparity preference (Experiment 1), but we also observed a shift toward a zero-slant preference when the edges of the stimulus had zero slant (Experiment 2). These observations provide useful insights into how the visual system constructs depth percepts from a multitude of possible depths.

  8. Highly variable sperm precedence in the stalk-eyed fly, Teleopsis dalmanni

    PubMed Central

    Corley, Laura S; Cotton, Samuel; McConnell, Ellen; Chapman, Tracey; Fowler, Kevin; Pomiankowski, Andrew

    2006-01-01

    Background When females mate with different males, competition for fertilizations occurs after insemination. Such sperm competition is usually summarized at the level of the population or species by the parameter, P2, defined as the proportion of offspring sired by the second male in double mating trials. However, considerable variation in P2 may occur within populations, and such variation limits the utility of population-wide or species P2 estimates as descriptors of sperm usage. To fully understand the causes and consequences of sperm competition requires estimates of not only mean P2, but also intra-specific variation in P2. Here we investigate within-population quantitative variation in P2 using a controlled mating experiment and microsatellite profiling of progeny in the multiply mating stalk-eyed fly, Teleopsis dalmanni. Results We genotyped 381 offspring from 22 dam-sire pair families at four microsatellite loci. The mean population-wide P2 value of 0.40 was not significantly different from that expected under random sperm mixing (i.e. P2 = 0.5). However, patterns of paternity were highly variable between individual families; almost half of families displayed extreme second male biases resulting in zero or complete paternity, whereas only about one third of families had P2 values of 0.5, the remainder had significant, but moderate, paternity skew. Conclusion Our data suggest that all modes of ejaculate competition, from extreme sperm precedence to complete sperm mixing, occur in T. dalmanni. Thus the population mean P2 value does not reflect the high underlying variance in familial P2. We discuss some of the potential causes and consequences of post-copulatory sexual selection in this important model species. PMID:16800877

  9. Observability-based Local Path Planning and Collision Avoidance Using Bearing-only Measurements

    DTIC Science & Technology

    2012-01-20

    Clark N. Taylor; Department of Electrical and Computer Engineering, Brigham Young University, Provo, Utah, 84602; Sensors Directorate, Air Force Research... v_it is the measurement noise that is assumed to be a zero-mean Gaussian random variable. Based on the state transition model expressed by Eqs. (1

  10. Impact of Hydrologic and Micro-topographic Variabilities on Spatial Distribution of Mean Soil-Nitrogen Age

    NASA Astrophysics Data System (ADS)

    Woo, D.; Kumar, P.

    2015-12-01

    Excess reactive nitrogen in soils of intensively managed agricultural fields causes adverse environmental impact and remains a global concern. Many novel strategies have been developed to provide better management practices and, yet, the problem remains unresolved. The objective of this study is to develop a 3-dimensional model to characterize the spatially distributed "age" of soil-nitrogen (nitrate and ammonia-ammonium) across a watershed. We use the general theory of age, which provides an assessment of the elapsed time since nitrogen is introduced into the soil system. Micro-topographic variability incorporates heterogeneity of nutrient transformations and transport associated with topographic depressions that form temporary ponds and produce prolonged periods of anoxic conditions, and roadside agricultural ditches that support rapid surface movement. This modeling effort utilizes 1-m Light Detection and Ranging (LiDAR) data. We find a significant correlation between hydrologic variability and mean nitrate age that enables assessment of preferential flow paths of nitrate leaching. The estimation of the mean nitrogen age can thus serve as a tool to disentangle complex nitrogen dynamics by providing the analysis of the time scales of soil-nitrogen transformation and transport processes without introducing additional parameters.

  11. Eliminating the zero spectrum in Fourier transform profilometry using empirical mode decomposition.

    PubMed

    Li, Sikun; Su, Xianyu; Chen, Wenjing; Xiang, Liqun

    2009-05-01

    Empirical mode decomposition is introduced into Fourier transform profilometry to extract the zero spectrum included in the deformed fringe pattern without the need for capturing two fringe patterns with pi phase difference. The fringe pattern is subsequently demodulated using a standard Fourier transform profilometry algorithm. With this method, the deformed fringe pattern is adaptively decomposed into a finite number of intrinsic mode functions that vary from high frequency to low frequency by means of an algorithm referred to as a sifting process. Then the zero spectrum is separated from the high-frequency components effectively. Experiments validate the feasibility of this method.
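
    For context, the standard Fourier-transform demodulation that follows zero-spectrum removal can be sketched in a few lines of numpy: isolate the carrier lobe in the spectrum, inverse-transform, and unwrap the phase. The sketch below works on a synthetic one-dimensional fringe and does not reproduce the paper's EMD-based extraction of the zero spectrum.

    ```python
    # Sketch of the standard Fourier-transform profilometry demodulation that
    # follows zero-spectrum removal: isolate the carrier lobe in the spectrum,
    # inverse-transform, and take the unwrapped phase. Synthetic 1-D fringe.
    import numpy as np

    N = 1024
    x = np.arange(N)
    f0 = 1 / 16                                  # carrier frequency (cycles/px)
    phi = 0.8 * np.sin(2 * np.pi * x / 256)      # "height-induced" phase
    fringe = 0.5 + 0.4 * np.cos(2 * np.pi * f0 * x + phi)  # zero spectrum = 0.5

    spec = np.fft.fft(fringe)
    freqs = np.fft.fftfreq(N)

    # Band-pass filter keeping only the positive-frequency carrier lobe.
    mask = (freqs > f0 / 2) & (freqs < 2 * f0)
    analytic = np.fft.ifft(spec * mask)

    phase = np.unwrap(np.angle(analytic))
    recovered = phase - 2 * np.pi * f0 * x       # remove the carrier phase
    recovered -= recovered.mean() - phi.mean()   # fix the arbitrary offset

    # The recovered phase matches the simulated one to numerical precision.
    print("max phase error (rad):", np.max(np.abs(recovered - phi)))
    ```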

  12. Lattice gauge action suppressing near-zero modes of H{sub W}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fukaya, Hidenori; Hashimoto, Shoji; Kaneko, Takashi

    2006-11-01

    We propose a lattice action including unphysical Wilson fermions with a negative mass m_0 of the order of the inverse lattice spacing. With this action, the exact zero mode of the Hermitian Wilson-Dirac operator H_W(m_0) cannot appear and near-zero modes are strongly suppressed. By measuring the spectral density ρ(λ_W), we find a gap near λ_W = 0 on the configurations generated with the standard and improved gauge actions. This gap provides a necessary condition for the proof of the exponential locality of the overlap-Dirac operator by Hernandez, Jansen, and Luescher. Since the number of near-zero modes is small, the numerical cost to calculate the matrix sign function of H_W(m_0) is significantly reduced, and the simulation including dynamical overlap fermions becomes feasible. We also introduce a pair of twisted mass pseudofermions to cancel the unwanted higher mode effects of the Wilson fermions. The gauge coupling renormalization due to the additional fields is then minimized. The topological charge measured through the index of the overlap-Dirac operator is conserved during continuous evolutions of gauge field variables.

  13. Smooth conditional distribution function and quantiles under random censorship.

    PubMed

    Leconte, Eve; Poiraud-Casanova, Sandrine; Thomas-Agnan, Christine

    2002-09-01

    We consider a nonparametric random design regression model in which the response variable is possibly right censored. The aim of this paper is to estimate the conditional distribution function and the conditional alpha-quantile of the response variable. We restrict attention to the case where the response variable as well as the explanatory variable are unidimensional and continuous. We propose and discuss two classes of estimators which are smooth with respect to the response variable as well as to the covariate. Some simulations demonstrate that the new methods have better mean square error performances than the generalized Kaplan-Meier estimator introduced by Beran (1981) and considered in the literature by Dabrowska (1989, 1992) and Gonzalez-Manteiga and Cadarso-Suarez (1994).

  14. Flare cue symbology and EVS for zero-zero weather landing

    NASA Astrophysics Data System (ADS)

    French, Guy A.; Murphy, David M.; Ercoline, William R.

    2006-05-01

    When flying an airplane, landing is arguably the most difficult task a pilot can do. This applies to pilots of all skill levels, particularly as the level of complexity of both the aircraft and the environment increases. Current navigational aids, such as an instrument landing system (ILS), do a good job of providing safe guidance for an approach to an airfield. These aids provide data to primary flight reference (PFR) displays on-board the aircraft, depicting through symbology what the pilot's eyes should be seeing. Piloting an approach under visual meteorological conditions (VMC) is relatively easy compared to the various complex instrument approaches under instrument meteorological conditions (IMC), which may include flying in zero-zero weather. Perhaps the most critical point in the approach is the transition to landing, where the rate of closure between the wheels and the runway is critical to a smooth, accurate landing. Very few PFRs provide this flare cue information. In this study we evaluate examples of flare cueing symbology for use in landing an aircraft in the most difficult conditions. This research is a part of a larger demonstration effort using sensor technology to land in zero-zero weather at airfields that offer no or unreliable approach guidance. Several problems exist when landing without visual reference to the outside world. One is landing with a force greater than desired at touchdown and another is landing on a point of the runway other than desired. We compare different flare cueing systems to one another and against a baseline for completing this complex approach task.

  15. Spatio-temporal variability of soil water content on the local scale in a Mediterranean mountain area (Vallcebre, North Eastern Spain). How different spatio-temporal scales reflect mean soil water content

    NASA Astrophysics Data System (ADS)

    Molina, Antonio J.; Latron, Jérôme; Rubio, Carles M.; Gallart, Francesc; Llorens, Pilar

    2014-08-01

    As a result of complex human-land interactions and topographic variability, many Mediterranean mountain catchments are covered by agricultural terraces that have locally modified the soil water content dynamic. Understanding these local-scale dynamics helps us better grasp how hydrology behaves on the catchment scale. Thus, this study examined soil water content variability in the upper 30 cm of the soil on a Mediterranean abandoned terrace in north-east Spain. Using a dataset of high spatial (regular grid of 128 automatic TDR probes at 2.5 m intervals) and temporal (20-min time step) resolution, gathered throughout an 84-day period, the spatio-temporal variability of soil water content at the local scale and the way that different spatio-temporal scales reflect the mean soil water content were investigated. Soil water content spatial variability and its relation to wetness conditions were examined, along with the spatial structuring of the soil water content within the terrace. Then, the ability of single probes and of different combinations of spatial measurements (transects and grids) to provide a good estimate of mean soil water content on the terrace scale was explored by means of temporal stability analyses. Finally, the effect of monitoring frequency on the magnitude of detectable daily soil water content variations was studied. Results showed that soil water content spatial variability followed a bimodal pattern of increasing absolute variability with increasing soil water content. In addition, a linear trend of decreasing soil water content as the distance from the inner part of the terrace increased was identified. Once this trend was subtracted, resulting semi-variograms suggested that the spatial resolution examined was too high to appreciate spatial structuring in the data. Thus, the spatial pattern should be considered as random. Of all the spatial designs tested, the 10 × 10 m mesh grid (9 probes) was considered the most suitable option for a good
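
    The temporal stability analysis mentioned above can be illustrated compactly: compute each probe's mean relative difference from the terrace-average series and pick the probe whose readings track the mean most closely. The sketch below uses synthetic probe series as stand-ins for the TDR data, and the simple stability score is one of several reasonable choices.

    ```python
    # Sketch of the temporal-stability idea: rank probes by the mean relative
    # difference between their readings and the terrace-average soil water
    # content, then check how well the "most stable" probe tracks the mean.
    # The probe series are synthetic stand-ins.
    import numpy as np

    rng = np.random.default_rng(5)
    n_probes, n_times = 128, 400

    terrace_mean = 0.25 + 0.05 * np.sin(np.linspace(0, 6 * np.pi, n_times))
    offsets = rng.normal(0, 0.04, n_probes)[:, None]        # persistent wet/dry bias
    noise = rng.normal(0, 0.01, (n_probes, n_times))
    theta = terrace_mean + offsets + noise                   # probes x times

    spatial_mean = theta.mean(axis=0)
    rel_diff = (theta - spatial_mean) / spatial_mean         # relative differences
    mrd = rel_diff.mean(axis=1)                              # mean relative difference
    sdrd = rel_diff.std(axis=1)                              # its temporal spread

    best = np.argmin(np.abs(mrd) + sdrd)                     # simple stability score
    rmse = np.sqrt(np.mean((theta[best] - spatial_mean) ** 2))
    print(f"most time-stable probe: #{best}, RMSE vs terrace mean = {rmse:.4f}")
    ```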

  16. Zero-G Workstation Design

    NASA Technical Reports Server (NTRS)

    Gundersen, R. T.; Bond, R. L.

    1976-01-01

    Zero-g workstations have been designed throughout manned spaceflight, based on different criteria and requirements for different programs. The history of the design of these workstations is presented, along with a thorough evaluation of selected Skylab workstations (the best zero-g experience available on the subject). The results were applied to ongoing and future programs, with special emphasis on the correlation of neutral body posture in zero-g to workstation design. Selected samples of shuttle orbiter workstations are shown as currently designed and compared to experience gained during prior programs in terms of man-machine interface design; the evaluations were done in a generic sense to show the methods of applying evaluative techniques.

  17. 38 CFR 4.31 - Zero percent evaluations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Zero percent evaluations... FOR RATING DISABILITIES General Policy in Rating § 4.31 Zero percent evaluations. In every instance where the schedule does not provide a zero percent evaluation for a diagnostic code, a zero percent...

  18. 38 CFR 4.31 - Zero percent evaluations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Zero percent evaluations... FOR RATING DISABILITIES General Policy in Rating § 4.31 Zero percent evaluations. In every instance where the schedule does not provide a zero percent evaluation for a diagnostic code, a zero percent...

  19. 38 CFR 4.31 - Zero percent evaluations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Zero percent evaluations... FOR RATING DISABILITIES General Policy in Rating § 4.31 Zero percent evaluations. In every instance where the schedule does not provide a zero percent evaluation for a diagnostic code, a zero percent...

  20. 38 CFR 4.31 - Zero percent evaluations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Zero percent evaluations... FOR RATING DISABILITIES General Policy in Rating § 4.31 Zero percent evaluations. In every instance where the schedule does not provide a zero percent evaluation for a diagnostic code, a zero percent...

  1. 38 CFR 4.31 - Zero percent evaluations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Zero percent evaluations... FOR RATING DISABILITIES General Policy in Rating § 4.31 Zero percent evaluations. In every instance where the schedule does not provide a zero percent evaluation for a diagnostic code, a zero percent...

  2. Quantum interference magnetoconductance of polycrystalline germanium films in the variable-range hopping regime

    NASA Astrophysics Data System (ADS)

    Li, Zhaoguo; Peng, Liping; Zhang, Jicheng; Li, Jia; Zeng, Yong; Zhan, Zhiqiang; Wu, Weidong

    2018-06-01

    Direct evidence of quantum interference magnetotransport in polycrystalline germanium films in the variable-range hopping (VRH) regime is reported. The temperature dependence of the conductivity of germanium films fulfilled the Mott VRH mechanism with the form of ? in the low-temperature regime (?). For the magnetotransport behaviour of our germanium films in the VRH regime, a crossover from negative magnetoconductance at low fields to positive magnetoconductance at high fields is observed while the zero-field conductivity is higher than the critical value (?). In the regime of ?, the magnetoconductance is positive and quadratic in the field for some germanium films. These features are in agreement with the VRH magnetotransport theory based on the quantum interference effect among random paths in the hopping process.

  3. Generated effect modifiers (GEM's) in randomized clinical trials.

    PubMed

    Petkova, Eva; Tarpey, Thaddeus; Su, Zhe; Ogden, R Todd

    2017-01-01

    In a randomized clinical trial (RCT), it is often of interest not only to estimate the effect of various treatments on the outcome, but also to determine whether any patient characteristic has a different relationship with the outcome, depending on treatment. In regression models for the outcome, if there is a non-zero interaction between treatment and a predictor, that predictor is called an "effect modifier". Identification of such effect modifiers is crucial as we move towards precision medicine, that is, optimizing individual treatment assignment based on patient measurements assessed when presenting for treatment. In most settings, there will be several baseline predictor variables that could potentially modify the treatment effects. This article proposes optimal methods of constructing a composite variable (defined as a linear combination of pre-treatment patient characteristics) in order to generate an effect modifier in an RCT setting. Several criteria are considered for generating effect modifiers and their performance is studied via simulations. An example from an RCT is provided for illustration. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. Spatiotemporal hurdle models for zero-inflated count data: Exploring trends in emergency department visits.

    PubMed

    Neelon, Brian; Chang, Howard H; Ling, Qiang; Hastings, Nicole S

    2016-12-01

    Motivated by a study exploring spatiotemporal trends in emergency department use, we develop a class of two-part hurdle models for the analysis of zero-inflated areal count data. The models consist of two components-one for the probability of any emergency department use and one for the number of emergency department visits given use. Through a hierarchical structure, the models incorporate both patient- and region-level predictors, as well as spatially and temporally correlated random effects for each model component. The random effects are assigned multivariate conditionally autoregressive priors, which induce dependence between the components and provide spatial and temporal smoothing across adjacent spatial units and time periods, resulting in improved inferences. To accommodate potential overdispersion, we consider a range of parametric specifications for the positive counts, including truncated negative binomial and generalized Poisson distributions. We adopt a Bayesian inferential approach, and posterior computation is handled conveniently within standard Bayesian software. Our results indicate that the negative binomial and generalized Poisson hurdle models vastly outperform the Poisson hurdle model, demonstrating that overdispersed hurdle models provide a useful approach to analyzing zero-inflated spatiotemporal data. © The Author(s) 2014.
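
    The two-part hurdle structure itself is straightforward to fit by maximum likelihood, as sketched below on simulated data with a logistic "any visit" part and a zero-truncated Poisson for positive counts; the paper's Bayesian spatiotemporal CAR random effects and negative binomial / generalized Poisson variants are not reproduced here.

    ```python
    # Sketch of a two-part hurdle fit (logistic "any visit" part plus a
    # zero-truncated Poisson for positive counts) by maximum likelihood on
    # simulated data.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit, gammaln

    rng = np.random.default_rng(11)
    n = 5000
    x = rng.standard_normal(n)

    # Simulate: the hurdle is crossed with probability expit(-0.5 + 1.0*x);
    # positive counts follow a zero-truncated Poisson with log-rate 0.3 + 0.6*x.
    use = rng.random(n) < expit(-0.5 + 1.0 * x)
    lam = np.exp(0.3 + 0.6 * x)
    y = np.zeros(n, dtype=int)
    for i in np.where(use)[0]:          # rejection sampling for the truncated part
        draw = 0
        while draw == 0:
            draw = rng.poisson(lam[i])
        y[i] = draw

    def negloglik(params):
        a0, a1, b0, b1 = params
        p = np.clip(expit(a0 + a1 * x), 1e-10, 1 - 1e-10)   # P(y > 0)
        mu = np.exp(b0 + b1 * x)                            # truncated-Poisson rate
        zero = y == 0
        ll_zero = np.log(1 - p[zero]).sum()
        yp, pp, mup = y[~zero], p[~zero], mu[~zero]
        # Zero-truncated Poisson log-pmf: y*log(mu) - log(y!) - log(exp(mu) - 1)
        ll_pos = (np.log(pp) + yp * np.log(mup) - gammaln(yp + 1)
                  - np.log(np.expm1(mup))).sum()
        return -(ll_zero + ll_pos)

    fit = minimize(negloglik, x0=np.zeros(4), method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
    print("estimated (a0, a1, b0, b1):", np.round(fit.x, 2))  # ~(-0.5, 1.0, 0.3, 0.6)
    ```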

  5. Simulation and qualitative analysis of glucose variability, mean glucose, and hypoglycemia after subcutaneous insulin therapy for stress hyperglycemia.

    PubMed

    Strilka, Richard J; Stull, Mamie C; Clemens, Michael S; McCaver, Stewart C; Armen, Scott B

    2016-01-27

    The critically ill can have persistent dysglycemia during the "subacute" recovery phase of their illness because of altered gene expression; it is also not uncommon for these patients to receive continuous enteral nutrition during this time. The optimal short-acting subcutaneous insulin therapy that should be used in this clinical scenario, however, is unknown. Our aim was to conduct a qualitative numerical study of the glucose-insulin dynamics within this patient population to answer the above question. This analysis may help clinicians design a relevant clinical trial. Eight virtual patients with stress hyperglycemia were simulated by means of a mathematical model. Each virtual patient had a different combination of insulin resistance and insulin deficiency that defined their unique stress hyperglycemia state; the rate of gluconeogenesis was also doubled. The patients received 25 injections of subcutaneous regular or Lispro insulin (0-6 U) with 3 rates of continuous nutrition. The main outcome measurements were the change in mean glucose concentration, the change in glucose variability, and hypoglycemic episodes. These end points were interpreted by how the ultradian oscillations of glucose concentration were affected by each insulin preparation. Subcutaneous regular insulin lowered both mean glucose concentrations and glucose variability in a linear fashion. No hypoglycemic episodes were noted. Although subcutaneous Lispro insulin lowered mean glucose concentrations, glucose variability increased in a nonlinear fashion. In patients with high insulin resistance and nutrition at goal, "rebound hyperglycemia" was noted after the insulin analog was rapidly metabolized. When the nutritional source was removed, hypoglycemia tended to occur at higher Lispro insulin doses. Finally, patients with severe insulin resistance seemed the most sensitive to insulin concentration changes. Subcutaneous regular insulin consistently lowered mean glucose concentrations and glucose

  6. Optimal design of zero-water discharge rinsing systems.

    PubMed

    Thöming, Jorg

    2002-03-01

    This paper is about zero liquid discharge in processes that use water for rinsing. Emphasis was given to those systems that contaminate process water with valuable process liquor and compounds. The approach involved the synthesis of optimal rinsing and recycling networks (RRN) that had a priori excluded water discharge. The total annualized costs of the RRN were minimized by the use of a mixed-integer nonlinear program (MINLP). This MINLP was based on a hyperstructure of the RRN and contained eight counterflow rinsing stages and three regenerator units: electrodialysis, reverse osmosis, and ion exchange columns. A "large-scale nickel plating process" case study showed that by means of zero-water discharge and optimized rinsing the total waste could be reduced by 90.4% at a revenue of $448,000/yr. Furthermore, with the optimized RRN, the rinsing performance can be improved significantly at a low-cost increase. In all the cases, the amount of valuable compounds reclaimed was above 99%.

  7. Investigating Factorial Invariance of Latent Variables Across Populations When Manifest Variables Are Missing Completely

    PubMed Central

    Widaman, Keith F.; Grimm, Kevin J.; Early, Dawnté R.; Robins, Richard W.; Conger, Rand D.

    2013-01-01

    Difficulties arise in multiple-group evaluations of factorial invariance if particular manifest variables are missing completely in certain groups. Ad hoc analytic alternatives can be used in such situations (e.g., deleting manifest variables), but some common approaches, such as multiple imputation, are not viable. At least 3 solutions to this problem are viable: analyzing differing sets of variables across groups, using pattern mixture approaches, and a new method using random number generation. The latter solution, proposed in this article, is to generate pseudo-random normal deviates for all observations for manifest variables that are missing completely in a given sample and then to specify multiple-group models in a way that respects the random nature of these values. An empirical example is presented in detail comparing the 3 approaches. The proposed solution can enable quantitative comparisons at the latent variable level between groups using programs that require the same number of manifest variables in each group. PMID:24019738

  8. Direct Numerical Simulation of Turbulent Couette-Poiseuille Flow With Zero Skin Friction

    NASA Technical Reports Server (NTRS)

    Coleman, Gary N.; Spalart, Philippe R.

    2015-01-01

    The near-wall scaling of mean velocity U(y_w) is addressed for the case of zero skin friction on one wall of a fully turbulent channel flow. The present DNS results can be added to the evidence in support of the conjecture that U is proportional to the square root of y_w in the region just above the wall at which the mean shear dU/dy = 0.

  9. A randomized pilot study comparing zero-calorie alternate-day fasting to daily caloric restriction in adults with obesity.

    PubMed

    Catenacci, Victoria A; Pan, Zhaoxing; Ostendorf, Danielle; Brannon, Sarah; Gozansky, Wendolyn S; Mattson, Mark P; Martin, Bronwen; MacLean, Paul S; Melanson, Edward L; Troy Donahoo, William

    2016-09-01

    To evaluate the safety and tolerability of alternate-day fasting (ADF) and to compare changes in weight, body composition, lipids, and insulin sensitivity index (Si) with those produced by a standard weight loss diet, moderate daily caloric restriction (CR). Adults with obesity (BMI ≥30 kg/m², age 18-55) were randomized to either zero-calorie ADF (n = 14) or CR (-400 kcal/day, n = 12) for 8 weeks. Outcomes were measured at the end of the 8-week intervention and after 24 weeks of unsupervised follow-up. No adverse effects were attributed to ADF, and 93% completed the 8-week ADF protocol. At 8 weeks, ADF achieved a 376 kcal/day greater energy deficit; however, there were no significant between-group differences in change in weight (mean ± SE; ADF -8.2 ± 0.9 kg, CR -7.1 ± 1.0 kg), body composition, lipids, or Si. After 24 weeks of unsupervised follow-up, there were no significant differences in weight regain; however, changes from baseline in % fat mass and lean mass were more favorable in ADF. ADF is a safe and tolerable approach to weight loss. ADF produced similar changes in weight, body composition, lipids, and Si at 8 weeks and did not appear to increase risk for weight regain 24 weeks after completing the intervention. © 2016 The Obesity Society.

  10. Proper and improper zero energy modes in Hartree-Fock theory and their relevance for symmetry breaking and restoration.

    PubMed

    Cui, Yao; Bulik, Ireneusz W; Jiménez-Hoyos, Carlos A; Henderson, Thomas M; Scuseria, Gustavo E

    2013-10-21

    We study the spectra of the molecular orbital Hessian (stability matrix) and random-phase approximation (RPA) Hamiltonian of broken-symmetry Hartree-Fock solutions, focusing on zero eigenvalue modes. After all negative eigenvalues are removed from the Hessian by following their eigenvectors downhill, one is left with only positive and zero eigenvalues. Zero modes correspond to orbital rotations with no restoring force. These rotations determine states in the Goldstone manifold, which originates from a spontaneously broken continuous symmetry in the wave function. Zero modes can be classified as improper or proper according to their different mathematical and physical properties. Improper modes arise from symmetry breaking and their restoration always lowers the energy. Proper modes, on the other hand, correspond to degeneracies of the wave function, and their symmetry restoration does not necessarily lower the energy. We discuss how the RPA Hamiltonian distinguishes between proper and improper modes by doubling the number of zero eigenvalues associated with the latter. Proper modes in the Hessian always appear in pairs which do not double in RPA. We present several pedagogical cases exemplifying the above statements. The relevance of these results for projected Hartree-Fock methods is also addressed.

  11. Atomic motion from the mean square displacement in a monatomic liquid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wallace, Duane C.; De Lorenzi-Venneri, Giulia; Chisolm, Eric D.

    V-T theory is constructed in the many-body Hamiltonian formulation, and is being developed as a novel approach to liquid dynamics theory. In this theory the liquid atomic motion consists of two contributions, normal mode vibrations in a single representative potential energy valley, and transits, which carry the system across boundaries between valleys. The mean square displacement time correlation function (the MSD) is a direct measure of the atomic motion, and our goal is to determine if the V-T formalism can produce a physically sensible account of this motion. We employ molecular dynamics (MD) data for a system representing liquid Na, and find the motion evolves in three successive time intervals: on the first 'vibrational' interval, the vibrational motion alone gives a highly accurate account of the MD data; on the second 'crossover' interval, the vibrational MSD saturates to a constant while the transit motion builds up from zero; on the third 'random walk' interval, the transit motion produces a purely diffusive random walk of the vibrational equilibrium positions. Furthermore, this motional evolution agrees with, and adds refinement to, the MSD atomic motion as described by current liquid dynamics theories.

  12. Atomic motion from the mean square displacement in a monatomic liquid

    DOE PAGES

    Wallace, Duane C.; De Lorenzi-Venneri, Giulia; Chisolm, Eric D.

    2016-04-08

    V-T theory is constructed in the many-body Hamiltonian formulation, and is being developed as a novel approach to liquid dynamics theory. In this theory the liquid atomic motion consists of two contributions, normal mode vibrations in a single representative potential energy valley, and transits, which carry the system across boundaries between valleys. The mean square displacement time correlation function (the MSD) is a direct measure of the atomic motion, and our goal is to determine if the V-T formalism can produce a physically sensible account of this motion. We employ molecular dynamics (MD) data for a system representing liquid Na, and find the motion evolves in three successive time intervals: on the first 'vibrational' interval, the vibrational motion alone gives a highly accurate account of the MD data; on the second 'crossover' interval, the vibrational MSD saturates to a constant while the transit motion builds up from zero; on the third 'random walk' interval, the transit motion produces a purely diffusive random walk of the vibrational equilibrium positions. Furthermore, this motional evolution agrees with, and adds refinement to, the MSD atomic motion as described by current liquid dynamics theories.
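
    As a minimal numerical illustration of the quantity analyzed in this record, the sketch below computes a generic mean square displacement from a trajectory array; the V-T decomposition into vibrational and transit contributions is not reproduced, and the random-walk "trajectory" is a stand-in for real MD data.

    import numpy as np

    def mean_square_displacement(positions):
        """MSD(t) averaged over atoms and time origins for a trajectory of shape
        (n_frames, n_atoms, 3). A generic estimator for illustration only."""
        n_frames = positions.shape[0]
        msd = np.zeros(n_frames)
        for lag in range(1, n_frames):
            disp = positions[lag:] - positions[:-lag]
            msd[lag] = np.mean(np.sum(disp**2, axis=-1))
        return msd

    # Toy random-walk "trajectory" standing in for MD data
    rng = np.random.default_rng(8)
    traj = np.cumsum(rng.normal(scale=0.1, size=(500, 64, 3)), axis=0)
    print(mean_square_displacement(traj)[:5])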

  13. Physical activity, mindfulness meditation, or heart rate variability biofeedback for stress reduction: a randomized controlled trial.

    PubMed

    van der Zwan, Judith Esi; de Vente, Wieke; Huizink, Anja C; Bögels, Susan M; de Bruin, Esther I

    2015-12-01

    In contemporary western societies stress is highly prevalent, therefore the need for stress-reducing methods is great. This randomized controlled trial compared the efficacy of self-help physical activity (PA), mindfulness meditation (MM), and heart rate variability biofeedback (HRV-BF) in reducing stress and its related symptoms. We randomly allocated 126 participants to PA, MM, or HRV-BF upon enrollment, of whom 76 agreed to participate. The interventions consisted of psycho-education and an introduction to the specific intervention techniques and 5 weeks of daily exercises at home. The PA exercises consisted of a vigorous-intensity activity of free choice. The MM exercises consisted of guided mindfulness meditation. The HRV-BF exercises consisted of slow breathing with a heart rate variability biofeedback device. Participants received daily reminders for their exercises and were contacted weekly to monitor their progress. They completed questionnaires prior to, directly after, and 6 weeks after the intervention. Results indicated an overall beneficial effect consisting of reduced stress, anxiety and depressive symptoms, and improved psychological well-being and sleep quality. No significant between-intervention effect was found, suggesting that PA, MM, and HRV-BF are equally effective in reducing stress and its related symptoms. These self-help interventions provide easily accessible help for people with stress complaints.

  14. Metabolomics variable selection and classification in the presence of observations below the detection limit using an extension of ERp.

    PubMed

    van Reenen, Mari; Westerhuis, Johan A; Reinecke, Carolus J; Venter, J Hendrik

    2017-02-02

    ERp is a variable selection and classification method for metabolomics data. ERp uses minimized classification error rates, based on data from a control and experimental group, to test the null hypothesis of no difference between the distributions of variables over the two groups. If the associated p-values are significant they indicate discriminatory variables (i.e. informative metabolites). The p-values are calculated assuming a common continuous strictly increasing cumulative distribution under the null hypothesis. This assumption is violated when zero-valued observations can occur with positive probability, a characteristic of GC-MS metabolomics data, disqualifying ERp in this context. This paper extends ERp to address two sources of zero-valued observations: (i) zeros reflecting the complete absence of a metabolite from a sample (true zeros); and (ii) zeros reflecting a measurement below the detection limit. This is achieved by allowing the null cumulative distribution function to take the form of a mixture between a jump at zero and a continuous strictly increasing function. The extended ERp approach is referred to as XERp. XERp is no longer non-parametric, but its null distributions depend only on one parameter, the true proportion of zeros. Under the null hypothesis this parameter can be estimated by the proportion of zeros in the available data. XERp is shown to perform well with regard to bias and power. To demonstrate the utility of XERp, it is applied to GC-MS data from a metabolomics study on tuberculosis meningitis in infants and children. We find that XERp is able to provide an informative shortlist of discriminatory variables, while attaining satisfactory classification accuracy for new subjects in a leave-one-out cross-validation context. XERp takes into account the distributional structure of data with a probability mass at zero without requiring any knowledge of the detection limit of the metabolomics platform. XERp is able to identify variables
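
    A minimal numerical sketch of the mixture idea behind XERp (not the authors' implementation): the null cumulative distribution is treated as a jump at zero whose height is estimated by the observed proportion of zeros, plus a continuous part fitted on the positive observations.

    import numpy as np

    def mixture_null_cdf(x, values):
        """Empirical CDF of a zero-inflated variable, modelled as a jump of
        height p at zero plus a continuous part estimated from the positive
        data. Illustration of the mixture idea only."""
        values = np.asarray(values)
        p_zero = np.mean(values == 0)                 # estimated proportion of zeros
        positives = np.sort(values[values > 0])
        # empirical CDF of the strictly positive observations
        cont = np.searchsorted(positives, x, side="right") / max(len(positives), 1)
        return np.where(x < 0, 0.0, p_zero + (1.0 - p_zero) * cont)

    # toy GC-MS-like intensities with a point mass at zero
    rng = np.random.default_rng(0)
    sample = np.concatenate([np.zeros(40), rng.lognormal(0.0, 1.0, 60)])
    print(mixture_null_cdf(np.array([0.0, 1.0, 5.0]), sample))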

  15. What a Smile Means: Contextual Beliefs and Facial Emotion Expressions in a Non-verbal Zero-Sum Game

    PubMed Central

    Pádua Júnior, Fábio P.; Prado, Paulo H. M.; Roeder, Scott S.; Andrade, Eduardo B.

    2016-01-01

    Research into the authenticity of facial emotion expressions often focuses on the physical properties of the face while paying little attention to the role of beliefs in emotion perception. Further, the literature most often investigates how people express a pre-determined emotion rather than what facial emotion expressions people strategically choose to express. To fill these gaps, this paper proposes a non-verbal zero-sum game – the Face X Game – to assess the role of contextual beliefs and strategic displays of facial emotion expression in interpersonal interactions. This new research paradigm was used in a series of three studies, where two participants are asked to play the role of the sender (individual expressing emotional information on his/her face) or the observer (individual interpreting the meaning of that expression). Study 1 examines the outcome of the game with reference to the sex of the pair, where senders won more frequently when the pair was comprised of at least one female. Study 2 examines the strategic display of facial emotion expressions. The outcome of the game was again contingent upon the sex of the pair. Among female pairs, senders won the game more frequently, replicating the pattern of results from study 1. We also demonstrate that senders who strategically express an emotion incongruent with the valence of the event (e.g., smile after seeing a negative event) are able to mislead observers, who tend to hold a congruent belief about the meaning of the emotion expression. If sending an incongruent signal helps to explain why female senders win more frequently, it logically follows that female observers were more prone to hold a congruent, and therefore inaccurate, belief. This prospect implies that while female senders are willing and/or capable of displaying fake smiles, paired-female observers are not taking this into account. Study 3 investigates the role of contextual factors by manipulating female observers’ beliefs. When

  16. Insight into Best Variables for COPD Case Identification: A Random Forests Analysis.

    PubMed

    Leidy, Nancy K; Malley, Karen G; Steenrod, Anna W; Mannino, David M; Make, Barry J; Bowler, Russ P; Thomashow, Byron M; Barr, R G; Rennard, Stephen I; Houfek, Julia F; Yawn, Barbara P; Han, Meilan K; Meldrum, Catherine A; Bacci, Elizabeth D; Walsh, John W; Martinez, Fernando

    This study is part of a larger, multi-method project to develop a questionnaire for identifying undiagnosed cases of chronic obstructive pulmonary disease (COPD) in primary care settings, with specific interest in the detection of patients with moderate to severe airway obstruction or risk of exacerbation. To examine 3 existing datasets for insight into key features of COPD that could be useful in the identification of undiagnosed COPD. Random forests analyses were applied to the following databases: COPD Foundation Peak Flow Study Cohort (N=5761), Burden of Obstructive Lung Disease (BOLD) Kentucky site (N=508), and COPDGene® (N=10,214). Four scenarios were examined to find the best, smallest sets of variables that distinguished cases and controls: (1) moderate to severe COPD (forced expiratory volume in 1 second [FEV1] <50% predicted) versus no COPD; (2) undiagnosed versus diagnosed COPD; (3) COPD with and without exacerbation history; and (4) clinically significant COPD (FEV1 <60% predicted or history of acute exacerbation) versus all others. From 4 to 8 variables were able to differentiate cases from controls, with sensitivity ≥73 (range: 73-90) and specificity >68 (range: 68-93). Across scenarios, the best models included age, smoking status or history, symptoms (cough, wheeze, phlegm), general or breathing-related activity limitation, episodes of acute bronchitis, and/or missed work days and non-work activities due to breathing or health. Results provide insight into variables that should be considered during the development of candidate items for a new questionnaire to identify undiagnosed cases of clinically significant COPD.
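
    The sketch below shows the general random-forest case-identification workflow described here on simulated data with hypothetical features; it is not the study's code, and sensitivity/specificity are computed only to mirror the reported metrics.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix

    # Hypothetical questionnaire-style features (e.g. age, smoking, symptoms)
    rng = np.random.default_rng(1)
    n = 2000
    X = rng.normal(size=(n, 6))
    y = (X[:, 0] + 0.8 * X[:, 1] + 0.5 * X[:, 3] + rng.normal(size=n) > 1.0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

    tn, fp, fn, tp = confusion_matrix(y_te, rf.predict(X_te)).ravel()
    print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))
    print("importance ranking", np.argsort(rf.feature_importances_)[::-1])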

  17. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    PubMed

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust in regards to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.

  18. Transcription, intercellular variability and correlated random walk.

    PubMed

    Müller, Johannes; Kuttler, Christina; Hense, Burkhard A; Zeiser, Stefan; Liebscher, Volkmar

    2008-11-01

    We develop a simple model for the random distribution of a gene product. It is assumed that the only source of variance is due to switching transcription on and off by a random process. Under the condition that the transition rates between on and off are constant we find that the amount of mRNA follows a scaled Beta distribution. Additionally, a simple positive feedback loop is considered. The simplicity of the model allows for an explicit solution also in this setting. These findings in turn allow, e.g., for easy parameter scans. We find that bistable behavior translates into bimodal distributions. These theoretical findings are in line with experimental results.
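
    A short Euler-type simulation of the on/off transcription model described above (switching rates and kinetic constants are illustrative, not taken from the paper); the long-run mean of the rescaled product is compared with the mean of the scaled Beta distribution reported in the abstract.

    import numpy as np

    rng = np.random.default_rng(2)
    k_on, k_off = 0.5, 0.5      # switching rates (off->on, on->off), illustrative
    s, d = 1.0, 1.0             # synthesis rate when "on", degradation rate
    dt, steps = 1e-2, 500_000

    m, state = 0.0, 0
    trace = np.empty(steps)
    for t in range(steps):
        # random switching of the transcription state
        if state == 0 and rng.random() < k_on * dt:
            state = 1
        elif state == 1 and rng.random() < k_off * dt:
            state = 0
        m += (s * state - d * m) * dt        # production and degradation
        trace[t] = m

    x = trace[steps // 10:] * d / s           # rescale to [0, 1], drop burn-in
    print("sample mean", x.mean(), "Beta mean", k_on / (k_on + k_off))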

  19. Logic circuits from zero forcing.

    PubMed

    Burgarth, Daniel; Giovannetti, Vittorio; Hogben, Leslie; Severini, Simone; Young, Michael

    We design logic circuits based on the notion of zero forcing on graphs; each gate of the circuits is a gadget in which zero forcing is performed. We show that such circuits can evaluate every monotone Boolean function. By using two vertices to encode each logical bit, we obtain universal computation. We also highlight a phenomenon of "back forcing" as a property of each function. Such a phenomenon occurs in a circuit when the input of gates which have been already used at a given time step is further modified by a computation actually performed at a later stage. Finally, we show that zero forcing can also be used to implement reversible computation. The model introduced here provides a potentially new tool in the analysis of Boolean functions, with particular attention to monotonicity. Moreover, in the light of applications of zero forcing in quantum mechanics, the link with Boolean functions may suggest new directions in quantum control theory and in the study of engineered quantum spin systems. It is an open technical problem to verify whether there is a link between zero forcing and computation with contact circuits.
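
    For readers unfamiliar with zero forcing, the sketch below implements the standard colour-change rule on a small graph; the circuit gadgets constructed in the paper are not reproduced.

    def zero_forcing_closure(adjacency, filled):
        """Apply the zero forcing colour-change rule until no more forces occur:
        a filled vertex with exactly one unfilled neighbour forces that
        neighbour. 'adjacency' maps each vertex to a set of neighbours."""
        filled = set(filled)
        changed = True
        while changed:
            changed = False
            for v in list(filled):
                unfilled = [w for w in adjacency[v] if w not in filled]
                if len(unfilled) == 1:
                    filled.add(unfilled[0])
                    changed = True
        return filled

    # A path on 4 vertices: one endpoint is a zero forcing set for the whole path.
    path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    print(zero_forcing_closure(path, {0}))   # -> {0, 1, 2, 3}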

  20. A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing

    2015-09-01

    The accuracy of metamodelling is determined by both the sampling and approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Secondly, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
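
    A small sketch of the sampling idea, assuming only the standard formula for the zeros of the degree-n Chebyshev polynomial of the first kind; the CTP grid is the tensor product of the one-dimensional nodes, and the metamodel fitting itself is not shown.

    import numpy as np
    from itertools import product

    def chebyshev_zeros(n):
        """Zeros of the degree-n Chebyshev polynomial of the first kind on [-1, 1]."""
        k = np.arange(1, n + 1)
        return np.cos((2 * k - 1) * np.pi / (2 * n))

    def ctp_samples(n_per_dim, dim):
        """Chebyshev tensor-product (CTP) sample grid in 'dim' dimensions;
        a sketch of the sampling step only."""
        nodes = chebyshev_zeros(n_per_dim)
        return np.array(list(product(nodes, repeat=dim)))

    print(chebyshev_zeros(4))          # 4 one-dimensional nodes
    print(ctp_samples(4, 2).shape)     # (16, 2) tensor-product samples in 2-D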

  1. Multivariate normal maximum likelihood with both ordinal and continuous variables, and data missing at random.

    PubMed

    Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C

    2018-04-01

    A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.

  2. Zero-P: a new zero-profile cage-plate device for single and multilevel ACDF. A single institution series with four years maximum follow-up and review of the literature on zero-profile devices.

    PubMed

    Barbagallo, Giuseppe M V; Romano, Dario; Certo, Francesco; Milone, Pietro; Albanese, Vincenzo

    2013-11-01

    To analyze the prospectively collected data in a series of patients treated with single- or multilevel ACDF with a stand-alone, zero-profile device, focusing on clinico-radiological outcome, complications and technical hints, and to review the literature on such new devices. Eighty-five patients harboring symptomatic DDD underwent ACDF with the Zero-P cage-plate: 29 at 1 level and 56 at 2-4 levels (total 162 devices). In the multilevel group, 9 patients received a combination of Zero-P and stand-alone cages (hybrid implants). This study focuses on 32 patients with follow-up ranging from 20 to 48 months. NDI, SF-36 and arm pain VAS scores were registered preoperatively and at follow-up visits. Dysphagia was assessed using the Bazaz score. Imaging included X-rays, CT and MRI, also to assess the presence of vertebral body fractures in multilevel cases. The paired Student t test was used for statistical analysis. SF-36 and NDI showed a statistically significant improvement (p < 0.01) and the mean arm pain VAS score decreased from 79 to 41. X-rays and CT demonstrated, respectively, a 94.5% and a 92% fusion rate. Three patients complained of moderate and two of mild transient dysphagia (15.5%). No device-related complications occurred, and no fractures secondary to the insertion of four screws in one vertebral body (i.e., the "Swiss cheese" effect) were detected in multilevel cases. In patients with extensive anterior osteophytes only a "focal spondylectomy" was required. The Zero-P device is safe and efficient, even in multilevel cases. Dysphagia is minimal, extensive anterior osteophytectomy is unnecessary and technical hints may ease the surgical workflow. This is the largest series, with the longest follow-up, reported.

  3. ZeroCal: Automatic MAC Protocol Calibration

    NASA Astrophysics Data System (ADS)

    Meier, Andreas; Woehrle, Matthias; Zimmerling, Marco; Thiele, Lothar

    Sensor network MAC protocols are typically configured for an intended deployment scenario once and for all at compile time. This approach, however, leads to suboptimal performance if the network conditions deviate from the expectations. We present ZeroCal, a distributed algorithm that allows nodes to dynamically adapt to variations in traffic volume. Using ZeroCal, each node autonomously configures its MAC protocol at runtime, thereby trying to reduce the maximum energy consumption among all nodes. While the algorithm is readily usable for any asynchronous low-power listening or low-power probing protocol, we validate and demonstrate the effectiveness of ZeroCal on X-MAC. Extensive testbed experiments and simulations indicate that ZeroCal quickly adapts to traffic variations. We further show that ZeroCal extends network lifetime by 50% compared to an optimal configuration with identical and static MAC parameters at all nodes.

  4. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    PubMed

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
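
    A toy version of the backward-elimination loop examined in the article, on simulated data: the least important predictor (by impurity importance) is dropped at each step and the out-of-bag accuracy is tracked. It is a sketch of the procedure, not the analysis of the StreamCat data.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                               random_state=0)
    keep = list(range(X.shape[1]))
    history = []
    while len(keep) > 2:
        rf = RandomForestClassifier(n_estimators=200, oob_score=True,
                                    random_state=0).fit(X[:, keep], y)
        history.append((len(keep), rf.oob_score_))
        # drop the predictor with the smallest importance
        drop = keep[int(np.argmin(rf.feature_importances_))]
        keep.remove(drop)

    for n_vars, oob in history:
        print(n_vars, round(oob, 3))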

  5. Zero Boil-OFF Tank Hardware Setup

    NASA Image and Video Library

    2017-09-19

    iss053e027051 (Sept. 19, 2017) --- Flight Engineer Joe Acaba works in the U.S. Destiny laboratory module setting up hardware for the Zero Boil-Off Tank (ZBOT) experiment. ZBOT uses an experimental fluid to test active heat removal and forced jet mixing as alternative means for controlling tank pressure for volatile fluids. Rocket fuel, spacecraft heating and cooling systems, and sensitive scientific instruments rely on very cold cryogenic fluids. Heat from the environment around cryogenic tanks can cause their pressures to rise, which requires dumping or "boiling off" fluid to release the excess pressure, or actively cooling the tanks in some way.

  6. Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata

    USGS Publications Warehouse

    Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.

    2012-01-01

    Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).

  7. Land surface temperature over global deserts: Means, variability, and trends

    NASA Astrophysics Data System (ADS)

    Zhou, Chunlüe; Wang, Kaicun

    2016-12-01

    Land surface air temperature (LSAT) has been a widely used metric to study climate change. Weather observations of LSAT are the fundamental data for climate change studies and provide key evidence of global warming. However, there are very few meteorological observations over deserts due to their uninhabitable environment. This study fills this gap and provides independent evidence using satellite-derived land surface temperatures (LSTs), benefiting from their global coverage. The frequency of clear sky from MODerate Resolution Imaging Spectroradiometer (MODIS) LST data over global deserts was found to be greater than 94% for the 2002-2015 period. Our results show that MODIS LST has a bias of 1.36°C compared to ground-based observations collected at 31 U.S. Climate Reference Network (USCRN) stations, with a standard deviation of 1.83°C. After bias correction, MODIS LST was used to evaluate existing reanalyses, including ERA-Interim, Japanese 55-year Reanalysis (JRA-55), Modern-Era Retrospective Analysis for Research and Applications (MERRA), MERRA-land, National Centers for Environmental Prediction (NCEP)-R1, and NCEP-R2. The reanalyses accurately reproduce the seasonal cycle and interannual variability of the LSTs, but their multiyear means and trends of LSTs exhibit large uncertainties. The multiyear averaged LST over global deserts is 23.5°C from MODIS and varies from 20.8°C to 24.5°C in different reanalyses. The MODIS LST over global deserts increased by 0.25°C/decade from 2002 to 2015, whereas the reanalyses estimated a trend varying from -0.14 to 0.10°C/decade. The underestimation of the LST trend by the reanalyses occurs for approximately 70% of the global deserts, likely due to the imperfect performance of the reanalyses in reproducing natural climate variability.

  8. Biologically-variable rhythmic auditory cues are superior to isochronous cues in fostering natural gait variability in Parkinson's disease.

    PubMed

    Dotov, D G; Bayard, S; Cochen de Cock, V; Geny, C; Driss, V; Garrigue, G; Bardy, B; Dalla Bella, S

    2017-01-01

    Rhythmic auditory cueing improves certain gait symptoms of Parkinson's disease (PD). Cues are typically stimuli or beats with a fixed inter-beat interval. We show that isochronous cueing has an unwanted side-effect in that it exacerbates one of the motor symptoms characteristic of advanced PD. Whereas the parameters of the stride cycle of healthy walkers and early patients possess a persistent correlation in time, or long-range correlation (LRC), isochronous cueing renders stride-to-stride variability random. Random stride cycle variability is also associated with reduced gait stability and lack of flexibility. To investigate how to prevent patients from acquiring a random stride cycle pattern, we tested rhythmic cueing which mimics the properties of variability found in healthy gait (biological variability). PD patients (n=19) and age-matched healthy participants (n=19) walked with three rhythmic cueing stimuli: isochronous, with random variability, and with biological variability (LRC). Synchronization was not instructed. The persistent correlation in gait was preserved only with stimuli with biological variability, equally for patients and controls (p's<0.05). In contrast, cueing with isochronous or randomly varying inter-stimulus/beat intervals removed the LRC in the stride cycle. Notably, the individual's tendency to synchronize steps with beats determined the amount of negative effects of isochronous and random cues (p's<0.05) but not the positive effect of biological variability. Stimulus variability and patients' propensity to synchronize play a critical role in fostering healthier gait dynamics during cueing. The beneficial effects of biological variability provide useful guidelines for improving existing cueing treatments. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. δ-exceedance records and random adaptive walks

    NASA Astrophysics Data System (ADS)

    Park, Su-Chan; Krug, Joachim

    2016-08-01

    We study a modified record process where the kth record in a series of independent and identically distributed random variables is defined recursively through the condition Y_k > Y_{k-1} − δ_{k-1}, with a deterministic sequence δ_k > 0 called the handicap. For constant δ_k ≡ δ and exponentially distributed random variables it has been shown in previous work that the process displays a phase transition as a function of δ between a normal phase where the mean record value increases indefinitely and a stationary phase where the mean record value remains bounded and a finite fraction of all entries are records (Park et al 2015 Phys. Rev. E 91 042707). Here we explore the behavior for general probability distributions and decreasing and increasing sequences δ_k, focusing in particular on the case when δ_k matches the typical spacing between subsequent records in the underlying simple record process without handicap. We find that a continuous phase transition occurs only in the exponential case, but a novel kind of first-order transition emerges when δ_k is increasing. The problem is partly motivated by the dynamics of evolutionary adaptation in biological fitness landscapes, where δ_k corresponds to the change of the deterministic fitness component after k mutational steps. The results for the record process are used to compute the mean number of steps that a population performs in such a landscape before being trapped at a local fitness maximum.
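
    A simulation sketch of the handicap record process for a constant handicap δ and exponential variables; the fraction of entries that qualify as records is printed for a few illustrative δ values (no claim about the location of the critical δ is made here).

    import numpy as np

    def handicap_records(n, delta, rng):
        """Count records in an i.i.d. exponential series where an entry is a
        record whenever it exceeds the previous record value minus delta."""
        y = rng.exponential(size=n)
        records, current = 0, -np.inf
        for value in y:
            if value > current - delta:
                records += 1
                current = value
        return records

    rng = np.random.default_rng(3)
    for delta in (0.0, 0.5, 1.0, 2.0):
        counts = [handicap_records(10_000, delta, rng) for _ in range(20)]
        print(delta, np.mean(counts) / 10_000)   # fraction of entries that are records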

  10. Analysis of k-means clustering approach on the breast cancer Wisconsin dataset.

    PubMed

    Dubey, Ashutosh Kumar; Gupta, Umesh; Jain, Sonal

    2016-11-01

    Breast cancer is one of the most common cancers found worldwide and most frequently found in women. An early detection of breast cancer provides the possibility of its cure; therefore, a large number of studies are currently going on to identify methods that can detect breast cancer in its early stages. This study aimed to find the effects of the k-means clustering algorithm with different computation measures like centroid, distance, split method, epoch, attribute, and iteration, and to identify the combination of measures with the potential for highly accurate clustering. The k-means algorithm was used to evaluate the impact of clustering using centroid initialization, distance measures, and split methods. The experiments were performed using the breast cancer Wisconsin (BCW) diagnostic dataset. Foggy and random centroids were used for the centroid initialization. For the foggy centroid, the first centroid was calculated from random values. For the random centroid, the initial centroid was taken as (0, 0). The results were obtained by employing the k-means algorithm and are discussed with different cases considering variable parameters. The calculations were based on the centroid (foggy/random), distance (Euclidean/Manhattan/Pearson), split (simple/variance), threshold (constant epoch/same centroid), attribute (2-9), and iteration (4-10). Approximately 92% average positive prediction accuracy was obtained with this approach. Better results were found for the same centroid and the highest variance. The results achieved using Euclidean and Manhattan were better than the Pearson correlation. The findings of this work provided extensive understanding of the computational parameters that can be used with k-means. The results indicated that k-means has the potential to classify the BCW dataset.
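
    A minimal sketch using scikit-learn's copy of the Wisconsin diagnostic data; only Euclidean distance with k-means++ and random initialisation are shown, so the foggy-centroid, Manhattan/Pearson and split settings examined in the study are not reproduced.

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    X, y = load_breast_cancer(return_X_y=True)
    X = StandardScaler().fit_transform(X)

    for init in ("k-means++", "random"):
        labels = KMeans(n_clusters=2, init=init, n_init=10,
                        random_state=0).fit_predict(X)
        # cluster labels are arbitrary, so take the better of the two matchings
        agreement = max(np.mean(labels == y), np.mean(labels != y))
        print(init, round(agreement, 3))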

  11. Anomalous diffusion on a random comblike structure

    NASA Astrophysics Data System (ADS)

    Havlin, Shlomo; Kiefer, James E.; Weiss, George H.

    1987-08-01

    We have recently studied a random walk on a comblike structure as an analog of diffusion on a fractal structure. In our earlier work, the comb was assumed to have a deterministic structure, the comb having teeth of infinite length. In the present paper we study diffusion on a one-dimensional random comb, the lengths of whose teeth are random variables with an asymptotic stable law distribution φ(L) ~ L^−(1+γ) where 0 < γ ≤ 1. Two mean-field methods are used for the analysis, one based on the continuous-time random walk, and the second a self-consistent scaling theory. Both lead to the same conclusions. We find that the diffusion exponent characterizing the mean-square displacement along the backbone of the comb is d_w = 4/(1+γ) for γ < 1 and d_w = 2 for γ ≥ 1. The probability of being at the origin at time t is P_0(t) ~ t^(−d_s/2) for large t, with d_s = (3−γ)/2 for γ < 1 and d_s = 1 for γ > 1. When a field is applied along the backbone of the comb the diffusion exponent is d_w = 2/(1+γ) for γ < 1 and d_w = 1 for γ ≥ 1. The theoretical results are confirmed using the exact enumeration method.

  12. A random effects meta-analysis model with Box-Cox transformation.

    PubMed

    Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D

    2017-07-19

    In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise an overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied for any kind of variables once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences on summary results, heterogeneity measures and prediction intervals from the normal random effects model. The
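
    The snippet below illustrates only the transformation step on skewed toy effect estimates (shifted to be positive before applying scipy's maximum-likelihood Box-Cox); the full Bayesian random-effects model of the paper is not reproduced.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    effects = rng.gamma(shape=2.0, scale=0.5, size=30) - 0.3   # skewed toy estimates

    shift = 1e-6 - min(effects.min(), 0.0)                     # Box-Cox needs positive values
    transformed, lam = stats.boxcox(effects + shift)
    print("estimated lambda:", round(lam, 3))
    print("skewness before/after:",
          round(stats.skew(effects), 2), round(stats.skew(transformed), 2))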

  13. An AUC-based permutation variable importance measure for random forests

    PubMed Central

    2013-01-01

    Background: The random forest (RF) method is a commonly used tool for classification with high dimensional data as well as for ranking candidate predictors based on the so-called random forest variable importance measures (VIMs). However the classification performance of RF is known to be suboptimal in case of strongly unbalanced data, i.e. data where response class sizes differ considerably. Suggestions were made to obtain better classification performance based either on sampling procedures or on cost sensitivity analyses. However to our knowledge the performance of the VIMs has not yet been examined in the case of unbalanced response classes. In this paper we explore the performance of the permutation VIM for unbalanced data settings and introduce an alternative permutation VIM based on the area under the curve (AUC) that is expected to be more robust towards class imbalance. Results: We investigated the performance of the standard permutation VIM and of our novel AUC-based permutation VIM for different class imbalance levels using simulated data and real data. The results suggest that the new AUC-based permutation VIM outperforms the standard permutation VIM for unbalanced data settings while both permutation VIMs have equal performance for balanced data settings. Conclusions: The standard permutation VIM loses its ability to discriminate between associated predictors and predictors not associated with the response for increasing class imbalance. It is outperformed by our new AUC-based permutation VIM for unbalanced data settings, while the performance of both VIMs is very similar in the case of balanced classes. The new AUC-based VIM is implemented in the R package party for the unbiased RF variant based on conditional inference trees. The codes implementing our study are available from the companion website: http://www.ibe.med.uni-muenchen.de/organisation/mitarbeiter/070_drittmittel/janitza/index.html. PMID:23560875

  14. An AUC-based permutation variable importance measure for random forests.

    PubMed

    Janitza, Silke; Strobl, Carolin; Boulesteix, Anne-Laure

    2013-04-05

    The random forest (RF) method is a commonly used tool for classification with high dimensional data as well as for ranking candidate predictors based on the so-called random forest variable importance measures (VIMs). However the classification performance of RF is known to be suboptimal in case of strongly unbalanced data, i.e. data where response class sizes differ considerably. Suggestions were made to obtain better classification performance based either on sampling procedures or on cost sensitivity analyses. However to our knowledge the performance of the VIMs has not yet been examined in the case of unbalanced response classes. In this paper we explore the performance of the permutation VIM for unbalanced data settings and introduce an alternative permutation VIM based on the area under the curve (AUC) that is expected to be more robust towards class imbalance. We investigated the performance of the standard permutation VIM and of our novel AUC-based permutation VIM for different class imbalance levels using simulated data and real data. The results suggest that the new AUC-based permutation VIM outperforms the standard permutation VIM for unbalanced data settings while both permutation VIMs have equal performance for balanced data settings. The standard permutation VIM loses its ability to discriminate between associated predictors and predictors not associated with the response for increasing class imbalance. It is outperformed by our new AUC-based permutation VIM for unbalanced data settings, while the performance of both VIMs is very similar in the case of balanced classes. The new AUC-based VIM is implemented in the R package party for the unbiased RF variant based on conditional inference trees. The codes implementing our study are available from the companion website: http://www.ibe.med.uni-muenchen.de/organisation/mitarbeiter/070_drittmittel/janitza/index.html.
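
    The paper's AUC-based VIM is implemented in the R package party; as a loose Python analogue only, the sketch below scores permutation importance by AUC instead of accuracy on an artificially unbalanced two-class problem.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=2000, n_features=10, n_informative=3,
                               weights=[0.9, 0.1], random_state=0)
    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

    for scoring in ("accuracy", "roc_auc"):
        result = permutation_importance(rf, X, y, scoring=scoring,
                                        n_repeats=10, random_state=0)
        # top three predictors by permutation importance under each score
        print(scoring, np.argsort(result.importances_mean)[::-1][:3])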

  15. Estimating the efficacy of Alcoholics Anonymous without self-selection bias: an instrumental variables re-analysis of randomized clinical trials.

    PubMed

    Humphreys, Keith; Blodgett, Janet C; Wagner, Todd H

    2014-11-01

    Observational studies of Alcoholics Anonymous' (AA) effectiveness are vulnerable to self-selection bias because individuals choose whether or not to attend AA. The present study, therefore, employed an innovative statistical technique to derive a selection bias-free estimate of AA's impact. Six data sets from 5 National Institutes of Health-funded randomized trials (1 with 2 independent parallel arms) of AA facilitation interventions were analyzed using instrumental variables models. Alcohol-dependent individuals in one of the data sets (n = 774) were analyzed separately from the rest of sample (n = 1,582 individuals pooled from 5 data sets) because of heterogeneity in sample parameters. Randomization itself was used as the instrumental variable. Randomization was a good instrument in both samples, effectively predicting increased AA attendance that could not be attributed to self-selection. In 5 of the 6 data sets, which were pooled for analysis, increased AA attendance that was attributable to randomization (i.e., free of self-selection bias) was effective at increasing days of abstinence at 3-month (B = 0.38, p = 0.001) and 15-month (B = 0.42, p = 0.04) follow-up. However, in the remaining data set, in which preexisting AA attendance was much higher, further increases in AA involvement caused by the randomly assigned facilitation intervention did not affect drinking outcome. For most individuals seeking help for alcohol problems, increasing AA attendance leads to short- and long-term decreases in alcohol consumption that cannot be attributed to self-selection. However, for populations with high preexisting AA involvement, further increases in AA attendance may have little impact. Copyright © 2014 by the Research Society on Alcoholism.
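
    A minimal two-stage least squares sketch with randomization as the instrument, on simulated data in which an unobserved confounder stands in for self-selection; none of the numbers come from the trials analysed here.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 5000
    z = rng.integers(0, 2, n)                    # random assignment (instrument)
    u = rng.normal(size=n)                       # unobserved motivation (confounder)
    a = 0.8 * z + 0.9 * u + rng.normal(size=n)   # attendance, affected by both
    y = 0.4 * a + 1.0 * u + rng.normal(size=n)   # outcome; true effect is 0.4

    X = np.column_stack([np.ones(n), a])
    Z = np.column_stack([np.ones(n), z])
    naive = np.linalg.lstsq(X, y, rcond=None)[0][1]              # biased OLS estimate
    a_hat = Z @ np.linalg.lstsq(Z, a, rcond=None)[0]             # first stage
    iv = np.linalg.lstsq(np.column_stack([np.ones(n), a_hat]), y, rcond=None)[0][1]
    print("OLS (self-selection biased):", round(naive, 3), " 2SLS:", round(iv, 3))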

  16. Investigation of spectral analysis techniques for randomly sampled velocimetry data

    NASA Technical Reports Server (NTRS)

    Sree, Dave

    1993-01-01

    It is well known that laser velocimetry (LV) generates individual realization velocity data that are randomly or unevenly sampled in time. Spectral analysis of such data to obtain the turbulence spectra, and hence turbulence scales information, requires special techniques. The 'slotting' technique of Mayo et al., also described by Roberts and Ajmani, and the 'Direct Transform' method of Gaster and Roberts are well known in the LV community. The slotting technique is faster than the direct transform method in computation. There are practical limitations, however, as to how high a frequency and how accurate an estimate can be made for a given mean sampling rate. These high frequency estimates are important in obtaining the microscale information of turbulence structure. It was found from previous studies that reliable spectral estimates can be made up to about the mean sampling frequency (mean data rate) or less. If the data were evenly sampled, the frequency range would be half the sampling frequency (i.e., up to the Nyquist frequency); otherwise, aliasing problems would occur. The mean data rate and the sample size (total number of points) basically limit the frequency range. Also, there are large variabilities or errors associated with the high frequency estimates from randomly sampled signals. Roberts and Ajmani proposed certain pre-filtering techniques to reduce these variabilities, but at the cost of low frequency estimates. The prefiltering acts as a high-pass filter. Further, Shapiro and Silverman showed theoretically that, for Poisson sampled signals, it is possible to obtain alias-free spectral estimates far beyond the mean sampling frequency. But the question is, how far? During his tenure under the 1993 NASA-ASEE Summer Faculty Fellowship Program, the author found from his studies of spectral analysis techniques for randomly sampled signals that the spectral estimates can be enhanced or improved up to about 4-5 times the mean sampling frequency by using a suitable
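
    A plain O(N²) illustration of the slotting idea on a Poisson-sampled toy signal: pair products are averaged within lag slots to estimate the autocovariance of unevenly sampled data. The spectral step and the pre-filtering variants discussed above are omitted.

    import numpy as np

    def slotted_autocovariance(t, u, dt_slot, n_slots):
        """'Slotting' estimate of the autocovariance of an unevenly sampled
        signal: every pair product u_i*u_j is assigned to the lag slot
        containing |t_i - t_j| and slot averages are returned."""
        u = u - u.mean()
        sums = np.zeros(n_slots)
        counts = np.zeros(n_slots)
        for i in range(len(t)):
            lags = np.abs(t - t[i])
            slots = (lags / dt_slot).astype(int)
            ok = slots < n_slots
            np.add.at(sums, slots[ok], u[ok] * u[i])
            np.add.at(counts, slots[ok], 1)
        return sums / np.maximum(counts, 1)

    # Poisson-sampled sine wave plus noise (illustrative data, not LV measurements)
    rng = np.random.default_rng(6)
    t = np.cumsum(rng.exponential(0.01, 2000))        # mean data rate ~100 Hz
    u = np.sin(2 * np.pi * 5.0 * t) + 0.3 * rng.normal(size=2000)
    acov = slotted_autocovariance(t, u, dt_slot=0.01, n_slots=100)
    print(acov[:5])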

  17. Module theoretic zero structures for system matrices

    NASA Technical Reports Server (NTRS)

    Wyman, Bostwick F.; Sain, Michael K.

    1987-01-01

    The coordinate-free module-theoretic treatment of transmission zeros for MIMO transfer functions developed by Wyman and Sain (1981) is generalized to include noncontrollable and nonobservable linear dynamical systems. Rational, finitely-generated-modular, and torsion-divisible interpretations of the Rosenbrock system matrix are presented; Gamma-zero and Omega-zero modules are defined and shown to contain the output-decoupling and input-decoupling zero modules, respectively, as submodules; and the cases of left and right invertible transfer functions are considered.

  18. Crew Training - STS-30/61B (Zero-G)

    NASA Image and Video Library

    1985-08-21

    KC-135 inflight training of the STS-30/61B crew for suit donning/doffing and Zero-G orientation for Rodolfo Neri, Astronaut Mary Cleave, and Ricardo Peralta, backup to Neri. 1. Astronaut Cleave, Mary - Zero-G 2. Neri, Rodolfo - Zero-G 3. Peralta, Ricardo - Zero-G

  19. Bayesian models for cost-effectiveness analysis in the presence of structural zero costs

    PubMed Central

    Baio, Gianluca

    2014-01-01

    Bayesian modelling for cost-effectiveness data has received much attention in both the health economics and the statistical literature, in recent years. Cost-effectiveness data are characterised by a relatively complex structure of relationships linking a suitable measure of clinical benefit (e.g. quality-adjusted life years) and the associated costs. Simplifying assumptions, such as (bivariate) normality of the underlying distributions, are usually not granted, particularly for the cost variable, which is characterised by markedly skewed distributions. In addition, individual-level data sets are often characterised by the presence of structural zeros in the cost variable. Hurdle models can be used to account for the presence of excess zeros in a distribution and have been applied in the context of cost data. We extend their application to cost-effectiveness data, defining a full Bayesian specification, which consists of a model for the individual probability of null costs, a marginal model for the costs and a conditional model for the measure of effectiveness (given the observed costs). We presented the model using a working example to describe its main features. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24343868

  20. Bayesian models for cost-effectiveness analysis in the presence of structural zero costs.

    PubMed

    Baio, Gianluca

    2014-05-20

    Bayesian modelling for cost-effectiveness data has received much attention in both the health economics and the statistical literature, in recent years. Cost-effectiveness data are characterised by a relatively complex structure of relationships linking a suitable measure of clinical benefit (e.g. quality-adjusted life years) and the associated costs. Simplifying assumptions, such as (bivariate) normality of the underlying distributions, are usually not granted, particularly for the cost variable, which is characterised by markedly skewed distributions. In addition, individual-level data sets are often characterised by the presence of structural zeros in the cost variable. Hurdle models can be used to account for the presence of excess zeros in a distribution and have been applied in the context of cost data. We extend their application to cost-effectiveness data, defining a full Bayesian specification, which consists of a model for the individual probability of null costs, a marginal model for the costs and a conditional model for the measure of effectiveness (given the observed costs). We presented the model using a working example to describe its main features. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
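
    A frequentist two-part (hurdle-style) sketch of the idea on simulated cost data: a logistic model for the probability of a null cost and a log-linear model for the size of positive costs. The full Bayesian cost-effectiveness specification of the paper is not reproduced.

    import numpy as np
    from sklearn.linear_model import LogisticRegression, LinearRegression

    rng = np.random.default_rng(7)
    n = 3000
    x = rng.normal(size=(n, 1))                           # a single covariate, e.g. treatment arm
    p_zero = 1 / (1 + np.exp(-(-1.0 + 0.8 * x[:, 0])))    # structural-zero probability
    zero = rng.random(n) < p_zero
    cost = np.where(zero, 0.0, rng.lognormal(6.0 + 0.3 * x[:, 0], 0.8))

    hurdle = LogisticRegression().fit(x, (cost == 0).astype(int))   # P(cost = 0 | x)
    positive = cost > 0
    size = LinearRegression().fit(x[positive], np.log(cost[positive]))

    x_new = np.array([[0.5]])
    p0 = hurdle.predict_proba(x_new)[0, 1]
    print("P(cost = 0 | x=0.5):", round(p0, 3),
          " E[log cost | cost>0, x=0.5]:", round(size.predict(x_new)[0], 2))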

  1. Zero ischemia robotic-assisted partial nephrectomy in Alberta: Initial results of a novel approach.

    PubMed

    Forbes, Ellen; Cheung, Douglas; Kinnaird, Adam; Martin, Blair St

    2015-01-01

    Partial nephrectomy remains the standard of care in early stage, organ-confined renal tumours. Recent evidence suggests that minimally invasive surgery can proceed without segmental vessel clamping. In this study, we review our experience at a Canadian centre with zero ischemia robotic-assisted partial nephrectomy (RAPN). A retrospective chart review of zero ischemia RAPN was performed. All surgeries were consecutive partial nephrectomies performed by the same surgeon at a tertiary care centre in Northern Alberta. The mean follow-up period was 28 months. These outcomes were compared against the current standards for zero ischemia (as outlined by the University of Southern California Institute of Urology [USC]). We included 21 patients who underwent zero ischemia RAPN between January 2012 and June 2013. Baseline data were similar to contemporary studies. Twelve (57.1%) required no vascular clamping, 7 (33.3%) required clamping of a single segmental artery, and 2 (9.5%) required clamping of two segmental arteries. We achieved an average estimated blood loss of 158 cc, with a 9.2% average increase in creatinine postoperatively. Operating time and duration of hospital stay were short at 153 minutes and 2.2 days, respectively. Zero ischemia partial nephrectomy was a viable option at our institution with favourable results in terms of intra-operative blood loss and postoperative creatinine change compared to results from contemporary standard zero ischemia studies (USC). To our knowledge, this is the first study to review an initial experience with the zero ischemia protocol in robotic-assisted partial nephrectomies at a Canadian hospital.

  2. Random mechanics: Nonlinear vibrations, turbulences, seisms, swells, fatigue

    NASA Astrophysics Data System (ADS)

    Kree, P.; Soize, C.

    The random modeling of physical phenomena, together with probabilistic methods for the numerical calculation of random mechanical forces, are analytically explored. Attention is given to theoretical examinations such as probabilistic concepts, linear filtering techniques, and trajectory statistics. Applications of the methods to structures experiencing atmospheric turbulence, the quantification of turbulence, and the dynamic responses of the structures are considered. A probabilistic approach is taken to study the effects of earthquakes on structures and to the forces exerted by ocean waves on marine structures. Theoretical analyses by means of vector spaces and stochastic modeling are reviewed, as are Markovian formulations of Gaussian processes and the definition of stochastic differential equations. Finally, random vibrations with a variable number of links and linear oscillators undergoing the square of Gaussian processes are investigated.

  3. Exact zeros of entanglement for arbitrary rank-two mixtures derived from a geometric view of the zero polytope

    NASA Astrophysics Data System (ADS)

    Osterloh, Andreas

    2016-12-01

    Here I present a method for how intersections of a certain density matrix of rank 2 with the zero polytope can be calculated exactly. This is a purely geometrical procedure which thereby is applicable to obtaining the zeros of SL- and SU-invariant entanglement measures of arbitrary polynomial degree. I explain this method in detail for a recently unsolved problem. In particular, I show how a three-dimensional view, namely, in terms of the Bloch-sphere analogy, solves this problem immediately. To this end, I determine the zero polytope of the three-tangle, which is an exact result up to computer accuracy, and calculate upper bounds to its convex roof which are below the linearized upper bound. The zeros of the three-tangle (in this case) induced by the zero polytope (zero simplex) are exact values. I apply this procedure to a superposition of the four-qubit Greenberger-Horne-Zeilinger and W state. It can, however, be applied to every case one has under consideration, including an arbitrary polynomial convex-roof measure of entanglement and for arbitrary local dimension.

  4. Parafermionic zero modes in gapless edge states

    NASA Astrophysics Data System (ADS)

    Clarke, David

    It has been recently demonstrated [1] that Majorana zero modes may occur in the gapless edge of Abelian quantum Hall states at a boundary between different edge phases bordering the same bulk. Such a zero mode is guaranteed to occur when an edge phase that supports fermionic excitations borders one that does not. Here we generalize to the non-charge-conserving case, such as may occur when a superconductor abuts the quantum Hall edge. We find that not only Majorana zero modes, but their ℤN generalizations (known as parafermionic zero modes) may occur at boundaries between edge phases in a fractional quantum Hall state. In particular, we find that the ν = 1/3 fractional quantum Hall state supports topologically distinct edge phases separated by ℤ3 parafermionic zero modes when charge conservation is broken. Paradoxically, an arrangement of phases can be made such that only an odd number of localized parafermionic zero modes occur around the edge of a quantum Hall droplet. Such an arrangement is not allowed in a gapped system, but here the paradox is resolved due to an extended zero mode in the edge spectrum. LPS-MPO-CMTC, JQI-NSF-PFC, Microsoft Station Q.

  5. Heart rate variability biofeedback in patients with alcohol dependence: a randomized controlled study

    PubMed Central

    Penzlin, Ana Isabel; Siepmann, Timo; Illigens, Ben Min-Woo; Weidner, Kerstin; Siepmann, Martin

    2015-01-01

    Background and objective In patients with alcohol dependence, ethyl-toxic damage of vasomotor and cardiac autonomic nerve fibers leads to autonomic imbalance with neurovascular and cardiac dysfunction, the latter resulting in reduced heart rate variability (HRV). Autonomic imbalance is linked to increased craving and cardiovascular mortality. In this study, we sought to assess the effects of HRV biofeedback training on HRV, vasomotor function, craving, and anxiety. Methods We conducted a randomized controlled study in 48 patients (14 females, ages 25–59 years) undergoing inpatient rehabilitation treatment. In the treatment group, patients (n=24) attended six sessions of HRV biofeedback over 2 weeks in addition to standard rehabilitative care, whereas, in the control group, subjects received standard care only. Psychometric testing for craving (Obsessive Compulsive Drinking Scale), anxiety (Symptom Checklist-90-Revised), HRV assessment using coefficient of variation of R-R intervals (CVNN) analysis, and vasomotor function assessment using laser Doppler flowmetry were performed at baseline, immediately after completion of treatment or control period, and 3 and 6 weeks afterward (follow-ups 1 and 2). Results Psychometric testing showed decreased craving in the biofeedback group immediately postintervention (OCDS scores: 8.6±7.9 post-biofeedback versus 13.7±11.0 baseline [mean ± standard deviation], P<0.05), whereas craving was unchanged at this time point in the control group. Anxiety was reduced at follow-ups 1 and 2 post-biofeedback, but was unchanged in the control group (P<0.05). Following biofeedback, CVNN tended to be increased (10.3%±2.8% post-biofeedback, 10.1%±3.5% follow-up 1, 10.1%±2.9% follow-up 2 versus 9.7%±3.6% baseline; P=not significant). There was no such trend in the control group. Vasomotor function assessed using the mean duration to 50% vasoconstriction of cutaneous vessels after deep inspiration was improved following biofeedback

  6. Empirical variability in the calibration of slope-based eccentric photorefraction

    PubMed Central

    Bharadwaj, Shrikant R.; Sravani, N. Geetha; Little, Julie-Anne; Narasaiah, Asa; Wong, Vivian; Woodburn, Rachel; Candy, T. Rowan

    2014-01-01

    Refraction estimates from eccentric infrared (IR) photorefraction depend critically on the calibration of luminance slopes in the pupil. While the intersubject variability of this calibration has been estimated, there is no systematic evaluation of its intrasubject variability. This study determined the within subject inter- and intra-session repeatability of this calibration factor and the optimum range of lenses needed to derive this value. Relative calibrations for the MCS PowerRefractor and a customized photorefractor were estimated twice within one session or across two sessions by placing trial lenses before one eye covered with an IR transmitting filter. The data were subsequently resampled with various lens combinations to determine the impact of lens power range on the calibration estimates. Mean (±1.96 SD) calibration slopes were 0.99 ± 0.39 for North Americans with the MCS PowerRefractor (relative to its built-in value) and 0.65 ± 0.25 Ls/D and 0.40 ± 0.09 Ls/D for Indians and North Americans with the custom photorefractor, respectively. The ±95% limits of agreement of intrasubject variability ranged from ±0.39 to ±0.56 for the MCS PowerRefractor and ±0.03 Ls/D to ±0.04 Ls/D for the custom photorefractor. The mean differences within and across sessions were not significantly different from zero (p > 0.38 for all). The combined intersubject and intrasubject variability of calibration is therefore about ±40% of the mean value, implying that significant errors in individual refraction/accommodation estimates may arise if a group-average calibration is used. Protocols containing both plus and minus lenses had calibration slopes closest to the gold-standard protocol, suggesting that they may provide the best estimate of the calibration factor compared to those containing either plus or minus lenses. PMID:23695324

  7. Interannual variability in global mean sea level estimated from the CESM Large and Last Millennium Ensembles

    DOE PAGES

    Fasullo, John T.; Nerem, Robert S.

    2016-10-31

    To better understand global mean sea level (GMSL) as an indicator of climate variability and change, contributions to its interannual variation are quantified in the Community Earth System Model (CESM) Large Ensemble and Last Millennium Ensemble. Consistent with expectations, the El Niño/Southern Oscillation (ENSO) is found to exert a strong influence due to variability in rainfall over land (PL) and terrestrial water storage (TWS). Other important contributors include changes in ocean heat content (OHC) and precipitable water (PW). The temporal evolution of individual contributing terms is documented. The magnitude of peak GMSL anomalies associated with ENSO is generally of the order of 0.5 mm·K-1 with significant inter-event variability, with a standard deviation (σ) that is about half as large. The results underscore the exceptional rarity of the 2010/2011 La Niña-related GMSL drop and estimate the frequency of such an event to be about only once in every 75 years. In addition to ENSO, major volcanic eruptions are found to be a key driver of interannual variability. Associated GMSL variability contrasts with that of ENSO, as TWS and PW anomalies initially offset the drop due to OHC reductions but are short-lived relative to them. Furthermore, responses up to 25 mm are estimated for the largest eruptions of the Last Millennium.

  9. Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation

    NASA Astrophysics Data System (ADS)

    Sekhar, S. Chandra; Sreenivas, TV

    2004-12-01

    We address the problem of estimating instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratios (SNRs).
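
    The core idea above - that the phase of a constant-amplitude sinusoid advances by π between consecutive zero-crossings, so the crossing intervals carry the IF - can be illustrated with a short numerical sketch. The snippet below is my own toy construction, not the paper's algorithm: it fits a fixed-window low-order polynomial to zero-crossing-derived frequency estimates of a noisy chirp, whereas the paper selects the window adaptively with an intersection-of-confidence-intervals rule. The sampling rate, chirp parameters, window length and polynomial order are arbitrary illustrative choices.

```python
import numpy as np

fs = 8000.0                         # sampling rate in Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)
f0, f1 = 200.0, 400.0               # linear chirp from 200 Hz to 400 Hz over 1 s
phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t**2)
x = np.cos(phase) + 0.01 * np.random.randn(t.size)   # mildly noisy signal

# Locate zero-crossings by sign changes, refined with linear interpolation.
idx = np.where(np.sign(x[:-1]) * np.sign(x[1:]) < 0)[0]
tz = t[idx] - x[idx] * (t[idx + 1] - t[idx]) / (x[idx + 1] - x[idx])

# Between consecutive crossings the phase advances by pi, so the average
# frequency on that interval is 1 / (2 * interval) in Hz.
f_raw = 1.0 / (2.0 * np.diff(tz))
t_mid = 0.5 * (tz[1:] + tz[:-1])

# Fixed-window polynomial smoothing (the paper instead adapts the window length).
order, win = 2, 40
f_hat = np.array([
    np.polyval(np.polyfit(t_mid[max(0, k - win):k + win],
                          f_raw[max(0, k - win):k + win], order), t_mid[k])
    for k in range(len(f_raw))
])
print("true IF at t = 0.5 s:", f0 + (f1 - f0) * 0.5, "Hz,",
      "estimate:", round(float(f_hat[np.argmin(np.abs(t_mid - 0.5))]), 1), "Hz")
```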

  10. Synthesis of hover autopilots for rotary-wing VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Hall, W. E.; Bryson, A. E., Jr.

    1972-01-01

    The practical situation is considered where imperfect information on only a few rotor and fuselage state variables is available. Filters are designed to estimate all the state variables from noisy measurements of fuselage pitch/roll angles and from noisy measurements of both fuselage and rotor pitch/roll angles. The mean square response of the vehicle to a very gusty, random wind is computed using various filter/controllers and is found to be quite satisfactory although, of course, not so good as when one has perfect information (idealized case). The second part of the report considers precision hover over a point on the ground. A vehicle model without rotor dynamics is used and feedback signals in position and integral of position error are added. The mean square response of the vehicle to a very gusty, random wind is computed, assuming perfect information feedback, and is found to be excellent. The integral error feedback gives zero position error for a steady wind, and smaller position error for a random wind.

  11. More about unphysical zeroes in quark mass matrices

    NASA Astrophysics Data System (ADS)

    Emmanuel-Costa, David; González Felipe, Ricardo

    2017-01-01

    We look for all weak bases that lead to texture zeroes in the quark mass matrices and contain a minimal number of parameters in the framework of the standard model. Since there are ten physical observables, namely, six nonvanishing quark masses, three mixing angles and one CP phase, the maximum number of texture zeroes in both quark sectors is altogether nine. The nine zero entries can only be distributed between the up- and down-quark sectors in matrix pairs with six and three texture zeroes or five and four texture zeroes. In the weak basis where a quark mass matrix is nonsingular and has six zeroes in one sector, we find that there are 54 matrices with three zeroes in the other sector, obtainable through right-handed weak basis transformations. It is also found that all pairs composed of a nonsingular matrix with five zeroes and a nonsingular and nondecoupled matrix with four zeroes simply correspond to a weak basis choice. Without any further assumptions, none of these pairs of up- and down-quark mass matrices has physical content. It is shown that all non-weak-basis pairs of quark mass matrices that contain nine zeroes are not compatible with current experimental data. The particular case of the so-called nearest-neighbour-interaction pattern is also discussed.

  12. Projection correlation between two random vectors.

    PubMed

    Zhu, Liping; Xu, Kai; Li, Runze; Zhong, Wei

    2017-12-01

    We propose the use of projection correlation to characterize dependence between two random vectors. Projection correlation has several appealing properties. It equals zero if and only if the two random vectors are independent, it is not sensitive to the dimensions of the two random vectors, it is invariant with respect to the group of orthogonal transformations, and its estimation is free of tuning parameters and does not require moment conditions on the random vectors. We show that the sample estimate of the projection correlation is [Formula: see text]-consistent if the two random vectors are independent and root-[Formula: see text]-consistent otherwise. Monte Carlo simulation studies indicate that the projection correlation has higher power than the distance correlation and the ranks of distances in tests of independence, especially when the dimensions are relatively large or the moment conditions required by the distance correlation are violated.

  13. Impact of Flavonols on Cardiometabolic Biomarkers: A Meta-Analysis of Randomized Controlled Human Trials to Explore the Role of Inter-Individual Variability

    PubMed Central

    Menezes, Regina; Rodriguez-Mateos, Ana; Kaltsatou, Antonia; González-Sarrías, Antonio; Greyling, Arno; Giannaki, Christoforos; Andres-Lacueva, Cristina; Milenkovic, Dragan; Gibney, Eileen R.; Dumont, Julie; Schär, Manuel; Garcia-Aloy, Mar; Palma-Duran, Susana Alejandra; Ruskovska, Tatjana; Maksimova, Viktorija; Combet, Emilie; Pinto, Paula

    2017-01-01

    Several epidemiological studies have linked flavonols with decreased risk of cardiovascular disease (CVD). However, some heterogeneity in the individual physiological responses to the consumption of these compounds has been identified. This meta-analysis aimed to study the effect of flavonol supplementation on biomarkers of CVD risk, such as blood lipids, blood pressure and plasma glucose, as well as factors affecting their inter-individual variability. Data from 18 human randomized controlled trials were pooled and the effect was estimated using a fixed- or random-effects meta-analysis model and reported as the difference in means (DM). Variability in the response of blood lipids to supplementation with flavonols was assessed by stratifying various population subgroups: age, sex, country, and health status. Results showed significant reductions in total cholesterol (DM = −0.10 mmol/L; 95% CI: −0.20, −0.01), LDL cholesterol (DM = −0.14 mmol/L; 95% CI: −0.21, −0.07), and triacylglycerol (DM = −0.10 mmol/L; 95% CI: −0.18, −0.03), and a significant increase in HDL cholesterol (DM = 0.05 mmol/L; 95% CI: 0.02, 0.07). A significant reduction was also observed in fasting plasma glucose (DM = −0.18 mmol/L; 95% CI: −0.29, −0.08), and in blood pressure (SBP: DM = −4.84 mmHg; 95% CI: −5.64, −4.04; DBP: DM = −3.32 mmHg; 95% CI: −4.09, −2.55). Subgroup analysis showed a more pronounced effect of flavonol intake in participants from Asian countries and in participants with diagnosed disease or dyslipidemia, compared to healthy and normal baseline values. In conclusion, flavonol consumption improved biomarkers of CVD risk; however, country of origin and health status may influence the effect of flavonol intake on blood lipid levels. PMID:28208791
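
    For readers unfamiliar with how such pooled differences in means (DM) and confidence intervals are obtained, the sketch below shows inverse-variance fixed-effect pooling, one of the two models mentioned in the abstract; a random-effects model would additionally estimate a between-study variance. The study-level numbers are invented for illustration and are not taken from the cited trials.

```python
import numpy as np

# Per-study difference in means (mmol/L) and standard errors: made-up values.
dm = np.array([-0.12, -0.05, -0.20, -0.08])
se = np.array([0.06, 0.04, 0.09, 0.05])

w = 1.0 / se**2                                  # inverse-variance weights
dm_pooled = np.sum(w * dm) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))
ci = (dm_pooled - 1.96 * se_pooled, dm_pooled + 1.96 * se_pooled)
print(f"pooled DM = {dm_pooled:.3f} mmol/L, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```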

  14. Bell-Boole Inequality: Nonlocality or Probabilistic Incompatibility of Random Variables?

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrei

    2008-06-01

    The main aim of this report is to inform the quantum information community about investigations on the problem of probabilistic compatibility of a family of random variables: the possibility of realizing such a family on the basis of a single probability measure (i.e., constructing a single Kolmogorov probability space). These investigations were started more than a hundred years ago by J. Boole (who invented Boolean algebras). The complete solution of the problem was obtained by the Soviet mathematician Vorobjev in the 1960s. Surprisingly, probabilists and statisticians obtained inequalities for probabilities and correlations among which one can find the famous Bell inequality and its generalizations. Such inequalities appeared simply as constraints for probabilistic compatibility. In this framework one cannot see a priori any link to such problems as nonlocality and "death of reality", which are typically attached to Bell-type inequalities in the physics literature. We analyze the difference between the positions of mathematicians and quantum physicists. In particular, we find that one of the most reasonable explanations of probabilistic incompatibility is the mixing, in Bell-type inequalities, of statistical data from experiments performed under different experimental contexts.
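
    The point that Bell-type inequalities arise as compatibility constraints on a single Kolmogorov probability space can be illustrated numerically: if all four measurement outcomes are deterministic functions of one shared hidden variable, the CHSH combination of correlations cannot exceed 2. The construction below is my own toy example (a uniform hidden phase and sign-of-cosine outcomes), not a model taken from the report; quantum mechanics allows up to 2√2 at the same settings.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = rng.uniform(0.0, 2 * np.pi, size=1_000_000)   # hidden variable samples

def outcome(setting_angle, lam):
    """Deterministic +/-1 outcome as a function of the hidden variable."""
    return np.sign(np.cos(lam - setting_angle))

a, a2 = 0.0, np.pi / 2            # Alice's two settings (illustrative)
b, b2 = np.pi / 4, -np.pi / 4     # Bob's two settings (illustrative)

def E(x, y):
    """Correlation of the two outcomes, estimated over the hidden-variable samples."""
    return np.mean(outcome(x, lam) * outcome(y, lam))

# CHSH combination; for any single-probability-space model |S| <= 2
# (up to Monte Carlo sampling error).
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print("CHSH value S =", round(float(S), 3), "(local models obey |S| <= 2)")
```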

  15. Variable stars in the Pegasus dwarf galaxy (DDO 216)

    NASA Technical Reports Server (NTRS)

    Hoessel, J. G.; Abbott, Mark J.; Saha, A.; Mossman, Amy E.; Danielson, G. Edward

    1990-01-01

    Observations obtained over a period of five years of the resolved stars in the Pegasus dwarf irregular galaxy (DDO 216) have been searched for variable stars. Thirty-one variables were found, and periods established for 12. Two of these variable stars are clearly eclipsing variables, seven are very likely Cepheid variables, and the remaining three are probable Cepheids. The period-luminosity relation for the Cepheids indicates a distance modulus for Pegasus of m - M = 26.22 ± 0.20. This places Pegasus very near the zero-velocity surface of the Local Group.

  16. Nonrecurrence and Bell-like inequalities

    NASA Astrophysics Data System (ADS)

    Danforth, Douglas G.

    2017-12-01

    The general class, Λ, of Bell hidden variables is composed of two subclasses ΛR and ΛN such that ΛR ∪ ΛN = Λ and ΛR ∩ ΛN = {}. The class ΛN is very large and contains random variables whose domain is the continuum, the reals. There is an uncountably infinite number of reals. Every instance of a real random variable is unique; the probability of two instances being equal is zero, exactly zero. ΛN induces sample independence. All correlations are context dependent, but not in the usual sense. There is no "spooky action at a distance". Random variables belonging to ΛN are independent from one experiment to the next. The existence of the class ΛN makes it impossible to derive any of the standard Bell inequalities used to define quantum entanglement.

  17. Characteristics of zero-absenteeism in hospital care.

    PubMed

    Schreuder, J A H; Roelen, C A M; van der Klink, J J L; Groothoff, J W

    2013-06-01

    Literature on sickness presenteeism is emerging, but still little is known about employees who are never absent from work due to injuries or illness. Insight into the determinants and characteristics of such zero-absentees may provide clues for preventing sickness absence. To investigate the characteristics of zero-absentees, defined as employees without sickness absence over a period of 5 years. A mixed-method qualitative study comprising semi-structured interviews and focus groups for which Ajzen and Fishbein's theory of planned behaviour was used as a framework. Zero-absentees working in hospital care were invited for semi-structured interviews until saturation was reached. The results of semi-structured interviews were validated in two focus groups. Of 1053 hospital employees, 47 were zero-absentees of whom 31 (66%) agreed to participate in the study. After 16 semi-structured interviews, no new insights or information were gathered from the interviews. The remaining 15 employees were invited to two (n = 8 and n = 7) focus groups. Personal attitudes and self-efficacy were more important in zero-absenteeism than social pressures of managers, colleagues or patients. Zero-absentees were found to be intrinsically motivated to try attending work when ill. In the present study population of hospital employees, we found indications that zero-absenteeism and sickness presenteeism might be different types of work attendance. Managers should realize that zero-absentees are driven by intrinsic motivation rather than social pressures to attend work.

  18. Noniterative computation of infimum in H(infinity) optimisation for plants with invariant zeros on the j(omega)-axis

    NASA Technical Reports Server (NTRS)

    Chen, B. M.; Saber, A.

    1993-01-01

    A simple and noniterative procedure for the computation of the exact value of the infimum in the singular H(infinity)-optimization problem is presented, as a continuation of our earlier work. Our problem formulation is general and we do not place any restrictions in the finite and infinite zero structures of the system, and the direct feedthrough terms between the control input and the controlled output variables and between the disturbance input and the measurement output variables. Our method is applicable to a class of singular H(infinity)-optimization problems for which the transfer functions from the control input to the controlled output and from the disturbance input to the measurement output satisfy certain geometric conditions. In particular, the paper extends the result of earlier work by allowing these two transfer functions to have invariant zeros on the j(omega) axis.

  19. Zero-Valent Metal Emulsion for Reductive Dehalogenation of DNAPLs

    NASA Technical Reports Server (NTRS)

    Reinhart, Debra R. (Inventor); Clausen, Christian (Inventor); Geiger, Cherie L. (Inventor); Quinn, Jacqueline (Inventor); Brooks, Kathleen (Inventor)

    2006-01-01

    A zero-valent metal emulsion is used to dehalogenate solvents, such as pooled dense non-aqueous phase liquids (DNAPLs), including trichloroethylene (TCE). The zero-valent metal emulsion contains zero-valent metal particles, a surfactant, oil and water. The preferred zero-valent metal particles are nanoscale and microscale zero-valent iron particles.

  20. Zero-Valent Metal Emulsion for Reductive Dehalogenation of DNAPLS

    NASA Technical Reports Server (NTRS)

    Reinhart, Debra R. (Inventor); Clausen, Christian (Inventor); Geiger, Cherie L. (Inventor); Quinn, Jacqueline (Inventor); Brooks, Kathleen (Inventor)

    2003-01-01

    A zero-valent metal emulsion is used to dehalogenate solvents, such as pooled dense non-aqueous phase liquids (DNAPLs), including trichloroethylene (TCE). The zero-valent metal emulsion contains zero-valent metal particles, a surfactant, oil and water. The preferred zero-valent metal particles are nanoscale and microscale zero-valent iron particles.

  1. Microstructure from ferroelastic transitions using strain pseudospin clock models in two and three dimensions: A local mean-field analysis

    NASA Astrophysics Data System (ADS)

    Vasseur, Romain; Lookman, Turab; Shenoy, Subodh R.

    2010-09-01

    We show how microstructure can arise in first-order ferroelastic structural transitions, in two and three spatial dimensions, through a local mean-field approximation of their pseudospin Hamiltonians, which include anisotropic elastic interactions. Such transitions have symmetry-selected physical strains as their N_OP-component order parameters, with Landau free energies that have a single zero-strain “austenite” minimum at high temperatures, and spontaneous-strain “martensite” minima of N_V structural variants at low temperatures. The total free energy also has gradient terms, and power-law anisotropic effective interactions, induced by “no-dislocation” St Venant compatibility constraints. In a reduced description, the strains at Landau minima induce temperature-dependent, clocklike Z_{N_V+1} Hamiltonians, with N_OP-component strain-pseudospin vectors S⃗ pointing to N_V+1 discrete values (including zero). We study elastic texturing in five such first-order structural transitions through a local mean-field approximation of their pseudospin Hamiltonians, which include the power-law interactions. As a prototype, we consider the two-variant square/rectangle transition, with a one-component pseudospin taking N_V+1 = 3 values of S = 0, ±1, as in a generalized Blume-Capel model. We then consider transitions with two-component (N_OP = 2) pseudospins: the equilateral to centered rectangle (N_V = 3); the square to oblique polygon (N_V = 4); the triangle to oblique (N_V = 6) transitions; and finally the three-dimensional (3D) cubic to tetragonal transition (N_V = 3). The local mean-field solutions in two dimensions and 3D yield oriented domain-wall patterns as from continuous-variable strain dynamics, showing the discrete-variable models capture the essential ferroelastic texturings. Other related Hamiltonians illustrate that structural transitions in materials science can be the source of interesting spin models in statistical mechanics.
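
    As a rough illustration of the mean-field idea for the three-state (S = 0, ±1) pseudospin, the sketch below iterates the single-site self-consistency equation of a Blume-Capel-type model with a uniform Weiss field. It deliberately omits the long-range anisotropic St Venant interactions and spatial texturing treated in the paper; J, z and Delta are arbitrary illustrative parameters, not values from the work above.

```python
import numpy as np

J, z, Delta = 1.0, 4.0, 1.0     # coupling, coordination number, "austenite" level (illustrative)

def mean_pseudospin(T, m0=0.5, iters=2000):
    """Iterate m = 2 e^(-Delta/T) sinh(zJm/T) / (1 + 2 e^(-Delta/T) cosh(zJm/T))."""
    beta = 1.0 / T
    m = m0
    for _ in range(iters):
        h = z * J * m                               # uniform Weiss effective field
        num = 2.0 * np.exp(-beta * Delta) * np.sinh(beta * h)
        den = 1.0 + 2.0 * np.exp(-beta * Delta) * np.cosh(beta * h)
        m = num / den
    return m

# Low T: a non-zero "martensite" pseudospin; high T: the zero "austenite" solution.
for T in (0.5, 1.0, 1.5, 2.0, 3.0):
    print(f"T = {T:.1f}  ->  <S> = {mean_pseudospin(T):+.3f}")
```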

  2. Zero-Power Radio Device.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brocato, Robert W.

    This report describes an unpowered radio receiver capable of detecting and responding to weak signals transmitted from comparatively long distances. This radio receiver offers key advantages over a short-range zero-power radio receiver previously described in SAND2004-4610, A Zero-Power Radio Receiver. The device described here can be fabricated as an integrated circuit for use in portable wireless devices, as a wake-up circuit, or as a stand-alone receiver operating in conjunction with identification decoders or other electronics. It builds on key sub-components developed at Sandia National Laboratories over many years. It uses surface acoustic wave (SAW) filter technology. It uses custom component design to enable the efficient use of small aperture antennas. This device uses a key component, the pyroelectric demodulator, covered by Sandia-owned U.S. Patent 7397301, Pyroelectric Demodulating Detector [1]. This device is also described in Sandia-owned U.S. Patent 97266446, Zero Power Receiver [2].

  3. "Zeroing" in on mathematics in the monkey brain.

    PubMed

    Beran, Michael J

    2016-03-01

    A new study documented that monkeys showed selective neuronal responding to the concept of zero during a numerical task, and that there were two distinct classes of neurons that coded the absence of stimuli either through a discrete activation pattern (zero or not zero) or a continuous one for which zero was integrated with other numerosities in the relative rate of activity. These data indicate that monkeys, like humans, have a concept of zero that is part of their analog number line but that also may have unique properties compared to other numerosities.

  4. (In)Tolerable Zero Tolerance Policy

    ERIC Educational Resources Information Center

    Dickerson, Sean L.

    2014-01-01

    The spread of zero tolerance policies for school-based scenarios flourished under President William J. Clinton who wanted to close a loophole in the Guns-Free School Zones Act of 1990. Expansion in the coverage of zero tolerance policy to offenses outside the initial scope of weapon and drug offenses has led to a disproportional ratio of African…

  5. A synthetic zero air standard

    NASA Astrophysics Data System (ADS)

    Pearce, Ruth

    2016-04-01

    A Synthetic Zero Air Standard. R. E. Hill-Pearce, K. V. Resner, D. R. Worton, P. J. Brewer, The National Physical Laboratory, Teddington, Middlesex TW11 0LW, UK. We present work towards providing traceability for measurements of high-impact greenhouse gases identified by the World Meteorological Organisation (WMO) as critical for global monitoring. Standards for these components are required with challengingly low uncertainties to improve the quality assurance and control processes used for the global networks to better assess climate trends. Currently the WMO compatibility goals require reference standards with uncertainties of < 100 nmol mol-1 for CO2 (northern hemisphere) and < 2 nmol mol-1 for CH4 and CO. High-purity zero gas is required both as the balance gas in the preparation of reference standards and for baseline calibrations of instrumentation. Quantification of the amount fraction of the target components in the zero gas is a significant contributor to the uncertainty and is challenging due to the limited availability of reference standards at the amount fraction of the measurand and the limited analytical techniques with sufficient detection limits. A novel dilutor was used to blend NPL Primary Reference Gas Mixtures containing CO2, CH4 and CO at atmospheric amount fractions with a zero gas under test. Several mixtures were generated with nominal dilution ratios ranging from 2000:1 to 350:1. The baseline of two cavity ring-down spectrometers was calibrated using the zero gas under test after purification by oxidative removal of CO and hydrocarbons to < 1 nmol mol-1 (SAES PS15-GC50) followed by the removal of CO2 and water vapour to < 100 pmol mol-1 (SAES MC190). Using the standard addition method [1], we have quantified the amount fraction of CO, CO2, and CH4 in scrubbed whole air (Scott Marrin) and NPL synthetic zero air. This is the first synthetic zero air standard with a matrix of N2, O2 and Ar closely matching ambient composition with gravimetrically assigned
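
    The standard addition method cited above can be sketched in a few lines: spike the zero gas with known amount fractions of the target component, fit the instrument response against the added amount, and read the residual content of the zero gas from the magnitude of the x-intercept. The numbers below are invented for illustration and do not come from the NPL measurements.

```python
import numpy as np

added = np.array([0.0, 1.0, 2.0, 4.0, 8.0])          # added CO, nmol/mol (illustrative)
response = np.array([0.21, 1.18, 2.22, 4.25, 8.16])  # instrument reading (arbitrary units)

slope, intercept = np.polyfit(added, response, 1)    # linear least-squares fit
residual_content = intercept / slope                  # magnitude of the x-intercept
print(f"estimated CO in the zero gas: {residual_content:.2f} nmol/mol")
```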

  6. Effects of zero reference position on bladder pressure measurements: an observational study.

    PubMed

    Soler Morejón, Caridad De Dios; Lombardo, Tomás Ariel; Tamargo Barbeito, Teddy Osmin; Sandra, Barquín García

    2012-07-05

    Although the World Society for Abdominal Compartment Syndrome recommends in its guidelines the midaxillary line (MAL) as the zero reference level for intra-abdominal pressure (IAP) measurements, with the aim of standardizing the technique, evidence supporting this suggestion is scarce. The aim of this study is to determine whether the zero reference position influences bladder pressure measurements used as an estimate of IAP. The IAP of 100 surgical patients was measured during the first 24 h of admission to the surgical intensive care unit of General Calixto Garcia Hospital in Havana (Cuba) following laparotomy. The period was January 2009 to January 2010. The IAP was measured twice with a six-hour interval using the transurethral technique with a priming volume of 25 ml. IAP was first measured with the zero reference level placed at MAL (IAPMAL), followed by a second measurement at the level of the symphysis pubis (SP) after 3 minutes (IAPSP). Correlations were made between IAP and body mass index (BMI), type of surgery, gender, and age. Mean IAPMAL was 8.5 ± 2.8 mmHg vs. IAPSP 6.5 ± 2.8 mmHg (p < 0.0001). The bias between measurements was 2.0 ± 1.5, 95% confidence interval of 1.4 to 3.0, upper limit of 4.9, lower limit of -0.9, and a percentage error of 35.1%. IAPMAL was consistently higher than IAPSP regardless of the type of surgery. The BMI correlated with IAP values regardless of the zero reference level (R2 = 0.4 and 0.3 with IAPMAL and IAPSP respectively, p < 0.0001). The zero reference level has an important impact on IAP measurement in surgical patients after laparotomy and can potentially lead to over- or underestimation. Further anthropometric studies are needed with regard to the relative MAL and SP zero reference position in relation to the theoretical ideal reference level at the midpoint of the abdomen. Until better evidence is available, MAL remains the recommended zero reference position due to its best anatomical localization at the iliac crest.
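
    The agreement statistics quoted above (bias, limits of agreement and percentage error) follow the usual Bland-Altman-style calculation, sketched below on made-up paired readings; the arrays are illustrative and are not the study data.

```python
import numpy as np

# Invented paired IAP readings (mmHg) at the two zero reference levels.
iap_mal = np.array([8.0, 9.5, 7.0, 10.5, 8.5, 6.5, 11.0, 9.0])
iap_sp = np.array([6.0, 7.0, 5.5, 8.0, 6.5, 4.5, 9.0, 7.5])

diff = iap_mal - iap_sp
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)            # 95% limits of agreement
pct_error = 1.96 * sd / np.mean((iap_mal + iap_sp) / 2) * 100
print(f"bias = {bias:.1f} mmHg, LoA = ({loa[0]:.1f}, {loa[1]:.1f}) mmHg, "
      f"percentage error = {pct_error:.0f}%")
```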

  7. Effects of zero reference position on bladder pressure measurements: an observational study

    PubMed Central

    2012-01-01

    Background Although the World Society for Abdominal Compartment Syndrome recommends in its guidelines the midaxillary line (MAL) as the zero reference level for intra-abdominal pressure (IAP) measurements, with the aim of standardizing the technique, evidence supporting this suggestion is scarce. The aim of this study is to determine whether the zero reference position influences bladder pressure measurements used as an estimate of IAP. Methods The IAP of 100 surgical patients was measured during the first 24 h of admission to the surgical intensive care unit of General Calixto Garcia Hospital in Havana (Cuba) following laparotomy. The period was January 2009 to January 2010. The IAP was measured twice with a six-hour interval using the transurethral technique with a priming volume of 25 ml. IAP was first measured with the zero reference level placed at MAL (IAPMAL), followed by a second measurement at the level of the symphysis pubis (SP) after 3 minutes (IAPSP). Correlations were made between IAP and body mass index (BMI), type of surgery, gender, and age. Results Mean IAPMAL was 8.5 ± 2.8 mmHg vs. IAPSP 6.5 ± 2.8 mmHg (p < 0.0001). The bias between measurements was 2.0 ± 1.5, 95% confidence interval of 1.4 to 3.0, upper limit of 4.9, lower limit of -0.9, and a percentage error of 35.1%. IAPMAL was consistently higher than IAPSP regardless of the type of surgery. The BMI correlated with IAP values regardless of the zero reference level (R2 = 0.4 and 0.3 with IAPMAL and IAPSP respectively, p < 0.0001). Conclusions The zero reference level has an important impact on IAP measurement in surgical patients after laparotomy and can potentially lead to over- or underestimation. Further anthropometric studies are needed with regard to the relative MAL and SP zero reference position in relation to the theoretical ideal reference level at the midpoint of the abdomen. Until better evidence is available, MAL remains the recommended zero reference position due to its best anatomical localization at the iliac crest.

  8. Culmination of the inverse cascade - mean flow and fluctuations

    NASA Astrophysics Data System (ADS)

    Frishman, Anna; Herbert, Corentin

    2017-11-01

    An inverse cascade - energy transfer to progressively larger scales - is a salient feature of two-dimensional turbulence. If the cascade reaches the system scale, it terminates in the self-organization of the turbulence into a large-scale coherent structure, on top of small-scale fluctuations. A recent theoretical framework in which this coherent mean flow can be obtained will be discussed. Assuming that the quasi-linear approximation applies, that the forcing acts at small scales, and that the shear is strong, the theory gives an inverse relation between the average momentum flux and the mean shear rate. It will be argued that this relation is quite general, being independent of the dissipation mechanism and largely insensitive to the type of forcing. Furthermore, in the special case of a homogeneous forcing, the relation between the momentum flux and mean shear rate is completely determined by dimensional analysis and symmetry arguments. The subject of the average energy of the fluctuations will also be touched upon, focusing on a vortex mean flow. In contrast to the momentum flux, we find that the energy of the fluctuations is determined by zero modes of the mean-flow advection operator. Using an analytic derivation for the zero mo.

  9. Determinants of systemic zero-flow arterial pressure.

    PubMed

    Brunner, M J; Greene, A S; Sagawa, K; Shoukas, A A

    1983-09-01

    Thirteen pentobarbital-anesthetized dogs whose carotid sinuses were isolated and perfused at a constant pressure were placed on total cardiac bypass. With systemic venous pressure held at 0 mmHg (condition 1), arterial inflow was stopped for 20 s at intrasinus pressures of 50, 125, and 200 mmHg. Zero-flow arterial pressures under condition 1 were 16.2 +/- 1.3 (SE), 13.8 +/- 1.1, and 12.5 +/- 0.8 mmHg, respectively. In condition 2, the venous outflow tube was clamped at the instant of stopping the inflow, causing venous pressure to rise. The zero-flow arterial pressures were 19.7 +/- 1.3, 18.5 +/- 1.4, and 16.4 +/- 1.2 mmHg for intrasinus pressures of 50, 125, and 200 mmHg, respectively. At all levels of intrasinus pressure, the zero-flow arterial pressure in condition 2 was higher (P less than 0.005) than in condition 1. In seven dogs, at an intrasinus pressure of 125 mmHg, epinephrine increased the zero-flow arterial pressure by 3.0 mmHg, whereas hexamethonium and papaverine decreased the zero-flow arterial pressure by 2 mmHg. Reductions in the hematocrit from 52 to 11% resulted in statistically significant changes (P less than 0.01) in zero-flow arterial pressures. Thus zero-flow arterial pressure was found to be affected by changes in venous pressure, hematocrit, and vasomotor tone. The evidence does not support the literally interpreted concept of the vascular waterfall as the model for the finite arteriovenous pressure difference at zero flow.

  10. Classifying next-generation sequencing data using a zero-inflated Poisson model.

    PubMed

    Zhou, Yan; Wan, Xiang; Zhang, Baoxue; Tong, Tiejun

    2018-04-15

    With the development of high-throughput techniques, RNA-sequencing (RNA-seq) is becoming increasingly popular as an alternative for gene expression analysis, such as RNA profiling and classification. Identifying which type of disease a new patient has from RNA-seq data has been recognized as a vital problem in medical research. As RNA-seq data are discrete, statistical methods developed for classifying microarray data cannot be readily applied to RNA-seq data classification. In 2011, Witten proposed Poisson linear discriminant analysis (PLDA) to classify RNA-seq data. Note, however, that count datasets are frequently characterized by excess zeros in real RNA-seq or microRNA sequence data (e.g., when the sequencing depth is insufficient or for small RNAs of 18-30 nucleotides in length). Therefore, it is desirable to develop a new model to analyze RNA-seq data with an excess of zeros. In this paper, we propose a Zero-Inflated Poisson Logistic Discriminant Analysis (ZIPLDA) for RNA-seq data with an excess of zeros. The new method assumes that the data are from a mixture of two distributions: one is a point mass at zero, and the other follows a Poisson distribution. The model then links the probability of observing a zero to the gene mean and the sequencing depth through a logistic relation. Simulation studies show that the proposed method performs better than, or at least as well as, the existing methods in a wide range of settings. Two real datasets, a breast cancer RNA-seq dataset and a microRNA-seq dataset, are also analyzed, and the results agree with the simulations in showing that the proposed method outperforms existing competitors. The software is available at http://www.math.hkbu.edu.hk/∼tongt. xwan@comp.hkbu.edu.hk or tongt@hkbu.edu.hk. Supplementary data are available at Bioinformatics online.
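
    A minimal sketch of the zero-inflated Poisson idea is given below: each gene's count is either an excess zero (with probability pi) or a draw from a Poisson distribution, and a new sample is assigned to the class with the larger summed ZIP log-likelihood. This is a simplified stand-in with crude moment-style parameter estimates, not the published ZIPLDA implementation (which links pi to the gene mean and sequencing depth through a logistic model); the simulated class means and zero-inflation probability are arbitrary.

```python
import numpy as np
from scipy.stats import poisson

def zip_loglik(x, lam, pi):
    """Log-likelihood of counts x under ZIP(lam, pi), summed over genes."""
    p_zero = pi + (1.0 - pi) * np.exp(-lam)           # P(X = 0) under the mixture
    ll = np.where(x == 0,
                  np.log(p_zero),
                  np.log(1.0 - pi) + poisson.logpmf(x, lam))
    return ll.sum()

def fit_zip_moments(X, eps=1e-6):
    """Crude per-gene estimates of (lam, pi) from a samples-by-genes count matrix."""
    zero_frac = (X == 0).mean(axis=0)
    pos_counts = np.maximum((X > 0).sum(axis=0), 1)
    lam = np.maximum(X.sum(axis=0) / pos_counts, eps)          # mean of the non-zero part
    pi = np.clip(zero_frac - np.exp(-lam), eps, 1 - eps)       # excess zeros beyond Poisson
    return lam, pi

rng = np.random.default_rng(1)
lam_a, lam_b, pi_true = 2.0, 4.0, 0.3        # illustrative simulation settings

def simulate(n, lam):
    counts = rng.poisson(lam, size=(n, 200))
    return counts * (rng.random((n, 200)) > pi_true)           # inject excess zeros

Xa, Xb = simulate(50, lam_a), simulate(50, lam_b)
(lam_hat_a, pi_hat_a), (lam_hat_b, pi_hat_b) = fit_zip_moments(Xa), fit_zip_moments(Xb)

x_new = simulate(1, lam_b)[0]                                  # a new sample from class B
pred = "A" if zip_loglik(x_new, lam_hat_a, pi_hat_a) > zip_loglik(x_new, lam_hat_b, pi_hat_b) else "B"
print("predicted class:", pred)
```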

  11. Broken Ergodicity in Ideal, Homogeneous, Incompressible Turbulence

    NASA Technical Reports Server (NTRS)

    Morin, Lee; Shebalin, John; Fu, Terry; Nguyen, Phu; Shum, Victor

    2010-01-01

    We discuss the statistical mechanics of numerical models of ideal homogeneous, incompressible turbulence and their relevance for dissipative fluids and magnetofluids. These numerical models are based on Fourier series and the relevant statistical theory predicts that Fourier coefficients of fluid velocity and magnetic fields (if present) are zero-mean random variables. However, numerical simulations clearly show that certain coefficients have a non-zero mean value that can be very large compared to the associated standard deviation. We explain this phenomenon in terms of 'broken ergodicity', which is defined to occur when dynamical behavior does not match ensemble predictions on very long time-scales. We review the theoretical basis of broken ergodicity, apply it to 2-D and 3-D fluid and magnetohydrodynamic simulations of homogeneous turbulence, and show new results from simulations using GPU (graphical processing unit) computers.
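
    The diagnostic implied here - comparing the time-averaged mean of a coefficient with its standard deviation - can be mimicked on synthetic data. The toy below uses two AR(1) surrogate series (my own construction, not output of the ideal-turbulence simulations): an ergodic zero-mean coefficient gives |mean|/std close to zero over a long record, while a coefficient locked at a non-zero value does not.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1(n, mean, phi=0.99, sigma=0.1):
    """AR(1) series fluctuating about a prescribed mean (surrogate coefficient)."""
    x = np.empty(n)
    x[0] = mean
    for k in range(1, n):
        x[k] = mean + phi * (x[k - 1] - mean) + sigma * rng.standard_normal()
    return x

n = 200_000
ergodic = ar1(n, mean=0.0)       # zero-mean coefficient: time average shrinks toward 0
broken = ar1(n, mean=1.5)        # coefficient locked at a non-zero value

for name, series in (("ergodic", ergodic), ("broken", broken)):
    print(f"{name:8s} |mean|/std = {abs(series.mean()) / series.std():.2f}")
```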

  12. Zero-gravity quantity gaging system

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The Zero-Gravity Quantity Gaging System program is a technology development effort funded by NASA-LeRC and contracted by NASA-JSC to develop and evaluate zero-gravity quantity gaging system concepts suitable for application to large, on-orbit cryogenic oxygen and hydrogen tankage. The contract effective date was 28 May 1985. During performance of the program, 18 potential quantity gaging approaches were investigated for their merit and suitability for gaging two-phase cryogenic oxygen and hydrogen in zero-gravity conditions. These approaches were subjected to a comprehensive trade study and selection process, which found that the RF modal quantity gaging approach was the most suitable for both liquid oxygen and liquid hydrogen applications. This selection was made with NASA-JSC concurrence.

  13. Benefits and shortcomings of superselective transarterial embolization of renal tumors before zero ischemia laparoscopic partial nephrectomy.

    PubMed

    D'Urso, L; Simone, G; Rosso, R; Collura, D; Castelli, E; Giacobbe, A; Muto, G L; Comelli, S; Savio, D; Muto, G

    2014-12-01

    To report the feasibility, safety and effectiveness of "zero-ischemia" laparoscopic partial nephrectomy (LPN) following preoperative superselective transarterial embolization (STE) for clinical T1 renal tumors. We retrospectively reviewed perioperative data of 23 consecutive patients who underwent STE prior to LPN between March 2010 and November 2012 for an incidental clinical T1 renal mass. STE was performed by two experienced radiologists the day before surgery. Surgical procedures were performed in the extended flank position, transperitoneally, by a single surgeon. Mean patient age was 68 years (range 56-74) and mean tumor size was 3.5 cm (range 2.2-6.3 cm). STE was successfully completed in 16 patients 12-15 h before surgery. In 4 cases STE failed to provide complete occlusion of all feeding arteries, while in 3 cases the ischemic area was larger than expected. LPN was successfully completed in all patients but one, in whom open conversion was necessary; a "zero-ischemia" approach was achieved in 19/23 patients (82.6%), while hilar clamping was necessary in 4 cases, with a mean warm-ischemia time of 14.8 min (range 5-22). Mean operative time was 123 min (range 115-130) and mean intraoperative blood loss was 250 mL (range 20-450). No patient experienced postoperative acute renal failure and no patient developed new-onset stage IV chronic kidney disease at 1-year follow-up. STE is a viable option for performing "zero-ischemia" LPN at the beginning of the learning curve; however, hilar clamping was necessary to achieve a relatively bloodless field in 17.4% of cases. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Impact of including or excluding both-armed zero-event studies on using standard meta-analysis methods for rare event outcome: a simulation study

    PubMed Central

    Cheng, Ji; Pullenayegum, Eleanor; Marshall, John K; Thabane, Lehana

    2016-01-01

    Objectives There is no consensus on whether studies with no observed events in the treatment and control arms, the so-called both-armed zero-event studies, should be included in a meta-analysis of randomised controlled trials (RCTs). Current analytic approaches handled them differently depending on the choice of effect measures and authors' discretion. Our objective is to evaluate the impact of including or excluding both-armed zero-event (BA0E) studies in meta-analysis of RCTs with rare outcome events through a simulation study. Method We simulated 2500 data sets for different scenarios varying the parameters of baseline event rate, treatment effect and number of patients in each trial, and between-study variance. We evaluated the performance of commonly used pooling methods in classical meta-analysis—namely, Peto, Mantel-Haenszel with fixed-effects and random-effects models, and inverse variance method with fixed-effects and random-effects models—using bias, root mean square error, length of 95% CI and coverage. Results The overall performance of the approaches of including or excluding BA0E studies in meta-analysis varied according to the magnitude of true treatment effect. Including BA0E studies introduced very little bias, decreased mean square error, narrowed the 95% CI and increased the coverage when no true treatment effect existed. However, when a true treatment effect existed, the estimates from the approach of excluding BA0E studies led to smaller bias than including them. Among all evaluated methods, the Peto method excluding BA0E studies gave the least biased results when a true treatment effect existed. Conclusions We recommend including BA0E studies when treatment effects are unlikely, but excluding them when there is a decisive treatment effect. Providing results of including and excluding BA0E studies to assess the robustness of the pooled estimated effect is a sensible way to communicate the results of a meta-analysis when the treatment
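
    To make the include/exclude question concrete, the sketch below pools log odds ratios by inverse variance with a 0.5 continuity correction, one of the commonly used approaches compared in the abstract, with and without a both-armed zero-event (BA0E) study. The 2x2 tables are invented for illustration; under this method the BA0E study contributes a log odds ratio of zero with finite variance, so including it pulls the pooled estimate toward the null.

```python
import numpy as np

# Invented studies in the format (events_treatment, n_treatment, events_control, n_control);
# the last study is a both-armed zero-event (BA0E) study.
studies = [(1, 100, 3, 100), (0, 150, 2, 150), (2, 200, 5, 200), (0, 120, 0, 120)]

def pooled_log_or(tables):
    """Inverse-variance pooled log odds ratio with a 0.5 continuity correction."""
    log_or, var = [], []
    for a, n1, c, n2 in tables:
        b, d = n1 - a, n2 - c
        if 0 in (a, b, c, d):                         # continuity correction for zero cells
            a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
        log_or.append(np.log(a * d / (b * c)))
        var.append(1 / a + 1 / b + 1 / c + 1 / d)
    w = 1.0 / np.array(var)
    return float(np.sum(w * np.array(log_or)) / np.sum(w))

print("pooled log OR excluding BA0E:", round(pooled_log_or(studies[:-1]), 3))
print("pooled log OR including BA0E:", round(pooled_log_or(studies), 3))
```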

  15. Quantum random bit generation using energy fluctuations in stimulated Raman scattering.

    PubMed

    Bustard, Philip J; England, Duncan G; Nunn, Josh; Moffatt, Doug; Spanner, Michael; Lausten, Rune; Sussman, Benjamin J

    2013-12-02

    Random number sequences are a critical resource in modern information processing systems, with applications in cryptography, numerical simulation, and data sampling. We introduce a quantum random number generator based on the measurement of pulse energy quantum fluctuations in Stokes light generated by spontaneously-initiated stimulated Raman scattering. Bright Stokes pulse energy fluctuations up to five times the mean energy are measured with fast photodiodes and converted to unbiased random binary strings. Since the pulse energy is a continuous variable, multiple bits can be extracted from a single measurement. Our approach can be generalized to a wide range of Raman active materials; here we demonstrate a prototype using the optical phonon line in bulk diamond.
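
    One simple way to turn continuous pulse-energy measurements into an unbiased bit string - thresholding at the median and then applying von Neumann debiasing - is sketched below on simulated heavy-tailed energies. This is an assumed illustrative pipeline, not necessarily the extraction used in the experiment, which obtains multiple bits per pulse.

```python
import numpy as np

rng = np.random.default_rng(3)
# Surrogate pulse energies standing in for measured Stokes pulse energies.
energies = rng.exponential(scale=1.0, size=100_000)

raw_bits = (energies > np.median(energies)).astype(int)   # threshold at the median

# Von Neumann debiasing: inspect non-overlapping pairs, keep 01 -> 0 and 10 -> 1.
pairs = raw_bits[: len(raw_bits) // 2 * 2].reshape(-1, 2)
keep = pairs[:, 0] != pairs[:, 1]
bits = pairs[keep, 0]                                      # first bit of each kept pair

print(f"kept {bits.size} bits, fraction of ones = {bits.mean():.3f}")
```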

  16. Revenue Sufficiency and Reliability in a Zero Marginal Cost Future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany A.

    Features of existing wholesale electricity markets, such as administrative pricing rules and policy-based reliability standards, can distort market incentives, preventing generators from having sufficient opportunities to recover both fixed and variable costs. Moreover, these challenges can be amplified by other factors, including (1) inelastic demand resulting from a lack of price signal clarity, (2) low- or near-zero marginal cost generation, particularly arising from low natural gas fuel prices and variable generation (VG), such as wind and solar, and (3) the variability and uncertainty of this VG. As power systems begin to incorporate higher shares of VG, many questions arise about the suitability of the existing marginal-cost-based price formation, primarily within an energy-only market structure, to ensure the economic viability of resources that might be needed to provide system reliability. This article discusses these questions and provides a summary of completed and ongoing modelling-based work at the National Renewable Energy Laboratory to better understand the impacts of evolving power systems on reliability and revenue sufficiency.

  17. Sediment-transport experiments in zero-gravity

    NASA Technical Reports Server (NTRS)

    Iversen, James D.; Greeley, Ronald

    1987-01-01

    One of the important parameters in the analysis of sediment entrainment and transport is gravitational attraction. The availability of a laboratory in Earth orbit would afford an opportunity to conduct experiments in zero and variable gravity environments. Elimination of gravitational attraction as a factor in such experiments would enable other critical parameters (such as particle cohesion and aerodynamic forces) to be evaluated much more accurately. A Carousel Wind Tunnel (CWT) is proposed for use in conducting experiments concerning sediment particle entrainment and transport in a space station. In order to test the concept of this wind tunnel design, a one-third scale model CWT was constructed and calibrated. Experiments were conducted in the prototype to determine the feasibility of studying various aeolian processes and the results were compared with various numerical analyses. Several types of experiments appear to be feasible utilizing the proposed apparatus.

  18. Sediment-transport experiments in zero-gravity

    NASA Technical Reports Server (NTRS)

    Iversen, J. D.; Greeley, R.

    1986-01-01

    One of the important parameters in the analysis of sediment entrainment and transport is gravitational attraction. The availability of a laboratory in Earth orbit would afford an opportunity to conduct experiments in zero and variable gravity environments. Elimination of gravitational attraction as a factor in such experiments would enable other critical parameters (such as particle cohesion and aerodynamic forces) to be evaluated much more accurately. A Carousel Wind Tunnel (CWT) is proposed for use in conducting experiments concerning sediment particle entrainment and transport in a space station. In order to test the concept of this wind tunnel design, a one-third scale model CWT was constructed and calibrated. Experiments were conducted in the prototype to determine the feasibility of studying various aeolian processes and the results were compared with various numerical analyses. Several types of experiments appear to be feasible utilizing the proposed apparatus.

  19. Unstable AMOC during glacial intervals and millennial variability: The role of mean sea ice extent

    NASA Astrophysics Data System (ADS)

    Sévellec, Florian; Fedorov, Alexey V.

    2015-11-01

    A striking feature of paleoclimate records is the greater stability of the Holocene epoch relative to the preceding glacial interval, especially apparent in the North Atlantic region. In particular, strong irregular variability with an approximately 1500 yr period, known as the Dansgaard-Oeschger (D-O) events, punctuates the last glaciation, but is absent during the interglacial. Prevailing theories, modeling and data suggest that these events, seen as abrupt warming episodes in Greenland ice cores and sea surface temperature records in the North Atlantic, are linked to reorganizations of the Atlantic Meridional Overturning Circulation (AMOC). In this study, using a new low-order ocean model that reproduces a realistic power spectrum of millennial variability, we explore differences in the AMOC stability between glacial and interglacial intervals of the 100 kyr glacial cycle of the Late Pleistocene (1 kyr = 1000 yr). Previous modeling studies show that the edge of sea ice in the North Atlantic shifts southward during glacial intervals, moving the region of the North Atlantic Deep Water formation and the AMOC also southward. Here we demonstrate that, by shifting the AMOC with respect to the mean atmospheric precipitation field, such a displacement makes the system unstable, which explains chaotic millennial variability during the glacials and the persistence of stable ocean conditions during the interglacials.

  20. Numerical analysis of right-half plane zeros for a single-link manipulator. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Girvin, Douglas Lynn

    1992-01-01

    The purpose of this research is to further develop an understanding of how nonminimum phase zero location is affected by structural link design. As the demand for light-weight robots that can operate in a large workspace increases, the structural flexibility of the links becomes more of an issue in control problems. When the objective is to accurately position the tip while the robot is actuated at the base, the system is nonminimum phase. One important characteristic of nonminimum phase systems is system zeros in the right half of the Laplace plane. The ability to pick the location of these nonminimum phase zeros would give the designer a new freedom similar to pole placement. The research targets a single-link manipulator operating in the horizontal plane and modeled as an Euler-Bernoulli beam with pinned-free end conditions. Using transfer matrix theory, one can consider link designs that have variable cross-sections along the length of the beam. A FORTRAN program was developed to determine the location of poles and zeros given the system model. The program was used to confirm previous research on nonminimum phase systems and to develop a relationship for designing linearly tapered links. The method allows the designer to choose the location of the first pole and zero and then defines the appropriate taper to match the desired locations. With the pole and zero locations fixed, the designer can independently change the link's moment of inertia about its axis of rotation by adjusting the height of the beam. These results can be applied to inverse dynamic algorithms currently under development at Georgia Tech.
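
    The origin of right-half-plane zeros in noncollocated flexible-link models can be seen in a toy two-mode modal sum with mixed-sign residues, sketched below. The residues and modal frequencies are arbitrary illustrative values, and this is not the pinned-free Euler-Bernoulli transfer-matrix computation used in the thesis.

```python
import numpy as np

# Toy tip-sensing, base-actuated model: G(s) = r1/(s^2 + w1^2) + r2/(s^2 + w2^2).
# Mixed-sign modal residues, typical of noncollocated sensing, can place a zero
# in the right half of the Laplace plane.
w1, w2 = 1.0, 2.0       # modal frequencies (illustrative)
r1, r2 = 1.0, -2.0      # modal residues with opposite signs (illustrative)

# Numerator polynomial of the summed transfer function:
# r1*(s^2 + w2^2) + r2*(s^2 + w1^2)
num = r1 * np.array([1.0, 0.0, w2**2]) + r2 * np.array([1.0, 0.0, w1**2])
zeros = np.roots(num)
print("transfer-function zeros:", zeros)
print("right-half-plane zeros:", zeros[zeros.real > 0])
```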

  1. The Mean Distance to the nth Neighbour in a Uniform Distribution of Random Points: An Application of Probability Theory

    ERIC Educational Resources Information Center

    Bhattacharyya, Pratip; Chakrabarti, Bikas K.

    2008-01-01

    We study different ways of determining the mean distance (r[subscript n]) between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating…
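
    A quick Monte Carlo check of the quantity discussed above is sketched below: sample uniform points in a D-dimensional box, record the distance from the centre to the nth nearest point, and compare the average with the Poisson-process formula <r_n> = [Γ(n + 1/D)/Γ(n)]·[Γ(D/2 + 1)/(ρ π^(D/2))]^(1/D). The box size, density and number of trials are arbitrary, and edge effects are neglected because <r_n> is much smaller than the box.

```python
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(4)
D, n, rho, trials = 3, 2, 1000.0, 400        # dimension, neighbour rank, density, repeats
side = 1.0                                   # box side; rho = points / side**D
npts = int(rho * side**D)

dists = []
for _ in range(trials):
    pts = rng.uniform(0.0, side, size=(npts, D))
    r = np.linalg.norm(pts - 0.5 * side, axis=1)     # distances from the box centre
    dists.append(np.sort(r)[n - 1])                  # distance to the nth nearest point
mc = float(np.mean(dists))

theory = (gamma(n + 1.0 / D) / gamma(n)) * (gamma(D / 2 + 1) / (rho * np.pi**(D / 2)))**(1.0 / D)
print(f"Monte Carlo <r_{n}> = {mc:.4f}, Poisson-process formula = {theory:.4f}")
```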

  2. Laser cooling of molecules by zero-velocity selection and single spontaneous emission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ooi, C. H. Raymond

    2010-11-15

    A laser-cooling scheme for molecules is presented based on a repeated cycle of zero-velocity selection, deceleration, and irreversible accumulation. Although this scheme also employs a single spontaneous emission as in [Raymond Ooi, Marzlin, and Audretsch, Eur. Phys. J. D 22, 259 (2003)], in order to circumvent the difficulty of maintaining closed pumping cycles in molecules, there are two distinct features which make the cooling process of this scheme faster and more practical. First, the zero-velocity selection creates a narrow velocity-width population with zero mean velocity, such that no further deceleration (with many stimulated Raman adiabatic passage (STIRAP) pulses) is required. Second, only two STIRAP processes are required to decelerate the remaining hot molecular ensemble to create a finite population around zero velocity for the next cycle. We present a setup to realize the cooling process in one dimension with trapping in the other two dimensions using a Stark barrel. Numerical estimates of the cooling parameters and simulations with density matrix equations using OH molecules show the applicability of the cooling scheme. For a gas at temperature T=1 K, the estimated cooling time is only 2 ms, with phase-space density increased by about 30 times. The possibility of extension to three-dimensional cooling via thermalization is also discussed.

  3. Zero-point term and quantum effects in the Johnson noise of resistors: a critical appraisal

    NASA Astrophysics Data System (ADS)

    Kish, Laszlo B.; Niklasson, Gunnar A.; Granqvist, Claes G.

    2016-05-01

    There is a longstanding debate about the zero-point term in the Johnson noise voltage of a resistor. This term originates from a quantum-theoretical treatment of the fluctuation-dissipation theorem (FDT). Is the zero-point term really there, or is it only an experimental artifact, due to the uncertainty principle, for phase-sensitive amplifiers? Could it be removed by renormalization of theories? We discuss some historical measurement schemes that do not lead to the effect predicted by the FDT, and we analyse new features that emerge when the consequences of the zero-point term are measured via the mean energy and force in a capacitor shunting the resistor. If these measurements verify the existence of a zero-point term in the noise, then two types of perpetual motion machines can be constructed. Further investigation with the same approach shows that, in the quantum limit, the Johnson-Nyquist formula is also invalid under general conditions even though it is valid for a resistor-antenna system. Therefore we conclude that in a satisfactory quantum theory of the Johnson noise, the FDT must, as a minimum, include also the measurement system used to evaluate the observed quantities. Issues concerning the zero-point term may also have implications for phenomena in advanced nanotechnology.
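
    For orientation, the quantity under debate can be evaluated numerically from the quantum fluctuation-dissipation expression for the one-sided voltage spectral density, with and without the zero-point term, alongside the classical 4kTR limit. The resistor value, temperature and frequencies below are arbitrary illustrative choices; the sketch takes no position on which form is physically observable, which is the question the paper addresses.

```python
import numpy as np

# One-sided Johnson noise voltage spectral density:
#   with zero-point term:  S_V(f) = 4 R h f [1/2 + 1/(exp(hf/kT) - 1)]
#   Planck form only:      S_V(f) = 4 R h f / (exp(hf/kT) - 1)
#   classical limit:       S_V    = 4 k T R
h, kB = 6.62607015e-34, 1.380649e-23
R, T = 1e3, 300.0                                 # 1 kOhm resistor at 300 K (illustrative)

for f in (1e9, 1e11, 1e13, 1e15):                 # 1 GHz to 1 PHz
    x = h * f / (kB * T)
    s_zp = 4 * R * h * f * (0.5 + 1.0 / np.expm1(x))
    s_planck = 4 * R * h * f / np.expm1(x)
    s_classical = 4 * kB * T * R
    print(f"f = {f:.0e} Hz: with ZP {s_zp:.2e}, Planck {s_planck:.2e}, "
          f"classical {s_classical:.2e} V^2/Hz")
```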

  4. Zero Benefit: Estimating the Effect of Zero Tolerance Discipline Polices on Racial Disparities in School Discipline

    ERIC Educational Resources Information Center

    Hoffman, Stephen

    2014-01-01

    This study estimates the effect of zero tolerance disciplinary policies on racial disparities in school discipline in an urban district. Capitalizing on a natural experiment, the abrupt expansion of zero tolerance discipline policies in a mid-sized urban school district, the study demonstrates that Black students in the district were…

  5. Pilot investigation - Nominal crew induced forces in zero-g

    NASA Technical Reports Server (NTRS)

    Klute, Glenn K.

    1992-01-01

    This report presents pilot-study data of test subject forces induced by intravehicular activities such as push-offs and landings with both hands and feet. Five subjects participated in this investigation. Three orthogonal force axes were measured in the NASA KC-135 research aircraft's 'zero-g' environment. The largest forces were induced during vertical foot push-offs, including one of 534 newtons (120 lbs). The mean vertical foot push-off was 311 newtons (70 lbs). The vertical hand push-off forces were also relatively large, including one of 267 newtons (60 lbs) with a mean of 151 newtons (34 lbs). Forces of these magnitudes would result in a Shuttle gravity environment of about 1 x 10(exp -4) g's.

  6. The pH-dependent surface charging and points of zero charge: V. Update.

    PubMed

    Kosmulski, Marek

    2011-01-01

    The points of zero charge (PZC) and isoelectric points (IEP) from the recent literature are discussed. This study is an update of the previous compilation [M. Kosmulski, Surface Charging and Points of Zero Charge, CRC, Boca Raton, FL, 2009] and of its previous update [J. Colloid Interface Sci. 337 (2009) 439]. In several recent publications, the terms PZC/IEP have been used outside their usual meaning. Only the PZC/IEP obtained according to the methods recommended by the present author are reported in this paper, and the other results are ignored. PZC/IEP of albite, sepiolite, and sericite, which have not been studied before, became available over the past 2 years. Copyright © 2010 Elsevier Inc. All rights reserved.

  7. [Modelling the effect of local climatic variability on dengue transmission in Medellin (Colombia) by means of time series analysis].

    PubMed

    Rúa-Uribe, Guillermo L; Suárez-Acosta, Carolina; Chauca, José; Ventosilla, Palmira; Almanza, Rita

    2013-09-01

    Dengue fever is a vector-borne disease with a major impact on public health, and its transmission is influenced by entomological, sociocultural and economic factors. Additionally, climate variability plays an important role in the transmission dynamics. A broad scientific consensus indicates that the strong association between climatic variables and the disease could be used to develop models to explain its incidence. The aim was to develop a model that provides a better understanding of dengue transmission dynamics in Medellin and predicts increases in the incidence of the disease. The incidence of dengue fever was used as the dependent variable, and weekly climatic factors (maximum, mean and minimum temperature, relative humidity and precipitation) as independent variables. Expert Modeler was used to develop a model to better explain the behavior of the disease. Climatic variables with a significant association with the dependent variable were selected through ARIMA models. The model explains 34% of the observed variability. Precipitation was the climatic variable showing a statistically significant association with the incidence of dengue fever, with a 20-week lag. In Medellin, the transmission of dengue fever was influenced by climate variability, especially precipitation. The strong association between dengue fever and precipitation allowed the construction of a model to help understand dengue transmission dynamics. This information will be useful for developing appropriate and timely strategies for dengue control.
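
    As a rough illustration of how such a lagged climate-incidence association can be screened (this is not the Expert Modeler/ARIMA workflow used in the study, and the data below are synthetic), one can scan the correlation between incidence and precipitation shifted back by 1-30 weeks:

        # Synthetic weekly series with a built-in 20-week lag between rainfall and incidence.
        import numpy as np

        rng = np.random.default_rng(1)
        weeks = 520
        precip = rng.gamma(shape=2.0, scale=10.0, size=weeks)          # weekly rainfall (mm)
        lag_true = 20
        incidence = 5 + 0.3 * np.roll(precip, lag_true) + rng.normal(0, 2, weeks)
        incidence[:lag_true] = 5 + rng.normal(0, 2, lag_true)          # no valid lagged driver yet

        def lagged_corr(x, y, lag):
            """Correlation of x shifted back by `lag` weeks with y."""
            return np.corrcoef(x[:-lag], y[lag:])[0, 1]

        lags = range(1, 31)
        corrs = [lagged_corr(precip, incidence, L) for L in lags]
        best = max(zip(lags, corrs), key=lambda t: t[1])
        print("best lag:", best[0], "corr:", round(best[1], 2))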

  8. Random Predictor Models for Rigorous Uncertainty Quantification: Part 2

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. In contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.

  9. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. In contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.

  10. Means, Variability and Trends of Precipitation in the Global Climate as Determined by the 25-year GEWEX/GPCP Data Set

    NASA Technical Reports Server (NTRS)

    Adler, R. F.; Gu, G.; Curtis, S.; Huffman, G. J.

    2004-01-01

    The Global Precipitation Climatology Project (GPCP) 25-year precipitation data set is used as a basis to evaluate the mean state, variability and trends (or inter-decadal changes) of global and regional scales of precipitation. The uncertainties of these characteristics of the data set are evaluated by examination of other, parallel data sets and examination of shorter periods with higher quality data (e.g., TRMM). The global and regional means are assessed for uncertainty by comparing with other satellite and gauge data sets, both globally and regionally. The GPCP global mean of 2.6 mm/day is divided into values of ocean and land and major latitude bands (Tropics, mid-latitudes, etc.). Seasonal variations globally and by region are shown and uncertainties estimated. The variability of precipitation year-to-year is shown to be related to ENSO variations and volcanoes and is evaluated in relation to the overall lack of a significant global trend. The GPCP data set necessarily has a heterogeneous time series of input data sources, so part of the assessment described above is to test the initial results for potential influence by major data boundaries in the record.

  11. Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin

    2006-01-01

    This study presents the random-effects rating scale model (RE-RSM) which takes into account randomness in the thresholds over persons by treating them as random-effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…

  12. Zero-block mode decision algorithm for H.264/AVC.

    PubMed

    Lee, Yu-Ming; Lin, Yinyi

    2009-03-01

    In the previous paper, we proposed a zero-block intermode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4 x 4 DCT coefficients between the current macroblock and the co-located macroblock. The proposed algorithm can achieve significant improvement in computation, but the computation performance is limited for high bit-rate coding. To improve computation efficiency, in this paper, we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation and incorporates two adequate decision methods into semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to the intramode prediction in the P frame. The enhanced zero-block decision algorithm yields an average reduction of 27% in total encoding time compared with the original zero-block decision algorithm.
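
    A much simplified sketch of the zero-block idea is shown below: it counts how many 4x4 blocks of a residual macroblock quantize to all zeros, using scipy's floating-point DCT-II in place of the H.264 integer transform and a single illustrative quantization step (both are assumptions, not the paper's early-detection method).

        # Count 4x4 blocks of a 16x16 residual whose quantized transform coefficients are all zero.
        import numpy as np
        from scipy.fft import dctn

        def count_zero_blocks(residual_16x16, qstep=8.0):
            zero_blocks = 0
            for by in range(0, 16, 4):
                for bx in range(0, 16, 4):
                    block = residual_16x16[by:by + 4, bx:bx + 4]
                    coeffs = dctn(block, norm='ortho')        # stand-in for the integer transform
                    quantized = np.round(coeffs / qstep)
                    if not quantized.any():
                        zero_blocks += 1
            return zero_blocks

        rng = np.random.default_rng(0)
        residual = rng.normal(0.0, 1.5, size=(16, 16))   # small residual -> many zero blocks
        print(count_zero_blocks(residual))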

  13. Absolute Zero.

    ERIC Educational Resources Information Center

    Jones, Rebecca

    1997-01-01

    So far the courts have supported most schools' zero-tolerance policies--even those banning toy weapons, over-the-counter drugs, and unseemly conduct. However, wide-ranging get-tough policies can draw criticism. Policy experts advise school boards to ask the community, decide what people want, allow some wiggle room, create an appeals process,…

  14. Net Zero Ft. Carson: making a greener Army base

    EPA Science Inventory

    The US Army Net Zero program seeks to reduce the energy, water, and waste footprint of bases. Seventeen pilot bases aim to achieve 100% renewable energy, zero depletion of water resources, and/or zero waste to landfill by 2020. Some bases are pursuing Net Zero in a single secto...

  15. The Syntax of Zero in African American Relative Clauses

    ERIC Educational Resources Information Center

    Sistrunk, Walter

    2012-01-01

    African American relative clauses are distinct from Standard English relative clauses in allowing zero subject relatives and zero appositive relatives. Pesetsky and Torrego's (2003) (P&T) analysis of the subject-nonsubject asymmetry in relative clauses accounts for zero object relatives while restricting zero subject relatives. P&T…

  16. Small violations of Bell inequalities for multipartite pure random states

    NASA Astrophysics Data System (ADS)

    Drumond, Raphael C.; Duarte, Cristhiano; Oliveira, Roberto I.

    2018-05-01

    For any finite number of parts, measurements, and outcomes in a Bell scenario, we estimate the probability of random N-qudit pure states to substantially violate any Bell inequality with uniformly bounded coefficients. We prove that under some conditions on the local dimension, the probability to find any significant amount of violation goes to zero exponentially fast as the number of parts goes to infinity. In addition, we also prove that if the number of parts is at least 3, this probability also goes to zero as the local Hilbert space dimension goes to infinity.

  17. Criticality of the mean-field spin-boson model: boson state truncation and its scaling analysis

    NASA Astrophysics Data System (ADS)

    Hou, Y.-H.; Tong, N.-H.

    2010-11-01

    The spin-boson model has nontrivial quantum phase transitions at zero temperature induced by the spin-boson coupling. The bosonic numerical renormalization group (BNRG) study of the critical exponents β and δ of this model is hampered by the effects of boson Hilbert space truncation. Here we analyze the mean-field spin-boson model to figure out the scaling behavior of magnetization under the cutoff of boson states N_b. We find that the truncation is a strong relevant operator with respect to the Gaussian fixed point in 0 < s < 1/2 and incurs the deviation of the exponents from the classical values. The magnetization at zero bias near the critical point is described by a generalized homogeneous function (GHF) of two variables τ = α - α_c and x = 1/N_b. The universal function has a double-power form and the powers are obtained analytically as well as numerically. Similarly, m(α = α_c) is found to be a GHF of γ and x. In the regime s > 1/2, the truncation produces no effect. Implications of these findings to the BNRG study are discussed.

  18. MODELING THE TIME VARIABILITY OF SDSS STRIPE 82 QUASARS AS A DAMPED RANDOM WALK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacLeod, C. L.; Ivezic, Z.; Bullock, E.

    2010-10-01

    We model the time variability of ~9000 spectroscopically confirmed quasars in SDSS Stripe 82 as a damped random walk (DRW). Using 2.7 million photometric measurements collected over 10 yr, we confirm the results of Kelly et al. and Kozlowski et al. that this model can explain quasar light curves at an impressive fidelity level (0.01-0.02 mag). The DRW model provides a simple, fast (O(N) for N data points), and powerful statistical description of quasar light curves by a characteristic timescale (τ) and an asymptotic rms variability on long timescales (SF_∞). We searched for correlations between these two variability parameters and physical parameters such as luminosity and black hole mass, and rest-frame wavelength. Our analysis shows SF_∞ to increase with decreasing luminosity and rest-frame wavelength as observed previously, and without a correlation with redshift. We find a correlation between SF_∞ and black hole mass with a power-law index of 0.18 ± 0.03, independent of the anti-correlation with luminosity. We find that τ increases with increasing wavelength with a power-law index of 0.17, remains nearly constant with redshift and luminosity, and increases with increasing black hole mass with a power-law index of 0.21 ± 0.07. The amplitude of variability is anti-correlated with the Eddington ratio, which suggests a scenario where optical fluctuations are tied to variations in the accretion rate. However, we find an additional dependence on luminosity and/or black hole mass that cannot be explained by the trend with Eddington ratio. The radio-loudest quasars have systematically larger variability amplitudes by about 30%, when corrected for the other observed trends, while the distribution of their characteristic timescale is indistinguishable from that of the full sample. We do not detect any statistically robust differences in the characteristic timescale and variability amplitude between
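
    A DRW light curve with a given τ and SF_∞ is straightforward to simulate, since the process is an Ornstein-Uhlenbeck process with stationary variance SF_∞²/2. The sketch below is illustrative only (cadence and parameter values are assumptions); it uses the exact discrete update so the time sampling can be irregular, as in Stripe 82.

        # Minimal damped-random-walk (Ornstein-Uhlenbeck) light-curve simulator.
        import numpy as np

        def simulate_drw(times, tau, sf_inf, mean_mag=19.0, seed=0):
            rng = np.random.default_rng(seed)
            var = sf_inf**2 / 2.0                       # stationary variance
            mag = np.empty_like(times)
            mag[0] = mean_mag + rng.normal(0.0, np.sqrt(var))
            for i in range(1, len(times)):
                dt = times[i] - times[i - 1]
                decay = np.exp(-dt / tau)               # exact OU update for irregular sampling
                mag[i] = (mean_mag + (mag[i - 1] - mean_mag) * decay
                          + rng.normal(0.0, np.sqrt(var * (1.0 - decay**2))))
            return mag

        t = np.sort(np.random.default_rng(1).uniform(0.0, 3650.0, 300))   # ~10 yr, irregular cadence
        lc = simulate_drw(t, tau=200.0, sf_inf=0.2)
        print(lc.std())   # should be close to SF_inf / sqrt(2) ~ 0.14 mag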

  19. Mean deviation coupling synchronous control for multiple motors via second-order adaptive sliding mode control.

    PubMed

    Li, Lebao; Sun, Lingling; Zhang, Shengzhou

    2016-05-01

    A new mean deviation coupling synchronization control strategy is developed for multiple motor control systems, which can guarantee the synchronization performance of multiple motor control systems and reduce the complexity of the control structure as the number of motors increases. The mean deviation coupling synchronization control architecture combining a second-order adaptive sliding mode control (SOASMC) approach is proposed, which can improve the synchronization control precision of multiple motor control systems and make speed tracking errors, mean speed errors of each motor and speed synchronization errors converge to zero rapidly. The proposed control scheme is robust to parameter variations and random external disturbances and can alleviate the chattering phenomenon. Moreover, an adaptive law is employed to estimate the unknown bound of uncertainty, which is obtained in the sense of the Lyapunov stability theorem to minimize the control effort. Performance comparisons with master-slave control, relative coupling control, ring coupling control, conventional PI control and SMC are investigated on a four-motor synchronization control system. Extensive comparative results are given to show the good performance of the proposed control scheme. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
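
    The abstract does not give the exact error definitions, but the basic signals in a mean-deviation coupling scheme can be sketched as follows (an assumed form, not the paper's control law): each motor carries a tracking error against the reference speed and a synchronization error against the mean speed of all motors.

        # Hedged sketch of mean-deviation coupling error signals for multiple motors.
        import numpy as np

        def coupling_errors(speeds, ref_speed):
            speeds = np.asarray(speeds, dtype=float)
            tracking = ref_speed - speeds              # per-motor speed tracking error
            sync = speeds - speeds.mean()              # deviation from the ensemble mean speed
            return tracking, sync

        tracking, sync = coupling_errors([99.2, 100.5, 100.1, 98.9], ref_speed=100.0)
        print("tracking:", tracking)
        print("sync:    ", sync)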

  20. Tangent linear super-parameterization: attributable, decomposable moist processes for tropical variability studies

    NASA Astrophysics Data System (ADS)

    Mapes, B. E.; Kelly, P.; Song, S.; Hu, I. K.; Kuang, Z.

    2015-12-01

    An economical 10-layer global primitive equation solver is driven by time-independent forcing terms, derived from a training process, to produce a realistic eddying basic state with a tracer q trained to act like water vapor mixing ratio. Within this basic state, linearized anomaly moist physics in the column are applied in the form of a 20x20 matrix. The control matrix was derived from the results of Kuang (2010, 2012), who fitted a linear response function from a cloud-resolving model in a state of deep convecting equilibrium. By editing this matrix in physical space and eigenspace, scaling and clipping its action, and optionally adding terms for processes that do not conserve moist static energy (radiation, surface fluxes), we can decompose and explain the model's diverse moist-process coupled variability. Rectified effects of this variability on the general circulation and climate, even in strictly zero-mean centered anomaly physics cases, are also sometimes surprising.

  1. Feasibility of zero or near zero fluoroscopy during catheter ablation procedures.

    PubMed

    Haegeli, Laurent M; Stutz, Linda; Mohsen, Mohammed; Wolber, Thomas; Brunckhorst, Corinna; On, Chol-Jun; Duru, Firat

    2018-04-03

    Awareness of risks associated with radiation exposure to patients and medical staff has significantly increased. It has been reported before that the use of an advanced three-dimensional electro-anatomical mapping (EAM) system significantly reduces fluoroscopy time; however, this study aimed for zero or near zero fluoroscopy ablation to assess its feasibility and safety in ablation of atrial fibrillation (AF) and other tachyarrhythmias in a "real world" experience of a single tertiary care center. This was a single-center study where ablation procedures were attempted without fluoroscopy in 34 consecutive patients with different tachyarrhythmias under the support of an EAM system. When transseptal puncture (TSP) was needed, it was attempted under the guidance of intracardiac echocardiography (ICE). Among the 34 patients consecutively enrolled in this study, 28 (82.4%) patients were referred for radiofrequency ablation (RFA) of AF, 3 (8.8%) patients for ablation of right ventricular outflow tract (RVOT) ventricular extrasystole (VES), 1 (2.9%) patient for ablation of atrioventricular nodal reentry tachycardia (AVNRT), and 2 (5.9%) patients for typical atrial flutter ablation. In 21 (62%) patients the entire procedure was carried out without the use of fluoroscopy. Among the 28 AF patients, 15 (54%) patients underwent ablation without the use of fluoroscopy; among these 15 patients, 10 (67%) patients required TSP under ICE guidance, while in 5 (33%) patients the catheters were introduced into the left atrium through a patent foramen ovale. In 13 AF patients, fluoroscopy was only required for double TSP. The total procedure time of AF ablation was 130 ± 50 min. All patients referred for atrial flutter, AVNRT, and VES of the RVOT ablation did not require any fluoroscopy. This study demonstrates the feasibility of zero or near zero fluoroscopy procedure including TSP with the support of EAM and ICE guidance in a "real world" experience of a single tertiary care center. When fluoroscopy was

  2. Zero Energy Districts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polly, Benjamin J

    This presentation shows how NREL is approaching Zero Energy Districts, including key opportunities, design strategies, and master planning concepts. The presentation also covers URBANopt, an advanced analytical platform for districts that is being developed by NREL.

  3. Work distributions for random sudden quantum quenches

    NASA Astrophysics Data System (ADS)

    Łobejko, Marcin; Łuczka, Jerzy; Talkner, Peter

    2017-05-01

    The statistics of work performed on a system by a sudden random quench is investigated. Considering systems with finite dimensional Hilbert spaces, we model a sudden random quench by randomly choosing elements from a Gaussian unitary ensemble (GUE) consisting of Hermitian matrices with identically Gaussian-distributed matrix elements. A probability density function (pdf) of work in terms of initial and final energy distributions is derived and evaluated for a two-level system. Explicit results are obtained for quenches with a sharply given initial Hamiltonian, while the work pdfs for quenches between Hamiltonians from two independent GUEs can only be determined in explicit form in the limits of zero and infinite temperature. The same work distribution as for a sudden random quench is obtained for an adiabatic, i.e., infinitely slow, protocol connecting the same initial and final Hamiltonians.
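
    For the two-level case the two-measurement work statistics can be reproduced in a few lines. The sketch below rests on illustrative assumptions (2x2 GUE-like draws with unit scale and a thermal initial state of the pre-quench Hamiltonian): it lists the four possible work values E1_j - E0_i together with their probabilities p_i |<j|i>|^2.

        # Two-measurement work distribution for a sudden quench between random 2x2 Hamiltonians.
        import numpy as np

        def gue(n, rng):
            a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
            return (a + a.conj().T) / 2.0              # Hermitian, GUE-like

        def work_distribution(beta, rng):
            h0, h1 = gue(2, rng), gue(2, rng)          # pre- and post-quench Hamiltonians
            e0, v0 = np.linalg.eigh(h0)
            e1, v1 = np.linalg.eigh(h1)
            p = np.exp(-beta * e0); p /= p.sum()       # thermal populations of H0
            overlap = np.abs(v1.conj().T @ v0) ** 2    # |<j|i>|^2
            works, probs = [], []
            for i in range(2):
                for j in range(2):
                    works.append(e1[j] - e0[i])
                    probs.append(p[i] * overlap[j, i])
            return np.array(works), np.array(probs)

        w, pr = work_distribution(beta=1.0, rng=np.random.default_rng(0))
        print(w, pr, pr.sum())                          # probabilities sum to 1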

  4. Unstable AMOC during glacial intervals and millennial variability: The role of mean sea ice extent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sevellec, Florian; Fedorov, Alexey V.

    A striking feature of paleoclimate records is the greater stability of the Holocene epoch relative to the preceding glacial interval, especially apparent in the North Atlantic region. In particular, strong irregular variability with an approximately 1500 yr period, known as the Dansgaard-Oeschger (D-O) events, punctuates the last glaciation, but is absent during the interglacial. Prevailing theories, modeling and data suggest that these events, seen as abrupt warming episodes in Greenland ice cores and sea surface temperature records in the North Atlantic, are linked to reorganizations of the Atlantic Meridional Overturning Circulation (AMOC). In this study, using a new low-order ocean model that reproduces a realistic power spectrum of millennial variability, we explore differences in the AMOC stability between glacial and interglacial intervals of the 100 kyr glacial cycle of the Late Pleistocene (1 kyr = 1000 yr). Previous modeling studies show that the edge of sea ice in the North Atlantic shifts southward during glacial intervals, moving the region of the North Atlantic Deep Water formation and the AMOC also southward. Finally, here we demonstrate that, by shifting the AMOC with respect to the mean atmospheric precipitation field, such a displacement makes the system unstable, which explains chaotic millennial variability during the glacials and the persistence of stable ocean conditions during the interglacials.

  5. Unstable AMOC during glacial intervals and millennial variability: The role of mean sea ice extent

    DOE PAGES

    Sevellec, Florian; Fedorov, Alexey V.

    2015-11-01

    A striking feature of paleoclimate records is the greater stability of the Holocene epoch relative to the preceding glacial interval, especially apparent in the North Atlantic region. In particular, strong irregular variability with an approximately 1500 yr period, known as the Dansgaard-Oeschger (D-O) events, punctuates the last glaciation, but is absent during the interglacial. Prevailing theories, modeling and data suggest that these events, seen as abrupt warming episodes in Greenland ice cores and sea surface temperature records in the North Atlantic, are linked to reorganizations of the Atlantic Meridional Overturning Circulation (AMOC). In this study, using a new low-order ocean model that reproduces a realistic power spectrum of millennial variability, we explore differences in the AMOC stability between glacial and interglacial intervals of the 100 kyr glacial cycle of the Late Pleistocene (1 kyr = 1000 yr). Previous modeling studies show that the edge of sea ice in the North Atlantic shifts southward during glacial intervals, moving the region of the North Atlantic Deep Water formation and the AMOC also southward. Finally, here we demonstrate that, by shifting the AMOC with respect to the mean atmospheric precipitation field, such a displacement makes the system unstable, which explains chaotic millennial variability during the glacials and the persistence of stable ocean conditions during the interglacials.

  6. Majorana zero modes in superconductor-semiconductor heterostructures

    NASA Astrophysics Data System (ADS)

    Lutchyn, R. M.; Bakkers, E. P. A. M.; Kouwenhoven, L. P.; Krogstrup, P.; Marcus, C. M.; Oreg, Y.

    2018-05-01

    Realizing topological superconductivity and Majorana zero modes in the laboratory is a major goal in condensed-matter physics. In this Review, we survey the current status of this rapidly developing field, focusing on proposals for the realization of topological superconductivity in semiconductor-superconductor heterostructures. We examine materials science progress in growing InAs and InSb semiconductor nanowires and characterizing these systems. We then discuss the observation of robust signatures of Majorana zero modes in recent experiments, paying particular attention to zero-bias tunnelling conduction measurements and Coulomb blockade experiments. We also outline several next-generation experiments probing exotic properties of Majorana zero modes, including fusion rules and non-Abelian exchange statistics. Finally, we discuss prospects for implementing Majorana-based topological quantum computation.

  7. A test of inflated zeros for Poisson regression models.

    PubMed

    He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan

    2017-01-01

    Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require a zero-inflated Poisson model to perform the test. Simulation studies show that, when compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
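
    The test proposed in the paper is not reproduced here, but the underlying question, whether a count sample contains more zeros than a Poisson model with the same mean would produce, can be illustrated with a simple parametric-bootstrap check (a naive stand-in, with all parameters assumed):

        # Parametric-bootstrap check for excess zeros relative to a Poisson model.
        import numpy as np

        def excess_zero_pvalue(counts, n_sim=10000, seed=0):
            rng = np.random.default_rng(seed)
            counts = np.asarray(counts)
            lam = counts.mean()
            observed_zeros = np.sum(counts == 0)
            sim = rng.poisson(lam, size=(n_sim, counts.size))
            sim_zeros = (sim == 0).sum(axis=1)
            # one-sided p-value: probability a Poisson sample shows at least as many zeros
            return (np.sum(sim_zeros >= observed_zeros) + 1) / (n_sim + 1)

        rng = np.random.default_rng(1)
        zip_sample = np.where(rng.random(500) < 0.3, 0, rng.poisson(2.0, 500))  # 30% structural zeros
        print(excess_zero_pvalue(zip_sample))   # small p-value suggests zero inflation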

  8. Zero leakage separable and semipermanent ducting joints

    NASA Technical Reports Server (NTRS)

    Mischel, H. T.

    1973-01-01

    A study program has been conducted to explore new methods of achieving zero leakage, separable and semipermanent, ducting joints for space flight vehicles. The study consisted of a search of literature of existing zero leakage methods, the generation of concepts of new methods of achieving the desired zero leakage criteria and the development of detailed analysis and design of a selected concept. Other techniques of leak detection were explored with a view toward improving this area.

  9. The probability of false positives in zero-dimensional analyses of one-dimensional kinematic, force and EMG trajectories.

    PubMed

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2016-06-14

    A false positive is the mistake of inferring an effect when none exists, and although α controls the false positive (Type I error) rate in classical hypothesis testing, a given α value is accurate only if the underlying model of randomness appropriately reflects experimentally observed variance. Hypotheses pertaining to one-dimensional (1D) (e.g. time-varying) biomechanical trajectories are most often tested using a traditional zero-dimensional (0D) Gaussian model of randomness, but variance in these datasets is clearly 1D. The purpose of this study was to determine the likelihood that analyzing smooth 1D data with a 0D model of variance will produce false positives. We first used random field theory (RFT) to predict the probability of false positives in 0D analyses. We then validated RFT predictions via numerical simulations of smooth Gaussian 1D trajectories. Results showed that, across a range of public kinematic, force/moment and EMG datasets, the median false positive rate was 0.382 and not the assumed α=0.05, even for a simple two-sample t test involving N=10 trajectories per group. The median false positive rate for experiments involving three-component vector trajectories was p=0.764. This rate increased to p=0.945 for two three-component vector trajectories, and to p=0.999 for six three-component vectors. This implies that experiments involving vector trajectories have a high probability of yielding 0D statistical significance when there is, in fact, no 1D effect. Either (a) explicit a priori identification of 0D variables or (b) adoption of 1D methods can more tightly control α. Copyright © 2016 Elsevier Ltd. All rights reserved.
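
    The core result is easy to reproduce approximately without the random field theory machinery: generate null, smooth 1D trajectories, run pointwise 0D t tests, and count how often any node reaches p < 0.05. The sketch below uses assumed smoothness and sample-size settings, not the datasets analyzed in the paper.

        # Empirical false positive rate of 0D analyses applied to smooth, null 1D trajectories.
        import numpy as np
        from scipy.ndimage import gaussian_filter1d
        from scipy.stats import ttest_ind

        def false_positive_rate(n_per_group=10, n_nodes=101, smooth_sigma=10.0,
                                n_experiments=2000, alpha=0.05, seed=0):
            rng = np.random.default_rng(seed)
            hits = 0
            for _ in range(n_experiments):
                a = gaussian_filter1d(rng.normal(size=(n_per_group, n_nodes)), smooth_sigma, axis=1)
                b = gaussian_filter1d(rng.normal(size=(n_per_group, n_nodes)), smooth_sigma, axis=1)
                _, p = ttest_ind(a, b, axis=0)          # pointwise 0D t tests
                hits += (p.min() < alpha)               # "significant" if any node crosses alpha
            return hits / n_experiments

        print(false_positive_rate())   # typically far above the nominal 0.05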

  10. Statistical auditing and randomness test of lotto k/N-type games

    NASA Astrophysics Data System (ADS)

    Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Rapallo, F.; Scalas, E.

    2008-11-01

    One of the most popular lottery games worldwide is the so-called “lotto k/N”. It considers N numbers 1,2,…,N from which k are drawn randomly, without replacement. A player selects k or more numbers and the first prize is shared amongst those players whose selected numbers match all of the k randomly drawn. Exact rules may vary in different countries. In this paper, mean values and covariances for the random variables representing the numbers drawn from this kind of game are presented, with the aim of using them to audit statistically the consistency of a given sample of historical results with theoretical values coming from a hypergeometric statistical model. The method can be adapted to test pseudorandom number generators.
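
    For a fair draw of k numbers from 1,...,N without replacement, the moments used in such an audit have simple closed forms: E[X_i] = (N+1)/2, Var[X_i] = (N^2-1)/12 and Cov[X_i, X_j] = -(N+1)/12 for i ≠ j. The sketch below checks them by simulation (illustrative only; N = 49 and k = 6 are assumed, not taken from the paper).

        # Simulated lotto k/N draws versus the closed-form moments.
        import numpy as np

        N, k, n_draws = 49, 6, 100000
        rng = np.random.default_rng(0)
        draws = np.array([rng.choice(np.arange(1, N + 1), size=k, replace=False)
                          for _ in range(n_draws)])

        print("mean:", draws.mean(), "theory:", (N + 1) / 2)
        print("var: ", draws.var(), "theory:", (N**2 - 1) / 12)
        cov = np.cov(draws[:, 0], draws[:, 1])[0, 1]
        print("cov: ", cov, "theory:", -(N + 1) / 12)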

  11. Statistical Models for the Analysis of Zero-Inflated Pain Intensity Numeric Rating Scale Data.

    PubMed

    Goulet, Joseph L; Buta, Eugenia; Bathulapalli, Harini; Gueorguieva, Ralitza; Brandt, Cynthia A

    2017-03-01

    Pain intensity is often measured in clinical and research settings using the 0 to 10 numeric rating scale (NRS). NRS scores are recorded as discrete values, and in some samples they may display a high proportion of zeroes and a right-skewed distribution. Despite this, statistical methods for normally distributed data are frequently used in the analysis of NRS data. We present results from an observational cross-sectional study examining the association of NRS scores with patient characteristics using data collected from a large cohort of 18,935 veterans in Department of Veterans Affairs care diagnosed with a potentially painful musculoskeletal disorder. The mean (variance) NRS pain was 3.0 (7.5), and 34% of patients reported no pain (NRS = 0). We compared the following statistical models for analyzing NRS scores: linear regression, generalized linear models (Poisson and negative binomial), zero-inflated and hurdle models for data with an excess of zeroes, and a cumulative logit model for ordinal data. We examined model fit, interpretability of results, and whether conclusions about the predictor effects changed across models. In this study, models that accommodate zero inflation provided a better fit than the other models. These models should be considered for the analysis of NRS data with a large proportion of zeroes. We examined and analyzed pain data from a large cohort of veterans with musculoskeletal disorders. We found that many reported no current pain on the NRS on the diagnosis date. We present several alternative statistical methods for the analysis of pain intensity data with a large proportion of zeroes. Published by Elsevier Inc.
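
    As a purely illustrative moment calculation (not the paper's model fitting), the reported mean of 3.0 and variance of 7.5 can be matched to a simple zero-inflated Poisson with mixing probability pi and rate lambda via E[Y] = (1-pi)*lambda and Var[Y] = (1-pi)*lambda*(1+pi*lambda); the implied proportion of zeros comes out close to the reported 34%.

        # Back-of-the-envelope zero-inflated Poisson moment match to the reported summary statistics.
        import numpy as np

        mean_nrs, var_nrs = 3.0, 7.5
        pi_lam = var_nrs / mean_nrs - 1.0        # pi*lambda = Var/E - 1 = 1.5
        lam = mean_nrs + pi_lam                  # since (1-pi)*lambda = E[Y]
        pi = pi_lam / lam
        p_zero = pi + (1.0 - pi) * np.exp(-lam)

        print(f"pi = {pi:.3f}, lambda = {lam:.2f}, implied P(NRS = 0) = {p_zero:.2f}")
        # ~0.34, in line with the 34% of patients reporting no pain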

  12. Three-dimensionally bonded spongy graphene material with super compressive elasticity and near-zero Poisson's ratio.

    PubMed

    Wu, Yingpeng; Yi, Ningbo; Huang, Lu; Zhang, Tengfei; Fang, Shaoli; Chang, Huicong; Li, Na; Oh, Jiyoung; Lee, Jae Ah; Kozlov, Mikhail; Chipara, Alin C; Terrones, Humberto; Xiao, Peishuang; Long, Guankui; Huang, Yi; Zhang, Fan; Zhang, Long; Lepró, Xavier; Haines, Carter; Lima, Márcio Dias; Lopez, Nestor Perea; Rajukumar, Lakshmy P; Elias, Ana L; Feng, Simin; Kim, Seon Jeong; Narayanan, N T; Ajayan, Pulickel M; Terrones, Mauricio; Aliev, Ali; Chu, Pengfei; Zhang, Zhong; Baughman, Ray H; Chen, Yongsheng

    2015-01-20

    It is a challenge to fabricate graphene bulk materials with properties arising from the nature of individual graphene sheets, and which assemble into monolithic three-dimensional structures. Here we report the scalable self-assembly of randomly oriented graphene sheets into additive-free, essentially homogenous graphene sponge materials that provide a combination of both cork-like and rubber-like properties. These graphene sponges, with densities similar to air, display Poisson's ratios in all directions that are near-zero and largely strain-independent during reversible compression to giant strains. And at the same time, they function as enthalpic rubbers, which can recover up to 98% compression in air and 90% in liquids, and operate between -196 and 900 °C. Furthermore, these sponges provide reversible liquid absorption for hundreds of cycles and then discharge it within seconds, while still providing an effective near-zero Poisson's ratio.

  13. Robust zero resistance in a superconducting high-entropy alloy at pressures up to 190 GPa

    PubMed Central

    Guo, Jing; Wang, Honghong; von Rohr, Fabian; Wang, Zhe; Cai, Shu; Zhou, Yazhou; Yang, Ke; Li, Aiguo; Jiang, Sheng; Wu, Qi; Cava, Robert J.; Sun, Liling

    2017-01-01

    We report the observation of extraordinarily robust zero-resistance superconductivity in the pressurized (TaNb)0.67(HfZrTi)0.33 high-entropy alloy––a material with a body-centered-cubic crystal structure made from five randomly distributed transition-metal elements. The transition to superconductivity (TC) increases from an initial temperature of 7.7 K at ambient pressure to 10 K at ∼60 GPa, and then slowly decreases to 9 K by 190.6 GPa, a pressure that falls within that of the outer core of the earth. We infer that the continuous existence of the zero-resistance superconductivity from 1 atm up to such a high pressure requires a special combination of electronic and mechanical characteristics. This high-entropy alloy superconductor thus may have a bright future for applications under extreme conditions, and also poses a challenge for understanding the underlying quantum physics. PMID:29183981

  14. Robust zero resistance in a superconducting high-entropy alloy at pressures up to 190 GPa

    NASA Astrophysics Data System (ADS)

    Guo, Jing; Wang, Honghong; von Rohr, Fabian; Wang, Zhe; Cai, Shu; Zhou, Yazhou; Yang, Ke; Li, Aiguo; Jiang, Sheng; Wu, Qi; Cava, Robert J.; Sun, Liling

    2017-12-01

    We report the observation of extraordinarily robust zero-resistance superconductivity in the pressurized (TaNb)0.67(HfZrTi)0.33 high-entropy alloy--a material with a body-centered-cubic crystal structure made from five randomly distributed transition-metal elements. The transition to superconductivity (TC) increases from an initial temperature of 7.7 K at ambient pressure to 10 K at ˜60 GPa, and then slowly decreases to 9 K by 190.6 GPa, a pressure that falls within that of the outer core of the earth. We infer that the continuous existence of the zero-resistance superconductivity from 1 atm up to such a high pressure requires a special combination of electronic and mechanical characteristics. This high-entropy alloy superconductor thus may have a bright future for applications under extreme conditions, and also poses a challenge for understanding the underlying quantum physics.

  15. Qualitatively Assessing Randomness in SVD Results

    NASA Astrophysics Data System (ADS)

    Lamb, K. W.; Miller, W. P.; Kalra, A.; Anderson, S.; Rodriguez, A.

    2012-12-01

    Singular Value Decomposition (SVD) is a powerful tool for identifying regions of significant co-variability between two spatially distributed datasets. SVD has been widely used in atmospheric research to define relationships between sea surface temperatures, geopotential height, wind, precipitation and streamflow data for myriad regions across the globe. A typical application for SVD is to identify leading climate drivers (as observed in the wind or pressure data) for a particular hydrologic response variable such as precipitation, streamflow, or soil moisture. One can also investigate the lagged relationship between a climate variable and the hydrologic response variable using SVD. When performing these studies it is important to limit the spatial bounds of the climate variable to reduce the chance of random co-variance relationships being identified. On the other hand, a climate region that is too small may ignore climate signals which have more than a statistical relationship to a hydrologic response variable. The proposed research seeks a qualitative method for identifying random co-variability relationships between two data sets. The research takes heterogeneous correlation maps from several past results and compares them with correlation maps produced using purely random and quasi-random climate data. The comparison yields a methodology for determining whether a particular region on a correlation map can be explained by a physical mechanism or is simply statistical chance.
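
    The SVD co-variability calculation itself is compact, and running it on pure noise shows why a qualitative randomness check is needed: even unrelated fields yield a leading mode with a non-trivial squared-covariance fraction. The sketch below uses synthetic data with assumed dimensions and is not the study's analysis.

        # SVD of the cross-covariance between a "climate" field and a "hydrologic" field.
        import numpy as np

        rng = np.random.default_rng(0)
        n_time, n_climate, n_hydro = 60, 500, 40        # e.g. 60 seasons, 500 grid cells, 40 gauges
        climate = rng.normal(size=(n_time, n_climate))
        hydro = rng.normal(size=(n_time, n_hydro))

        climate -= climate.mean(axis=0)                 # remove the time mean at each point
        hydro -= hydro.mean(axis=0)
        cross_cov = climate.T @ hydro / (n_time - 1)    # (n_climate x n_hydro) cross-covariance

        u, s, vt = np.linalg.svd(cross_cov, full_matrices=False)
        scf = s**2 / np.sum(s**2)                       # squared covariance fraction per mode
        print("leading-mode SCF from pure noise:", round(scf[0], 3))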

  16. Zero-Time Renal Transplant Biopsies: A Comprehensive Review.

    PubMed

    Naesens, Maarten

    2016-07-01

    Zero-time kidney biopsies, obtained at time of transplantation, are performed in many transplant centers worldwide. Decisions on kidney discard, kidney allocation, and choice of peritransplant and posttransplant treatment are sometimes based on the histological information obtained from these biopsies. This comprehensive review evaluates the practical considerations of performing zero-time biopsies, the predictive performance of zero-time histology and composite histological scores, and the clinical utility of these biopsies. The predictive performance of individual histological lesions and of composite scores for posttransplant outcome is at best moderate. No single histological lesion or composite score is sufficiently robust to be included in algorithms for kidney discard. Dual kidney transplantation has been based on histological assessment of zero-time biopsies and improves outcome in individual patients, but the waitlist effects of this strategy remain obscure. Zero-time biopsies are valuable for clinical and translational research purposes, providing insight in risk factors for posttransplant events, and as baseline for comparison with posttransplant histology. The molecular phenotype of zero-time biopsies yields novel therapeutic targets for improvement of donor selection, peritransplant management and kidney preservation. It remains however highly unclear whether the molecular expression variation in zero-time biopsies could become a better predictor for posttransplant outcome than donor/recipient baseline demographic factors.

  17. A New Variable Weighting and Selection Procedure for K-Means Cluster Analysis

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    A variance-to-range ratio variable weighting procedure is proposed. We show how this weighting method is theoretically grounded in the inherent variability found in data exhibiting cluster structure. In addition, a variable selection procedure is proposed to operate in conjunction with the variable weighting technique. The performances of these…
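
    The truncated abstract does not give the weighting formula, so the sketch below assumes an illustrative weight based on each variable's variance relative to its range (here, variance divided by the squared range, i.e. the variance of the range-standardized variable) to downweight a noisy masking variable before k-means; the exact weighting and selection procedure in the paper may differ.

        # Assumed variance-to-range style weighting before k-means (illustrative only).
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        # Two clustered variables plus one pure-noise "masking" variable.
        X = np.vstack([rng.normal([0, 0, 0], [1, 1, 5], size=(100, 3)),
                       rng.normal([6, 6, 0], [1, 1, 5], size=(100, 3))])

        weights = X.var(axis=0) / np.ptp(X, axis=0) ** 2   # variance of range-standardized variable
        Xw = X * weights

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xw)
        print(np.bincount(labels))   # roughly a 100 / 100 split once the noise variable is downweighted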

  18. Teacher Perceptions of Division by Zero

    ERIC Educational Resources Information Center

    Quinn, Robert J.; Lamberg, Teruni D.; Perrin, John R.

    2008-01-01

    Dividing by zero can be confusing for students at all levels. If teachers are to provide clear and understandable explanations of this topic, they must possess a strong conceptual understanding of it themselves. In this article, the authors present qualitative data on fourth- through eighth-grade teachers' perceptions of division by zero. They…

  19. Clinical comparison of Zero-profile interbody fusion device and anterior cervical plate interbody fusion in treating cervical spondylosis.

    PubMed

    Yan, Bin; Nie, Lin

    2015-01-01

    The aim of the study was to compare the clinical effect of the Zero-profile interbody fusion device (Zero-P) with the anterior cervical plate interbody fusion system (PCB) in treating cervical spondylosis. A total of 98 patients with cervical spondylosis (110 segments) treated in our hospital from February 2011 to January 2013 were included. All participants were randomly divided into an observation group and a control group, with 49 cases in each group. The observation group was treated with Zero-P, while the control group received PCB treatment. Changes in neurological function score (JOA), pain visual analogue scale (VAS), neck disability index (NDI), quality of life score (SF-36) and cervical curvature (Cobb angle) in the two groups were recorded and analyzed before and after treatment. The observation group achieved an excellent-and-good rate of 90%, which was higher than that of the control group (80%). The dysphagia rate in the observation group was 16.33% (8/49), which was significantly less than that in the control group (46.94%). Operation time and bleeding volume in the observation group were less than those in the control group. Postoperative improvements in JOA score, VAS score, and NDI in the observation group were also significantly better than those in the control group (P<0.05). Both Zero-P and PCB were clinically effective in treating cervical spondylosis, but Zero-P showed a better therapeutic effect with improvement of life quality.

  20. Variability of Antarctic Sea Ice 1979-1998

    NASA Technical Reports Server (NTRS)

    Zwally, H. Jay; Comiso, Josefino C.; Parkinson, Claire L.; Cavalieri, Donald J.; Gloersen, Per; Koblinsky, Chester J. (Technical Monitor)

    2001-01-01

    The principal characteristics of the variability of Antarctic sea ice cover as previously described from satellite passive-microwave observations are also evident in a systematically-calibrated and analyzed data set for 20.2 years (1979-1998). The total Antarctic sea ice extent (concentration > 15%) increased by 13,440 +/- 4180 sq km/year (+1.18 +/- 0.37%/decade). The area of sea ice within the extent boundary increased by 16,960 +/- 3,840 sq km/year (+1.96 +/- 0.44%/decade). Regionally, the trends in extent are positive in the Weddell Sea (1.5 +/- 0.9%/decade), Pacific Ocean (2.4 +/- 1.4%/decade), and Ross (6.9 +/- 1.1%/decade) sectors, slightly negative in the Indian Ocean (-1.5 +/- 1.8%/decade), and strongly negative in the Bellingshausen-Amundsen Seas sector (-9.5 +/- 1.5%/decade). For the entire ice pack, small ice increases occur in all seasons with the largest increase during autumn. On a regional basis, the trends differ season to season. During summer and fall, the trends are positive or near zero in all sectors except the Bellingshausen-Amundsen Seas sector. During winter and spring, the trends are negative or near zero in all sectors except the Ross Sea, which has positive trends in all seasons. Components of interannual variability with periods of about 3 to 5 years are regionally large, but tend to counterbalance each other in the total ice pack. The interannual variability of the annual mean sea-ice extent is only 1.6% overall, compared to 5% to 9% in each of five regional sectors. Analysis of the relation between regional sea ice extents and spatially-averaged surface temperatures over the ice pack gives an overall sensitivity between winter ice cover and temperature of -0.7% change in sea ice extent per K. For summer, some regional ice extents vary positively with temperature and others negatively. The observed increase in Antarctic sea ice cover is counter to the observed decreases in the Arctic. It is also qualitatively consistent with the