Sample records for zero-mean random variables

  1. A Simple Game to Derive Lognormal Distribution

    ERIC Educational Resources Information Center

    Omey, E.; Van Gulck, S.

    2007-01-01

    In the paper we present a simple game that students can play in the classroom. The game can be used to show that random variables can behave in an unexpected way: the expected mean can tend to zero or to infinity; the variance can tend to zero or to infinity. The game can also be used to introduce the lognormal distribution. (Contains 1 table and…
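
    A minimal sketch of the kind of multiplicative game the abstract describes (the specific rules below — multiply a unit stake by 2 or by 1/2 with equal probability each round — are an illustrative assumption, not necessarily the authors' game):

    ```python
    import numpy as np

    # Hypothetical classroom game: start with stake 1 and, in each of n_rounds rounds,
    # multiply the stake by 2 or by 1/2 with equal probability.
    rng = np.random.default_rng(0)
    n_rounds, n_players = 50, 100_000
    factors = rng.choice([2.0, 0.5], size=(n_players, n_rounds))
    final = factors.prod(axis=1)

    print("theoretical mean  :", 1.25 ** n_rounds)   # E[factor] = 1.25, so the expected stake explodes
    print("empirical mean    :", final.mean())       # dominated by a handful of huge outcomes
    print("empirical median  :", np.median(final))   # the typical outcome stays near 1
    # log(final) is a sum of i.i.d. zero-mean terms, so it is approximately normal:
    # the final stake is approximately lognormal.
    print("mean/std of log   :", np.log(final).mean(), np.log(final).std())
    ```

    With factors 1.5 and 0.5 instead, E[log factor] < 0, so the product tends to zero almost surely even though its expectation stays exactly 1 — the kind of unexpected behaviour the abstract alludes to.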

  2. Three-part joint modeling methods for complex functional data mixed with zero-and-one-inflated proportions and zero-inflated continuous outcomes with skewness.

    PubMed

    Li, Haocheng; Staudenmayer, John; Wang, Tianying; Keadle, Sarah Kozey; Carroll, Raymond J

    2018-02-20

    We take a functional data approach to longitudinal studies with complex bivariate outcomes. This work is motivated by data from a physical activity study that measured 2 responses over time in 5-minute intervals. One response is the proportion of time active in each interval, a continuous proportion with excess zeros and ones. The other response, energy expenditure rate in the interval, is a continuous variable with excess zeros and skewness. This outcome is complex because there are 3 possible activity patterns in each interval (inactive, partially active, and completely active), and those patterns, which are observed, induce both nonrandom and random associations between the responses. More specifically, the inactive pattern requires a zero value in both the proportion for active behavior and the energy expenditure rate; a partially active pattern means that the proportion of activity is strictly between zero and one and that the energy expenditure rate is greater than zero and likely to be moderate; and the completely active pattern means that the proportion of activity is exactly one and the energy expenditure rate is greater than zero and likely to be higher. To address these challenges, we propose a 3-part functional data joint modeling approach. The first part is a continuation-ratio model for the 3 ordinal-valued activity patterns. The second part models the proportions when they are in the interval (0,1). The last component specifies the skewed continuous energy expenditure rate with Box-Cox transformations when it is greater than zero. In this 3-part model, the regression structures are specified as smooth curves measured at various time points with random effects that have a correlation structure. The smoothed random curves for each variable are summarized using a few important principal components, and the association of the 3 longitudinal components is modeled through the association of the principal component scores. The difficulties in handling the ordinal and proportional variables are addressed using a quasi-likelihood type approximation. We develop an efficient algorithm to fit the model that also involves the selection of the number of principal components. The method is applied to physical activity data and is evaluated empirically by a simulation study. Copyright © 2017 John Wiley & Sons, Ltd.
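
    The three-part structure is easiest to see in a data-generating sketch. The snippet below draws, for each interval, one of the three observed activity patterns and then a (proportion active, energy expenditure) pair obeying the constraints described above; all distributions and parameter values are illustrative assumptions, not those estimated in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    # Part 1 analogue: ordinal pattern per interval (0 = inactive, 1 = partially, 2 = completely active).
    pattern = rng.choice([0, 1, 2], size=n, p=[0.5, 0.35, 0.15])

    prop_active = np.zeros(n)   # proportion of the interval spent active, in [0, 1]
    energy_rate = np.zeros(n)   # energy expenditure rate, >= 0 and right-skewed

    partial, full = pattern == 1, pattern == 2
    # Part 2 analogue: proportions strictly inside (0, 1) only for the partially active pattern.
    prop_active[partial] = rng.beta(2.0, 2.0, size=partial.sum())
    prop_active[full] = 1.0
    # Part 3 analogue: positive, skewed energy expenditure; moderate when partial, higher when fully active.
    energy_rate[partial] = rng.lognormal(0.0, 0.5, size=partial.sum())
    energy_rate[full] = rng.lognormal(1.0, 0.5, size=full.sum())

    # The observed pattern deterministically induces the excess zeros/ones in one response
    # and the excess zeros in the other -- the structure the joint model must respect.
    print("zeros in proportion:", (prop_active == 0).mean())
    print("ones in proportion :", (prop_active == 1).mean())
    print("zeros in energy    :", (energy_rate == 0).mean())
    ```

    The actual model layers smooth time-varying regression curves, Box-Cox transformation of the positive rates, and correlated principal component scores on top of this skeleton.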

  3. Selecting a distributional assumption for modelling relative densities of benthic macroinvertebrates

    USGS Publications Warehouse

    Gray, B.R.

    2005-01-01

    The selection of a distributional assumption suitable for modelling macroinvertebrate density data is typically challenging. Macroinvertebrate data often exhibit substantially larger variances than expected under a standard count assumption, that of the Poisson distribution. Such overdispersion may derive from multiple sources, including heterogeneity of habitat (historically and spatially), differing life histories for organisms collected within a single collection in space and time, and autocorrelation. Taken to extreme, heterogeneity of habitat may be argued to explain the frequent large proportions of zero observations in macroinvertebrate data. Sampling locations may consist of habitats defined qualitatively as either suitable or unsuitable. The former category may yield random or stochastic zeroes and the latter structural zeroes. Heterogeneity among counts may be accommodated by treating the count mean itself as a random variable, while extra zeroes may be accommodated using zero-modified count assumptions, including zero-inflated and two-stage (or hurdle) approaches. These and linear assumptions (following log- and square root-transformations) were evaluated using 9 years of mayfly density data from a 52 km, ninth-order reach of the Upper Mississippi River (n = 959). The data exhibited substantial overdispersion relative to that expected under a Poisson assumption (i.e. variance:mean ratio = 23 ≫ 1), and 43% of the sampling locations yielded zero mayflies. Based on the Akaike Information Criterion (AIC), count models were improved most by treating the count mean as a random variable (via a Poisson-gamma distributional assumption) and secondarily by zero modification (i.e. improvements in AIC values = 9184 units and 47-48 units, respectively). Zeroes were underestimated by the Poisson, log-transform and square root-transform models, slightly by the standard negative binomial model but not by the zero-modified models (61%, 24%, 32%, 7%, and 0%, respectively). However, the zero-modified Poisson models underestimated small counts (1 ≤ y ≤ 4) and overestimated intermediate counts (7 ≤ y ≤ 23). Counts greater than zero were estimated well by zero-modified negative binomial models, while counts greater than one were also estimated well by the standard negative binomial model. Based on AIC and percent zero estimation criteria, the two-stage and zero-inflated models performed similarly. The above inferences were largely confirmed when the models were used to predict values from a separate, evaluation data set (n = 110). An exception was that, using the evaluation data set, the standard negative binomial model appeared superior to its zero-modified counterparts using the AIC (but not percent zero criteria). This and other evidence suggest that a negative binomial distributional assumption should be routinely considered when modelling benthic macroinvertebrate data from low flow environments. Whether negative binomial models should themselves be routinely examined for extra zeroes requires, from a statistical perspective, more investigation. However, this question may best be answered by ecological arguments that may be specific to the sampled species and locations. © 2004 Elsevier B.V. All rights reserved.
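
    The Poisson versus Poisson-gamma (negative binomial) comparison at the heart of the abstract can be reproduced on simulated data in a few lines; the sketch below fits both distributions by maximum likelihood and compares AIC values (the simulated counts are a stand-in, not the Mississippi mayfly data).

    ```python
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(2)
    # Overdispersed, zero-heavy counts standing in for mayfly densities.
    y = stats.nbinom.rvs(0.4, 0.05, size=959, random_state=rng)

    # Poisson: the MLE of the rate is the sample mean.
    lam = y.mean()
    aic_pois = 2 * 1 - 2 * stats.poisson.logpmf(y, lam).sum()

    # Negative binomial (count mean treated as a gamma random variable): numerical MLE.
    def nb_negll(params):
        r, p = np.exp(params[0]), 1.0 / (1.0 + np.exp(-params[1]))
        return -stats.nbinom.logpmf(y, r, p).sum()

    res = optimize.minimize(nb_negll, x0=[0.0, 0.0], method="Nelder-Mead")
    aic_nb = 2 * 2 + 2 * res.fun

    print("variance:mean ratio   :", round(y.var() / y.mean(), 1))
    print("fraction of zeros     :", round((y == 0).mean(), 2))
    print("AIC, Poisson          :", round(aic_pois, 1))
    print("AIC, negative binomial:", round(aic_nb, 1))   # far lower for overdispersed data
    ```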

  4. Synthesis of hover autopilots for rotary-wing VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Hall, W. E.; Bryson, A. E., Jr.

    1972-01-01

    The practical situation is considered where imperfect information on only a few rotor and fuselage state variables is available. Filters are designed to estimate all the state variables from noisy measurements of fuselage pitch/roll angles and from noisy measurements of both fuselage and rotor pitch/roll angles. The mean square response of the vehicle to a very gusty, random wind is computed using various filter/controllers and is found to be quite satisfactory although, of course, not so good as when one has perfect information (idealized case). The second part of the report considers precision hover over a point on the ground. A vehicle model without rotor dynamics is used and feedback signals in position and integral of position error are added. The mean square response of the vehicle to a very gusty, random wind is computed, assuming perfect information feedback, and is found to be excellent. The integral error feedback gives zero position error for a steady wind, and smaller position error for a random wind.

  5. Nonrecurrence and Bell-like inequalities

    NASA Astrophysics Data System (ADS)

    Danforth, Douglas G.

    2017-12-01

    The general class, Λ, of Bell hidden variables is composed of two subclasses ΛR and ΛN such that ΛR ∪ ΛN = Λ and ΛR ∩ ΛN = {}. The class ΛN is very large and contains random variables whose domain is the continuum, the reals. There is an uncountably infinite number of reals. Every instance of a real random variable is unique. The probability of two instances being equal is zero, exactly zero. ΛN induces sample independence. All correlations are context dependent but not in the usual sense. There is no "spooky action at a distance". Random variables, belonging to ΛN, are independent from one experiment to the next. The existence of the class ΛN makes it impossible to derive any of the standard Bell inequalities used to define quantum entanglement.

  6. The living Drake equation of the Tau Zero Foundation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-03-01

    The living Drake equation is our statistical generalization of the Drake equation such that it can take into account any number of factors. This new result opens up the possibility of enriching the equation by inserting new factors as scientific knowledge increases. The adjective "Living" refers to this continuous enrichment of the Drake equation, which is the goal of a new research project that the Tau Zero Foundation has entrusted to this author, the discoverer of the statistical Drake equation described hereafter. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of statistics. In loose terms, the CLT states that the sum of a large number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that the new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the lognormal distribution. The mean value, standard deviation, mode, median and all the moments of this lognormal N can then be derived from the means and standard deviations of the seven input random variables. In fact, the seven factors in the ordinary Drake equation now become seven independent positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (neither of which assumes the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful. An application of our statistical Drake equation then follows. The (average) distance between any two neighbouring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cube root of N. This distance thus becomes a new random variable. We derive the relevant probability density function, apparently previously unknown (dubbed "Maccone distribution" by Paul Davies). Data Enrichment Principle: any positive number of random variables in the statistical Drake equation is compatible with the CLT. So our generalization allows for many more factors to be added in the future as more refined scientific knowledge about each factor becomes available. We call this capability to make room for future factors in the statistical Drake equation the "Data Enrichment Principle", and we regard it as the key to more profound future results in Astrobiology and SETI.
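
    The central claim — the product of several independent positive random variables is approximately lognormal because its logarithm is a sum to which the Lyapunov/Lindeberg CLT applies — is easy to check numerically. The seven factor distributions below are arbitrary illustrative choices, not proposed values for the Drake factors.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    m = 200_000
    factors = [
        rng.uniform(1.0, 10.0, m),
        rng.lognormal(0.0, 0.5, m),
        rng.gamma(2.0, 1.5, m),
        rng.uniform(0.01, 1.0, m),
        rng.beta(2.0, 5.0, m) + 0.01,
        rng.exponential(3.0, m),
        rng.uniform(0.1, 2.0, m),
    ]
    N = np.prod(factors, axis=0)   # analogue of the number of communicating civilizations

    logN = np.log(N)
    print("skewness of N     :", stats.skew(N))      # strongly right-skewed
    print("skewness of log N :", stats.skew(logN))   # far closer to symmetric than N itself
    mu, sigma = logN.mean(), logN.std()
    print("lognormal median  :", np.exp(mu))
    print("lognormal mean    :", np.exp(mu + sigma**2 / 2))
    ```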

  7. Observability-based Local Path Planning and Collision Avoidance Using Bearing-only Measurements

    DTIC Science & Technology

    2012-01-20

    Clark N. Taylor (Department of Electrical and Computer Engineering, Brigham Young University, Provo, Utah, 84602; Sensors Directorate, Air Force Research…) … v_it is the measurement noise that is assumed to be a zero-mean Gaussian random variable. Based on the state transition model expressed by Eqs. (1…

  8. Stationary responses of a Rayleigh viscoelastic system with zero barrier impacts under external random excitation.

    PubMed

    Wang, Deli; Xu, Wei; Zhao, Xiangrong

    2016-03-01

    This paper aims to deal with the stationary responses of a Rayleigh viscoelastic system with zero barrier impacts under external random excitation. First, the original stochastic viscoelastic system is converted to an equivalent stochastic system without viscoelastic terms by approximately adding the equivalent stiffness and damping. By means of a non-smooth transformation of state variables, the above system is replaced by a new system without an impact term. Then, the stationary probability density functions of the system are obtained analytically through the stochastic averaging method. By considering the effects of the biquadratic nonlinear damping coefficient and the noise intensity on the system responses, the effectiveness of the theoretical method is tested by comparing the analytical results with those generated from Monte Carlo simulations. Additionally, it is worth noting that some system parameters can induce the occurrence of stochastic P-bifurcation.

  9. Broken Ergodicity in Ideal, Homogeneous, Incompressible Turbulence

    NASA Technical Reports Server (NTRS)

    Morin, Lee; Shebalin, John; Fu, Terry; Nguyen, Phu; Shum, Victor

    2010-01-01

    We discuss the statistical mechanics of numerical models of ideal homogeneous, incompressible turbulence and their relevance for dissipative fluids and magnetofluids. These numerical models are based on Fourier series and the relevant statistical theory predicts that Fourier coefficients of fluid velocity and magnetic fields (if present) are zero-mean random variables. However, numerical simulations clearly show that certain coefficients have a non-zero mean value that can be very large compared to the associated standard deviation. We explain this phenomenon in terms of 'broken ergodicity', which is defined to occur when dynamical behavior does not match ensemble predictions on very long time-scales. We review the theoretical basis of broken ergodicity, apply it to 2-D and 3-D fluid and magnetohydrodynamic simulations of homogeneous turbulence, and show new results from simulations using GPU (graphical processing unit) computers.

  10. Evaluation of Kurtosis into the product of two normally distributed variables

    NASA Astrophysics Data System (ADS)

    Oliveira, Amílcar; Oliveira, Teresa; Seijas-Macías, Antonio

    2016-06-01

    Kurtosis (κ) is a measure of the "peakedness" of the distribution of a real-valued random variable. We study the evolution of the kurtosis for the product of two normally distributed variables. The product of two normal variables arises in many areas of study, such as physics, economics, and psychology. Normal variables have a constant value for kurtosis (κ = 3), independently of the value of the two parameters: mean and variance. In fact, the excess kurtosis is defined as κ − 3, so the excess kurtosis of the normal distribution is zero. The kurtosis of the product of two normally distributed variables is a function of the parameters of the two variables and the correlation between them; the range of the excess kurtosis is [0, 6] for independent variables and [0, 12] when correlation between them is allowed.
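
    These figures are easy to verify with a short Monte Carlo; the sketch below estimates the excess kurtosis of the product of two standard normals for the independent, perfectly correlated, and large-mean cases (parameter values are illustrative).

    ```python
    import numpy as np
    from scipy import stats   # stats.kurtosis returns excess kurtosis by default

    rng = np.random.default_rng(4)
    m = 1_000_000
    x, y = rng.standard_normal(m), rng.standard_normal(m)

    # Independent standard normals: the product sits at the top of the [0, 6] range.
    print("independent, zero means :", stats.kurtosis(x * y))        # ~6

    # Perfect correlation: the product becomes x**2, a chi-square with 1 degree of
    # freedom, whose excess kurtosis is 12 -- the top of the correlated range.
    print("perfectly correlated    :", stats.kurtosis(x * x))        # ~12

    # Means that are large relative to the standard deviations push the product
    # towards normality, i.e. excess kurtosis towards 0.
    mu = 10.0
    print("large means, independent:", stats.kurtosis((x + mu) * (y + mu)))   # ~0
    ```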

  11. Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory

    NASA Astrophysics Data System (ADS)

    Pato, Mauricio P.; Oshanin, Gleb

    2013-03-01

    We study the probability distribution function P_n^{(β)}(w) of the Schmidt-like random variable w = x_1²/((1/n)∑_{j=1}^{n} x_j²), where x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^{(β)}(w) converges to the Marčenko-Pastur form, i.e. P_n^{(β)}(w) ∼ √((4 − w)/w) for w ∈ [0, 4] and equals zero outside of the support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for Gaussian unitary ensembles (β = 2) we present exact explicit expressions for P_n^{(β = 2)}(w) which are valid for arbitrary n and analyse their behaviour.
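
    Because w is invariant under rescaling of the spectrum, the limiting law is easy to check by direct sampling; the sketch below draws GUE matrices with numpy and compares the histogram of w with the normalised density (1/2π)·√((4 − w)/w) on [0, 4] (matrix size and number of trials are arbitrary).

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, trials = 200, 300
    w_samples = []
    for _ in range(trials):
        # GUE (beta = 2): Hermitian matrix built from independent complex Gaussian entries.
        a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        x = np.linalg.eigvalsh((a + a.conj().T) / 2.0)
        # w is scale invariant, and the eigenvalues are unordered in the definition,
        # so every eigenvalue of every matrix contributes one (correlated) sample.
        w_samples.append(x**2 / (np.sum(x**2) / n))
    w = np.concatenate(w_samples)

    hist, edges = np.histogram(w, bins=40, range=(0.0, 4.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    theory = np.sqrt((4.0 - centers) / centers) / (2.0 * np.pi)
    print("mean |histogram - limit| over interior bins:",
          np.abs(hist[2:-2] - theory[2:-2]).mean())
    ```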

  12. Disease Mapping of Zero-excessive Mesothelioma Data in Flanders

    PubMed Central

    Neyens, Thomas; Lawson, Andrew B.; Kirby, Russell S.; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S.; Faes, Christel

    2016-01-01

    Purpose: To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. Methods: The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero-inflation and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in the literature. Results: The results indicate that hurdle models with a random effects term accounting for extra-variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra-variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra-variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Conclusions: Models taking into account zero-inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. PMID:27908590

  13. Disease mapping of zero-excessive mesothelioma data in Flanders.

    PubMed

    Neyens, Thomas; Lawson, Andrew B; Kirby, Russell S; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S; Faes, Christel

    2017-01-01

    To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero inflation, and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion, and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in the literature. The results indicate that hurdle models with a random effects term accounting for extra variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Models taking into account zero inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. Copyright © 2016 Elsevier Inc. All rights reserved.
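
    Stripped of the spatial and overdispersion random effects, the hurdle idea has two separable likelihood pieces — a Bernoulli model for zero versus non-zero and a zero-truncated count model for the positive counts — which can be fitted independently. A minimal sketch on simulated data follows (intercept-only parts, illustrative parameter values, not the Flemish registry data):

    ```python
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(6)
    n = 2_000
    pi_true, lam_true = 0.35, 3.0            # P(any cases) and truncated-Poisson rate
    positive = rng.random(n) < pi_true
    y = np.zeros(n, dtype=int)
    k = positive.sum()
    draws = rng.poisson(lam_true, size=10 * k)
    y[positive] = draws[draws > 0][:k]       # crude rejection sampling of the zero-truncated part

    # Hurdle part: Bernoulli MLE for P(y > 0).
    pi_hat = (y > 0).mean()

    # Count part: zero-truncated Poisson MLE using only the positive counts.
    def trunc_negll(log_lam):
        lam = np.exp(log_lam)
        yp = y[y > 0]
        return -(stats.poisson.logpmf(yp, lam) - np.log1p(-np.exp(-lam))).sum()

    res = optimize.minimize_scalar(trunc_negll, bounds=(-5.0, 5.0), method="bounded")
    print("estimated P(y > 0):", round(pi_hat, 3), "(true", pi_true, ")")
    print("estimated lambda  :", round(float(np.exp(res.x)), 3), "(true", lam_true, ")")
    ```

    The models compared in the paper additionally let both parts depend on covariates and add gamma and conditional autoregressive random effects; the sketch only shows the two-part factorisation of the hurdle likelihood.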

  14. Computing approximate random Delta v magnitude probability densities. [for spacecraft trajectory correction

    NASA Technical Reports Server (NTRS)

    Chadwick, C.

    1984-01-01

    This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three component Cartesian vector each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
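
    The quantity being approximated is the distribution of the Euclidean norm of a zero-mean Gaussian vector with unequal component standard deviations, so a direct Monte Carlo version of the calculation takes only a few lines (the 1-sigma values below are made up for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    sigmas = np.array([0.8, 0.5, 0.2])   # illustrative 1-sigma values for the TCM components (m/s)
    samples = rng.standard_normal((1_000_000, 3)) * sigmas   # zero mean, unequal standard deviations
    dv = np.linalg.norm(samples, axis=1)                     # |Delta v| magnitude

    print("mean of |Delta v|:", dv.mean())
    print("std of |Delta v| :", dv.std())
    # Points of the cumulative and inverse cumulative distribution, as in items (2)-(3).
    for q in (0.50, 0.90, 0.99):
        print(f"{int(q * 100)}th percentile of |Delta v|:", np.quantile(dv, q))
    ```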

  15. Subharmonic response of a single-degree-of-freedom nonlinear vibro-impact system to a narrow-band random excitation.

    PubMed

    Haiwu, Rong; Wang, Xiangdong; Xu, Wei; Fang, Tong

    2009-08-01

    The subharmonic response of a single-degree-of-freedom nonlinear vibro-impact oscillator with a one-sided barrier to narrow-band random excitation is investigated. The narrow-band random excitation used here is a filtered Gaussian white noise. The analysis is based on a special Zhuravlev transformation, which reduces the system to one without impacts, or velocity jumps, thereby permitting the application of asymptotic averaging over the "fast" variables. The averaged stochastic equations are solved exactly by the method of moments for the mean-square response amplitude for the case of a linear system with zero offset. A perturbation-based moment closure scheme is proposed and the formula for the mean-square amplitude is obtained approximately for the case of a linear system with nonzero offset. The perturbation-based moment closure scheme is used once again to obtain the algebraic equation for the mean-square amplitude of the response in the case of a nonlinear system. The effects of damping, detuning, nonlinear intensity, bandwidth, and magnitudes of random excitations are analyzed. The theoretical analyses are verified by numerical results. Theoretical analyses and numerical simulations show that the peak amplitudes may be strongly reduced at large detunings or large nonlinear intensity.

  16. Matching the Statistical Model to the Research Question for Dental Caries Indices with Many Zero Counts.

    PubMed

    Preisser, John S; Long, D Leann; Stamm, John W

    2017-01-01

    Marginalized zero-inflated count regression models have recently been introduced for the statistical analysis of dental caries indices and other zero-inflated count data as alternatives to traditional zero-inflated and hurdle models. Unlike the standard approaches, the marginalized models directly estimate overall exposure or treatment effects by relating covariates to the marginal mean count. This article discusses model interpretation and model class choice according to the research question being addressed in caries research. Two data sets, one consisting of fictional dmft counts in 2 groups and the other on DMFS among schoolchildren from a randomized clinical trial comparing 3 toothpaste formulations to prevent incident dental caries, are analyzed with negative binomial hurdle, zero-inflated negative binomial, and marginalized zero-inflated negative binomial models. In the first example, estimates of treatment effects vary according to the type of incidence rate ratio (IRR) estimated by the model. Estimates of IRRs in the analysis of the randomized clinical trial were similar despite their distinctive interpretations. The choice of statistical model class should match the study's purpose, while accounting for the broad decline in children's caries experience, such that dmft and DMFS indices more frequently generate zero counts. Marginalized (marginal mean) models for zero-inflated count data should be considered for direct assessment of exposure effects on the marginal mean dental caries count in the presence of high frequencies of zero counts. © 2017 S. Karger AG, Basel.

  17. Distribution law of the Dirac eigenmodes in QCD

    NASA Astrophysics Data System (ADS)

    Catillo, Marco; Glozman, Leonid Ya.

    2018-04-01

    The near-zero modes of the Dirac operator are connected to spontaneous breaking of chiral symmetry in QCD (SBCS) via the Banks-Casher relation. At the same time, the distribution of the near-zero modes is well described by Random Matrix Theory (RMT) with the Gaussian Unitary Ensemble (GUE). It has thus become standard lore that randomness, as observed through distributions of the near-zero modes of the Dirac operator, is a consequence of SBCS. The higher-lying modes of the Dirac operator are not affected by SBCS and are sensitive to confinement physics and the related SU(2)_CS and SU(2N_F) symmetries. We study the distribution of the near-zero and higher-lying eigenmodes of the overlap Dirac operator within N_F = 2 dynamical simulations. We find that the distributions of both the near-zero and higher-lying modes are perfectly described by the GUE of RMT. This means that randomness, while consistent with SBCS, is not a consequence of SBCS and is linked to the confining chromo-electric field.

  18. Mechanical properties of 3D printed warped membranes

    NASA Astrophysics Data System (ADS)

    Kosmrlj, Andrej; Xiao, Kechao; Weaver, James C.; Vlassak, Joost J.; Nelson, David R.

    2015-03-01

    We explore how a frozen background metric affects the mechanical properties of solid planar membranes. Our focus is a special class of "warped membranes" with a preferred random height profile characterized by random Gaussian variables h(q) in Fourier space with zero mean and variance ⟨|h(q)|²⟩ ∝ q^(−m). It has been shown theoretically that in the linear response regime, this quenched random disorder increases the effective bending rigidity, while the Young's and shear moduli are reduced. Compared to flat plates of the same thickness t, the bending rigidity of warped membranes is increased by a factor h_v/t while the in-plane elastic moduli are reduced by t/h_v, where h_v = √⟨|h(x)|²⟩ describes the frozen height fluctuations. Interestingly, h_v is system-size dependent for warped membranes characterized by m > 2. We present experimental tests of these predictions, using warped membranes prepared via high resolution 3D printing.

  19. Probability distribution for the Gaussian curvature of the zero level surface of a random function

    NASA Astrophysics Data System (ADS)

    Hannay, J. H.

    2018-04-01

    A rather natural construction for a smooth random surface in space is the level surface of value zero, or 'nodal' surface f(x,y,z) = 0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface, it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f = 0 is calculated for a statistically homogeneous ('stationary') and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a 'fixer' device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.

  20. Random phase approximation and cluster mean field studies of hard core Bose Hubbard model

    NASA Astrophysics Data System (ADS)

    Alavani, Bhargav K.; Gaude, Pallavi P.; Pai, Ramesh V.

    2018-04-01

    We investigate zero temperature and finite temperature properties of the Bose Hubbard Model in the hard core limit using Random Phase Approximation (RPA) and Cluster Mean Field Theory (CMFT). We show that our RPA calculations are able to capture quantum and thermal fluctuations significantly better than CMFT.

  1. Multiple imputation in the presence of non-normal data.

    PubMed

    Lee, Katherine J; Carlin, John B

    2017-02-20

    Multiple imputation (MI) is becoming increasingly popular for handling missing data. Standard approaches for MI assume normality for continuous variables (conditionally on the other variables in the imputation model). However, it is unclear how to impute non-normally distributed continuous variables. Using simulation and a case study, we compared various transformations applied prior to imputation, including a novel non-parametric transformation, to imputation on the raw scale and using predictive mean matching (PMM) when imputing non-normal data. We generated data from a range of non-normal distributions, and set 50% to missing completely at random or missing at random. We then imputed missing values on the raw scale, following a zero-skewness log, Box-Cox or non-parametric transformation and using PMM with both type 1 and 2 matching. We compared inferences regarding the marginal mean of the incomplete variable and the association with a fully observed outcome. We also compared results from these approaches in the analysis of depression and anxiety symptoms in parents of very preterm compared with term-born infants. The results provide novel empirical evidence that the decision regarding how to impute a non-normal variable should be based on the nature of the relationship between the variables of interest. If the relationship is linear in the untransformed scale, transformation can introduce bias irrespective of the transformation used. However, if the relationship is non-linear, it may be important to transform the variable to accurately capture this relationship. A useful alternative is to impute the variable using PMM with type 1 matching. Copyright © 2016 John Wiley & Sons, Ltd.
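
    A bare-bones single imputation by predictive mean matching, which keeps imputed values on the raw data scale, looks like the sketch below (a simplified illustration: proper MI would also draw the regression coefficients from their posterior, select randomly among the k nearest donors, and repeat to create several imputed datasets):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n = 1_000
    x = rng.standard_normal(n)                     # fully observed covariate
    y = np.exp(0.5 * x + rng.standard_normal(n))   # skewed incomplete variable
    miss = rng.random(n) < 0.5                     # 50% missing completely at random

    # Regress y on x using the complete cases only (ordinary least squares).
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.lstsq(X[~miss], y[~miss], rcond=None)[0]
    pred = X @ beta

    # Predictive mean matching: each missing case borrows the *observed* y of the donor
    # whose predicted value is closest, so no implausible values are invented.
    obs_idx = np.flatnonzero(~miss)
    y_imp = y.copy()
    for i in np.flatnonzero(miss):
        donor = obs_idx[np.argmin(np.abs(pred[obs_idx] - pred[i]))]
        y_imp[i] = y[donor]

    print("mean of y before deletion:", y.mean())
    print("mean after PMM imputation:", y_imp.mean())
    ```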

  2. Remote sensing of earth terrain

    NASA Technical Reports Server (NTRS)

    Kong, J. A.

    1988-01-01

    Two monographs and 85 journal and conference papers on remote sensing of earth terrain have been published, sponsored by NASA Contract NAG5-270. A multivariate K-distribution is proposed to model the statistics of fully polarimetric data from earth terrain with polarizations HH, HV, VH, and VV. In this approach, correlated polarizations of radar signals, as characterized by a covariance matrix, are treated as the sum of N n-dimensional random vectors; N obeys the negative binomial distribution with a parameter alpha and mean N̄. Subsequently, an n-dimensional K-distribution, with either zero or non-zero mean, is developed in the limit of infinite N̄ or illuminated area. The probability density function (PDF) of the K-distributed vector normalized by its Euclidean norm is independent of the parameter alpha and is the same as that derived from a zero-mean Gaussian-distributed random vector. The above model is well supported by experimental data provided by MIT Lincoln Laboratory and the Jet Propulsion Laboratory in the form of polarimetric measurements.

  3. Broken Ergodicity in MHD Turbulence in a Spherical Domain

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.; Wang, Yifan

    2011-01-01

    Broken ergodicity (BE) occurs in Fourier method numerical simulations of ideal, homogeneous, incompressible magnetohydrodynamic (MHD) turbulence. Although naive statistical theory predicts that Fourier coefficients of fluid velocity and magnetic field are zero-mean random variables, numerical simulations clearly show that low-wave-number coefficients have non-zero mean values that can be very large compared to the associated standard deviation. In other words, large-scale coherent structure (i.e., broken ergodicity) in homogeneous MHD turbulence can spontaneously grow out of random initial conditions. Eigenanalysis of the modal covariance matrices in the probability density functions of ideal statistical theory leads to a theoretical explanation of observed BE in homogeneous MHD turbulence. Since dissipation is minimal at the largest scales, BE is also relevant for resistive magnetofluids, as evidenced in numerical simulations. Here, we move beyond model magnetofluids confined by periodic boxes to examine BE in rotating magnetofluids in spherical domains using spherical harmonic expansions along with suitable boundary conditions. We present theoretical results for 3-D and 2-D spherical models and also present computational results from dynamical simulations of 2-D MHD turbulence on a rotating spherical surface. MHD turbulence on a 2-D sphere is affected by Coriolis forces, while MHD turbulence on a 2-D plane is not, so that 2-D spherical models are a useful (and simpler) intermediate stage on the path to understanding the much more complex 3-D spherical case.

  4. Probabilistic SSME blades structural response under random pulse loading

    NASA Technical Reports Server (NTRS)

    Shiao, Michael; Rubinstein, Robert; Nagpal, Vinod K.

    1987-01-01

    The purpose is to develop models of random impacts on a Space Shuttle Main Engine (SSME) turbopump blade and to predict the probabilistic structural response of the blade to these impacts. The random loading is caused by the impact of debris. The probabilistic structural response is characterized by distribution functions for stress and displacements as functions of the loading parameters which determine the random pulse model. These parameters include pulse arrival, amplitude, and location. The analysis can be extended to predict level crossing rates. This requires knowledge of the joint distribution of the response and its derivative. The model of random impacts chosen allows the pulse arrivals, pulse amplitudes, and pulse locations to be random. Specifically, the pulse arrivals are assumed to be governed by a Poisson process, which is characterized by a mean arrival rate. The pulse intensity is modelled as a normally distributed random variable with a zero mean chosen independently at each arrival. The standard deviation of the distribution is a measure of pulse intensity. Several different models were used for the pulse locations. For example, three points near the blade tip were chosen at which pulses were allowed to arrive with equal probability. Again, the locations were chosen independently at each arrival. The structural response was analyzed both by direct Monte Carlo simulation and by a semi-analytical method.

  5. Spatial vs. individual variability with inheritance in a stochastic Lotka-Volterra system

    NASA Astrophysics Data System (ADS)

    Dobramysl, Ulrich; Tauber, Uwe C.

    2012-02-01

    We investigate a stochastic spatial Lotka-Volterra predator-prey model with randomized interaction rates that are either affixed to the lattice sites and quenched, or specific to individuals in either population, or both. In the latter situation, we include rate inheritance with mutations from the particles' progenitors. Thus we arrive at a simple model for competitive evolution with environmental variability and selection pressure. We employ Monte Carlo simulations in zero and two dimensions to study the time evolution of both species' densities and their interaction rate distributions. The predator and prey concentrations in the ensuing steady states depend crucially on the environmental variability, whereas the temporal evolution of the individualized rate distributions leads to largely neutral optimization. Contrary to, e.g., linear gene expression models, this system does not experience fixation at extreme values. An approximate description of the resulting data is achieved by means of an effective master equation approach for the interaction rate distribution.

  6. A Hedonic Approach to Estimating Software Cost Using Ordinary Least Squares Regression and Nominal Attribute Variables

    DTIC Science & Technology

    2006-03-01

    included zero, there is insufficient evidence to indicate that the error mean is not zero. The Breusch-Pagan test was used to test the constant… programming styles used by developers (Stamelos and others, 2003:733). Kemerer tested to see how models utilizing SLOC as an independent variable…

  7. Prediction of Short-Distance Aerial Movement of Phakopsora pachyrhizi Urediniospores Using Machine Learning.

    PubMed

    Wen, L; Bowen, C R; Hartman, G L

    2017-10-01

    Dispersal of urediniospores by wind is the primary means of spread for Phakopsora pachyrhizi, the cause of soybean rust. Our research focused on the short-distance movement of urediniospores from within the soybean canopy and up to 61 m from field-grown rust-infected soybean plants. Environmental variables were used to develop and compare models including the least absolute shrinkage and selection operator regression, zero-inflated Poisson/regular Poisson regression, random forest, and neural network to describe deposition of urediniospores collected in passive and active traps. All four models identified distance of trap from source, humidity, temperature, wind direction, and wind speed as the five most important variables influencing short-distance movement of urediniospores. The random forest model provided the best predictions, explaining 76.1 and 86.8% of the total variation in the passive- and active-trap datasets, respectively. The prediction accuracy based on the correlation coefficient (r) between predicted values and the true values was 0.83 (P < 0.0001) and 0.94 (P < 0.0001) for the passive and active trap datasets, respectively. Overall, multiple machine learning techniques identified the most important variables to make the most accurate predictions of short-distance movement of P. pachyrhizi urediniospores.

  8. Meaner king uses biased bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reimpell, Michael; Werner, Reinhard F.

    2007-06-15

    The mean king problem is a quantum mechanical retrodiction problem, in which Alice has to name the outcome of an ideal measurement made in one of several different orthonormal bases. Alice is allowed to prepare the state of the system and to do a final measurement, possibly including an entangled copy. However, Alice gains knowledge about which basis was measured only after she no longer has access to the quantum system or its copy. We give a necessary and sufficient condition on the bases, for Alice to have a strategy to solve this problem, without assuming that the bases are mutually unbiased. The condition requires the existence of an overall joint probability distribution for random variables, whose marginal pair distributions are fixed as the transition probability matrices of the given bases. In particular, in the qubit case the problem is decided by Bell's original three variable inequality. In the standard setting of mutually unbiased bases, when they do exist, Alice can always succeed. However, for randomly chosen bases her success probability rapidly goes to zero with increasing dimension.

  9. Meaner king uses biased bases

    NASA Astrophysics Data System (ADS)

    Reimpell, Michael; Werner, Reinhard F.

    2007-06-01

    The mean king problem is a quantum mechanical retrodiction problem, in which Alice has to name the outcome of an ideal measurement made in one of several different orthonormal bases. Alice is allowed to prepare the state of the system and to do a final measurement, possibly including an entangled copy. However, Alice gains knowledge about which basis was measured only after she no longer has access to the quantum system or its copy. We give a necessary and sufficient condition on the bases, for Alice to have a strategy to solve this problem, without assuming that the bases are mutually unbiased. The condition requires the existence of an overall joint probability distribution for random variables, whose marginal pair distributions are fixed as the transition probability matrices of the given bases. In particular, in the qubit case the problem is decided by Bell’s original three variable inequality. In the standard setting of mutually unbiased bases, when they do exist, Alice can always succeed. However, for randomly chosen bases her success probability rapidly goes to zero with increasing dimension.

  10. Limits on relief through constrained exchange on random graphs

    NASA Astrophysics Data System (ADS)

    LaViolette, Randall A.; Ellebracht, Lory A.; Gieseler, Charles J.

    2007-09-01

    Agents are represented by nodes on a random graph (e.g., “small world”). Each agent is endowed with a zero-mean random value that may be either positive or negative. All agents attempt to find relief, i.e., to reduce the magnitude of that initial value, to zero if possible, through exchanges. The exchange occurs only between the agents that are linked, a constraint that turns out to dominate the results. The exchange process continues until Pareto equilibrium is achieved. Only 40-90% of the agents achieved relief on small-world graphs with mean degree between 2 and 40. Even fewer agents achieved relief on scale-free-like graphs with a truncated power-law degree distribution. The rate at which relief grew with increasing degree was slow, only at most logarithmic for all of the graphs considered; viewed in reverse, the fraction of nodes that achieve relief is resilient to the removal of links.
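
    The setup can be prototyped with networkx under a simple reading of the exchange rule (an assumption on our part): a linked pair with opposite-sign values cancels as much as possible, and exchanges stop when no such pair remains, i.e. at a Pareto equilibrium.

    ```python
    import numpy as np
    import networkx as nx

    def relief_fraction(n=2_000, k=4, p=0.1, seed=0):
        """Fraction of agents whose zero-mean endowment is driven exactly to zero."""
        rng = np.random.default_rng(seed)
        g = nx.watts_strogatz_graph(n, k, p, seed=seed)   # "small world" graph, mean degree k
        value = rng.standard_normal(n)                    # zero-mean random endowments

        improved = True
        while improved:                                   # repeat until no admissible exchange is left
            improved = False
            for u, v in g.edges():
                if value[u] * value[v] < 0:               # opposite signs: both can be improved
                    t = min(abs(value[u]), abs(value[v]))
                    value[u] -= np.sign(value[u]) * t
                    value[v] -= np.sign(value[v]) * t
                    improved = True
        return np.mean(np.isclose(value, 0.0))

    for k in (2, 4, 10, 40):
        print(f"mean degree {k:2d}: relief fraction = {relief_fraction(k=k):.2f}")
    ```

    The point of the exercise is the constraint the abstract emphasises: only linked agents can trade, so the achievable relief depends on the graph's connectivity.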

  11. H∞ filtering for discrete-time systems subject to stochastic missing measurements: a decomposition approach

    NASA Astrophysics Data System (ADS)

    Gu, Zhou; Fei, Shumin; Yue, Dong; Tian, Engang

    2014-07-01

    This paper deals with the problem of H∞ filtering for discrete-time systems with stochastic missing measurements. A new missing measurement model is developed by decomposing the interval of the missing rate into several segments. The probability of the missing rate in each subsegment is governed by its corresponding random variables. We aim to design a linear full-order filter such that the estimation error converges to zero exponentially in the mean square with less conservatism while the disturbance rejection attenuation is constrained to a given level by means of an H∞ performance index. Based on Lyapunov theory, the reliable filter parameters are characterised in terms of the feasibility of a set of linear matrix inequalities. Finally, a numerical example is provided to demonstrate the effectiveness and applicability of the proposed design approach.

  12. The effects of the one-step replica symmetry breaking on the Sherrington-Kirkpatrick spin glass model in the presence of random field with a joint Gaussian probability density function for the exchange interactions and random fields

    NASA Astrophysics Data System (ADS)

    Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.

    2018-07-01

    The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange integral interaction J_ij and random magnetic field h_i) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams and system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.

  13. On the mean and variance of the writhe of random polygons.

    PubMed

    Portillo, J; Diao, Y; Scharein, R; Arsuaga, J; Vazquez, M

    We here address two problems concerning the writhe of random polygons. First, we study the behavior of the mean writhe as a function of length. Second, we study the variance of the writhe. Suppose that we are dealing with a set of random polygons with the same length and knot type, which could be the model of some circular DNA with the same topological property. In general, a simple way of detecting chirality of this knot type is to compute the mean writhe of the polygons; if the mean writhe is non-zero then the knot is chiral. How accurate is this method? For example, if for a specific knot type K the mean writhe decreased to zero as the length of the polygons increased, then this method would be limited in the case of long polygons. Furthermore, we conjecture that the sign of the mean writhe is a topological invariant of chiral knots. This sign appears to be the same as that of an "ideal" conformation of the knot. We provide numerical evidence to support these claims, and we propose a new nomenclature of knots based on the sign of their expected writhes. This nomenclature can be of particular interest to applied scientists. The second part of our study focuses on the variance of the writhe, a problem that has not received much attention in the past. In this case, we focused on the equilateral random polygons. We give numerical as well as analytical evidence to show that the variance of the writhe of equilateral random polygons (of length n) behaves as a linear function of the length of the equilateral random polygon.

  14. On the mean and variance of the writhe of random polygons

    PubMed Central

    Portillo, J.; Diao, Y.; Scharein, R.; Arsuaga, J.; Vazquez, M.

    2013-01-01

    We here address two problems concerning the writhe of random polygons. First, we study the behavior of the mean writhe as a function of length. Second, we study the variance of the writhe. Suppose that we are dealing with a set of random polygons with the same length and knot type, which could be the model of some circular DNA with the same topological property. In general, a simple way of detecting chirality of this knot type is to compute the mean writhe of the polygons; if the mean writhe is non-zero then the knot is chiral. How accurate is this method? For example, if for a specific knot type K the mean writhe decreased to zero as the length of the polygons increased, then this method would be limited in the case of long polygons. Furthermore, we conjecture that the sign of the mean writhe is a topological invariant of chiral knots. This sign appears to be the same as that of an “ideal” conformation of the knot. We provide numerical evidence to support these claims, and we propose a new nomenclature of knots based on the sign of their expected writhes. This nomenclature can be of particular interest to applied scientists. The second part of our study focuses on the variance of the writhe, a problem that has not received much attention in the past. In this case, we focused on the equilateral random polygons. We give numerical as well as analytical evidence to show that the variance of the writhe of equilateral random polygons (of length n) behaves as a linear function of the length of the equilateral random polygon. PMID:25685182

  15. Convergence in High Probability of the Quantum Diffusion in a Random Band Matrix Model

    NASA Astrophysics Data System (ADS)

    Margarint, Vlad

    2018-06-01

    We consider Hermitian random band matrices H in d ≥ 1 dimensions. The matrix elements H_{xy}, indexed by x, y ∈ Λ ⊂ Z^d, are independent, uniformly distributed random variables if |x−y| is less than the band width W, and zero otherwise. We update the previous results on the convergence of quantum diffusion in a random band matrix model from convergence of the expectation to convergence in high probability. The result is uniform in the size |Λ| of the matrix.
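
    The ensemble itself is simple to construct; the sketch below builds one d = 1 realisation with Uniform(−1, 1) entries inside the band and zero outside (a real symmetric variant for brevity; the sizes are arbitrary).

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    L, W = 400, 20                     # lattice size |Lambda| and band width W

    # d = 1 band structure: entries are allowed only where |x - y| < W.
    x = np.arange(L)
    band = np.abs(x[:, None] - x[None, :]) < W

    # Independent Uniform(-1, 1) entries inside the band, mirrored so the matrix is
    # symmetric (the real counterpart of the Hermitian ensemble described above).
    upper = np.triu(np.where(band, rng.uniform(-1.0, 1.0, (L, L)), 0.0))
    H = upper + upper.T - np.diag(np.diag(upper))

    evals = np.linalg.eigvalsh(H)
    print("fraction of entries inside the band:", band.mean())
    print("spectral range of this realisation :", float(evals.min()), float(evals.max()))
    ```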

  16. Non-zero mean and asymmetry of neuronal oscillations have different implications for evoked responses.

    PubMed

    Nikulin, Vadim V; Linkenkaer-Hansen, Klaus; Nolte, Guido; Curio, Gabriel

    2010-02-01

    The aim of the present study was to show analytically and with simulations that it is the non-zero mean of neuronal oscillations, and not an amplitude asymmetry of peaks and troughs, that is a prerequisite for the generation of evoked responses through a mechanism of amplitude modulation of oscillations. Secondly, we detail the rationale and implementation of the "baseline-shift index" (BSI) for deducing whether empirical oscillations have non-zero mean. Finally, we illustrate with empirical data why the "amplitude fluctuation asymmetry" (AFA) index should be used with caution in research aimed at explaining variability in evoked responses through a mechanism of amplitude modulation of ongoing oscillations. An analytical approach, simulations and empirical MEG data were used to compare the specificity of BSI and AFA index to differentiate between a non-zero mean and a non-sinusoidal shape of neuronal oscillations. Both the BSI and the AFA index were sensitive to the presence of non-zero mean in neuronal oscillations. The AFA index, however, was also sensitive to the shape of oscillations even when they had a zero mean. Our findings indicate that it is the non-zero mean of neuronal oscillations, and not an amplitude asymmetry of peaks and troughs, that is a prerequisite for the generation of evoked responses through a mechanism of amplitude modulation of oscillations. A clear distinction should be made between the shape and non-zero mean properties of neuronal oscillations. This is because only the latter contributes to evoked responses, whereas the former does not. Copyright (c) 2009 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
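
    The mechanism under discussion — amplitude modulation of ongoing oscillations produces an evoked response in the trial average only if the oscillations have a non-zero mean — can be demonstrated with a toy simulation (all parameters below are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    fs, f = 500, 10                          # sampling rate (Hz) and oscillation frequency (Hz)
    t = np.arange(0, 2.0, 1.0 / fs)          # 2-s trials, "stimulus" at t = 1 s
    envelope = np.where(t < 1.0, 1.0, 0.4)   # amplitude drops after the stimulus
    n_trials = 500

    def trial_average(dc_offset):
        """Average of amplitude-modulated oscillations with a random phase per trial."""
        trials = []
        for _ in range(n_trials):
            phase = rng.uniform(0.0, 2.0 * np.pi)
            osc = np.cos(2.0 * np.pi * f * t + phase) + dc_offset   # dc_offset != 0 -> non-zero mean
            trials.append(envelope * osc + 0.1 * rng.standard_normal(t.size))
        return np.mean(trials, axis=0)

    # Zero-mean oscillations: the random phases cancel and no evoked response survives.
    # Non-zero mean: the envelope change itself appears in the average (a baseline shift).
    for dc in (0.0, 0.3):
        avg = trial_average(dc)
        print(f"dc offset {dc}: pre-stimulus mean = {avg[t < 1.0].mean():+.3f}, "
              f"post-stimulus mean = {avg[t >= 1.0].mean():+.3f}")
    ```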

  17. Analyzing Propensity Matched Zero-Inflated Count Outcomes in Observational Studies

    PubMed Central

    DeSantis, Stacia M.; Lazaridis, Christos; Ji, Shuang; Spinale, Francis G.

    2013-01-01

    Determining the effectiveness of different treatments from observational data, which are characterized by imbalance between groups due to lack of randomization, is challenging. Propensity matching is often used to rectify imbalances among prognostic variables. However, there are no guidelines on how appropriately to analyze group matched data when the outcome is a zero inflated count. In addition, there is debate over whether to account for correlation of responses induced by matching, and/or whether to adjust for variables used in generating the propensity score in the final analysis. The aim of this research is to compare covariate unadjusted and adjusted zero-inflated Poisson models that do and do not account for the correlation. A simulation study is conducted, demonstrating that it is necessary to adjust for potential residual confounding, but that accounting for correlation is less important. The methods are applied to a biomedical research data set. PMID:24298197

  18. Variable- and Person-Centered Approaches to the Analysis of Early Adolescent Substance Use: Linking Peer, Family, and Intervention Effects with Developmental Trajectories

    ERIC Educational Resources Information Center

    Connell, Arin M.; Dishion, Thomas J.; Deater-Deckard, Kirby

    2006-01-01

    This 4-year study of 698 young adolescents examined the covariates of early onset substance use from Grade 6 through Grade 9. The youth were randomly assigned to a family-centered Adolescent Transitions Program (ATP) condition. Variable-centered (zero-inflated Poisson growth model) and person-centered (latent growth mixture model) approaches were…

  19. Structural zeroes and zero-inflated models.

    PubMed

    He, Hua; Tang, Wan; Wang, Wenjuan; Crits-Christoph, Paul

    2014-08-01

    In psychosocial and behavioral studies, count outcomes recording the frequencies of the occurrence of some health or behavior outcomes (such as the number of unprotected sexual behaviors during a period of time) often contain a preponderance of zeroes because of the presence of 'structural zeroes' that occur when some subjects are not at risk for the behavior of interest. Unlike random zeroes (responses that can be greater than zero, but are zero due to sampling variability), structural zeroes are usually very different, both statistically and clinically. False interpretations of results and study findings may result if differences in the two types of zeroes are ignored. However, in practice, the status of the structural zeroes is often not observed and this latent nature complicates the data analysis. In this article, we focus on one model, the zero-inflated Poisson (ZIP) regression model that is commonly used to address zero-inflated data. We first give a brief overview of the issues of structural zeroes and the ZIP model. We then give an illustration of ZIP with data from a study on HIV-risk sexual behaviors among adolescent girls. Sample codes in SAS and Stata are also included to help perform and explain ZIP analyses.
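
    The article's sample code is in SAS and Stata; a rough Python analogue on simulated data with structural zeros is sketched below, assuming a statsmodels version recent enough to provide ZeroInflatedPoisson (the variable names and parameter values are invented for illustration).

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    rng = np.random.default_rng(11)
    n = 3_000
    x = rng.standard_normal(n)

    # Structural zeros: a latent "not at risk" group that can only produce zeros.
    at_risk = rng.random(n) < 0.6
    counts = np.where(at_risk, rng.poisson(np.exp(0.3 + 0.5 * x)), 0)

    exog = sm.add_constant(x)
    # Count part depends on x; the zero-inflation (structural-zero) part is intercept-only here.
    zip_fit = ZeroInflatedPoisson(counts, exog, exog_infl=np.ones((n, 1)),
                                  inflation="logit").fit(disp=False)
    print(zip_fit.summary())
    # The inflation intercept estimates, on the logit scale, the share of structural zeros;
    # the count coefficients describe the at-risk subpopulation.
    ```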

  20. Improving Learning in Primary Schools of Developing Countries: A Meta-Analysis of Randomized Experiments

    ERIC Educational Resources Information Center

    McEwan, Patrick J.

    2015-01-01

    I gathered 77 randomized experiments (with 111 treatment arms) that evaluated the effects of school-based interventions on learning in developing-country primary schools. On average, monetary grants and deworming treatments had mean effect sizes that were close to zero and not statistically significant. Nutritional treatments, treatments that…

  1. Matching the Statistical Model to the Research Question for Dental Caries Indices with Many Zero Counts

    PubMed Central

    Preisser, John S.; Long, D. Leann; Stamm, John W.

    2017-01-01

    Marginalized zero-inflated count regression models have recently been introduced for the statistical analysis of dental caries indices and other zero-inflated count data as alternatives to traditional zero-inflated and hurdle models. Unlike the standard approaches, the marginalized models directly estimate overall exposure or treatment effects by relating covariates to the marginal mean count. This article discusses model interpretation and model class choice according to the research question being addressed in caries research. Two datasets, one consisting of fictional dmft counts in two groups and the other on DMFS among schoolchildren from a randomized clinical trial (RCT) comparing three toothpaste formulations to prevent incident dental caries, are analysed with negative binomial hurdle (NBH), zero-inflated negative binomial (ZINB), and marginalized zero-inflated negative binomial (MZINB) models. In the first example, estimates of treatment effects vary according to the type of incidence rate ratio (IRR) estimated by the model. Estimates of IRRs in the analysis of the RCT were similar despite their distinctive interpretations. Choice of statistical model class should match the study’s purpose, while accounting for the broad decline in children’s caries experience, such that dmft and DMFS indices more frequently generate zero counts. Marginalized (marginal mean) models for zero-inflated count data should be considered for direct assessment of exposure effects on the marginal mean dental caries count in the presence of high frequencies of zero counts. PMID:28291962

  2. Small area estimation for semicontinuous data.

    PubMed

    Chandra, Hukum; Chambers, Ray

    2016-03-01

    Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
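
    A simplified fixed-effects analogue of the two-part approach described above (a logistic model for the probability of a positive value, then a linear model on the log scale for the positives), shown on simulated data; the small area random effects and the bootstrap MSE estimation of the paper are not reproduced here, and all names and values are illustrative.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n = 800
      x = rng.normal(size=n)
      X = sm.add_constant(x)

      # Semicontinuous outcome: many exact zeros, skewed (lognormal) positive values
      p_pos = 1 / (1 + np.exp(-(0.3 + 0.9 * x)))
      positive = rng.random(n) < p_pos
      y = np.where(positive, np.exp(1.0 + 0.5 * x + 0.4 * rng.normal(size=n)), 0.0)

      # Part 1: probability of a strictly positive value
      part1 = sm.Logit((y > 0).astype(int), X).fit(disp=False)

      # Part 2: linear model for log(y) given y > 0
      part2 = sm.OLS(np.log(y[y > 0]), X[y > 0]).fit()

      # Combine the two parts, using the lognormal back-transformation exp(sigma^2 / 2)
      pred_mean = part1.predict(X) * np.exp(part2.predict(X) + part2.scale / 2.0)
      print(pred_mean[:5])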

  3. A multi-assets artificial stock market with zero-intelligence traders

    NASA Astrophysics Data System (ADS)

    Ponta, L.; Raberto, M.; Cincotti, S.

    2011-01-01

    In this paper, a multi-asset artificial financial market populated by zero-intelligence traders with finite financial resources is presented. The market is characterized by different types of stocks representing firms operating in different sectors of the economy. Zero-intelligence traders follow a random allocation strategy which is constrained by finite resources, past market volatility and the allocation universe. Within this framework, stock price processes exhibit volatility clustering, fat-tailed distribution of returns and reversion to the mean. Moreover, the cross-correlations between returns of different stocks are studied using methods of random matrix theory. The probability distribution of eigenvalues of the cross-correlation matrix shows the presence of outliers, similar to those recently observed on real data for business sectors. It is worth noting that business sectors have been recovered in our framework without dividends, solely as a consequence of random restrictions on the allocation universe of zero-intelligence traders. Furthermore, in the presence of dividend-paying stocks and in the case of cash inflow added to the market, the artificial stock market exhibits the same structural results as the simulation without dividends. These results suggest a significant structural influence on the statistical properties of a multi-asset stock market.

  4. Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model

    NASA Technical Reports Server (NTRS)

    Vallejo, Jonathon; Hejduk, Matt; Stamey, James

    2015-01-01

    We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
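
    A minimal maximum-likelihood sketch of a zero-inflated Beta distribution (a point mass at zero mixed with a Beta density); this is not the Bayesian mixed model of the paper, and the parameter values and sample size are illustrative assumptions.

      import numpy as np
      from scipy import stats, optimize

      rng = np.random.default_rng(3)

      # Simulate scaled values in [0, 1): point mass at zero with probability pi, Beta otherwise
      pi_true, a_true, b_true, n = 0.3, 2.0, 5.0, 500
      is_zero = rng.random(n) < pi_true
      y = np.where(is_zero, 0.0, rng.beta(a_true, b_true, size=n))

      def negloglik(params):
          logit_pi, log_a, log_b = params
          pi = 1 / (1 + np.exp(-logit_pi))
          a, b = np.exp(log_a), np.exp(log_b)
          ll_zero = np.log(pi) * (y == 0).sum()
          ll_pos = (np.log1p(-pi) + stats.beta.logpdf(y[y > 0], a, b)).sum()
          return -(ll_zero + ll_pos)

      fit = optimize.minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead")
      logit_pi, log_a, log_b = fit.x
      print("pi, a, b =", 1 / (1 + np.exp(-logit_pi)), np.exp(log_a), np.exp(log_b))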

  5. Logistic regression for dichotomized counts.

    PubMed

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.

  6. Proposed Interoperability Readiness Level Assessment for Mission Critical Interfaces During Navy Acquisition

    DTIC Science & Technology

    2010-12-01

    This involves zeroing and recreating the interoperability arrays and other variables used in the simulation. Since the constants do not change from run… Using this algorithm, the process of encrypting/decrypting data requires very little computation, and the generation of the random pads can be

  7. Casimir rack and pinion as a miniaturized kinetic energy harvester

    NASA Astrophysics Data System (ADS)

    Miri, MirFaez; Etesami, Zahra

    2016-08-01

    We study a nanoscale machine composed of a rack and a pinion with no contact, but intermeshed via the lateral Casimir force. We adopt a simple model for the random velocity of the rack subject to external random forces, namely, a dichotomous noise with zero mean value. We show that the pinion, even when it experiences random thermal torque, can do work against a load. The device thus converts the kinetic energy of the random motions of the rack into useful work.
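
    The rack forcing described above is a dichotomous (telegraph) noise with zero mean value. A minimal sketch of how such a process can be generated; the amplitude, switching rate and step size are arbitrary assumptions, not values from the paper.

      import numpy as np

      rng = np.random.default_rng(4)

      def dichotomous_noise(n_steps, dt, amplitude=1.0, switch_rate=0.5):
          """Symmetric telegraph noise taking values +/- amplitude, with zero mean.

          The sign flips in each step with probability switch_rate * dt,
          a discretized Poisson switching process.
          """
          xi = np.empty(n_steps)
          state = amplitude if rng.random() < 0.5 else -amplitude
          for i in range(n_steps):
              if rng.random() < switch_rate * dt:
                  state = -state
              xi[i] = state
          return xi

      noise = dichotomous_noise(n_steps=100_000, dt=0.01)
      print("sample mean (should be near zero):", noise.mean())
      print("sample variance (near amplitude^2):", noise.var())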

  8. Clinical trial of a novel surface cooling system for fever control in neurocritical care patients.

    PubMed

    Mayer, Stephan A; Kowalski, Robert G; Presciutti, Mary; Ostapkovich, Noeleen D; McGann, Elaine; Fitzsimmons, Brian-Fred; Yavagal, Dileep R; Du, Y Evelyn; Naidech, Andrew M; Janjua, Nazli A; Claassen, Jan; Kreiter, Kurt T; Parra, Augusto; Commichau, Christopher

    2004-12-01

    To compare the efficacy of a novel water-circulating surface cooling system with conventional measures for treating fever in neuro-intensive care unit patients. Prospective, unblinded, randomized controlled trial. Neurologic intensive care unit in an urban teaching hospital. Forty-seven patients, the majority of whom were mechanically ventilated and sedated, with fever > or =38.3 degrees C for >2 consecutive hours after receiving 650 mg of acetaminophen. Subjects were randomly assigned to 24 hrs of treatment with a conventional water-circulating cooling blanket placed over the patient (Cincinnati SubZero, Cincinnati OH) or the Arctic Sun Temperature Management System (Medivance, Louisville CO), which employs hydrogel-coated water-circulating energy transfer pads applied directly to the trunk and thighs. Diagnoses included subarachnoid hemorrhage (60%), cerebral infarction (23%), intracerebral hemorrhage (11%), and traumatic brain injury (4%). The groups were matched in terms of baseline variables, although mean temperature was slightly higher at baseline in the Arctic Sun group (38.8 vs. 38.3 degrees C, p = .046). Compared with patients treated with the SubZero blanket (n = 24), Arctic Sun-treated patients (n = 23) experienced a 75% reduction in fever burden (median 4.1 vs. 16.1 C degrees -hrs, p = .001). Arctic Sun-treated patients also spent less percent time febrile (T > or =38.3 degrees C, 8% vs. 42%, p < .001), spent more percent time normothermic (T < or =37.2 degrees C, 59% vs. 3%, p < .001), and attained normothermia faster than the SubZero group median (2.4 vs. 8.9 hrs, p = .008). Shivering occurred more frequently in the Arctic Sun group (39% vs. 8%, p = .013). The Arctic Sun Temperature Management System is superior to conventional cooling-blanket therapy for controlling fever in critically ill neurologic patients.

  9. Two-Part and Related Regression Models for Longitudinal Data

    PubMed Central

    Farewell, V.T.; Long, D.L.; Tom, B.D.M.; Yiu, S.; Su, L.

    2017-01-01

    Statistical models that involve a two-part mixture distribution are applicable in a variety of situations. Frequently, the two parts are a model for the binary response variable and a model for the outcome variable that is conditioned on the binary response. Two common examples are zero-inflated or hurdle models for count data and two-part models for semicontinuous data. Recently, there has been particular interest in the use of these models for the analysis of repeated measures of an outcome variable over time. The aim of this review is to consider motivations for the use of such models in this context and to highlight the central issues that arise with their use. We examine two-part models for semicontinuous and zero-heavy count data, and we also consider models for count data with a two-part random effects distribution. PMID:28890906

  10. Full Bayes Poisson gamma, Poisson lognormal, and zero inflated random effects models: Comparing the precision of crash frequency estimates.

    PubMed

    Aguero-Valverde, Jonathan

    2013-01-01

    In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero inflated models. This research compares random effects, zero inflated, and zero inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of precision of posterior crash frequency estimates since the precision of these estimates is vital for ranking of sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset being analyzed, it was found that once the random effects are included in the zero inflated models, the probability of being in the zero state is drastically reduced, and the zero inflated models degenerate to their non-zero-inflated counterparts. Also, by fixing the random effects over time, the fit of the models and the precision of the crash frequency estimates are significantly increased. It was found that the rankings of the fixed-over-time random effects models are very consistent with one another. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments at the top of the ranking. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. The Expected Sample Variance of Uncorrelated Random Variables with a Common Mean and Some Applications in Unbalanced Random Effects Models

    ERIC Educational Resources Information Center

    Vardeman, Stephen B.; Wendelberger, Joanne R.

    2005-01-01

    There is a little-known but very simple generalization of the standard result that for uncorrelated random variables with common mean [mu] and variance [sigma][superscript 2], the expected value of the sample variance is [sigma][superscript 2]. The generalization justifies the use of the usual standard error of the sample mean in possibly…
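
    The result referred to above can be checked numerically. A minimal Monte Carlo sketch, assuming normally distributed, uncorrelated variables with a common mean but unequal variances; it verifies that the expected sample variance equals the average of the individual variances (the standard result being the equal-variance special case).

      import numpy as np

      rng = np.random.default_rng(5)
      mu = 3.0
      sigmas = np.array([0.5, 1.0, 2.0, 4.0])   # unequal variances, common mean
      n_rep = 200_000

      # Each column is one uncorrelated variable with mean mu and its own variance
      x = mu + rng.normal(size=(n_rep, sigmas.size)) * sigmas
      s2 = x.var(axis=1, ddof=1)                # sample variance of each replicate

      print("mean of sample variances:", s2.mean())
      print("average of sigma_i^2    :", np.mean(sigmas ** 2))  # the two should agree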

  12. Marginalized multilevel hurdle and zero-inflated models for overdispersed and correlated count data with excess zeros.

    PubMed

    Kassahun, Wondwosen; Neyens, Thomas; Molenberghs, Geert; Faes, Christel; Verbeke, Geert

    2014-11-10

    Count data are collected repeatedly over time in many applications, such as biology, epidemiology, and public health. Such data are often characterized by the following three features. First, correlation due to the repeated measures is usually accounted for using subject-specific random effects, which are assumed to be normally distributed. Second, the sample variance may exceed the mean, and hence, the theoretical mean-variance relationship is violated, leading to overdispersion. This is usually allowed for based on a hierarchical approach, combining a Poisson model with gamma distributed random effects. Third, an excess of zeros beyond what standard count distributions can predict is often handled by either the hurdle or the zero-inflated model. A zero-inflated model assumes two processes as sources of zeros and combines a count distribution with a discrete point mass as a mixture, while the hurdle model separately handles zero observations and positive counts, where then a truncated-at-zero count distribution is used for the non-zero state. In practice, however, all these three features can appear simultaneously. Hence, a modeling framework that incorporates all three is necessary, and this presents challenges for the data analysis. Such models, when conditionally specified, will naturally have a subject-specific interpretation. However, adopting their purposefully modified marginalized versions leads to a direct marginal or population-averaged interpretation for parameter estimates of covariate effects, which is the primary interest in many applications. In this paper, we present a marginalized hurdle model and a marginalized zero-inflated model for correlated and overdispersed count data with excess zero observations and then illustrate these further with two case studies. The first dataset focuses on the Anopheles mosquito density around a hydroelectric dam, while adolescents' involvement in work, to earn money and support their families or themselves, is studied in the second example. Sub-models, which result from omitting zero-inflation and/or overdispersion features, are also considered for comparison's purpose. Analysis of the two datasets showed that accounting for the correlation, overdispersion, and excess zeros simultaneously resulted in a better fit to the data and, more importantly, that omission of any of them leads to incorrect marginal inference and erroneous conclusions about covariate effects. Copyright © 2014 John Wiley & Sons, Ltd.
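
    The distinction drawn above between the zero-inflated and hurdle formulations can be made concrete through the marginal zero probability and mean implied by each. A small numerical sketch with assumed parameter values (not taken from the case studies):

      import numpy as np

      lam = 2.5   # Poisson rate (assumed)
      pi = 0.3    # ZIP: probability of a structural zero (assumed)
      p0 = 0.4    # hurdle: probability of a zero, modeled directly (assumed)

      # Zero-inflated Poisson: zeros arise from two sources (structural + sampling)
      zip_p_zero = pi + (1 - pi) * np.exp(-lam)
      zip_mean = (1 - pi) * lam

      # Poisson hurdle: zeros handled separately; positives follow a zero-truncated Poisson
      hurdle_p_zero = p0
      hurdle_mean = (1 - p0) * lam / (1 - np.exp(-lam))

      print(f"ZIP:    P(Y=0)={zip_p_zero:.3f}, E[Y]={zip_mean:.3f}")
      print(f"Hurdle: P(Y=0)={hurdle_p_zero:.3f}, E[Y]={hurdle_mean:.3f}")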

  13. A Unifying Probability Example.

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.

    2002-01-01

    Presents an example from probability and statistics that ties together several topics including the mean and variance of a discrete random variable, the binomial distribution and its particular mean and variance, the sum of independent random variables, the mean and variance of the sum, and the central limit theorem. Uses Excel to illustrate these…
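
    The original example uses Excel; a rough Python analogue of the same chain of ideas (binomial mean and variance, sums of independent variables, and the central limit theorem), with illustrative parameter values:

      import numpy as np

      rng = np.random.default_rng(6)
      n_trials, p = 20, 0.3                     # illustrative Binomial(n, p) parameters

      # Mean and variance of a single binomial variable
      print("theory :", n_trials * p, n_trials * p * (1 - p))
      x = rng.binomial(n_trials, p, size=100_000)
      print("sample :", x.mean(), x.var())

      # Sum of k independent binomials: means and variances add
      k = 30
      sums = rng.binomial(n_trials, p, size=(100_000, k)).sum(axis=1)
      print("sum    :", sums.mean(), sums.var())

      # Central limit theorem: the standardized sum is approximately standard normal
      z = (sums - k * n_trials * p) / np.sqrt(k * n_trials * p * (1 - p))
      print("P(|Z| < 1.96), expected about 0.95:", np.mean(np.abs(z) < 1.96))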

  14. Binomial leap methods for simulating stochastic chemical kinetics.

    PubMed

    Tian, Tianhai; Burrage, Kevin

    2004-12-01

    This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when larger stepsize is used. In this approach a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the numbers of molecules that undergo this reaction channel. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels. Samples for the total reaction number are not greater than the molecular number of this species. In addition, probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and significant improvement on efficiency over existing approaches. (c) 2004 American Institute of Physics.
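
    As a minimal sketch of the idea, the following simulates a binomial leap for the simplest possible system, a single first-order decay channel A -> 0; it is not the multi-channel algorithm of the paper, and the rate constant, step size and initial copy number are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(7)

      def binomial_leap_decay(a0=1000, c=0.1, tau=0.5, t_end=20.0):
          """Binomial tau-leap for the single decay channel A -> 0 with rate constant c.

          The number of firings in each leap is Binomial(A, p) with p = min(1, c * tau),
          so it can never exceed the current copy number A; a Poisson leap with mean
          c * A * tau has no such bound and can drive A negative.
          """
          t, a = 0.0, a0
          history = [(t, a)]
          p = min(1.0, c * tau)
          while t < t_end and a > 0:
              k = rng.binomial(a, p)   # reaction count bounded by the molecule count
              a -= k
              t += tau
              history.append((t, a))
          return history

      for t, a in binomial_leap_decay()[:6]:
          print(f"t={t:4.1f}  A={a}")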

  15. Zero-inflated count models for longitudinal measurements with heterogeneous random effects.

    PubMed

    Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M

    2017-08-01

    Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention and covariate-specific heterogeneity can produce biased covariate and random effect estimates. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States, with 1383 individuals.

  16. Analog model for quantum gravity effects: phonons in random fluids.

    PubMed

    Krein, G; Menezes, G; Svaiter, N F

    2010-09-24

    We describe an analog model for quantum gravity effects in condensed matter physics. The situation discussed is that of phonons propagating in a fluid with a random velocity wave equation. We consider that there are random fluctuations in the reciprocal of the bulk modulus of the system and study free phonons in the presence of Gaussian colored noise with zero mean. We show that, in this model, after performing the random averages over the noise function a free conventional scalar quantum field theory describing free phonons becomes a self-interacting model.

  17. Estimating degradation in real time and accelerated stability tests with random lot-to-lot variation: a simulation study.

    PubMed

    Magari, Robert T

    2002-03-01

    The effect of different lot-to-lot variability levels on the prediction of stability is studied based on two statistical models for estimating degradation in real-time and accelerated stability tests. Lot-to-lot variability is considered as random in both models and is attributed to two sources: variability at time zero and variability of the degradation rate. Real-time stability tests are modeled as a function of time, while accelerated stability tests are modeled as a function of time and temperature. Several data sets were simulated, and a maximum likelihood approach was used for estimation. The 95% confidence intervals for the degradation rate depend on the amount of lot-to-lot variability. When lot-to-lot degradation rate variability is relatively large (CV ≥ 8%), the estimated confidence intervals do not represent the trend for individual lots. In such cases it is recommended to analyze each lot individually. Copyright 2002 Wiley-Liss, Inc. and the American Pharmaceutical Association J Pharm Sci 91: 893-899, 2002
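
    A minimal simulation sketch of the setting described above, with lot-to-lot variability entering through a random value at time zero and a random degradation rate, and each lot fitted individually as the abstract recommends when the rate variability is large; all numerical values are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(8)

      n_lots, times = 6, np.arange(0.0, 13.0, 2.0)   # sampling times (illustrative units)
      beta0, beta1 = 100.0, -1.5                     # mean value at time zero, mean degradation rate
      cv_intercept, cv_slope = 0.02, 0.10            # lot-to-lot variability (assumed CVs)
      sigma_e = 0.8                                  # assay noise

      # Each lot receives its own random starting value and degradation rate
      intercepts = beta0 * (1 + cv_intercept * rng.normal(size=n_lots))
      slopes = beta1 * (1 + cv_slope * rng.normal(size=n_lots))

      for lot in range(n_lots):
          y = intercepts[lot] + slopes[lot] * times + sigma_e * rng.normal(size=times.size)
          slope_hat, intercept_hat = np.polyfit(times, y, 1)   # per-lot fit
          print(f"lot {lot}: estimated rate {slope_hat:6.3f} (true {slopes[lot]:6.3f})")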

  18. A prospective randomized trial on the use of Coca-Cola Zero(®) vs water for polyethylene glycol bowel preparation before colonoscopy.

    PubMed

    Seow-En, I; Seow-Choen, F

    2016-07-01

    The study aimed to determine whether Coca-Cola (Coke) Zero is a safe and effective solvent for polyethylene glycol (PEG). Between December 2013 and April 2014, 209 healthy adults (115 men, 95 women) scheduled for elective colonoscopy were randomized to use either Coke Zero (n = 100) or drinking water (n = 109) with PEG as bowel preparation. Each patient received two sachets of PEG to dissolve in 2 l of solvent, to be completed 6 h before colonoscopy. Serum electrolytes were measured before and after preparation. Bowel cleanliness and colonoscopy findings were recorded. Palatability of solution, adverse effects, time taken to complete and willingness to repeat the preparation were documented via questionnaire. Mean palatability scores in the Coke Zero group were significantly better compared with the control group (2.31 ± 0.61 vs 2.51 ± 0.63, P = 0.019), with a higher proportion willing to use the same preparation again (55% vs 43%). The mean time taken to complete the PEG + Coke Zero solution was significantly faster (74 ± 29 min vs 86 ± 31 min, P = 0.0035). The quality of bowel cleansing was also significantly better in the Coke Zero group (P = 0.0297). There was no difference in the frequency of adverse events (P = 0.759) or the polyp detection rate (32% vs 31.2%). Consumption of either preparation did not significantly affect electrolyte levels or hydration status. Coke Zero is a useful alternative solvent for PEG. It is well tolerated, more palatable, leads to quicker consumption of the bowel preparation and results in better quality cleansing. Colorectal Disease © 2015 The Association of Coloproctology of Great Britain and Ireland.

  19. Screening actuator locations for static shape control

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1990-01-01

    Correction of shape distortion due to zero-mean normally distributed errors in structural sizes which are random variables is examined. A bound on the maximum improvement in the expected value of the root-mean-square shape error is obtained. The shape correction associated with the optimal actuators is also characterized. An actuator effectiveness index is developed and shown to be helpful in screening actuator locations in the structure. The results are specialized to a simple form for truss structures composed of nominally identical members. The bound and effectiveness index are tested on a 55-m radiometer antenna truss structure. It is found that previously obtained results for optimum actuators had a performance close to the bound obtained here. Furthermore, the actuators associated with the optimum design are shown to have high effectiveness indices. Since only a small fraction of truss elements tend to have high effectiveness indices, the proposed screening procedure can greatly reduce the number of truss members that need to be considered as actuator sites.

  20. Testing the Hypothesis of a Homoscedastic Error Term in Simple, Nonparametric Regression

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    2006-01-01

    Consider the nonparametric regression model Y = m(X)+ [tau](X)[epsilon], where X and [epsilon] are independent random variables, [epsilon] has a median of zero and variance [sigma][squared], [tau] is some unknown function used to model heteroscedasticity, and m(X) is an unknown function reflecting some conditional measure of location associated…

  1. Effect of food on the pharmacokinetics of dronabinol oral solution versus dronabinol capsules in healthy volunteers

    PubMed Central

    Oh, D Alexander; Parikh, Neha; Khurana, Varun; Cognata Smith, Christina; Vetticaden, Santosh

    2017-01-01

    Dronabinol is a pharmaceutical tetrahydrocannabinol originally developed as an oral capsule. A dronabinol oral solution was recently approved, and the effects of food on absorption and bioavailability of the oral solution versus capsules were compared in an open-label, single-dose, 3-period crossover study. Healthy volunteers were randomized to either dronabinol oral solution 4.25 mg (fed) or dronabinol capsule 5 mg (fed or fasted). Dosing was separated by a 7-day washout period. Plasma pharmacokinetics were evaluated for dronabinol and its major metabolite, 11-hydroxy-delta-9-tetrahydrocannabinol (11-OH-Δ9-THC). Pharmacokinetic data were available for analysis in 54 volunteers. In the fed state, initial dronabinol absorption was faster with oral solution versus capsule (mean time to the first measurable concentration, 0.15 vs 2.02 hours, respectively), with 100% and 15% of volunteers, respectively, having detectable plasma dronabinol levels 30 minutes postdose. There was less interindividual variability in plasma dronabinol concentration during early absorption with oral solution versus capsule. Compared with the fasted state, mean area under the plasma concentration–time curve from time zero to the last measurable concentration (AUC0−t) increased by 2.1- and 2.4-fold for dronabinol oral solution and capsule, respectively, when taken with food. Mean time to maximum plasma concentration was similarly delayed for dronabinol oral solution with food (7.7 hours) and capsule with food (5.6 hours) versus capsule with fasting (1.7 hours). Under fed conditions, AUC0−t and area under the plasma concentration–time curve from time zero to infinity were similar for the oral solution versus capsule based on 11-OH-Δ9-THC levels. An appreciable food effect was observed for dronabinol oral solution and capsules. Dronabinol oral solution may offer therapeutic benefit to patients, given its rapid and lower interindividual absorption variability versus dronabinol capsule. PMID:28138268

  2. K-Means Algorithm Performance Analysis With Determining The Value Of Starting Centroid With Random And KD-Tree Method

    NASA Astrophysics Data System (ADS)

    Sirait, Kamson; Tulus; Budhiarti Nababan, Erna

    2017-12-01

    Clustering methods with high accuracy and time efficiency are necessary for the filtering process. One method that is well known and widely applied in clustering is K-Means clustering. In its application, the determination of the beginning value of the cluster center greatly affects the results of the K-Means algorithm. This research discusses the results of K-Means clustering with the starting centroids determined by a random method and by a KD-Tree method. On a data set of 1000 student academic records used to classify potential dropouts, random initial centroid determination gave an SSE value of 952972 for the quality variable and 232.48 for the GPA variable, whereas initial centroid determination by KD-Tree gave an SSE value of 504302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that K-Means clustering with initial KD-Tree centroid selection has better accuracy than K-Means clustering with random initial centroid selection.
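
    A rough sketch of the comparison on synthetic data, using scikit-learn; k-means++ seeding is used here only as a stand-in for a smarter initialization, since the KD-Tree seeding of the paper is not implemented, and the data are simulated rather than the student records.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_blobs

      # Synthetic stand-in for the data set (the real student data are not available here)
      X, _ = make_blobs(n_samples=1000, centers=4, cluster_std=1.5, random_state=0)

      for init in ("random", "k-means++"):
          km = KMeans(n_clusters=4, init=init, n_init=1, random_state=0).fit(X)
          print(f"init={init:10s}  SSE (inertia) = {km.inertia_:.1f}")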

  3. TIMING APPARATUS

    DOEpatents

    Bennett, A.E.; Geisow, J.C.H.

    1956-04-17

    The timing device comprises an escapement wheel and pallet, a spring drive to rotate the escapement wheel to a zero position, means to wind the pretensioned spring by an amount proportional to the desired signal time, and a cam mechanism to control an electrical signal switch by energizing the switch when the spring has been wound to the desired position and deenergizing it when it reaches the zero position. This device produces an accurately timed signal, variable within the control of the operator.

  4. Estimation of correlation functions by stochastic approximation.

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Wintz, P. A.

    1972-01-01

    Consideration of the autocorrelation function of a zero-mean stationary random process. The techniques are applicable to processes with nonzero mean provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both of which are based on the method of stochastic approximation and assume a functional form for the correlation function that depends on a number of parameters that are recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.

  5. Random matrices and condensation into multiple states

    NASA Astrophysics Data System (ADS)

    Sadeghi, Sina; Engel, Andreas

    2018-03-01

    In the present work, we employ methods from statistical mechanics of disordered systems to investigate static properties of condensation into multiple states in a general framework. We aim at showing how typical properties of random interaction matrices play a vital role in manifesting the statistics of condensate states. In particular, an analytical expression for the fraction of condensate states in the thermodynamic limit is provided that confirms the result of the mean number of coexisting species in a random tournament game. We also study the interplay between the condensation problem and zero-sum games with correlated random payoff matrices.

  6. Evaluation of dexamethasone phosphate delivered by ocular iontophoresis for treating noninfectious anterior uveitis.

    PubMed

    Cohen, Amy E; Assang, Carol; Patane, Michael A; From, Stephen; Korenfeld, Michael

    2012-01-01

    Determine safe, effective, iontophoretic dose(s) of EGP-437 (dexamethasone phosphate formulated for iontophoresis) in patients with noninfectious anterior uveitis; evaluate systemic drug exposures. Prospective, phase I/II, multicenter, double-masked, parallel group, randomized clinical trial. Forty outpatients with anterior uveitis. Forty of 42 randomized patients received an iontophoresis treatment in 1 qualifying eye and completed the study. Patients were randomized into 1 of 4 iontophoresis dose groups (1.6, 4.8, 10.0, or 14.0 mA-min), treated with EGP-437 via the EyeGate II Delivery System (EGDS), and followed until day 28. The main outcome measures were anterior chamber cell (ACC) scores at days 14 and 28; time to ACC score of zero; proportion of patients with an ACC score reduction from baseline of ≥ 0.5 at day 28; mean change from baseline in ACC score at day 28; and the systemic exposures of dexamethasone and dexamethasone phosphate after EGP-437 treatment with the EGDS. After a single EGP-437 treatment, 19 of 40 patients (48%) achieved an ACC score of zero at day 14. By day 28, 24 of 40 patients (60%) achieved an ACC score of zero. A Kaplan-Meier analysis demonstrated that the 1.6 mA-min dose was the most effective and revealed an inverse dose response; median days to an ACC score of zero were 11.5 days in the 1.6 mA-min group versus 31 days in the 14.0 mA-min group. Twenty-six patients (65%) had an ACC score reduction from baseline of ≥ 0.5 at day 28. The mean change in ACC score from baseline to day 28 was -2.14 with a median of -2.00. Throughout the study, the mean intraocular pressure remained within normal range and mean best-corrected visual acuity at 4 meters remained relatively stable. Most adverse events were mild; no serious adverse events were reported. Pharmacokinetics results showed low short-term systemic exposure to dexamethasone after iontophoresis; no nonocular systemic corticosteroid-mediated effects were observed. Approximately two thirds of the patients reached an ACC score of zero within 28 days, after only receiving 1 iontophoresis treatment. The lower doses seemed to be the most effective, and treatments were well-tolerated. Proprietary or commercial disclosure may be found after the references. Copyright © 2012 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  7. West-WRF Sensitivity to Sea Surface Temperature Boundary Condition in California Precipitation Forecasts of AR Related Events

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Cornuelle, B. D.; Martin, A.; Weihs, R. R.; Ralph, M.

    2017-12-01

    We evaluated the merit, for coastal precipitation forecasts, of including high-resolution sea surface temperature (SST) from blended satellite and in situ observations as a boundary condition (BC) to the Weather Research and Forecasting (WRF) mesoscale model through simple perturbation tests. Our sensitivity analyses show that the limited improvement in watershed-scale precipitation forecasts is credible. When only the SST BC is changed, uncertainty is introduced because of the artificial model-state equilibrium and the nonlinear nature of the WRF model system. With SST changes on the order of a fraction of a degree centigrade, we found that the random part of the perturbation forecast response saturates after 48 hours, when it reaches the order of magnitude of the linear response. It is important to update the SST over a shorter time period so that the independently excited nonlinear modes can cancel each other. The uncertainty in our SST configuration is quantitatively equivalent to adding spatially uncorrelated Gaussian noise with zero mean and a standard deviation of 0.05 degrees to the SST. At this random noise perturbation magnitude, the ensemble average stays within a convergent range. We also quantified the sensitivity of the forecast to SST changes, measured as the ratio of the spatial variability of the mean of the ensemble perturbations to the spatial variability of the corresponding forecast. The ratio is about 10% for surface latent heat flux, 5% for IWV, and less than 1% for surface pressure.

  8. A simulation study of capacity utilization to predict future capacity for manufacturing system sustainability

    NASA Astrophysics Data System (ADS)

    Rimo, Tan Hauw Sen; Chai Tin, Ong

    2017-12-01

    Capacity utilization (CU) measurement is an important task in a manufacturing system, especially in a make-to-order (MTO) manufacturing system with product customization, for predicting the capacity needed to meet future demand. A stochastic discrete-event simulation is developed using ARENA software to determine CU and the capacity gap (CG) for a short-run production function. This study focused on machinery breakdowns and the product defective rate as random variables in the simulation. The study found that the manufacturing system ran at 68.01% CU with a 31.99% CG. Machinery breakdowns and the product defective rate have a direct relationship with CU. Reducing the product defective rate to zero raises CU to 73.56% and lowers the CG to 26.44%, while eliminating machinery breakdowns raises CU to 93.99% and lowers the CG to 6.01%. This study helps operations staff examine CU through "what-if" analysis in order to meet future demand with a more practical and easier simulation-based approach. Further study is recommended that includes other random variables affecting CU, to bring the simulation closer to the real-life situation and support better decisions.

  9. Zero-inflated modeling of fish catch per unit area resulting from multiple gears: Application to channel catfish and shovelnose sturgeon in the Missouri River

    USGS Publications Warehouse

    Arab, A.; Wildhaber, M.L.; Wikle, C.K.; Gentry, C.N.

    2008-01-01

    Fisheries studies often employ multiple gears that result in large percentages of zero values. We considered a zero-inflated Poisson (ZIP) model with random effects to address these excessive zeros. By employing a Bayesian ZIP model that simultaneously incorporates data from multiple gears to analyze data from the Missouri River, we were able to compare gears and make more year, segment, and macrohabitat comparisons than did the original data analysis. For channel catfish Ictalurus punctatus, our results rank (highest to lowest) the mean catch per unit area (CPUA) for gears (beach seine, benthic trawl, electrofishing, and drifting trammel net); years (1998 and 1997); macrohabitats (tributary mouth, connected secondary channel, nonconnected secondary channel, and bend); and river segment zones (channelized, inter-reservoir, and least-altered). For shovelnose sturgeon Scaphirhynchus platorynchus, the mean CPUA was significantly higher for benthic trawls and drifting trammel nets; 1998 and 1997; tributary mouths, bends, and connected secondary channels; and some channelized or least-altered inter-reservoir segments. One important advantage of our approach is the ability to reliably infer patterns of relative abundance by means of multiple gears without using gear efficiencies. ?? Copyright by the American Fisheries Society 2008.

  10. Compilation of basal metabolic and blood perfusion rates in various multi-compartment, whole-body thermoregulation models

    NASA Astrophysics Data System (ADS)

    Shitzer, Avraham; Arens, Edward; Zhang, Hui

    2016-07-01

    The assignments of basal metabolic rates (BMR), basal cardiac output (BCO), and basal blood perfusion rates (BBPR) were compared in nine multi-compartment, whole-body thermoregulation models. The data are presented at three levels of detail: total body, specific body regions, and regional body tissue layers. Differences in the assignment of these quantities among the compared models increased with the level of detail, in the above order. The ranges of variability in the total body BMR was 6.5 % relative to the lowest value, with a mean of 84.3 ± 2 W, and in the BCO, it was 8 % with a mean of 4.70 ± 0.13 l/min. The least variability among the body regions is seen in the combined torso (shoulders, thorax, and abdomen: ±7.8 % BMR and ±5.9 % BBPR) and in the combined head (head, face, and neck ±9.9 % BMR and ±10.9 % BBPR), determined by the ratio of the standard deviation to the mean. Much more variability is apparent in the extremities with the most showing in the BMR of the feet (±117 %), followed by the BBPR in the arms (±61.3 %). In the tissue layers, most of the bone layers were assigned zero BMR and BBPR, except in the shoulders and in the extremities that were assigned non-zero values in a number of models. The next lowest values were assigned to the fat layers, with occasional zero values. Skin basal values were invariably non-zero but involved very low values in certain models, e.g., BBPR in the feet and the hands. Muscle layers were invariably assigned high values with the highest found in the thorax, abdomen, and legs. The brain, lung, and viscera layers were assigned the highest of all values of both basal quantities with those of the brain layers showing rather tight ranges of variability in both basal quantities. Average basal values of the "time-seasoned" models presented in this study could be useful as a first step in future modeling efforts subject to appropriate adjustment of values to conform to most recently available and reliable data.

  11. Left-Skew L Distribution Function Application in Hurricane Categories Using its Center-Pressure in Context of Warming Climate

    NASA Astrophysics Data System (ADS)

    Wang, W.

    2017-12-01

    Theory results: The Wang Wanli left-skew L distribution density function is given by the formula below; its support is the interval from -∞ to +1. Here x indicates the center pressure of a hurricane, xA represents its long-term mean, and [(x-xA)/x] is the standard random variable, with boundary conditions f(+1) = 0 and f(-∞) = 0. The standard variable is negative when x is less than xA, positive when x is greater than xA, and equal to zero when x equals xA; thus the standard variable is -∞ when x is zero and +1 when x is +∞, so the standard random variable falls in the interval from -∞ to +1. Application: in the table below, a "-" sign indicates that an individual hurricane's center pressure is less than its long-term average, and a "+" sign indicates that it is greater; the mean (xA) may also be replaced by another "standard" or "expected value". Table: multi-level classification of hurricane strength (intensity).
    Index [(X-XA)/X]% | XA/X | Category | Description | X/XA | Probability | Formula
    -∞ | +∞ | … | … | → 0 | → 0 | …
    < -900 | > 10.0 | < -15 | > extreme (Ⅵ) | < 0.10 | … | …
    -800, -900 | 9.0, 10.0 | -15 | extreme (Ⅵ) | 0.11, 0.10 | … | …
    -700, -800 | 8.0, 9.0 | -14 | extreme (Ⅴ) | 0.13, 0.11 | … | …
    -600, -700 | 7.0, 8.0 | -13 | extreme (Ⅳ) | 0.14, 0.13 | … | …
    -500, -600 | 6.0, 7.0 | -12 | extreme (Ⅲ) | 0.17, 0.14 | 0.05287 % | L(-5.0)-L(-6.0)
    -400, -500 | 5.0, 6.0 | -11 | extreme (Ⅱ) | 0.20, 0.17 | 0.003 % | L(-4.0)-L(-5.0)
    -300, -400 | 4.0, 5.0 | -10 | extreme (Ⅰ) | 0.25, 0.20 | 0.132 % | L(-3.0)-L(-4.0)
    -267, -300 | 3.67, 4.00 | -9 | strongest (Ⅲ)-superior | 0.27, 0.25 | 0.24 % | L(-2.67)-L(-3.00)
    -233, -267 | 3.33, 3.67 | -8 | strongest (Ⅱ)-medium | 0.30, 0.27 | 0.61 % | L(-2.33)-L(-2.67)
    -200, -233 | 3.00, 3.33 | -7 | strongest (Ⅰ)-inferior | 0.33, 0.30 | 1.28 % | L(-2.00)-L(-2.33)
    -167, -200 | 2.67, 3.00 | -6 | strong (Ⅲ)-superior | 0.37, 0.33 | 2.47 % | L(-1.67)-L(-2.00)
    -133, -167 | 2.33, 2.67 | -5 | strong (Ⅱ)-medium | 0.43, 0.37 | 4.43 % | L(-1.33)-L(-1.67)
    -100, -133 | 2.00, 2.33 | -4 | strong (Ⅰ)-inferior | 0.50, 0.43 | 6.69 % | L(-1.00)-L(-1.33)
    -67, -100 | 1.67, 2.00 | -3 | normal (Ⅲ)-superior | 0.60, 0.50 | 9.27 % | L(-0.67)-L(-1.00)
    -33, -67 | 1.33, 1.67 | -2 | normal (Ⅱ)-medium | 0.75, 0.60 | 11.93 % | L(-0.33)-L(-0.67)
    00, -33 | 1.00, 1.33 | -1 | normal (Ⅰ)-inferior | 1.0, 0.75 | 12.93 % | L(0.00)-L(-0.33)
    33, 00 | 0.67, 1.00 | +1 | normal | 1.49, 1.00 | 34.79 % | L(0.33)-L(0.00)
    67, 33 | 0.33, 0.67 | +2 | weak | 3.03, 1.49 | 12.12 % | L(0.67)-L(0.33)
    100, 67 | 0.00, 0.33 | +3 | weaker | ∞, 3.03 | 3.08 % | L(1.00)-L(0.67)

  12. Generating and controlling homogeneous air turbulence using random jet arrays

    NASA Astrophysics Data System (ADS)

    Carter, Douglas; Petersen, Alec; Amili, Omid; Coletti, Filippo

    2016-12-01

    The use of random jet arrays, already employed in water tank facilities to generate zero-mean-flow homogeneous turbulence, is extended to air as a working fluid. A novel facility is introduced that uses two facing arrays of individually controlled jets (256 in total) to force steady homogeneous turbulence with negligible mean flow, shear, and strain. Quasi-synthetic jet pumps are created by expanding pressurized air through small straight nozzles and are actuated by fast-response low-voltage solenoid valves. Velocity fields, two-point correlations, energy spectra, and second-order structure functions are obtained from 2D PIV and are used to characterize the turbulence from the integral to the Kolmogorov scales. Several metrics are defined to quantify how well zero-mean-flow homogeneous turbulence is approximated for a wide range of forcing and geometric parameters. With increasing jet firing time duration, both the velocity fluctuations and the integral length scales are augmented and therefore the Reynolds number is increased. We reach a Taylor-microscale Reynolds number of 470, a large-scale Reynolds number of 74,000, and an integral-to-Kolmogorov length scale ratio of 680. The volume of the present homogeneous turbulence, the largest reported to date in a zero-mean-flow facility, is much larger than the integral length scale, allowing for the natural development of the energy cascade. The turbulence is found to be anisotropic irrespective of the distance between the jet arrays. Fine grids placed in front of the jets are effective at modulating the turbulence, reducing both velocity fluctuations and integral scales. Varying the jet-to-jet spacing within each array has no effect on the integral length scale, suggesting that this is dictated by the length scale of the jets.

  13. Sampling Strategies for Evaluating the Rate of Adventitious Transgene Presence in Non-Genetically Modified Crop Fields.

    PubMed

    Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine

    2017-09-01

    According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
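
    A minimal sketch of the ratio-reweighting idea, with a simulated auxiliary variable standing in for the gene-flow model output; the population size, auxiliary values and presence rates are assumptions, not the maize field data.

      import numpy as np

      rng = np.random.default_rng(10)

      # Hypothetical field of grain-sampling locations; aux mimics a modelled cross-pollination rate
      n_pop = 20_000
      aux = rng.gamma(shape=0.5, scale=0.004, size=n_pop)
      true_rate = np.clip(aux * rng.lognormal(0.0, 0.5, n_pop), 0.0, 1.0)

      n_sample = 200
      idx = rng.choice(n_pop, size=n_sample, replace=False)

      srs_est = true_rate[idx].mean()                                   # simple random sampling
      ratio_est = true_rate[idx].mean() / aux[idx].mean() * aux.mean()  # ratio reweighting

      print(f"true field rate : {true_rate.mean():.5f}")
      print(f"SRS estimate    : {srs_est:.5f}")
      print(f"ratio estimate  : {ratio_est:.5f}")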

  14. Neutrophil chemotaxis in sickle cell anaemia, sickle cell beta zero thalassaemia, and after splenectomy.

    PubMed Central

    Donadi, E A; Falcão, R P

    1987-01-01

    Neutrophil chemotaxis was evaluated in 28 patients with sickle cell anaemia, 10 patient with sickle cell beta zero thalassaemia, 25 patients who had undergone splenectomy, and 38 controls. The mean distance migrated by patients' neutrophils was not significantly different from that of neutrophils from controls. Although several immunological variables have been reported to be changed after loss of splenic function, we were unable to show a defect in neutrophil chemotaxis that could account for the increased susceptibility to infection. PMID:3611395

  15. Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment

    NASA Astrophysics Data System (ADS)

    Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit

    2010-10-01

    The purpose of the paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness subject to a predefined maximum risk tolerance and a minimum expected return. The security returns in the objectives and constraints are assumed to be fuzzy random variables, and the vagueness of these fuzzy random variables is transformed into fuzzy variables similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified with a numerical example extracted from the Bombay Stock Exchange (BSE). The parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of past data.

  16. Modeling of Transionospheric Radio Propagation

    DTIC Science & Technology

    1975-08-01

    entitled RFMOD, contains the main elements of the scattering theory, the morphological model for ionospheric irregularity strength and other...phasor lies within an elemental area on the complex plane. To begin, we write E as the resultant of its long-term mean (E) and a zero-mean, randomly...totally defined by either of these sets of three parameters (i.e., the three real variances or the real R and the real and imaginary parts of B). Most

  17. Robust Optimum Invariant Tests for Random MANOVA Models.

    DTIC Science & Technology

    1986-10-01

    are assumed to be independent normal with zero mean and dispersions σ² and σ₁², respectively. Roy and Gnanadesikan (1959) considered the problem of...Part II: The multivariate case. Ann. Math. Statist. 31, 939-968. [7] Roy, S.N. and Gnanadesikan, R. (1959). Some contributions to ANOVA in one or more

  18. GMPR: A robust normalization method for zero-inflated count data with application to microbiome sequencing data.

    PubMed

    Chen, Li; Reeve, James; Zhang, Lujun; Huang, Shengbing; Wang, Xuefeng; Chen, Jun

    2018-01-01

    Normalization is the first critical step in microbiome sequencing data analysis used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose the geometric mean of pairwise ratios (GMPR), a simple but effective normalization method, for zero-inflated sequencing data such as microbiome data. Simulation studies and analyses of real datasets demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.
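
    A minimal sketch of the normalization idea as described in the abstract (for each pair of samples, the median of count ratios over taxa that are non-zero in both, combined by a geometric mean); this is one reading of that description, not the authors' reference implementation, and the toy counts are simulated.

      import numpy as np

      def gmpr_size_factors(counts):
          """Geometric-mean-of-pairwise-ratios style size factors.

          counts: (n_samples, n_taxa) array of non-negative counts. For each pair of
          samples, take the median of count ratios over taxa non-zero in both; a
          sample's size factor is the geometric mean of its pairwise medians.
          """
          n = counts.shape[0]
          factors = np.zeros(n)
          for i in range(n):
              medians = []
              for j in range(n):
                  if i == j:
                      continue
                  shared = (counts[i] > 0) & (counts[j] > 0)
                  if shared.any():
                      medians.append(np.median(counts[i, shared] / counts[j, shared]))
              factors[i] = np.exp(np.mean(np.log(medians)))
          return factors

      rng = np.random.default_rng(11)
      toy = rng.poisson(lam=rng.gamma(0.5, 5.0, size=(6, 40)))   # sparse toy count table
      print(gmpr_size_factors(toy))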

  19. Properties of behavior under different random ratio and random interval schedules: A parametric study.

    PubMed

    Dembo, M; De Penfold, J B; Ruiz, R; Casalta, H

    1985-03-01

    Four pigeons were trained to peck a key under different values of a temporally defined independent variable (T) and different probabilities of reinforcement (p). Parameter T is a fixed repeating time cycle and p the probability of reinforcement for the first response of each cycle T. Two dependent variables were used: mean response rate and mean postreinforcement pause. For all values of p a critical value for the independent variable T was found (T = 1 sec) in which marked changes took place in response rate and postreinforcement pauses. Behavior typical of random ratio schedules was obtained at T < 1 sec and behavior typical of random interval schedules at T > 1 sec. Copyright © 1985. Published by Elsevier B.V.

  20. Dynamic stability of spinning pretwisted beams subjected to axial random forces

    NASA Astrophysics Data System (ADS)

    Young, T. H.; Gau, C. Y.

    2003-11-01

    This paper studies the dynamic stability of a pretwisted cantilever beam spinning along its longitudinal axis and subjected to an axial random force at the free end. The axial force is assumed as the sum of a constant force and a random process with a zero mean. Due to this axial force, the beam may experience parametric random instability. In this work, the finite element method is first applied to yield discretized system equations. The stochastic averaging method is then adopted to obtain Ito's equations for the response amplitudes of the system. Finally the mean-square stability criterion is utilized to determine the stability condition of the system. Numerical results show that the stability boundary of the system converges as the first three modes are taken into calculation. Before the convergence is reached, the stability condition predicted is not conservative enough.

  1. Logistic quantile regression provides improved estimates for bounded avian counts: a case study of California Spotted Owl fledgling production

    Treesearch

    Brian S. Cade; Barry R. Noon; Rick D. Scherer; John J. Keane

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical...

  2. Image Processing, Coding, and Compression with Multiple-Point Impulse Response Functions.

    NASA Astrophysics Data System (ADS)

    Stossel, Bryan Joseph

    1995-01-01

    Aspects of image processing, coding, and compression with multiple-point impulse response functions are investigated. Topics considered include characterization of the corresponding random-walk transfer function, image recovery for images degraded by the multiple-point impulse response, and the application of the blur function to image coding and compression. It is found that although the zeros of the real and imaginary parts of the random-walk transfer function occur in continuous, closed contours, the zeros of the transfer function occur at isolated spatial frequencies. Theoretical calculations of the average number of zeros per area are in excellent agreement with experimental results obtained from computer counts of the zeros. The average number of zeros per area is proportional to the standard deviations of the real part of the transfer function as well as the first partial derivatives. Statistical parameters of the transfer function are calculated including the mean, variance, and correlation functions for the real and imaginary parts of the transfer function and their corresponding first partial derivatives. These calculations verify the assumptions required in the derivation of the expression for the average number of zeros. Interesting results are found for the correlations of the real and imaginary parts of the transfer function and their first partial derivatives. The isolated nature of the zeros in the transfer function and its characteristics at high spatial frequencies result in largely reduced reconstruction artifacts and excellent reconstructions are obtained for distributions of impulses consisting of 25 to 150 impulses. The multiple-point impulse response obscures original scenes beyond recognition. This property is important for secure transmission of data on many communication systems. The multiple-point impulse response enables the decoding and restoration of the original scene with very little distortion. Images prefiltered by the random-walk transfer function yield greater compression ratios than are obtained for the original scene. The multiple-point impulse response decreases the bit rate approximately 40-70% and affords near distortion-free reconstructions. Due to the lossy nature of transform-based compression algorithms, noise reduction measures must be incorporated to yield acceptable reconstructions after decompression.

  3. EFFECTIVE ACIDITY CONSTANT BEHAVIOR NEAR ZERO CHARGE CONDITIONS

    EPA Science Inventory

    Surface site (>SOH group) acidity reactions require expressions of the form: Ka = [>SOH(n-1)^(z-1)] · a(H+) · exp(-ΔG/RT) / [>SOH(n)^z] (where all variables have their usual meaning). One can rearrange this expression to generate an effective acidity constant historically defined as: Qa = Ka...

  4. Mean convergence theorems and weak laws of large numbers for weighted sums of random variables under a condition of weighted integrability

    NASA Astrophysics Data System (ADS)

    Ordóñez Cabrera, Manuel; Volodin, Andrei I.

    2005-05-01

    From the classical notion of uniform integrability of a sequence of random variables, a new concept of integrability (called h-integrability) is introduced for an array of random variables, concerning an array of constants. We prove that this concept is weaker than other previous related notions of integrability, such as Cesàro uniform integrability [Chandra, Sankhya Ser. A 51 (1989) 309-317], uniform integrability concerning the weights [Ordóñez Cabrera, Collect. Math. 45 (1994) 121-132] and Cesàro [alpha]-integrability [Chandra and Goswami, J. Theoret. Probab. 16 (2003) 655-669]. Under this condition of integrability and appropriate conditions on the array of weights, mean convergence theorems and weak laws of large numbers for weighted sums of an array of random variables are obtained when the random variables are subject to some special kinds of dependence: (a) rowwise pairwise negative dependence, (b) rowwise pairwise non-positive correlation, (c) when the sequence of random variables in every row is [phi]-mixing. Finally, we consider the general weak law of large numbers in the sense of Gut [Statist. Probab. Lett. 14 (1992) 49-52] under this new condition of integrability for a Banach space setting.

  5. Modelling rainfall amounts using mixed-gamma model for Kuantan district

    NASA Astrophysics Data System (ADS)

    Zakaria, Roslinazairimah; Moslim, Nor Hafizah

    2017-05-01

    Efficient design of flood mitigation and construction of crop growth models depend upon a good understanding of the rainfall process and its characteristics. The gamma distribution is usually used to model nonzero rainfall amounts. In this study, the mixed-gamma model is applied to accommodate both zero and nonzero rainfall amounts. The mixed-gamma model presented is for the independent case. Formulae for the mean and variance are derived for the sum of two and three independent mixed-gamma variables, respectively. Firstly, the gamma distribution is used to model the nonzero rainfall amounts and the parameters of the distribution (shape and scale) are estimated using the maximum likelihood estimation method. Then, the mixed-gamma model is defined for both zero and nonzero rainfall amounts simultaneously. The derived formulae for the mean and variance of the sum of two and three independent mixed-gamma variables are tested using the monthly rainfall amounts from rainfall stations within Kuantan district in Pahang, Malaysia. Based on the Kolmogorov-Smirnov goodness of fit test, the results demonstrate that the distribution of the observed sums of rainfall amounts is not significantly different, at the 5% significance level, from that of the generated sums of independent mixed-gamma variables. The methodology and formulae demonstrated can be applied to find the sum of more than three independent mixed-gamma variables.
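
    The mixed-gamma construction is easy to check numerically. The sketch below is not the paper's code: the station parameters are hypothetical, and the model is simply zero with probability 1 - p and Gamma(shape, scale) otherwise. It compares the simulated mean and variance of the sum of two independent such variables with the closed-form moments E[X] = p·shape·scale and Var(X) = p·shape·scale²·(1 + shape·(1 - p)); because the stations are assumed independent, the moments of the sum are just the sums of the per-station moments.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_gamma_sample(n, p, shape, scale, rng):
    """Draw n values from a mixed-gamma model: zero with probability 1 - p,
    Gamma(shape, scale) with probability p (illustrative parameterisation)."""
    wet = rng.random(n) < p
    x = np.zeros(n)
    x[wet] = rng.gamma(shape, scale, wet.sum())
    return x

def mixed_gamma_mean_var(p, shape, scale):
    """Analytic mean and variance of a single mixed-gamma variable."""
    mean = p * shape * scale
    var = p * shape * scale**2 * (1.0 + shape * (1.0 - p))
    return mean, var

# Two hypothetical stations; for the sum, the means and variances simply add.
params = [(0.7, 1.8, 25.0), (0.5, 2.2, 30.0)]    # (p, shape, scale) per station
n = 200_000
total = sum(mixed_gamma_sample(n, *prm, rng) for prm in params)
m_sum = sum(mixed_gamma_mean_var(*prm)[0] for prm in params)
v_sum = sum(mixed_gamma_mean_var(*prm)[1] for prm in params)
print(m_sum, total.mean())    # analytic vs simulated mean of the sum
print(v_sum, total.var())     # analytic vs simulated variance of the sum
```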

  6. Genetic analysis of resistance to ticks, gastrointestinal nematodes and Eimeria spp. in Nellore cattle.

    PubMed

    Passafaro, Tiago Luciano; Carrera, Juan Pablo Botero; dos Santos, Livia Loiola; Raidan, Fernanda Santos Silva; dos Santos, Dalinne Chrystian Carvalho; Cardoso, Eduardo Penteado; Leite, Romário Cerqueira; Toral, Fabio Luiz Buranelo

    2015-06-15

    The aim of the present study was to obtain genetic parameters for resistance to ticks, gastrointestinal nematodes (worms) and Eimeria spp. in Nellore cattle, analyze the inclusion of resistance traits in Nellore breeding programs and evaluate genetic selection as a complementary tool in parasite control programs. Counts of ticks, gastrointestinal nematode eggs and Eimeria spp. oocysts per gram of feces, totaling 4270, 3872 and 3872 records from 1188, 1142 and 1142 animals, respectively, aged 146 to 597 days, were used. The animals were classified as resistant (counts equal to zero) or susceptible (counts above zero) to each parasite. The statistical models included systematic effects of contemporary groups and the mean trajectory. The random effects included additive genetic effects, direct permanent environmental effects and residual effects. The mean trajectory and random effects were modeled with linear Legendre polynomials for all traits except for the mean trajectory of resistance to Eimeria spp., which employed the cubic polynomial. Heritability estimates were of low to moderate magnitude and ranged from 0.06 to 0.30, 0.06 to 0.33 and 0.04 to 0.33 for resistance to ticks, gastrointestinal nematodes and Eimeria spp., respectively. The posterior means of genetic and environmental correlations for the same trait at different ages (205, 365, 450 and 550 days) were favorable at adjacent ages and unfavorable at distant ages. In general, the posterior means of the genetic and environmental correlations between resistance traits were low, and the high-density intervals were large and included zero in many cases. The heritability estimates support the inclusion of resistance to ticks, gastrointestinal nematodes and Eimeria spp. in Nellore breeding programs. Genetic selection can increase the frequency of resistant animals and be used as a complementary tool in parasite control programs. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Nonlinear Estimation of Discrete-Time Signals Under Random Observation Delay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caballero-Aguila, R.; Jimenez-Lopez, J. D.; Hermoso-Carazo, A.

    2008-11-06

    This paper presents an approximation to the nonlinear least-squares estimation problem for discrete-time stochastic signals using nonlinear observations with additive white noise which can be randomly delayed by one sampling time. The observation delay is modelled by a sequence of independent Bernoulli random variables whose values, zero or one, indicate whether the real observation arrives on time or is delayed and, hence, whether the available measurement for estimating the signal is up-to-date. Assuming that the state-space model generating the signal is unknown and only the covariance functions of the processes involved in the observation equation are available, a filtering algorithm based on linear approximations of the real observations is proposed.
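
    As a concrete picture of the delay mechanism described above (a sketch only; the signal, noise levels, and delay probability are made up, and the filtering algorithm itself is not reproduced), the measurement available at each step is either the current observation or the previous one, selected by an independent Bernoulli variable:

```python
import numpy as np

rng = np.random.default_rng(1)

n, p_delay = 200, 0.3
state = np.cumsum(rng.normal(0.0, 0.1, n))        # some hidden signal (illustrative)
z = np.sin(state) + rng.normal(0.0, 0.05, n)      # nonlinear observation plus white noise

gamma = rng.random(n) < p_delay                   # independent Bernoulli delay indicators
z_prev = np.r_[z[0], z[:-1]]                      # previous observation (nothing earlier at t = 0)
y = np.where(gamma, z_prev, z)                    # what the estimator actually receives
# The estimator sees only y (and knows p_delay); gamma itself is not observed.
print(gamma.mean(), np.mean((y - z) ** 2))        # realised delay rate and induced measurement error
```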

  8. Quantile regression applied to spectral distance decay

    USGS Publications Warehouse

    Rocchini, D.; Cade, B.S.

    2008-01-01

    Remotely sensed imagery has long been recognized as a powerful support for characterizing and estimating biodiversity. Spectral distance among sites has proven to be a powerful approach for detecting species composition variability. Regression analysis of species similarity versus spectral distance allows us to quantitatively estimate the amount of turnover in species composition with respect to spectral and ecological variability. In classical regression analysis, the residual sum of squares is minimized for the mean of the dependent variable distribution. However, many ecological data sets are characterized by a high number of zeroes that add noise to the regression model. Quantile regressions can be used to evaluate trend in the upper quantiles rather than a mean trend across the whole distribution of the dependent variable. In this letter, we used ordinary least squares (OLS) and quantile regressions to estimate the decay of species similarity versus spectral distance. The achieved decay rates were statistically nonzero (p < 0.01), considering both OLS and quantile regressions. Nonetheless, the OLS regression estimate of the mean decay rate was only half the decay rate indicated by the upper quantiles. Moreover, the intercept value, representing the similarity reached when the spectral distance approaches zero, was very low compared with the intercepts of the upper quantiles, which detected high species similarity when habitats are more similar. In this letter, we demonstrated the power of using quantile regressions applied to spectral distance decay to reveal species diversity patterns otherwise lost or underestimated by OLS regression. © 2008 IEEE.
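
    The contrast between a mean (OLS) decay estimate and an upper-quantile estimate can be illustrated on simulated similarity data containing excess zeros. The sketch below uses statsmodels; the variable names, the 40% zero fraction, and the effect sizes are entirely hypothetical and are not the data of the letter.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Hypothetical data: species similarity decaying with spectral distance,
# with many zeros scattered below the upper envelope.
n = 500
dist = rng.uniform(0, 1, n)
envelope = np.clip(0.9 - 0.8 * dist + rng.normal(0, 0.05, n), 0, 1)
zero_mask = rng.random(n) < 0.4                  # excess zeros, independent of distance here
sim = np.where(zero_mask, 0.0, envelope)
df = pd.DataFrame({"sim": sim, "dist": dist})

ols_fit = smf.ols("sim ~ dist", data=df).fit()
q90_fit = smf.quantreg("sim ~ dist", data=df).fit(q=0.90)

print("OLS slope:           ", ols_fit.params["dist"])
print("90th-quantile slope: ", q90_fit.params["dist"])
# The mean (OLS) decay rate is diluted by the zeros; the upper quantile
# tracks the envelope and recovers a steeper decay rate.
```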

  9. Spectral distance decay: Assessing species beta-diversity by quantile regression

    USGS Publications Warehouse

    Rocchinl, D.; Nagendra, H.; Ghate, R.; Cade, B.S.

    2009-01-01

    Remotely sensed data represents key information for characterizing and estimating biodiversity. Spectral distance among sites has proven to be a powerful approach for detecting species composition variability. Regression analysis of species similarity versus spectral distance may allow us to quantitatively estimate how beta-diversity in species changes with respect to spectral and ecological variability. In classical regression analysis, the residual sum of squares is minimized for the mean of the dependent variable distribution. However, many ecological datasets are characterized by a high number of zeroes that can add noise to the regression model. Quantile regression can be used to evaluate trend in the upper quantiles rather than a mean trend across the whole distribution of the dependent variable. In this paper, we used ordinary least square (ols) and quantile regression to estimate the decay of species similarity versus spectral distance. The achieved decay rates were statistically nonzero (p < 0.05) considering both ols and quantile regression. Nonetheless, ols regression estimate of mean decay rate was only half the decay rate indicated by the upper quantiles. Moreover, the intercept value, representing the similarity reached when spectral distance approaches zero, was very low compared with the intercepts of upper quantiles, which detected high species similarity when habitats are more similar. In this paper we demonstrated the power of using quantile regressions applied to spectral distance decay in order to reveal species diversity patterns otherwise lost or underestimated by ordinary least square regression. © 2009 American Society for Photogrammetry and Remote Sensing.

  10. Multivariate random-parameters zero-inflated negative binomial regression model: an application to estimate crash frequencies at intersections.

    PubMed

    Dong, Chunjiao; Clarke, David B; Yan, Xuedong; Khattak, Asad; Huang, Baoshan

    2014-09-01

    Crash data are collected through police reports and integrated with road inventory data for further analysis. Integrated police reports and inventory data yield correlated multivariate data for roadway entities (e.g., segments or intersections). Analysis of such data reveals important relationships that can help target high-risk situations and develop safety countermeasures. To understand relationships between crash frequencies and associated variables, while taking full advantage of the available data, multivariate random-parameters models are appropriate since they can simultaneously consider the correlation among the specific crash types and account for unobserved heterogeneity. However, a key issue with correlated multivariate data is that the number of crash-free observations increases as crash counts are split into many categories. In this paper, we describe a multivariate random-parameters zero-inflated negative binomial (MRZINB) regression model for jointly modeling crash counts. The full Bayesian method is employed to estimate the model parameters. Crash frequencies at urban signalized intersections in Tennessee are analyzed. The paper investigates the performance of multivariate zero-inflated negative binomial (MZINB) and MRZINB regression models in establishing the relationship between crash frequencies, pavement conditions, traffic factors, and geometric design features of roadway intersections. Compared to the MZINB model, the MRZINB model identifies additional statistically significant factors and provides better goodness of fit in developing the relationships. The empirical results show that the MRZINB model possesses most of the desirable statistical properties in terms of its ability to accommodate unobserved heterogeneity and excess zero counts in correlated data. Notably, in the random-parameters MZINB model, the estimated parameters vary significantly across intersections for different crash types. Copyright © 2014 Elsevier Ltd. All rights reserved.
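
    For orientation, the sketch below fits a plain univariate zero-inflated negative binomial by direct maximum likelihood to simulated counts. It is a deliberately simplified stand-in for the multivariate random-parameters Bayesian model of the paper, and every parameter value and variable name is hypothetical.

```python
import numpy as np
from scipy import optimize, special, stats

rng = np.random.default_rng(3)

# --- simulate hypothetical intersection-like counts ------------------------
n = 2000
x = rng.normal(size=n)                           # one covariate (e.g., a traffic measure)
mu_true = np.exp(0.5 + 0.6 * x)                  # NB mean in the "at risk" regime
alpha_true, pi_true = 0.8, 0.3                   # over-dispersion and structural-zero probability
r_true = 1.0 / alpha_true
y = rng.negative_binomial(r_true, r_true / (r_true + mu_true))
y[rng.random(n) < pi_true] = 0                   # zero-inflate

# --- ZINB negative log-likelihood (NB2 parameterisation) -------------------
def negloglik(theta):
    b0, b1, logit_pi, log_alpha = theta
    mu = np.exp(b0 + b1 * x)
    pi = special.expit(logit_pi)
    r = np.exp(-log_alpha)                       # r = 1 / alpha
    nb_logpmf = stats.nbinom.logpmf(y, r, r / (r + mu))
    ll = np.where(y == 0,
                  np.logaddexp(np.log(pi), np.log1p(-pi) + nb_logpmf),
                  np.log1p(-pi) + nb_logpmf)
    return -ll.sum()

fit = optimize.minimize(negloglik, x0=np.zeros(4), method="BFGS")
print(fit.x)   # estimates of (b0, b1, logit(pi), log(alpha)); compare with the simulated values
```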

  11. A density-functional study of the phase diagram of cementite-type (Fe,Mn)3C at absolute zero temperature.

    PubMed

    Von Appen, Jörg; Eck, Bernhard; Dronskowski, Richard

    2010-11-15

    The phase diagram of (Fe1-xMnx)3C has been investigated by means of density-functional theory (DFT) calculations at absolute zero temperature. The atomic distributions of the metal atoms are not random-like, as previously proposed; instead, we find three distinct, ordered regions within the phase range. The key role is played by the 8d metal site, which forms, as a function of the composition, differing magnetic layers, and these dominate the physical properties. We calculated the magnetic moments, the volumes, and the enthalpies of mixing and formation for 13 different compositions, and we explain the changes of the macroscopic properties with changes in the electronic and magnetic structures by means of bonding analyses using the Crystal Orbital Hamilton Population (COHP) technique. © 2010 Wiley Periodicals, Inc.

  12. An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions

    ERIC Educational Resources Information Center

    Radhakrishnan, R.; Choudhury, Askar

    2009-01-01

    Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…

  13. A maximally selected test of symmetry about zero.

    PubMed

    Laska, Eugene; Meisner, Morris; Wanderling, Joseph

    2012-11-20

    The problem of testing symmetry about zero has a long and rich history in the statistical literature. We introduce a new test that sequentially discards observations whose absolute value is below increasing thresholds defined by the data. McNemar's statistic is obtained at each threshold and the largest is used as the test statistic. We obtain the exact distribution of this maximally selected McNemar and provide tables of critical values and a program for computing p-values. Power is compared with the t-test, the Wilcoxon Signed Rank Test and the Sign Test. The new test, MM, is slightly less powerful than the t-test and Wilcoxon Signed Rank Test for symmetric normal distributions with nonzero medians and substantially more powerful than all three tests for asymmetric mixtures of normal random variables with or without zero medians. The motivation for this test derives from the need to appraise the safety profile of new medications. If pre and post safety measures are obtained, then under the null hypothesis, the variables are exchangeable and the distribution of their difference is symmetric about a zero median. Large pre-post differences are the major concern of a safety assessment. The discarded small observations are not particularly relevant to safety and can reduce power to detect important asymmetry. The new test was utilized on data from an on-road driving study performed to determine if a hypnotic, a drug used to promote sleep, has next day residual effects. Copyright © 2012 John Wiley & Sons, Ltd.
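
    A rough computational sketch of the statistic described above: at each data-driven threshold, observations with smaller absolute values are discarded, a McNemar-type statistic (n+ - n-)²/(n+ + n-) is computed from the signs of the remaining differences, and the maximum over thresholds is taken. The sign-flip p-value below is only a randomisation approximation under symmetry, not the exact null distribution derived in the paper; the data are simulated.

```python
import numpy as np

def max_selected_mcnemar(d, rng=None, n_flips=1000):
    """Maximally selected McNemar-type statistic for symmetry about zero,
    with a sign-flip approximation to the p-value (the paper supplies the
    exact null distribution and tables instead)."""
    d = np.asarray(d, dtype=float)
    d = d[d != 0]
    cuts = np.concatenate(([0.0], np.sort(np.abs(d))[:-1]))   # increasing, data-defined thresholds

    def statistic(x):
        best = 0.0
        for c in cuts:
            kept = x[np.abs(x) > c]                # discard small absolute values
            n_pos, n_neg = np.sum(kept > 0), np.sum(kept < 0)
            if n_pos + n_neg > 0:
                best = max(best, (n_pos - n_neg) ** 2 / (n_pos + n_neg))
        return best

    obs = statistic(d)
    if rng is None:
        rng = np.random.default_rng(0)
    flips = [statistic(d * rng.choice([-1.0, 1.0], size=d.size)) for _ in range(n_flips)]
    return obs, float(np.mean(np.asarray(flips) >= obs))

# Example: pre/post safety differences, mostly symmetric but with a heavy right tail.
rng = np.random.default_rng(4)
diff = np.concatenate([rng.normal(0, 1, 180), rng.normal(3, 1, 20)])
print(max_selected_mcnemar(diff, rng))
```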

  14. Marginalized zero-inflated negative binomial regression with application to dental caries

    PubMed Central

    Preisser, John S.; Das, Kalyan; Long, D. Leann; Divaris, Kimon

    2015-01-01

    The zero-inflated negative binomial regression model (ZINB) is often employed in diverse fields such as dentistry, health care utilization, highway safety, and medicine to examine relationships between exposures of interest and overdispersed count outcomes exhibiting many zeros. The regression coefficients of ZINB have latent class interpretations for a susceptible subpopulation at risk for the disease/condition under study with counts generated from a negative binomial distribution and for a non-susceptible subpopulation that provides only zero counts. The ZINB parameters, however, are not well-suited for estimating overall exposure effects, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. In this paper, a marginalized zero-inflated negative binomial regression (MZINB) model for independent responses is proposed to model the population marginal mean count directly, providing straightforward inference for overall exposure effects based on maximum likelihood estimation. Through simulation studies, the finite sample performance of MZINB is compared to marginalized zero-inflated Poisson, Poisson, and negative binomial regression. The MZINB model is applied in the evaluation of a school-based fluoride mouthrinse program on dental caries in 677 children. PMID:26568034

  15. Sampling strategies for subsampled segmented EPI PRF thermometry in MR guided high intensity focused ultrasound

    PubMed Central

    Odéen, Henrik; Todd, Nick; Diakite, Mahamadou; Minalga, Emilee; Payne, Allison; Parker, Dennis L.

    2014-01-01

    Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm3 FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations. PMID:25186406

  16. Bioavailability study of dronabinol oral solution versus dronabinol capsules in healthy volunteers

    PubMed Central

    Parikh, Neha; Kramer, William G; Khurana, Varun; Cognata Smith, Christina; Vetticaden, Santosh

    2016-01-01

    Background Dronabinol, a pharmaceutical Δ-9-tetrahydrocannabinol, was originally developed as an oral capsule. This study evaluated the bioavailability of a new formulation, dronabinol oral solution, versus a dronabinol capsule formulation. Methods In an open-label, four-period, single-dose, crossover study, healthy volunteers were randomly assigned to one of two treatment sequences (T-R-T-R and R-T-R-T; T = dronabinol 4.25 mg oral solution and R = dronabinol 5 mg capsule) under fasted conditions, with a minimum 7-day washout period between doses. Analyses were performed on venous blood samples drawn 15 minutes to 48 hours postdose, and dronabinol concentrations were assayed by liquid chromatography–tandem mass spectrometry. Results Fifty-one of 52 individuals had pharmacokinetic data for analysis. The 90% confidence interval of the geometric mean ratio (oral solution/capsule) for dronabinol was within the 80%–125% bioequivalence range for area under the plasma concentration–time curve (AUC) from time zero to last measurable concentration (AUC0–t) and AUC from time zero to infinity (AUC0–∞). Maximum plasma concentration was also bioequivalent for the two dronabinol formulations. Intraindividual variability in AUC0–∞ was >60% lower for dronabinol oral solution 4.25 mg versus dronabinol capsule 5 mg. Plasma dronabinol concentrations were detected within 15 minutes postdose in 100% of patients when receiving oral solution and in <25% of patients when receiving capsules. Conclusion Single-dose dronabinol oral solution 4.25 mg was bioequivalent to dronabinol capsule 5 mg under fasted conditions. Dronabinol oral solution formulation may provide an easy-to-swallow administration option with lower intraindividual variability as well as more rapid absorption versus dronabinol capsules. PMID:27785111
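
    The 80%-125% criterion used here can be illustrated with a simplified paired analysis on the log scale (a full crossover analysis would also model sequence and period effects; the numbers below are simulated, not study data): the 90% confidence interval for the geometric mean ratio is obtained by exponentiating a t-interval for the mean within-subject log difference.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical paired log-AUC values for 51 subjects (test vs reference formulation).
n = 51
log_ref = rng.normal(np.log(120.0), 0.35, n)
log_test = log_ref + rng.normal(np.log(1.02), 0.10, n)   # small formulation effect

d = log_test - log_ref                        # within-subject log differences
se = d.std(ddof=1) / np.sqrt(n)
t90 = stats.t.ppf(0.95, df=n - 1)             # two-sided 90% confidence interval
ci = np.exp([d.mean() - t90 * se, d.mean() + t90 * se])
gmr = np.exp(d.mean())

print(f"GMR = {gmr:.3f}, 90% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
print("bioequivalent (80%-125% rule):", 0.80 <= ci[0] and ci[1] <= 1.25)
```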

  17. Computer simulation of random variables and vectors with arbitrary probability distribution laws

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.

    1981-01-01

    Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n-dimensional random variables if their joint probability distribution is known.
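
    The recursive construction can be made concrete for a small example where the conditional quantile functions are available in closed form. The sketch below uses an arbitrarily chosen bivariate target, not the construction of the report: X_1 is exponential with rate 1, and X_2 given X_1 = x_1 is exponential with rate 1 + x_1.

```python
import numpy as np

rng = np.random.default_rng(6)

# Map independent uniforms through the marginal quantile of X1 and then the
# conditional quantile of X2 given X1 (hypothetical target distribution).
n = 100_000
u1, u2 = rng.random(n), rng.random(n)

x1 = -np.log1p(-u1)                 # f1(u1): inverse CDF of Exp(1)
x2 = -np.log1p(-u2) / (1.0 + x1)    # f2(u1, u2): inverse conditional CDF given x1

# Quick check of one implied property: E[X2 | X1 = x1] = 1 / (1 + x1).
lo = x1 < 0.1
print(x2[lo].mean(), (1.0 / (1.0 + x1[lo])).mean())
```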

  18. Bioavailability of everolimus administered as a single 5 mg tablet versus five 1 mg tablets: a randomized, open-label, two-way crossover study of healthy volunteers.

    PubMed

    Thudium, Karen; Gallo, Jorge; Bouillaud, Emmanuel; Sachs, Carolin; Eddy, Simantini; Cheung, Wing

    2015-01-01

    The mammalian target of rapamycin (mTOR) inhibitor everolimus has a well-established pharmacokinetics profile. We conducted a randomized, single-center, open-label, two-sequence, two-period crossover study of healthy volunteers to assess the relative bioavailability of everolimus administered as one 5 mg tablet or five 1 mg tablets. Subjects were randomized 1:1 to receive everolimus dosed as one 5 mg tablet or as five 1 mg tablets on day 1, followed by a washout period on days 8-14 and then the opposite formulation on day 15. Blood sampling for pharmacokinetic evaluation was performed at prespecified time points, with 17 samples taken for each treatment period. Primary variables for evaluation of relative bioavailability were area under the concentration-time curve from time zero to infinity (AUCinf) and maximum blood concentration (Cmax). Safety was assessed by reporting the incidence of adverse events (AEs). Twenty-two participants received everolimus as one 5 mg tablet followed by five 1 mg tablets (n=11) or the opposite sequence (n=11). The Cmax of five 1 mg tablets was 48% higher than that of one 5 mg tablet (geometric mean ratio, 1.48; 90% confidence interval [CI], 1.35-1.62). AUCinf was similar (geometric mean ratio, 1.08; 90% CI, 1.02-1.16), as were the extent of absorption and the distribution and elimination kinetics. AEs, all grade 1 or 2, were observed in 54.5% of subjects. Although the extent of absorption was similar, the Cmax of five 1 mg tablets was higher than that of one 5 mg tablet, suggesting these formulations lead to different peak blood concentrations and are not interchangeable at the dose tested.

  19. Temporal framing and the hidden-zero effect: rate-dependent outcomes on delay discounting.

    PubMed

    Naudé, Gideon P; Kaplan, Brent A; Reed, Derek D; Henley, Amy J; DiGennaro Reed, Florence D

    2018-05-01

    Recent research suggests that presenting time intervals as units (e.g., days) or as specific dates can modulate the degree to which humans discount delayed outcomes. Another framing effect involves explicitly stating that choosing a smaller-sooner reward is mutually exclusive with receiving a larger-later reward, thus presenting choices as an extended sequence. In Experiment 1, participants (N = 201) recruited from Amazon Mechanical Turk completed the Monetary Choice Questionnaire in a 2 (delay framing) by 2 (zero framing) design. Regression suggested a main effect of delay, but not zero, framing after accounting for other demographic variables and manipulations. We observed a rate-dependent effect for the date-framing group, such that those with initially steep discounting exhibited greater sensitivity to the manipulation than those with initially shallow discounting. Subsequent analyses suggest these effects cannot be explained by regression to the mean. Experiment 2 addressed the possibility that the null effect of zero framing was due to within-subject exposure to the hidden- and explicit-zero conditions. A new Amazon Mechanical Turk sample completed the Monetary Choice Questionnaire in either hidden- or explicit-zero formats. Analyses revealed a main effect of reward magnitude, but not zero framing, suggesting potential limitations to the generality of the hidden-zero effect. © 2018 Society for the Experimental Analysis of Behavior.

  20. Crash Frequency Analysis Using Hurdle Models with Random Effects Considering Short-Term Panel Data

    PubMed Central

    Chen, Feng; Ma, Xiaoxiang; Chen, Suren; Yang, Lin

    2016-01-01

    Random effect panel data hurdle models are established to model the daily crash frequency on a mountainous section of highway I-70 in Colorado. Real-time traffic, weather, and road surface conditions from the Road Weather Information System (RWIS) are merged into the models, which also incorporate road characteristics. The random effect hurdle negative binomial (REHNB) model is developed to study the daily crash frequency along with three other competing models. The proposed model considers the serial correlation of observations, the unbalanced panel-data structure, and dominating zeroes. Based on several statistical tests, the REHNB model is identified as the most appropriate one among four candidate models for a typical mountainous highway. The results show that: (1) the presence of over-dispersion in the short-term crash frequency data is due to both excess zeros and unobserved heterogeneity in the crash data; and (2) the REHNB model is suitable for this type of data. Moreover, time-varying variables including weather conditions, road surface conditions and traffic conditions are found to play important roles in crash frequency. Besides the methodological advancements, the proposed approach bears great potential for engineering applications to develop short-term crash frequency models by utilizing detailed field monitoring data such as RWIS, which is becoming more accessible around the world. PMID:27792209

  1. On estimating gravity anomalies - A comparison of least squares collocation with conventional least squares techniques

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Lowrey, B.

    1977-01-01

    The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well known regression equations which provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares-Stokes' function solution when the conventional solution utilizes properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described.
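
    The regression equations referred to above are the usual conditional-mean and conditional-covariance formulas for jointly Gaussian zero-mean vectors: x_hat = C_xy C_yy^(-1) y and P = C_xx - C_xy C_yy^(-1) C_yx. A minimal numerical sketch with an arbitrary positive-definite joint covariance (not a gravity-field covariance model) is:

```python
import numpy as np

rng = np.random.default_rng(7)

# Arbitrary positive-definite joint covariance for zero-mean "anomalies" x (m) and "data" y (k).
m, k = 3, 5
A = rng.normal(size=(m + k, m + k))
C = A @ A.T + np.eye(m + k)
Cxx, Cxy, Cyy = C[:m, :m], C[:m, m:], C[m:, m:]

# One joint realisation, so the estimate can be compared with the "true" anomalies.
xy = rng.multivariate_normal(np.zeros(m + k), C)
x_true, y = xy[:m], xy[m:]

K = Cxy @ np.linalg.inv(Cyy)        # C_xy C_yy^{-1}
x_hat = K @ y                       # conditional mean, i.e. the collocation-style estimate
P = Cxx - K @ Cxy.T                 # conditional (error) covariance
print(x_true)
print(x_hat)
print(np.sqrt(np.diag(P)))          # one-sigma estimation errors
```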

  2. On Digital Simulation of Multicorrelated Random Processes and Its Applications. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Sinha, A. K.

    1973-01-01

    Two methods are described to simulate, on a digital computer, a set of correlated, stationary, and Gaussian time series with zero mean from the given matrix of power spectral densities and cross spectral densities. The first method is based upon trigonometric series with random amplitudes and deterministic phase angles. The random amplitudes are generated by using a standard random number generator subroutine. An example is given which corresponds to three components of wind velocities at two different spatial locations for a total of six correlated time series. In the second method, the whole process is carried out using the Fast Fourier Transform approach. This method gives more accurate results and works about twenty times faster for a set of six correlated time series.
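
    A generic spectral-representation sketch of the FFT-based idea follows; it is not the thesis algorithm, and the cross-spectral density model, the random-phase driving noise, and the normalisation convention are all assumptions. At each frequency the cross-spectral density matrix is factored (here by Cholesky) and driven with independent complex Gaussian variates before inverse transforming to obtain correlated, zero-mean time series.

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate_correlated(psd_matrix, n_freq, dt):
    """Simulate zero-mean Gaussian series whose cross-spectral density matrix is
    (approximately) psd_matrix(f); a minimal sketch, normalisation conventions vary."""
    freqs = np.fft.rfftfreq(2 * n_freq, dt)
    m = psd_matrix(freqs[1]).shape[0]
    X = np.zeros((m, freqs.size), dtype=complex)
    for j, f in enumerate(freqs[1:], start=1):
        H = np.linalg.cholesky(psd_matrix(f))            # "square root" of the CSD matrix
        w = (rng.normal(size=m) + 1j * rng.normal(size=m)) / np.sqrt(2.0)
        X[:, j] = H @ w
    X *= np.sqrt(freqs.size / dt)                        # illustrative scale factor
    return np.fft.irfft(X, axis=-1)

# Hypothetical 2-series example: equal auto-spectra with frequency-dependent coherence.
def csd(f):
    s = 1.0 / (1.0 + (f / 5.0) ** 2)
    c = 0.8 * s * np.exp(-f / 20.0)
    return np.array([[s, c], [c, s]])

x = simulate_correlated(csd, n_freq=4096, dt=0.01)
print(x.shape, x.mean(axis=1))   # two correlated, (approximately) zero-mean time series
```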

  3. Efficacy of a comfrey root extract ointment in comparison to a diclofenac gel in the treatment of ankle distortions: results of an observer-blind, randomized, multicenter study.

    PubMed

    Predel, H G; Giannetti, B; Koll, R; Bulitta, M; Staiger, C

    2005-11-01

    In the treatment of minor blunt injuries, several topical drugs are known to have anti-inflammatory and analgesic properties. They represent, however, two fundamentally different major pharmacological therapy approaches: the "chemical-synthetical" and the "phytotherapeutical" approach. The main objective of this trial (CODEC_2004) was to compare the efficacy and tolerability of an ointment of Comfrey extract (Extr. Rad. Symphyti) with that of a Diclofenac gel in the treatment of acute unilateral ankle sprain (distortion). In a single-blind, controlled, randomized, parallel-group, multicenter, confirmatory clinical trial, outpatients with acute unilateral ankle sprains (n=164, mean age 29.0 years, 47.6% female) received either a 6 cm long ointment layer of Kytta-Salbe f (Comfrey extract) (n=82) or of Diclofenac gel containing 1.16 g of diclofenac diethylamine salt (n=82) for 7 +/- 1 days, four times a day. The primary variable was the area-under-the-curve (AUC) of the pain reaction to pressure on the injured area measured by a calibrated caliper (tonometer). Secondary variables were the circumference of the joint (swelling; figure-of-eight method), the individual spontaneous pain sensation at rest and at movement according to a Visual Analogue Scale (VAS), the judgment of impaired movements of the injured joint by the method of "neutral-zero", consumption of rescue medication (paracetamol), as well as the global efficacy evaluation and the global assessment of tolerability (both by physician and patient, 4 ranks). In this study the primary variable was also to be validated prospectively. It was confirmatorily shown that Comfrey extract is non-inferior to diclofenac. The 95% confidence interval for the AUC (Comfrey extract minus Diclofenac gel) was 19.01-103.09 h·N/cm² and was completely above the margin of non-inferiority. Moreover, the results of the primary and secondary variables indicate that Comfrey extract may be superior to Diclofenac gel.

  4. Poisson-Like Spiking in Circuits with Probabilistic Synapses

    PubMed Central

    Moreno-Bote, Rubén

    2014-01-01

    Neuronal activity in cortex is variable both spontaneously and during stimulation, and it has the remarkable property that it is Poisson-like over broad ranges of firing rates covering from virtually zero to hundreds of spikes per second. The mechanisms underlying cortical-like spiking variability over such a broad continuum of rates are currently unknown. We show that neuronal networks endowed with probabilistic synaptic transmission, a well-documented source of variability in cortex, robustly generate Poisson-like variability over several orders of magnitude in their firing rate without fine-tuning of the network parameters. Other sources of variability, such as random synaptic delays or spike generation jittering, do not lead to Poisson-like variability at high rates because they cannot be sufficiently amplified by recurrent neuronal networks. We also show that probabilistic synapses predict Fano factor constancy of synaptic conductances. Our results suggest that synaptic noise is a robust and sufficient mechanism for the type of variability found in cortex. PMID:25032705

  5. Efficient bias correction for magnetic resonance image denoising.

    PubMed

    Mukherjee, Partha Sarathi; Qiu, Peihua

    2013-05-30

    Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation from received frequency signals of a magnetic resonance scanner system. Previous research has demonstrated that random noise involved in the observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of such a complicated noise structure in the observed MRI images, denoised images by conventional denoising methods are usually biased, and the bias could reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new and more effective bias-correction formula based on the regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. Copyright © 2012 John Wiley & Sons, Ltd.
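
    The bias being corrected can be seen directly by simulating the Rician observation model stated above: the observed magnitude is that of the true intensity plus two independent zero-mean Gaussian noise components. The sketch uses arbitrary intensity and noise levels and does not reproduce the proposed correction formula.

```python
import numpy as np

rng = np.random.default_rng(9)

def rician(a_true, sigma, n, rng):
    """Observed MRI magnitude: |(a_true + n1) + i*n2| with n1, n2 ~ N(0, sigma^2)."""
    n1 = rng.normal(0.0, sigma, n)
    n2 = rng.normal(0.0, sigma, n)
    return np.hypot(a_true + n1, n2)

sigma = 1.0
for a in [0.0, 0.5, 1.0, 2.0, 5.0]:
    m = rician(a, sigma, 200_000, rng)
    print(f"true = {a:4.1f}   mean observed = {m.mean():.3f}   bias = {m.mean() - a:+.3f}")
# At low signal-to-noise the observed mean is biased upward (toward sigma*sqrt(pi/2) at a = 0),
# which is why denoised magnitude images need an explicit bias correction.
```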

  6. Sound propagation through a variable area duct - Experiment and theory

    NASA Technical Reports Server (NTRS)

    Silcox, R. J.; Lester, H. C.

    1981-01-01

    A comparison of experiment and theory has been made for the propagation of sound through a variable area axisymmetric duct with zero mean flow. Measurement of the acoustic pressure field on both sides of the constricted test section was resolved on a modal basis for various spinning mode sources. Transmitted and reflected modal amplitudes and phase angles were compared with finite element computations. Good agreement between experiment and computation was obtained over a wide range of frequencies and modal transmission variations. The study suggests that modal transmission through a variable area duct is governed by the throat modal cut-off ratio.

  7. Sampling strategies for subsampled segmented EPI PRF thermometry in MR guided high intensity focused ultrasound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odéen, Henrik, E-mail: h.odeen@gmail.com; Diakite, Mahamadou; Todd, Nick

    2014-09-15

    Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm³ FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations.

  8. Effects of coccidiosis vaccination administered by in ovo injection on Ross 708 broiler performance through 14 days of post-hatch age.

    PubMed

    Sokale, A O; Zhai, W; Pote, L M; Williams, C J; Peebles, E D

    2017-08-01

    Effects of the in ovo injection of a commercial coccidiosis vaccine on various hatching chick quality variables and 14 d post-hatch (dph) oocyst shedding have been previously examined. The current study was designed to examine the performance of Ross 708 broilers during the 14 dph period of oocyst shedding following the application of the coccidiosis vaccine. On each of 7 replicate tray levels of a single-stage incubator, a total of 4 treatment groups was randomly represented, with each treatment group containing 63 eggs. Treatments were administered using a commercial multi-egg injector on d 18.5 of incubation. The treatments included 3 control groups (non-injected, dry-punch, and diluent-injected) and one treatment group (injected with diluent containing Inovocox EM1 vaccine). On d 21 of incubation, 20 chicks from each of the 28 treatment-replicate groups were placed in corresponding wire-floored battery cages. Mortality, feed intake (FI), BW gain (BWG), and feed conversion ratio (FCR) were determined for the zero to 7, 7 to 14, and cumulative zero to 14 dph intervals. There were no significant treatment effects on mortality in any interval or on BW at zero dph. There were significant treatment effects on BW at 7 and 14 dph, on BWG and FI in the zero to 7, 7 to 14, and zero to 14 dph intervals, and on FCR in the 7 to 14 and zero to 14 dph intervals. Although the performance variables of birds belonging to the diluent-injected and vaccine-injected groups were not significantly different, the 14 dph BW, 7 to 14 dph FI, and zero to 14 dph BWG and FI of birds belonging to the vaccine treatment group were significantly higher than those in birds belonging to the non-injected control group. It was concluded that use of the Inovocox EM1 vaccine in commercial diluent has no detrimental effect on the overall post-hatch performance of broilers through 14 dph. © 2017 Poultry Science Association Inc.

  9. Modeling continuous covariates with a "spike" at zero: Bivariate approaches.

    PubMed

    Jenkner, Carolin; Lorenz, Eva; Becher, Heiko; Sauerbrei, Willi

    2016-07-01

    In epidemiology and clinical research, predictors often take the value zero for a large number of observations while the distribution of the remaining observations is continuous. These predictors are called variables with a spike at zero. Examples include smoking or alcohol consumption. Recently, an extension of the fractional polynomial (FP) procedure, a technique for modeling nonlinear relationships, was proposed to deal with such situations. To indicate whether or not a value is zero, a binary variable is added to the model. In a two-stage procedure, called FP-spike, the necessity of the binary variable and/or the continuous FP function for the positive part is assessed for a suitable fit. In univariate analyses, the FP-spike procedure usually leads to functional relationships that are easy to interpret. This paper introduces four approaches for dealing with two variables with a spike at zero (SAZ). The methods depend on the bivariate distribution of zero and nonzero values. Bi-Sep is the simplest of the four bivariate approaches. It uses the univariate FP-spike procedure separately for the two SAZ variables. In Bi-D3, Bi-D1, and Bi-Sub, proportions of zeros in both variables are considered simultaneously in the binary indicators. Therefore, these strategies can account for correlated variables. The methods can be used for arbitrary distributions of the covariates. For illustration and comparison of results, data from a case-control study on laryngeal cancer, with smoking and alcohol intake as two SAZ variables, is considered. In addition, a possible extension to three or more SAZ variables is outlined. A combination of log-linear models for the analysis of the correlation in combination with the bivariate approaches is proposed. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
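
    The basic spike-at-zero design is simple to set up: a binary indicator for any exposure plus a continuous function of the positive values (here just a log, in place of a selected FP function). The sketch below uses simulated data and hypothetical effect sizes; the FP-spike selection procedure itself is not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.special import expit

rng = np.random.default_rng(10)

# Hypothetical exposure with a spike at zero (e.g., reported alcohol intake).
n = 2000
nonzero = rng.random(n) < 0.6
dose = np.zeros(n)
dose[nonzero] = rng.lognormal(mean=2.0, sigma=0.6, size=nonzero.sum())

# Outcome risk depends both on being exposed at all and on the log amount when exposed.
f_pos = np.zeros(n)
f_pos[nonzero] = np.log(dose[nonzero])
y = rng.binomial(1, expit(-1.0 + 0.8 * nonzero + 0.4 * f_pos))

df = pd.DataFrame({"y": y, "z": nonzero.astype(int), "f_pos": f_pos})
fit = smf.logit("y ~ z + f_pos", data=df).fit(disp=0)
print(fit.params)   # separate "any exposure" and "amount among the exposed" effects
```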

  10. Genetic and metabolic diversity in Stevia rebaudiana using RAPD and HPTLC analysis.

    PubMed

    Chester, Karishma; Tamboli, Ennus Tajuddin; Parveen, Rabea; Ahmad, Sayeed

    2013-06-01

    Stevia rebaudiana Bertoni (Asteraceae) is an important medicinal plant and is widely used due to its zero-calorie sweetening property. Stevia leaves, as well as their extracts and pure compounds, are currently used in the preparation of several medicines, food products and nutraceuticals. To study the genetic and metabolic variability in S. rebaudiana among accessions of different geographical regions of India using random amplified polymorphic DNA (RAPD) markers and high-performance thin layer chromatography (HPTLC) analysis. The RAPD analysis of Stevia rebaudiana (11 accessions) was carried out using 20 random operon primers. A dendrogram was constructed for cluster analysis based on the unweighted pair group method with arithmetic means (UPGMA) using Winboot. The HPTLC analysis of all samples was carried out on silica using acetone:ethyl acetate:water (5:4:1, v/v/v) for fingerprinting and quantification of stevioside and rebaudioside A at 360 nm after spraying with anisaldehyde sulphuric acid. Ten out of 20 primers screened were found most informative; amplification products of the genotypes yielded a total of 87 scorable bands (67 polymorphic), whereas the genetic similarity (GS) coefficient (0.01-0.08) and polymorphism (67.24-92.40%) showed substantial variability. Similarly, HPTLC analysis showed large variation among different samples with respect to the presence or absence of metabolites and their concentrations. Out of the 11 Stevia accessions, the Delhi and Mohali varieties were closely related to each other and were concluded to be the superior genotypes in the context of the RAPD and HPTLC analyses. The information obtained here could be valuable for devising strategies for cultivating this medicinal plant.

  11. Bioequivalence Study of Rivastigmine 6 mg Capsules (Single Dose) in Healthy Volunteers.

    PubMed

    Abhyankar, Dhiraj; Shedage, Ashish; Gole, Milind; Raut, Preeti

    2017-09-01

    To assess the bioequivalence of a generic formulation of rivastigmine (test) and Exelon (reference). This randomized, open-label, 2-period, single-dose, 2-treatment, 2-sequence, crossover study was conducted in 40 healthy men under fed conditions. Participants were randomized to receive a single dose of Exelon or rivastigmine capsule. A total of 31 participants completed the study. Area under the concentration-time curve from time zero to time t (AUC0-t) and area under the concentration-time curve from time zero to infinity (AUC0-∞) for Exelon (mean [standard deviation], h·ng/mL) were 126.40 (56.95) and 129.46 (59.94), respectively, while they were 122.73 (43.46) and 125.08 (45.39) for rivastigmine. Geometric mean ratios of rivastigmine/Exelon were 99.17% for AUC0-t, 98.81% for AUC0-∞, and 105% for maximum observed plasma concentration (Cmax). The 90% confidence intervals (CIs) were 94.14% to 104.46%, 93.77% to 104.12%, and 93.08% to 118.44%, respectively. Both formulations were well tolerated. The generic and reference formulations were bioequivalent, as the 90% CIs for Cmax, AUC0-t, and AUC0-∞ were within the range of 80% to 125%.

  12. Long-time predictability in disordered spin systems following a deep quench

    NASA Astrophysics Data System (ADS)

    Ye, J.; Gheissari, R.; Machta, J.; Newman, C. M.; Stein, D. L.

    2017-04-01

    We study the problem of predictability, or "nature vs nurture," in several disordered Ising spin systems evolving at zero temperature from a random initial state: How much does the final state depend on the information contained in the initial state, and how much depends on the detailed history of the system? Our numerical studies of the "dynamical order parameter" in Edwards-Anderson Ising spin glasses and random ferromagnets indicate that the influence of the initial state decays as dimension increases. Similarly, this same order parameter for the Sherrington-Kirkpatrick infinite-range spin glass indicates that this information decays as the number of spins increases. Based on these results, we conjecture that the influence of the initial state on the final state decays to zero in finite-dimensional random-bond spin systems as dimension goes to infinity, regardless of the presence of frustration. We also study the rate at which spins "freeze out" to a final state as a function of dimensionality and number of spins; here the results indicate that the number of "active" spins at long times increases with dimension (for short-range systems) or number of spins (for infinite-range systems). We provide theoretical arguments to support these conjectures, and also study analytically several mean-field models: the random energy model, the uniform Curie-Weiss ferromagnet, and the disordered Curie-Weiss ferromagnet. We find that for these models, the information contained in the initial state does not decay in the thermodynamic limit—in fact, it fully determines the final state. Unlike in short-range models, the presence of frustration in mean-field models dramatically alters the dynamical behavior with respect to the issue of predictability.

  13. Long-time predictability in disordered spin systems following a deep quench.

    PubMed

    Ye, J; Gheissari, R; Machta, J; Newman, C M; Stein, D L

    2017-04-01

    We study the problem of predictability, or "nature vs nurture," in several disordered Ising spin systems evolving at zero temperature from a random initial state: How much does the final state depend on the information contained in the initial state, and how much depends on the detailed history of the system? Our numerical studies of the "dynamical order parameter" in Edwards-Anderson Ising spin glasses and random ferromagnets indicate that the influence of the initial state decays as dimension increases. Similarly, this same order parameter for the Sherrington-Kirkpatrick infinite-range spin glass indicates that this information decays as the number of spins increases. Based on these results, we conjecture that the influence of the initial state on the final state decays to zero in finite-dimensional random-bond spin systems as dimension goes to infinity, regardless of the presence of frustration. We also study the rate at which spins "freeze out" to a final state as a function of dimensionality and number of spins; here the results indicate that the number of "active" spins at long times increases with dimension (for short-range systems) or number of spins (for infinite-range systems). We provide theoretical arguments to support these conjectures, and also study analytically several mean-field models: the random energy model, the uniform Curie-Weiss ferromagnet, and the disordered Curie-Weiss ferromagnet. We find that for these models, the information contained in the initial state does not decay in the thermodynamic limit; in fact, it fully determines the final state. Unlike in short-range models, the presence of frustration in mean-field models dramatically alters the dynamical behavior with respect to the issue of predictability.

  14. Unidimensional factor models imply weaker partial correlations than zero-order correlations.

    PubMed

    van Bork, Riet; Grasman, Raoul P P P; Waldorp, Lourens J

    2018-06-01

    In this paper we present a new implication of the unidimensional factor model. We prove that the partial correlation between two observed variables that load on one factor given any subset of other observed variables that load on this factor lies between zero and the zero-order correlation between these two observed variables. We implement this result in an empirical bootstrap test that rejects the unidimensional factor model when partial correlations are identified that are either stronger than the zero-order correlation or have a different sign than the zero-order correlation. We demonstrate the use of the test in an empirical data example with data consisting of fourteen items that measure extraversion.
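
    The implication is easy to verify on data simulated from a one-factor model (the loadings below are arbitrary): the partial correlation of two indicators given a third keeps the sign of their zero-order correlation and is weaker in absolute value.

```python
import numpy as np

rng = np.random.default_rng(11)

# One common factor, four indicators with unit total variance.
n, loadings = 100_000, np.array([0.8, 0.7, 0.6, 0.5])
eta = rng.normal(size=n)
X = eta[:, None] * loadings + rng.normal(size=(n, 4)) * np.sqrt(1 - loadings**2)

R = np.corrcoef(X, rowvar=False)

def partial_corr(R, i, j, k):
    """Partial correlation of variables i and j controlling for variable k."""
    num = R[i, j] - R[i, k] * R[j, k]
    return num / np.sqrt((1 - R[i, k] ** 2) * (1 - R[j, k] ** 2))

r12 = R[0, 1]
r12_given3 = partial_corr(R, 0, 1, 2)
print(f"zero-order r12 = {r12:.3f}, partial r12.3 = {r12_given3:.3f}")
# Under a unidimensional factor model the partial correlation lies between zero and the
# zero-order correlation; a stronger or sign-reversed partial correlation flags the model.
```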

  15. Quantized vortices in the ideal bose gas: a physical realization of random polynomials.

    PubMed

    Castin, Yvan; Hadzibabic, Zoran; Stock, Sabine; Dalibard, Jean; Stringari, Sandro

    2006-02-03

    We propose a physical system allowing one to experimentally observe the distribution of the complex zeros of a random polynomial. We consider a degenerate, rotating, quasi-ideal atomic Bose gas prepared in the lowest Landau level. Thermal fluctuations provide the randomness of the bosonic field and of the locations of the vortex cores. These vortices can be mapped to zeros of random polynomials, and observed in the density profile of the gas.

  16. A statistical model for interpreting computerized dynamic posturography data

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.

    2002-01-01

    Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.

  17. A study of fractional Schrödinger equation composed of Jumarie fractional derivative

    NASA Astrophysics Data System (ADS)

    Banerjee, Joydip; Ghosh, Uttam; Sarkar, Susmita; Das, Shantanu

    2017-04-01

    In this paper we have derived the fractional-order Schrödinger equation composed of the Jumarie fractional derivative. The solution of this fractional-order Schrödinger equation is obtained in terms of the Mittag-Leffler function with complex arguments and fractional trigonometric functions. A few important properties of the fractional Schrödinger equation are then described for the case of particles in a one-dimensional infinite potential well. One of the motivations for using fractional calculus in physical systems is that the space and time variables we often deal with exhibit coarse-grained phenomena. This means infinitesimal quantities cannot be arbitrarily taken to zero; rather, they are non-zero with a minimum spread. This type of non-zero spread arises at the microscopic to mesoscopic levels of system dynamics, which means that, if we denote x as the point in space and t as the point in time, then the limit of the differentials dx (and dt) cannot be taken as zero. To take the concept of coarse graining into account, one uses the infinitesimal quantities (Δx)^α (and (Δt)^α) with 0 < α < 1, called `fractional differentials'. For arbitrarily small Δx and Δt (tending towards zero), these `fractional' differentials are greater than Δx (and Δt), i.e. (Δx)^α > Δx and (Δt)^α > Δt. This way of defining the fractional differentials helps us to use fractional derivatives in the study of dynamic systems.

  18. Highly variable sperm precedence in the stalk-eyed fly, Teleopsis dalmanni

    PubMed Central

    Corley, Laura S; Cotton, Samuel; McConnell, Ellen; Chapman, Tracey; Fowler, Kevin; Pomiankowski, Andrew

    2006-01-01

    Background When females mate with different males, competition for fertilizations occurs after insemination. Such sperm competition is usually summarized at the level of the population or species by the parameter, P2, defined as the proportion of offspring sired by the second male in double mating trials. However, considerable variation in P2 may occur within populations, and such variation limits the utility of population-wide or species P2 estimates as descriptors of sperm usage. To fully understand the causes and consequences of sperm competition requires estimates of not only mean P2, but also intra-specific variation in P2. Here we investigate within-population quantitative variation in P2 using a controlled mating experiment and microsatellite profiling of progeny in the multiply mating stalk-eyed fly, Teleopsis dalmanni. Results We genotyped 381 offspring from 22 dam-sire pair families at four microsatellite loci. The mean population-wide P2 value of 0.40 was not significantly different from that expected under random sperm mixing (i.e. P2 = 0.5). However, patterns of paternity were highly variable between individual families; almost half of families displayed extreme second male biases resulting in zero or complete paternity, whereas only about one third of families had P2 values of 0.5, the remainder had significant, but moderate, paternity skew. Conclusion Our data suggest that all modes of ejaculate competition, from extreme sperm precedence to complete sperm mixing, occur in T. dalmanni. Thus the population mean P2 value does not reflect the high underlying variance in familial P2. We discuss some of the potential causes and consequences of post-copulatory sexual selection in this important model species. PMID:16800877

  19. Multilevel covariance regression with correlated random effects in the mean and variance structure.

    PubMed

    Quintero, Adrian; Lesaffre, Emmanuel

    2017-09-01

    Multivariate regression methods generally assume a constant covariance matrix for the observations. In case a heteroscedastic model is needed, the parametric and nonparametric covariance regression approaches can be restrictive in the literature. We propose a multilevel regression model for the mean and covariance structure, including random intercepts in both components and allowing for correlation between them. The implied conditional covariance function can be different across clusters as a result of the random effect in the variance structure. In addition, allowing for correlation between the random intercepts in the mean and covariance makes the model convenient for skewedly distributed responses. Furthermore, it permits us to analyse directly the relation between the mean response level and the variability in each cluster. Parameter estimation is carried out via Gibbs sampling. We compare the performance of our model to other covariance modelling approaches in a simulation study. Finally, the proposed model is applied to the RN4CAST dataset to identify the variables that impact burnout of nurses in Belgium. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Generated effect modifiers (GEMs) in randomized clinical trials

    PubMed Central

    Petkova, Eva; Tarpey, Thaddeus; Su, Zhe; Ogden, R. Todd

    2017-01-01

    In a randomized clinical trial (RCT), it is often of interest not only to estimate the effect of various treatments on the outcome, but also to determine whether any patient characteristic has a different relationship with the outcome, depending on treatment. In regression models for the outcome, if there is a non-zero interaction between treatment and a predictor, that predictor is called an “effect modifier”. Identification of such effect modifiers is crucial as we move towards precision medicine, that is, optimizing individual treatment assignment based on patient measurements assessed when presenting for treatment. In most settings, there will be several baseline predictor variables that could potentially modify the treatment effects. This article proposes optimal methods of constructing a composite variable (defined as a linear combination of pre-treatment patient characteristics) in order to generate an effect modifier in an RCT setting. Several criteria are considered for generating effect modifiers and their performance is studied via simulations. An example from an RCT is provided for illustration. PMID:27465235
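
    As a rough illustration of the idea (not the optimality criteria developed in the article), the sketch below simulates a trial in which a single linear combination z = Xw of baseline covariates interacts with treatment, then recovers the interaction by least squares; all names and parameter values are hypothetical.

```python
# Sketch: a composite z = X @ w of baseline covariates acts as a single
# effect modifier through a treatment-by-z interaction. The weights and the
# simulated data are illustrative; the article's criteria for choosing w
# (the "GEM") are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))                # baseline covariates
trt = rng.integers(0, 2, size=n)           # randomized treatment indicator
w = np.array([1.0, -0.5, 0.0, 0.0, 0.25])  # hypothetical composite weights
z = X @ w                                  # composite variable
y = 1.0 + 0.5 * trt + 0.8 * z * trt + rng.normal(size=n)

# Fit y ~ 1 + trt + z + z:trt; a non-zero interaction coefficient marks z
# as an effect modifier.
D = np.column_stack([np.ones(n), trt, z, z * trt])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
print(beta)                                # last entry is near the true 0.8
```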

  1. Quantum Entanglement in Random Physical States

    NASA Astrophysics Data System (ADS)

    Hamma, Alioscia; Santra, Siddhartha; Zanardi, Paolo

    2012-07-01

    Most states in the Hilbert space are maximally entangled. This fact has proven useful to investigate—among other things—the foundations of statistical mechanics. Unfortunately, most states in the Hilbert space of a quantum many-body system are not physically accessible. We define physical ensembles of states acting on random factorized states by a circuit of length k of random and independent unitaries with local support. We study the typicality of entanglement by means of the purity of the reduced state. We find that for a time k=O(1), the typical purity obeys the area law. Thus, the upper bounds for area law are actually saturated, on average, with a variance that goes to zero for large systems. Similarly, we prove that by means of local evolution a subsystem of linear dimensions L is typically entangled with a volume law when the time scales with the size of the subsystem. Moreover, we show that for large values of k the reduced state becomes very close to the completely mixed state.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, D.O.

    It is recognized that some dynamic and noise environments are characterized by time histories which are not Gaussian. An example is high intensity acoustic noise. Another example is some transportation vibration. A better simulation of these environments can be generated if a zero mean non-Gaussian time history can be reproduced with a specified auto (or power) spectral density (ASD or PSD) and a specified probability density function (pdf). After the required time history is synthesized, the waveform can be used for simulation purposes. For example, modern waveform reproduction techniques can be used to reproduce the waveform on electrodynamic or electrohydraulic shakers. Or the waveforms can be used in digital simulations. A method is presented for the generation of realizations of zero mean non-Gaussian random time histories with a specified ASD and pdf. First a Gaussian time history with the specified auto (or power) spectral density (ASD) is generated. A monotonic nonlinear function relating the Gaussian waveform to the desired realization is then established based on the Cumulative Distribution Function (CDF) of the desired waveform and the known CDF of a Gaussian waveform. The established function is used to transform the Gaussian waveform to a realization of the desired waveform. Since the transformation preserves the zero-crossings and peaks of the original Gaussian waveform, and does not introduce any substantial discontinuities, the ASD is not substantially changed. Several methods are available to generate a realization of a Gaussian distributed waveform with a known ASD. The method of Smallwood and Paez (1993) is an example. However, the generation of random noise with a specified ASD but with a non-Gaussian distribution is less well known.
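
    A minimal sketch of this CDF-matching idea follows. The low-pass spectral shape and the zero-mean Laplace target distribution are arbitrary choices for illustration, not the spectra or distributions used in the report.

```python
# Synthesize Gaussian noise with a chosen spectral shape, then push it
# through the monotonic map g = G_target^{-1}(Phi(.)), which preserves
# zero crossings and peak ordering while changing the marginal pdf.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, fs = 2**14, 1000.0

# Gaussian time history with a prescribed (here, simple low-pass) ASD,
# built by shaping random spectral amplitudes and inverting the FFT.
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
asd = 1.0 / np.sqrt(1.0 + (freqs / 50.0) ** 4)
spectrum = asd * (rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size))
x = np.fft.irfft(spectrum, n=n)
x = (x - x.mean()) / x.std()                  # standardized Gaussian waveform

u = stats.norm.cdf(x)                         # Gaussian CDF
y = stats.laplace.ppf(u, loc=0.0, scale=1.0)  # target (heavier-tailed) inverse CDF
print(y.mean(), stats.kurtosis(y))            # near-zero mean, positive excess kurtosis
```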

  3. Interpreting findings from Mendelian randomization using the MR-Egger method.

    PubMed

    Burgess, Stephen; Thompson, Simon G

    2017-05-01

    Mendelian randomization-Egger (MR-Egger) is an analysis method for Mendelian randomization using summarized genetic data. MR-Egger consists of three parts: (1) a test for directional pleiotropy, (2) a test for a causal effect, and (3) an estimate of the causal effect. While conventional analysis methods for Mendelian randomization assume that all genetic variants satisfy the instrumental variable assumptions, the MR-Egger method is able to assess whether genetic variants have pleiotropic effects on the outcome that differ on average from zero (directional pleiotropy), as well as to provide a consistent estimate of the causal effect, under a weaker assumption, the InSIDE (INstrument Strength Independent of Direct Effect) assumption. In this paper, we provide a critical assessment of the MR-Egger method with regard to its implementation and interpretation. While the MR-Egger method is a worthwhile sensitivity analysis for detecting violations of the instrumental variable assumptions, there are several reasons why causal estimates from the MR-Egger method may be biased and have inflated Type 1 error rates in practice, including violations of the InSIDE assumption and the influence of outlying variants. The issues raised in this paper have potentially serious consequences for causal inferences from the MR-Egger approach. We give examples of scenarios in which the estimates from conventional Mendelian randomization methods and MR-Egger differ, and discuss how to interpret findings in such cases.
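
    For orientation, the regression at the core of MR-Egger can be sketched as a weighted linear regression of the variant-outcome associations on the variant-exposure associations that retains an intercept: the intercept estimates average directional pleiotropy and the slope estimates the causal effect. The example below uses simulated summary statistics; all numbers are illustrative.

```python
# MR-Egger style regression on simulated summarized data: regress the
# variant-outcome associations on the variant-exposure associations with an
# intercept, weighting by the inverse variance of the outcome associations.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
J = 30                                    # number of genetic variants
beta_x = rng.uniform(0.05, 0.2, size=J)   # variant-exposure associations
se_y = rng.uniform(0.01, 0.03, size=J)    # SEs of variant-outcome associations
alpha = 0.01                              # directional pleiotropy (non-zero intercept)
theta = 0.5                               # causal effect
beta_y = alpha + theta * beta_x + rng.normal(scale=se_y)

fit = sm.WLS(beta_y, sm.add_constant(beta_x), weights=1.0 / se_y**2).fit()
print(fit.params)  # [intercept ~ pleiotropy test, slope ~ causal estimate]
```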

  4. Test-retest reliability of jump execution variables using mechanography: a comparison of jump protocols.

    PubMed

    Fitzgerald, John S; Johnson, LuAnn; Tomkinson, Grant; Stein, Jesse; Roemmich, James N

    2018-05-01

    Mechanography during the vertical jump may enhance screening and determining mechanistic causes underlying physical performance changes. Utility of jump mechanography for evaluation is limited by scant test-retest reliability data on force-time variables. This study examined the test-retest reliability of eight jump execution variables assessed from mechanography. Thirty-two women (mean±SD: age 20.8 ± 1.3 yr) and 16 men (age 22.1 ± 1.9 yr) attended a familiarization session and two testing sessions, all one week apart. Participants performed two variations of the squat jump with squat depth self-selected and controlled using a goniometer to 80° knee flexion. Test-retest reliability was quantified as the systematic error (using effect size between jumps), random error (using coefficients of variation), and test-retest correlations (using intra-class correlation coefficients). Overall, jump execution variables demonstrated acceptable reliability, evidenced by small systematic errors (mean±95%CI: 0.2 ± 0.07), moderate random errors (mean±95%CI: 17.8 ± 3.7%), and very strong test-retest correlations (range: 0.73-0.97). Differences in random errors between controlled and self-selected protocols were negligible (mean±95%CI: 1.3 ± 2.3%). Jump execution variables demonstrated acceptable reliability, with no meaningful differences between the controlled and self-selected jump protocols. To simplify testing, a self-selected jump protocol can be used to assess force-time variables with negligible impact on measurement error.

  5. Scattering of Internal Tides by Irregular Bathymetry of Large Extent

    NASA Astrophysics Data System (ADS)

    Mei, C.

    2014-12-01

    We present an analytic theory of scattering of tide-generated internal gravity waves in a continuously stratified ocean with a randomly rough seabed. Based on the linearized approximation, the idealized case of constant mean sea depth and Brunt-Vaisala frequency is considered. The depth fluctuation is assumed to be a stationary random function of space characterized by small amplitude and a correlation length comparable to the typical wavelength. For both one- and two-dimensional topography the effects of scattering on wave phase over long distances are derived explicitly by the method of multiple scales. For one-dimensional topography, numerical results are compared with those of Buhler & Holmes-Cerfon (2011) computed by the method of characteristics. For two-dimensional topography, new results are presented for both statistically isotropic and anisotropic cases. In this talk we shall apply the perturbation technique of multiple scales to treat analytically the random scattering of internal tides by gently sloped bathymetric irregularities. The basic assumptions are: incompressible fluid, infinitesimal wave amplitudes, constant Brunt-Vaisala frequency, and constant mean depth. In addition, the depth disorder is assumed to be a stationary random function of space with zero mean and small root-mean-square amplitude. The correlation length can be comparable in order of magnitude to the dominant wavelength. Both one- and two-dimensional disorder will be considered. Physical effects of random scattering on the mean wave phase, i.e., spatial attenuation and wavenumber shift, will be calculated and discussed for one mode of incident wave. For two-dimensional topographies, statistically isotropic and anisotropic examples will be presented.

  6. Margins of stability in young adults with traumatic transtibial amputation walking in destabilizing environments

    PubMed Central

    Beltran, Eduardo J.; Dingwell, Jonathan B.; Wilken, Jason M.

    2014-01-01

    Understanding how lower-limb amputation affects walking stability, specifically in destabilizing environments, is essential for developing effective interventions to prevent falls. This study quantified mediolateral margins of stability (MOS) and MOS sub-components in young individuals with traumatic unilateral transtibial amputation (TTA) and young able-bodied individuals (AB). Thirteen AB and nine TTA completed five 3-minute walking trials in a Computer Assisted Rehabilitation ENvironment (CAREN) system under each of three test conditions: no perturbations, pseudo-random mediolateral translations of the platform, and pseudo-random mediolateral translations of the visual field. Compared to the unperturbed trials, TTA exhibited increased mean MOS and MOS variability during platform and visual field perturbations (p < 0.010). Also, AB exhibited increased mean MOS during visual field perturbations and increased MOS variability during both platform and visual field perturbations (p < 0.050). During platform perturbations, TTA exhibited significantly greater values than AB for mean MOS (p < 0.050) and MOS variability (p < 0.050); variability of the lateral distance between the center of mass (COM) and base of support at initial contact (p < 0.005); mean and variability of the range of COM motion (p < 0.010); and variability of COM peak velocity (p < 0.050). As determined by mean MOS and MOS variability, young and otherwise healthy individuals with transtibial amputation achieved stability similar to that of their able-bodied counterparts during unperturbed and visually-perturbed walking. However, based on mean and variability of MOS, unilateral transtibial amputation was shown to have affected walking stability during platform perturbations. PMID:24444777

  7. Characteristics of buoyancy force on stagnation point flow with magneto-nanoparticles and zero mass flux condition

    NASA Astrophysics Data System (ADS)

    Uddin, Iftikhar; Khan, Muhammad Altaf; Ullah, Saif; Islam, Saeed; Israr, Muhammad; Hussain, Fawad

    2018-03-01

    This work is dedicated to the study of the buoyancy effect on MHD stagnation point flow over a stretching sheet with convective boundary conditions. Thermophoresis and Brownian motion aspects are included. The incompressible fluid is electrically conducting in the presence of a varying magnetic field. Boundary layer analysis is used to develop the mathematical formulation. A zero mass flux condition is considered at the boundary. A non-linear system of ordinary differential equations is constructed by means of appropriate transformations. Intervals of convergence are established via numerical data and plots. The characteristics of the involved variables on the velocity, temperature and concentration distributions are sketched and discussed. The features of the correlated parameters on Cf and Nu are examined by means of tables. It is found that the buoyancy ratio and magnetic parameters increase and reduce the velocity field, respectively. Furthermore, opposite behaviour is noticed in the concentration distribution for higher values of the thermophoresis and Brownian motion parameters.

  8. A note on nonlinearity bias and dichotomous choice CVM: implications for aggregate benefits estimation

    Treesearch

    R.A. Souter; J. Michael Bowker

    1996-01-01

    It is a generally known statistical fact that the mean of a nonlinear function of a set of random variables is not equivalent to the function evaluated at the means of the variables. However, in dichotomous choice contingent valuation studies, a common practice is to calculate an overall mean (or median) by integrating over offer space (numerically or analytically) an...
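
    A short numerical check of this fact, with an exponential transform chosen purely for illustration (not the dichotomous choice welfare estimator discussed in the note):

```python
# The mean of a nonlinear function of a random variable generally differs
# from the function evaluated at the mean (here g = exp, X ~ N(1, 0.5^2)).
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(loc=1.0, scale=0.5, size=100_000)

print(np.mean(np.exp(x)))   # E[exp(X)] ~ exp(1 + 0.5**2 / 2) ~ 3.08
print(np.exp(np.mean(x)))   # exp(E[X]) ~ exp(1)              ~ 2.72
```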

  9. Random isotropic one-dimensional XY-model

    NASA Astrophysics Data System (ADS)

    Gonçalves, L. L.; Vieira, A. P.

    1998-01-01

    The 1D isotropic s = ½ XY-model (N sites), with random exchange interaction in a transverse random field, is considered. The random variables satisfy bimodal quenched distributions. The solution is obtained by using the Jordan-Wigner fermionization and a canonical transformation, reducing the problem to diagonalizing an N × N matrix, corresponding to a system of N noninteracting fermions. The calculations are performed numerically for N = 1000, and the field-induced magnetization at T = 0 is obtained by averaging the results for the different samples. For the dilute case, in the uniform field limit, the magnetization exhibits various discontinuities, which are the consequence of the existence of disconnected finite clusters distributed along the chain. Also in this limit, for finite exchange constants J_A and J_B, as the probability of J_A varies from one to zero, the saturation field is seen to vary from Γ_A to Γ_B, where Γ_A (Γ_B) is the value of the saturation field for the pure case with exchange constant equal to J_A (J_B).

  10. Breakage mechanics—Part I: Theory

    NASA Astrophysics Data System (ADS)

    Einav, Itai

    2007-06-01

    Different measures have been suggested for quantifying the amount of fragmentation in randomly compacted crushable aggregates. A most effective and popular measure is to adopt variants of Hardin's [1985. Crushing of soil particles. J. Geotech. Eng. ASCE 111(10), 1177-1192] definition of relative breakage 'Br'. In this paper we further develop the concept of breakage to formulate a new continuum mechanics theory for crushable granular materials based on statistical and thermomechanical principles. Analogous to the damage internal variable 'D' which is used in continuum damage mechanics (CDM), here the breakage internal variable 'B' is adopted. This internal variable represents a particular form of the relative breakage 'Br' and measures the relative distance of the current grain size distribution from the initial and ultimate distributions. Similar to 'D', 'B' varies from zero to one and describes processes of micro-fractures and the growth of surface area. However, unlike damage that is most suitable to tensioned solid-like materials, the breakage is aimed towards compressed granular matter. While damage effectively represents the opening of micro-cavities and cracks, breakage represents comminution of particles. We term the new theory continuum breakage mechanics (CBM), reflecting the analogy with CDM. A focus is given to developing fundamental concepts and postulates, and identifying the physical meaning of the various variables. In this part of the paper we limit the study to describe an ideal dissipative process that includes breakage without plasticity. Plastic strains are essential, however, in representing aspects that relate to frictional dissipation, and this is covered in Part II of this paper together with model examples.

  11. Zero field reversal probability in thermally assisted magnetization reversal

    NASA Astrophysics Data System (ADS)

    Prasetya, E. B.; Utari; Purnama, B.

    2017-11-01

    This paper discusses the zero-field reversal probability in thermally assisted magnetization reversal (TAMR). The appearance of a reversal probability at zero field is investigated through micromagnetic simulation by solving the stochastic Landau-Lifshitz-Gilbert (LLG) equation. A perpendicularly anisotropic magnetic dot of 50×50×20 nm3 is considered as a single-cell magnetic storage element of magnetic random access memory (MRAM). Thermally assisted magnetization reversal was performed by cooling the writing process from near the Curie point to room temperature over 20 runs for different randomly magnetized states. The results show that the reversal probability under zero magnetic field decreased with increasing energy barrier. A zero-field switching probability of 55% was attained for an energy barrier of 60 k_BT, and the reversal probability becomes zero at an energy barrier of 2348 k_BT. The highest zero-field switching probability of 55%, attained for an energy barrier of 60 k_BT, corresponds to a switching field of 150 Oe.

  12. Habit Reversal versus Object Manipulation Training for Treating Nail Biting: A Randomized Controlled Clinical Trial

    PubMed Central

    Ghanizadeh, Ahmad; Bazrafshan, Amir; Dehbozorgi, Gholamreza

    2013-01-01

    Objective This is a parallel, three-group, randomized, controlled clinical trial, with outcomes evaluated up to three months after randomization, for children and adolescents with chronic nail biting. The current study investigates the efficacy of habit reversal training (HRT) and compares its effect with object manipulation training (OMT), considering the limitations of the current literature. Method Ninety-one children and adolescents with nail biting were randomly allocated to one of three groups: HRT (n = 30), OMT (n = 30), and a wait-list or control group (n = 31). The mean length of the nails was considered the main outcome. Results The mean length of the nails after one month in the HRT and OMT groups increased compared to the waiting list group (P < 0.001, P < 0.001, respectively). In the long term, both OMT and HRT increased the mean length of the nails (P < 0.01), but HRT was more effective than OMT (P < 0.021). The parent-reported frequency of nail biting showed results similar to the nail-length assessment in the long term. The numbers of children who completely stopped nail biting in the HRT and OMT groups during three months were 8 and 7, respectively. This number was zero during one month for the wait-list group. Conclusion This trial showed that HRT is more effective than the wait-list condition and OMT in increasing the mean nail length of children and adolescents in the long term. PMID:24130603

  13. Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)

    NASA Astrophysics Data System (ADS)

    Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi

    2017-06-01

    Hurdle negative binomial regression is a method that can be used for a discrete dependent variable with excess zeros and under- or overdispersion. It uses a two-part approach. The first part, the zero hurdle model, models whether the dependent variable is zero; the second part, the truncated negative binomial model, models the non-zero (positive integer) values of the dependent variable. The discrete dependent variable in such cases is censored for some values. The type of censoring studied in this research is right censoring. This study aims to obtain the parameter estimator of hurdle negative binomial regression for a right-censored dependent variable. Parameter estimation is carried out by maximum likelihood estimation (MLE). Hurdle negative binomial regression for a right-censored dependent variable is applied to the number of neonatorum tetanus cases in Indonesia. The data are counts that contain zero values in some observations and a variety of other values. This study also aims to obtain the parameter estimator and test statistic for the censored hurdle negative binomial model. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of baby health care coverage and neonatal visits.
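
    For intuition about the two-part structure (but not the right censoring or the maximum likelihood estimator derived in the study), the sketch below simulates data from a hurdle model with a Bernoulli zero part and a zero-truncated negative binomial positive part; all parameter values are invented.

```python
# Two-part hurdle data-generating process: a Bernoulli hurdle decides
# zero vs. positive, and positives come from a zero-truncated negative
# binomial (drawn here by simple rejection).
import numpy as np

rng = np.random.default_rng(4)
n = 5000
p_zero = 0.6          # P(Y = 0) from the hurdle part
r, p = 2.0, 0.4       # negative binomial parameters for the positive part

def trunc_negbin(size):
    """Negative binomial draws conditioned on being > 0 (rejection sampling)."""
    out = np.empty(size, dtype=int)
    filled = 0
    while filled < size:
        draws = rng.negative_binomial(r, p, size=size)
        draws = draws[draws > 0][: size - filled]
        out[filled:filled + draws.size] = draws
        filled += draws.size
    return out

is_zero = rng.random(n) < p_zero
y = np.zeros(n, dtype=int)
y[~is_zero] = trunc_negbin(int((~is_zero).sum()))

print((y == 0).mean())   # close to p_zero
print(y[y > 0].mean())   # mean of the truncated negative binomial part
```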

  14. Summary of percentages of zero daily mean streamflow for 712 U.S. Geological Survey streamflow-gaging stations in Texas through 2003

    USGS Publications Warehouse

    Asquith, William H.; Vrabel, Joseph; Roussel, Meghan C.

    2007-01-01

    Analysts and managers of surface-water resources might have interest in the zero-flow potential for U.S. Geological Survey (USGS) streamflow-gaging stations in Texas. The USGS, in cooperation with the Texas Commission on Environmental Quality, initiated a data and reporting process to generate summaries of percentages of zero daily mean streamflow for 712 USGS streamflow-gaging stations in Texas. A summary of the percentages of zero daily mean streamflow for most active and inactive, continuous-record gaging stations in Texas provides valuable information by conveying the historical perspective for zero-flow potential for the watershed. The summaries of percentages of zero daily mean streamflow for each station are graphically depicted using two thematic perspectives: annual and monthly. The annual perspective consists of graphs of annual percentages of zero streamflow by year with the addition of lines depicting the mean and median annual percentage of zero streamflow. Monotonic trends in the percentages of zero streamflow also are identified using Kendall's tau. The monthly perspective consists of graphs of the percentage of zero streamflow by month with lines added to indicate the mean and median monthly percentage of zero streamflow. One or more summaries could be used in a watershed, river basin, or other regional context by analysts and managers of surface-water resources to guide scientific, regulatory, or other inquiries of zero-flow or other low-flow conditions in Texas.
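
    A minimal sketch of the annual summary and trend test described above, using synthetic daily flows and hypothetical column names rather than actual gaging-station records:

```python
# Annual percentage of zero daily mean flows plus a Kendall's tau trend test.
# The DataFrame columns ('date', 'flow_cfs') and the synthetic flows are
# stand-ins for real station records.
import numpy as np
import pandas as pd
from scipy.stats import kendalltau

rng = np.random.default_rng(5)
dates = pd.date_range("1990-01-01", "2003-12-31", freq="D")
flow = np.where(rng.random(dates.size) < 0.3, 0.0, rng.gamma(2.0, 50.0, dates.size))
df = pd.DataFrame({"date": dates, "flow_cfs": flow})

annual_pct_zero = df.groupby(df["date"].dt.year)["flow_cfs"].apply(
    lambda x: 100.0 * (x == 0).mean()
)
tau, p_value = kendalltau(np.arange(annual_pct_zero.size), annual_pct_zero.values)
print(annual_pct_zero.round(1))
print(tau, p_value)   # no trend expected for this synthetic series
```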

  15. Entropy Inequalities for Stable Densities and Strengthened Central Limit Theorems

    NASA Astrophysics Data System (ADS)

    Toscani, Giuseppe

    2016-10-01

    We consider the central limit theorem for stable laws in the case of the standardized sum of independent and identically distributed random variables with a regular probability density function. By showing decay of different entropy functionals along the sequence we prove convergence with explicit rate in various norms to a Lévy centered density of parameter λ > 1. This introduces a new information-theoretic approach to the central limit theorem for stable laws, in which the main argument is shown to be the relative fractional Fisher information, recently introduced in Toscani (Ricerche Mat 65(1):71-91, 2016). In particular, it is proven that, with respect to the relative fractional Fisher information, the Lévy density satisfies an analogue of the logarithmic Sobolev inequality, which allows one to pass from the monotonicity and decay to zero of the relative fractional Fisher information in the standardized sum to the decay to zero in relative entropy with an explicit decay rate.

  16. Random one-of-N selector

    DOEpatents

    Kronberg, J.W.

    1993-04-20

    An apparatus for selecting at random one item of N items on the average comprising counter and reset elements for counting repeatedly between zero and N, a number selected by the user, a circuit for activating and deactivating the counter, a comparator to determine if the counter stopped at a count of zero, an output to indicate an item has been selected when the count is zero or not selected if the count is not zero. Randomness is provided by having the counter cycle very often while varying the relatively longer duration between activation and deactivation of the count. The passive circuit components of the activating/deactivating circuit and those of the counter are selected for the sensitivity of their response to variations in temperature and other physical characteristics of the environment so that the response time of the circuitry varies. Additionally, the items themselves, which may be people, may vary in shape or the time they press a pushbutton, so that, for example, an ultrasonic beam broken by the item or person passing through it will add to the duration of the count and thus to the randomness of the selection.

  17. Random one-of-N selector

    DOEpatents

    Kronberg, James W.

    1993-01-01

    An apparatus for selecting at random one item of N items on the average comprising counter and reset elements for counting repeatedly between zero and N, a number selected by the user, a circuit for activating and deactivating the counter, a comparator to determine if the counter stopped at a count of zero, an output to indicate an item has been selected when the count is zero or not selected if the count is not zero. Randomness is provided by having the counter cycle very often while varying the relatively longer duration between activation and deactivation of the count. The passive circuit components of the activating/deactivating circuit and those of the counter are selected for the sensitivity of their response to variations in temperature and other physical characteristics of the environment so that the response time of the circuitry varies. Additionally, the items themselves, which may be people, may vary in shape or the time they press a pushbutton, so that, for example, an ultrasonic beam broken by the item or person passing through it will add to the duration of the count and thus to the randomness of the selection.

  18. Robust portfolio selection based on asymmetric measures of variability of stock returns

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Tan, Shaohua

    2009-10-01

    This paper addresses a new uncertainty set, the interval random uncertainty set, for robust optimization. The form of the interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply our interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of the mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.
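
    For context, the classical mean-variance problem that the robust formulation builds on can be written down in a few lines. The equality-constrained closed-form solution below uses simulated returns and is only a baseline sketch, not the paper's interval random chance-constrained program.

```python
# Minimum-variance portfolio subject to a target mean return, solved via the
# KKT system of the equality-constrained quadratic program. Data are simulated.
import numpy as np

rng = np.random.default_rng(6)
returns = rng.normal(0.001, 0.02, size=(500, 4))   # hypothetical daily returns, 4 assets
mu = returns.mean(axis=0)
Sigma = np.cov(returns, rowvar=False)

target = mu.mean()                                 # required portfolio mean return
ones = np.ones(len(mu))

# min w' Sigma w  subject to  w'mu = target and w'1 = 1
A = np.block([[2 * Sigma, mu[:, None], ones[:, None]],
              [mu[None, :], np.zeros((1, 2))],
              [ones[None, :], np.zeros((1, 2))]])
b = np.concatenate([np.zeros(len(mu)), [target, 1.0]])
w = np.linalg.solve(A, b)[: len(mu)]
print(w, w @ mu, w @ Sigma @ w)                    # weights, mean, variance
```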

  19. Does manual therapy improve pain and function in patients with plantar fasciitis? A systematic review.

    PubMed

    Fraser, John J; Corbett, Revay; Donner, Chris; Hertel, Jay

    2018-05-01

    To assess if manual therapy (MT) in the treatment of plantar fasciitis (PF) patients improves pain and function more effectively than other interventions. A systematic review of all randomized control trials (RCTs) investigating the effects of MT in the treatment of human patients with PF, plantar fasciosis, and heel pain published in English on PubMed, CINAHL, Cochrane, and Web of Science databases was conducted. Research quality was appraised utilizing the PEDro scale. Cohen's d effect sizes (ES) and associated 95% confidence intervals (CI) were calculated between treatment groups. Seven RCTs were selected that employed MT as a primary independent variable and pain and function as dependent variables. Inclusion of MT in treatment yielded greater improvement in function (6 of 7 studies, CI that did not cross zero in 14 of 25 variables, ES = 0.5-21.5) and algometry (3 of 3 studies, CI that did not cross zero in 9 of 10 variables, ES = 0.7-3.0) from 4 weeks to 6 months when compared to interventions such as stretching, strengthening, or modalities. Though pain improved with the inclusion of MT, ES calculations favored MT in only 2 of 6 studies (3 of 13 variables), and MT was otherwise equivalent in effectiveness to comparison interventions. MT is clearly associated with improved function and may be associated with pain reduction in PF patients. It is recommended that clinicians consider use of both joint and soft tissue mobilization techniques in conjunction with stretching and strengthening when treating patients with PF. Treatment, level 1a.

  20. RESIDENTIAL EXPOSURE TO EXTREMELY LOW FREQUENCY ELECTRIC AND MAGNETIC FIELDS IN THE CITY OF RAMALLAH-PALESTINE.

    PubMed

    Abuasbi, Falastine; Lahham, Adnan; Abdel-Raziq, Issam Rashid

    2018-04-01

    This study focused on the measurement of residential exposure to power frequency (50-Hz) electric and magnetic fields in the city of Ramallah-Palestine. A group of 32 semi-randomly selected residences distributed across the city was investigated for field variations. Measurements were performed with the Spectrum Analyzer NF-5035 and were carried out at one meter above ground level in the residence's bedroom or living room under both zero- and normal-power conditions. Field variations were recorded over 6-min intervals and sometimes over a few hours. Electric fields under normal-power use were relatively low; ~59% of residences experienced mean electric fields <10 V/m. The highest mean electric field of 66.9 V/m was found at residence R27. However, electric field values were log-normally distributed with a geometric mean and geometric standard deviation of 9.6 and 3.5 V/m, respectively. Background electric fields, measured under zero-power use, were very low; ~80% of residences experienced background electric fields <1 V/m. Under normal-power use, the highest mean magnetic field (0.45 μT) was found at residence R26, where an indoor power substation exists. However, ~81% of residences experienced mean magnetic fields <0.1 μT. Magnetic fields measured inside the 32 residences also showed a log-normal distribution with a geometric mean and geometric standard deviation of 0.04 and 3.14 μT, respectively. Under zero-power conditions, ~7% of residences experienced an average background magnetic field >0.1 μT. Fields from appliances showed a maximum mean electric field of 67.4 V/m from a hair dryer, and a maximum mean magnetic field of 13.7 μT from a microwave oven. However, no single result surpassed the ICNIRP limits for general public exposure to ELF fields; still, the interval 0.3-0.4 μT discussed for possible non-thermal health impacts of exposure to ELF magnetic fields was experienced in 13% of the residences.

  1. The Cost of Accumulating Evidence in Perceptual Decision Making

    PubMed Central

    Drugowitsch, Jan; Moreno-Bote, Rubén; Churchland, Anne K.; Shadlen, Michael N.; Pouget, Alexandre

    2012-01-01

    Decision making often involves the accumulation of information over time, but acquiring information typically comes at a cost. Little is known about the cost incurred by animals and humans for acquiring additional information from sensory variables, due, for instance, to attentional efforts. Through a novel integration of diffusion models and dynamic programming, we were able to estimate the cost of making additional observations per unit of time from two monkeys and six humans in a reaction time random dot motion discrimination task. Surprisingly, we find that the cost is neither zero nor constant over time; for both the animals and the humans it features a brief period in which it is constant and then increases. In addition, we show that our theory accurately matches the observed reaction time distributions for each stimulus condition, the time-dependent choice accuracy both conditional on stimulus strength and independent of it, and choice accuracy and mean reaction times as a function of stimulus strength. The theory also correctly predicts that urgency signals in the brain should be independent of the difficulty, or stimulus strength, at each trial. PMID:22423085

  2. On multiple orthogonal polynomials for discrete Meixner measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorokin, Vladimir N

    2010-12-07

    The paper examines two examples of multiple orthogonal polynomials generalizing orthogonal polynomials of a discrete variable, meaning thereby the Meixner polynomials. One example is bound up with a discrete Nikishin system, and the other leads to essentially new effects. The limit distribution of the zeros of polynomials is obtained in terms of logarithmic equilibrium potentials and in terms of algebraic curves. Bibliography: 9 titles.

  3. Heart Rate and Blood Pressure Variability under Moon, Mars and Zero Gravity Conditions During Parabolic Flights

    NASA Astrophysics Data System (ADS)

    Aerts, Wouter; Joosen, Pieter; Widjaja, Devy; Varon, Carolina; Vandeput, Steven; Van Huffel, Sabine; Aubert, Andre E.

    2013-02-01

    Gravity changes during partial-G parabolic flights (0g - 0.16g - 0.38g) lead to changes in modulation of the autonomic nervous system (ANS), studied via heart rate variability (HRV) and blood pressure variability (BPV). HRV and BPV were assessed via classical time and frequency domain measures. Mean systolic and diastolic blood pressure both show increasing trends towards higher gravity levels. The parasympathetic and sympathetic modulation both show an increasing trend with decreasing gravity, although the modulation is sympathetic-predominant during reduced gravity. For the mean heart rate, a non-monotonic relation was found, which can be explained by the increased influence of stress on the heart rate. This study shows that there is a relation between changes in gravity and modulations in the ANS. With this in mind, countermeasures can be developed to reduce postflight orthostatic intolerance.

  4. Blood pressure variability of two ambulatory blood pressure monitors.

    PubMed

    Kallem, Radhakrishna R; Meyers, Kevin E C; Cucchiara, Andrew J; Sawinski, Deirdre L; Townsend, Raymond R

    2014-04-01

    There are no data on the evaluation of blood pressure (BP) variability comparing two ambulatory blood pressure monitoring monitors worn at the same time. Hence, this study was carried out to compare variability of BP in healthy untreated adults using two ambulatory BP monitors worn at the same time over an 8-h period. An Accutorr device was used to measure office BP in the dominant and nondominant arms of 24 participants. Simultaneous 8-h BP and heart rate data were measured in 24 untreated adult volunteers by Mobil-O-Graph (worn for an additional 16 h after removing the Spacelabs monitor) and Spacelabs with both random (N=12) and nonrandom (N=12) assignment of each device to the dominant arm. Average real variability (ARV), SD, coefficient of variation, and variation independent of mean were calculated for systolic blood pressure, diastolic blood pressure, mean arterial pressure, and pulse pressure (PP). Whether the Mobil-O-Graph was applied to the dominant or the nondominant arm, the ARV of mean systolic (P=0.003 nonrandomized; P=0.010 randomized) and PP (P=0.009 nonrandomized; P=0.005 randomized) remained significantly higher than the Spacelabs device, whereas the ARV of the mean arterial pressure was not significantly different. The average BP readings and ARVs for systolic blood pressure and PP obtained by the Mobil-O-Graph were considerably higher for the daytime than the night-time. Given the emerging interest in the effect of BP variability on health outcomes, the accuracy of its measurement is important. Our study raises concerns about the accuracy of pooling international ambulatory blood pressure monitoring variability data using different devices.
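
    For readers unfamiliar with the metrics, average real variability (ARV) is commonly computed as the mean absolute difference between successive readings. The short sketch below computes ARV, the standard deviation, and the coefficient of variation for a simulated systolic series; the data and reading count are made up.

```python
# ARV, SD and coefficient of variation for a simulated series of systolic
# blood pressure readings (illustrative values only).
import numpy as np

rng = np.random.default_rng(7)
sbp = 120 + np.cumsum(rng.normal(0, 3, size=32))   # hypothetical 8-h series of readings

arv = np.mean(np.abs(np.diff(sbp)))   # average real variability
sd = np.std(sbp, ddof=1)              # overall standard deviation
cv = 100.0 * sd / np.mean(sbp)        # coefficient of variation (%)
print(round(arv, 2), round(sd, 2), round(cv, 2))
```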

  5. Metabolomics variable selection and classification in the presence of observations below the detection limit using an extension of ERp.

    PubMed

    van Reenen, Mari; Westerhuis, Johan A; Reinecke, Carolus J; Venter, J Hendrik

    2017-02-02

    ERp is a variable selection and classification method for metabolomics data. ERp uses minimized classification error rates, based on data from a control and experimental group, to test the null hypothesis of no difference between the distributions of variables over the two groups. If the associated p-values are significant they indicate discriminatory variables (i.e. informative metabolites). The p-values are calculated assuming a common continuous strictly increasing cumulative distribution under the null hypothesis. This assumption is violated when zero-valued observations can occur with positive probability, a characteristic of GC-MS metabolomics data, disqualifying ERp in this context. This paper extends ERp to address two sources of zero-valued observations: (i) zeros reflecting the complete absence of a metabolite from a sample (true zeros); and (ii) zeros reflecting a measurement below the detection limit. This is achieved by allowing the null cumulative distribution function to take the form of a mixture between a jump at zero and a continuous strictly increasing function. The extended ERp approach is referred to as XERp. XERp is no longer non-parametric, but its null distributions depend only on one parameter, the true proportion of zeros. Under the null hypothesis this parameter can be estimated by the proportion of zeros in the available data. XERp is shown to perform well with regard to bias and power. To demonstrate the utility of XERp, it is applied to GC-MS data from a metabolomics study on tuberculosis meningitis in infants and children. We find that XERp is able to provide an informative shortlist of discriminatory variables, while attaining satisfactory classification accuracy for new subjects in a leave-one-out cross-validation context. XERp takes into account the distributional structure of data with a probability mass at zero without requiring any knowledge of the detection limit of the metabolomics platform. XERp is able to identify variables that discriminate between two groups by simultaneously extracting information from the difference in the proportion of zeros and shifts in the distributions of the non-zero observations. XERp uses simple rules to classify new subjects and a weight pair to adjust for unequal sample sizes or sensitivity and specificity requirements.

  6. New Quasar Surveys with WIRO: Data and Calibration for Studies of Variability

    NASA Astrophysics Data System (ADS)

    Lyke, Bradley; Bassett, Neil; Deam, Sophie; Dixon, Don; Griffith, Emily; Harvey, William; Lee, Daniel; Haze Nunez, Evan; Parziale, Ryan; Witherspoon, Catherine; Myers, Adam D.; Findlay, Joseph; Kobulnicky, Henry A.; Dale, Daniel A.

    2017-01-01

    Measurements of quasar variability offer the potential for understanding the physics of accretion processes around supermassive black holes. However, generating structure functions in order to characterize quasar variability can be observationally taxing as it requires imaging of quasars over a large variety of date ranges. To begin to address this problem, we have conducted an imaging survey of sections of Sloan Digital Sky Survey (SDSS) Stripe 82 at the Wyoming Infrared Observatory (WIRO). We used standard stars to calculate zero-point offsets between WIRO and SDSS observations in the ugriz magnitude system. After finding the zero-point offset, we accounted for further offsets by comparing standard star magnitudes in each WIRO frame to coadded magnitudes from Stripe 82 and applying a linear correction. Known (i.e. spectroscopically confirmed) quasars at the epoch of our WIRO observations (Summer 2016) and at every epoch in SDSS Stripe 82 (~80 total dates) were hence calibrated to a similar magnitude system. The algorithm for this calibration compared 1500 randomly selected standard stars with an MJD within 0.07 of the MJD of each quasar of interest, for each of the five ugriz filters. Ultimately ~1000 known quasars in Stripe 82 were identified by WIRO and their SDSS-WIRO magnitudes were calibrated to a similar scale in order to generate ensemble structure functions. This work is supported by the National Science Foundation under REU grant AST 1560461.

  7. On estimating gravity anomalies: A comparison of least squares collocation with least squares techniques

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Lowrey, B.

    1976-01-01

    The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well known regression equations which provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares-Stokes' function solution when the conventional solution utilizes properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described, and its numerical properties are compared with the numerical properties of the conventional least squares estimator.
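
    The regression equations referred to above are the conditional mean and covariance of a jointly Gaussian vector. A minimal numerical sketch follows, with illustrative covariance blocks rather than a geodetic covariance model.

```python
# For jointly Gaussian g (anomalies) and d (data):
#   E[g | d]   = mu_g + C_gd C_dd^{-1} (d - mu_d)
#   Cov[g | d] = C_gg - C_gd C_dd^{-1} C_dg
# The numbers below are illustrative only.
import numpy as np

mu_g, mu_d = np.zeros(2), np.zeros(3)
C_gg = np.array([[4.0, 1.0], [1.0, 3.0]])
C_gd = np.array([[1.5, 0.5, 0.2], [0.3, 1.0, 0.4]])
C_dd = np.array([[2.0, 0.3, 0.1], [0.3, 2.5, 0.2], [0.1, 0.2, 1.8]])

d_obs = np.array([0.7, -0.4, 1.1])        # one realization of the data vector
K = C_gd @ np.linalg.inv(C_dd)
g_hat = mu_g + K @ (d_obs - mu_d)         # collocation-type estimate of the anomalies
P = C_gg - K @ C_gd.T                     # its error covariance
print(g_hat)
print(P)
```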

  8. Comparison of several maneuvering target tracking models

    NASA Astrophysics Data System (ADS)

    McIntyre, Gregory A.; Hintz, Kenneth J.

    1998-07-01

    The tracking of maneuvering targets is complicated by the fact that acceleration is not directly observable or measurable. Additionally, acceleration can be induced by a variety of sources including human input, autonomous guidance, or atmospheric disturbances. The approaches to tracking maneuvering targets can be divided into two categories, both of which assume that the maneuver input command is unknown. One approach is to model the maneuver as a random process. The other approach assumes that the maneuver is not random and that it is either detected or estimated in real time. The random process models generally assume one of two statistical properties, either white noise or an autocorrelated noise. The multiple-model approach is generally used with the white noise model while a zero-mean, exponentially correlated acceleration approach is used with the autocorrelated noise model. The nonrandom approach uses maneuver detection to correct the state estimate or a variable dimension filter to augment the state estimate with an extra state component during a detected maneuver. Another issue with the tracking of maneuvering targets is whether to perform the Kalman filter in Polar or Cartesian coordinates. This paper will examine and compare several exponentially correlated acceleration approaches in both Polar and Cartesian coordinates for accuracy and computational complexity. They include the Singer model in both Polar and Cartesian coordinates, the Singer model in Polar coordinates converted to Cartesian coordinates, Helferty's third order rational approximation of the Singer model and the Bar-Shalom and Fortmann model. This paper shows that these models all provide very accurate position estimates with only minor differences in velocity estimates and compares the computational complexity of the models.

  9. Missing Value Imputation Approach for Mass Spectrometry-based Metabolomics Data.

    PubMed

    Wei, Runmin; Wang, Jingye; Su, Mingming; Jia, Erik; Chen, Shaoqiu; Chen, Tianlu; Ni, Yan

    2018-01-12

    Missing values exist widely in mass-spectrometry (MS) based metabolomics data. Various methods have been applied for handling missing values, but the selection can significantly affect following data analyses. Typically, there are three types of missing values, missing not at random (MNAR), missing at random (MAR), and missing completely at random (MCAR). Our study comprehensively compared eight imputation methods (zero, half minimum (HM), mean, median, random forest (RF), singular value decomposition (SVD), k-nearest neighbors (kNN), and quantile regression imputation of left-censored data (QRILC)) for different types of missing values using four metabolomics datasets. Normalized root mean squared error (NRMSE) and NRMSE-based sum of ranks (SOR) were applied to evaluate imputation accuracy. Principal component analysis (PCA)/partial least squares (PLS)-Procrustes analysis were used to evaluate the overall sample distribution. Student's t-test followed by correlation analysis was conducted to evaluate the effects on univariate statistics. Our findings demonstrated that RF performed the best for MCAR/MAR and QRILC was the favored one for left-censored MNAR. Finally, we proposed a comprehensive strategy and developed a public-accessible web-tool for the application of missing value imputation in metabolomics ( https://metabolomics.cc.hawaii.edu/software/MetImp/ ).

  10. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    PubMed

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
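
    To illustrate the generative assumption (zero-mean Gaussian samples whose variance is itself an inverse-gamma random variable), the sketch below simulates such a signal and checks its heavy-tailed marginal; the shape and scale values are assumptions, not parameters estimated in the paper.

```python
# Zero-mean Gaussian samples with inverse-gamma distributed variance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
alpha, beta = 3.0, 2.0                      # assumed inverse-gamma shape and scale
n = 100_000

variances = stats.invgamma.rvs(alpha, scale=beta, size=n, random_state=rng)
emg = rng.normal(0.0, np.sqrt(variances))   # EMG-like samples with random variance

print(stats.kurtosis(emg))                  # positive excess kurtosis (heavy tails)
print(np.var(emg), beta / (alpha - 1))      # empirical vs. E[variance] = beta / (alpha - 1)
```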

  11. Even and odd normalized zero modes in random interacting Majorana models respecting the parity P and the time-reversal-symmetry T

    NASA Astrophysics Data System (ADS)

    Monthus, Cécile

    2018-06-01

    For random interacting Majorana models where the only symmetries are the parity P and the time-reversal-symmetry T, various approaches are compared to construct exact even and odd normalized zero modes Γ in finite size, i.e. Hermitian operators that commute with the Hamiltonian, that square to the identity, and that commute (even) or anticommute (odd) with the parity P. Even normalized zero-modes are well known under the name of ‘pseudo-spins’ in the field of many-body-localization or more precisely ‘local integrals of motion’ (LIOMs) in the many-body-localized-phase where the pseudo-spins happen to be spatially localized. Odd normalized zero-modes are popular under the name of ‘Majorana zero modes’ or ‘strong zero modes’. Explicit examples for small systems are described in detail. Applications to real-space renormalization procedures based on blocks containing an odd number of Majorana fermions are also discussed.

  12. Neutron monitor generated data distributions in quantum variational Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kussainov, A. S.; Pya, N.

    2016-08-01

    We have assessed the potential applications of neutron monitor hardware as a random number generator for normal and uniform distributions. The data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and unit variance is sufficient to obtain a stable standard normal random variate. The distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of the uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is of importance and the conventional one-minute resolution neutron count is insufficient, we could always settle for an efficient seed generator to feed into a faster algorithmic random number generator or create a buffer.
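
    A sketch of the pipeline described above, with synthetic counts standing in for neutron monitor data: fit a smooth spline trend, subtract it, scale the residuals to zero mean and unit variance, and map them to uniform variates through the standard normal CDF (the probability integral transform). The smoothing factor is an assumption, not the one used by the authors.

```python
# Spline detrending, standardization, and mapping to Uniform(0, 1) via the
# standard normal CDF, on synthetic one-minute counts.
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.stats import norm

rng = np.random.default_rng(9)
t = np.arange(1440.0)                              # one day of 1-min counts (synthetic)
counts = 6000 + 50 * np.sin(2 * np.pi * t / 1440) + rng.normal(0, 20, t.size)

trend = UnivariateSpline(t, counts, s=len(t) * 400)(t)   # assumed smoothing factor
resid = counts - trend
z = (resid - resid.mean()) / resid.std()           # approximately standard normal
u = norm.cdf(z)                                    # approximately Uniform(0, 1)
print(u.min(), u.max(), u.mean())                  # mean near 0.5
```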

  13. Properties of Zero-Free Transfer Function Matrices

    NASA Astrophysics Data System (ADS)

    Anderson, Brian D. O.; Deistler, Manfred

    Transfer functions of linear, time-invariant finite-dimensional systems with more outputs than inputs, as arise in factor analysis (for example in econometrics), have, for state-variable descriptions with generic entries in the relevant matrices, no finite zeros. This paper gives a number of characterizations of such systems (and indeed square discrete-time systems with no zeros), using state-variable, impulse response, and matrix-fraction descriptions. Key properties include the ability to recover the input values at any time from a bounded interval of output values, without any knowledge of an initial state, and an ability to verify the no-zero property in terms of a property of the impulse response coefficient matrices. Results are particularized to cases where the transfer function matrix in question may or may not have a zero at infinity or a zero at zero.

  14. Estimating cavity tree and snag abundance using negative binomial regression models and nearest neighbor imputation methods

    Treesearch

    Bianca N.I. Eskelson; Hailemariam Temesgen; Tara M. Barrett

    2009-01-01

    Cavity tree and snag abundance data are highly variable and contain many zero observations. We predict cavity tree and snag abundance from variables that are readily available from forest cover maps or remotely sensed data using negative binomial (NB), zero-inflated NB, and zero-altered NB (ZANB) regression models as well as nearest neighbor (NN) imputation methods....

  15. Sensitivity study of the monogroove with screen heat pipe design

    NASA Technical Reports Server (NTRS)

    Evans, Austin L.; Joyce, Martin

    1988-01-01

    The present sensitivity study of design variable effects on the performance of a monogroove-with-screen heat pipe obtains performance curves for maximum heat-transfer rates vs. operating temperatures by means of a computer code; performance projections for both 1-g and zero-g conditions are obtainable. The variables in question were liquid and vapor channel design, wall groove design, and the number of feed lines in the evaporator and condenser. The effect on performance of three different working fluids, namely ammonia, methanol, and water, was also determined. Greatest sensitivity was to changes in liquid and vapor channel diameters.

  16. Quantum interference magnetoconductance of polycrystalline germanium films in the variable-range hopping regime

    NASA Astrophysics Data System (ADS)

    Li, Zhaoguo; Peng, Liping; Zhang, Jicheng; Li, Jia; Zeng, Yong; Zhan, Zhiqiang; Wu, Weidong

    2018-06-01

    Direct evidence of quantum interference magnetotransport in polycrystalline germanium films in the variable-range hopping (VRH) regime is reported. The temperature dependence of the conductivity of the germanium films fulfilled the Mott VRH mechanism with the form ? in the low-temperature regime (?). For the magnetotransport behaviour of our germanium films in the VRH regime, a crossover from negative magnetoconductance at low field to positive magnetoconductance at high field is observed when the zero-field conductivity is higher than the critical value (?). In the regime of ?, the magnetoconductance is positive and quadratic in the field for some germanium films. These features are in agreement with the VRH magnetotransport theory based on the quantum interference effect among random paths in the hopping process.

  17. Perturbed effects at radiation physics

    NASA Astrophysics Data System (ADS)

    Külahcı, Fatih; Şen, Zekâi

    2013-09-01

    Perturbation methodology is applied in order to assess the behaviour of the linear attenuation coefficient, mass attenuation coefficient and cross-section with random components in the basic variables, such as the radiation amounts frequently used in radiation physics and chemistry. Additionally, the layer attenuation coefficient (LAC) and perturbed LAC (PLAC) are proposed for different contact materials. The perturbation methodology provides the opportunity to obtain results with random deviations from the average behavior of each variable that enters the whole mathematical expression. The basic photon intensity variation expression, the inverse exponential power law (the Beer-Lambert law), is adopted for the exposition of the perturbation method. Perturbed results are presented not only in terms of the mean but also in terms of the standard deviation and the correlation coefficients. Such perturbation expressions allow one to assess small random variability in the basic variables.
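
    A Monte-Carlo illustration of the idea, perturbing the attenuation coefficient in the Beer-Lambert law and reporting the mean and standard deviation of the transmitted intensity; all numbers are arbitrary.

```python
# Random perturbation of mu in I = I0 * exp(-mu * x), summarized by the
# mean and standard deviation of the transmitted intensity.
import numpy as np

rng = np.random.default_rng(10)
I0, x = 1.0, 2.0                      # incident intensity, thickness (cm)
mu_mean, mu_sd = 0.5, 0.05            # attenuation coefficient (1/cm) and its spread

mu = rng.normal(mu_mean, mu_sd, size=100_000)
I = I0 * np.exp(-mu * x)

print(I.mean(), I.std())              # perturbed mean and standard deviation
print(I0 * np.exp(-mu_mean * x))      # unperturbed value for comparison
```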

  18. Zero-inflated spatio-temporal models for disease mapping.

    PubMed

    Torabi, Mahmoud

    2017-05-01

    In this paper, our aim is to analyze geographical and temporal variability of disease incidence when spatio-temporal count data have excess zeros. To that end, we consider random effects in zero-inflated Poisson models to investigate geographical and temporal patterns of disease incidence. Spatio-temporal models that employ conditionally autoregressive smoothing across the spatial dimension and B-spline smoothing over the temporal dimension are proposed. The analysis of these complex models is computationally difficult from the frequentist perspective. On the other hand, the advent of the Markov chain Monte Carlo algorithm has made the Bayesian analysis of complex models computationally convenient. Recently developed data cloning method provides a frequentist approach to mixed models that is also computationally convenient. We propose to use data cloning, which yields to maximum likelihood estimation, to conduct frequentist analysis of zero-inflated spatio-temporal modeling of disease incidence. One of the advantages of the data cloning approach is that the prediction and corresponding standard errors (or prediction intervals) of smoothing disease incidence over space and time is easily obtained. We illustrate our approach using a real dataset of monthly children asthma visits to hospital in the province of Manitoba, Canada, during the period April 2006 to March 2010. Performance of our approach is also evaluated through a simulation study. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
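
    As a minimal illustration of the zero-inflated Poisson building block (without the conditionally autoregressive and B-spline structure or the data-cloning estimation used in the paper), the sketch below simulates counts with structural zeros and an area-level random intercept; all parameter values are invented.

```python
# Zero-inflated Poisson counts with an area-level random intercept.
import numpy as np

rng = np.random.default_rng(11)
n_areas, n_months = 20, 48
pi = 0.3                                    # zero-inflation probability
u = rng.normal(0.0, 0.4, size=n_areas)      # area-level random intercepts

area = np.repeat(np.arange(n_areas), n_months)
lam = np.exp(1.0 + u[area])                 # Poisson means by area-month
structural_zero = rng.random(area.size) < pi
counts = np.where(structural_zero, 0, rng.poisson(lam))

print((counts == 0).mean())                 # exceeds the Poisson-only zero fraction
print(np.exp(-lam).mean() * (1 - pi) + pi)  # model-implied P(Y = 0), for comparison
```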

  19. Maximum likelihood estimation for life distributions with competing failure modes

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1979-01-01

    Systems that are placed on test at time zero, function for a period, and fail at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables the item is subjected to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte Carlo results indicate that the methods are promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
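    As a minimal sketch of the estimation step only (ignoring competing failure modes, censoring, and stress dependence, which the report actually addresses), the following fits a smallest extreme-value life distribution by maximum likelihood; the simulated failure times and parameter values are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated failure times from a smallest extreme-value (Gumbel-for-minima)
# distribution with an assumed location of 100 and scale of 10 (arbitrary units).
times = stats.gumbel_l.rvs(loc=100, scale=10, size=200, random_state=rng)

# Maximum likelihood fit of the location and scale parameters.
loc_hat, scale_hat = stats.gumbel_l.fit(times)
print(f"location MLE: {loc_hat:.2f}, scale MLE: {scale_hat:.2f}")
```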

  20. Variable speed wind turbine generator with zero-sequence filter

    DOEpatents

    Muljadi, Eduard

    1998-01-01

    A variable speed wind turbine generator system to convert mechanical power into electrical power or energy and to recover the electrical power or energy in the form of three phase alternating current and return the power or energy to a utility or other load with single phase sinusoidal waveform at sixty (60) hertz and unity power factor includes an excitation controller for generating three phase commanded current, a generator, and a zero sequence filter. Each commanded current signal includes two components: a positive sequence variable frequency current signal to provide the balanced three phase excitation currents required in the stator windings of the generator to generate the rotating magnetic field needed to recover an optimum level of real power from the generator; and a zero sequence sixty (60) hertz current signal to allow the real power generated by the generator to be supplied to the utility. The positive sequence current signals are balanced three phase signals and are prevented from entering the utility by the zero sequence filter. The zero sequence current signals have zero phase displacement from each other and are prevented from entering the generator by the star connected stator windings. The zero sequence filter allows the zero sequence current signals to pass through to deliver power to the utility.

  1. Variable Speed Wind Turbine Generator with Zero-sequence Filter

    DOEpatents

    Muljadi, Eduard

    1998-08-25

    A variable speed wind turbine generator system to convert mechanical power into electrical power or energy and to recover the electrical power or energy in the form of three phase alternating current and return the power or energy to a utility or other load with single phase sinusoidal waveform at sixty (60) hertz and unity power factor includes an excitation controller for generating three phase commanded current, a generator, and a zero sequence filter. Each commanded current signal includes two components: a positive sequence variable frequency current signal to provide the balanced three phase excitation currents required in the stator windings of the generator to generate the rotating magnetic field needed to recover an optimum level of real power from the generator; and a zero sequence sixty (60) hertz current signal to allow the real power generated by the generator to be supplied to the utility. The positive sequence current signals are balanced three phase signals and are prevented from entering the utility by the zero sequence filter. The zero sequence current signals have zero phase displacement from each other and are prevented from entering the generator by the star connected stator windings. The zero sequence filter allows the zero sequence current signals to pass through to deliver power to the utility.

  2. Variable speed wind turbine generator with zero-sequence filter

    DOEpatents

    Muljadi, E.

    1998-08-25

    A variable speed wind turbine generator system to convert mechanical power into electrical power or energy and to recover the electrical power or energy in the form of three phase alternating current and return the power or energy to a utility or other load with single phase sinusoidal waveform at sixty (60) hertz and unity power factor includes an excitation controller for generating three phase commanded current, a generator, and a zero sequence filter. Each commanded current signal includes two components: a positive sequence variable frequency current signal to provide the balanced three phase excitation currents required in the stator windings of the generator to generate the rotating magnetic field needed to recover an optimum level of real power from the generator; and a zero sequence sixty (60) hertz current signal to allow the real power generated by the generator to be supplied to the utility. The positive sequence current signals are balanced three phase signals and are prevented from entering the utility by the zero sequence filter. The zero sequence current signals have zero phase displacement from each other and are prevented from entering the generator by the star connected stator windings. The zero sequence filter allows the zero sequence current signals to pass through to deliver power to the utility. 14 figs.
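    As background (not part of the patent text), the separation the abstract relies on can be seen from the standard symmetrical-component decomposition: a balanced, 120-degree-displaced three-phase set has no zero-sequence content, while three in-phase signals are purely zero sequence.

```python
import numpy as np

a = np.exp(2j * np.pi / 3)   # 120-degree rotation operator

def symmetrical_components(ia, ib, ic):
    """Zero-, positive-, and negative-sequence components of three phasors."""
    i0 = (ia + ib + ic) / 3
    i1 = (ia + a * ib + a**2 * ic) / 3
    i2 = (ia + a**2 * ib + a * ic) / 3
    return i0, i1, i2

# A balanced positive-sequence set: phases displaced by 120 degrees.
ia, ib, ic = 1.0 + 0j, a**2, a
print("balanced set:", symmetrical_components(ia, ib, ic))    # zero-sequence ~ 0

# Three in-phase currents: zero phase displacement between phases.
print("in-phase set:", symmetrical_components(1.0, 1.0, 1.0)) # purely zero-sequence
```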

  3. Generated effect modifiers (GEM's) in randomized clinical trials.

    PubMed

    Petkova, Eva; Tarpey, Thaddeus; Su, Zhe; Ogden, R Todd

    2017-01-01

    In a randomized clinical trial (RCT), it is often of interest not only to estimate the effect of various treatments on the outcome, but also to determine whether any patient characteristic has a different relationship with the outcome, depending on treatment. In regression models for the outcome, if there is a non-zero interaction between treatment and a predictor, that predictor is called an "effect modifier". Identification of such effect modifiers is crucial as we move towards precision medicine, that is, optimizing individual treatment assignment based on patient measurements assessed when presenting for treatment. In most settings, there will be several baseline predictor variables that could potentially modify the treatment effects. This article proposes optimal methods of constructing a composite variable (defined as a linear combination of pre-treatment patient characteristics) in order to generate an effect modifier in an RCT setting. Several criteria are considered for generating effect modifiers and their performance is studied via simulations. An example from an RCT is provided for illustration. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. Cutoff for the East Process

    NASA Astrophysics Data System (ADS)

    Ganguly, S.; Lubetzky, E.; Martinelli, F.

    2015-05-01

    The East process is a 1D kinetically constrained interacting particle system, introduced in the physics literature in the early 1990s to model liquid-glass transitions. Spectral gap estimates of Aldous and Diaconis in 2002 imply that its mixing time on L sites has order L. We complement that result and show cutoff with an O(√L)-window. The main ingredient is an analysis of the front of the process (its rightmost zero in the setup where zeros facilitate updates to their right). One expects the front to advance as a biased random walk, whose normal fluctuations would imply cutoff with an O(√L)-window. The law of the process behind the front plays a crucial role: Blondel showed that it converges to an invariant measure ν, on which very little is known. Here we obtain quantitative bounds on the speed of convergence to ν, finding that it is exponentially fast. We then derive that the increments of the front behave as a stationary mixing sequence of random variables, and a Stein-method based argument of Bolthausen ('82) implies a CLT for the location of the front, yielding the cutoff result. Finally, we supplement these results by a study of analogous kinetically constrained models on trees, again establishing cutoff, yet this time with an O(1)-window.

  5. The Statistical Drake Equation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2010-12-01

    We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as more refined scientific knowledge about each factor becomes known to scientists. This capability to make room for more future factors in the statistical Drake equation, we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case where each of the seven random variables is uniformly distributed around its own mean value and has a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billion with a standard deviation of (say) 1 billion. Then, the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is actually of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
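    A quick Monte Carlo check of the central claim (a sketch, not the author's MathCad computation): the product of seven independent, arbitrarily distributed positive factors is approximately lognormal, because its logarithm is a sum of independent terms. The factor means and spreads below are placeholders, not the values used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Seven illustrative positive factors, each uniform around an assumed mean.
means = np.array([3.5e11, 0.5, 2.0, 0.3, 0.2, 0.1, 1e-7])
half_widths = 0.2 * means                       # assumed +/-20% spread
samples = rng.uniform(means - half_widths, means + half_widths,
                      size=(100_000, 7))

N = samples.prod(axis=1)                        # statistical Drake product

# By the CLT, log N is approximately normal, i.e. N is approximately lognormal.
logN = np.log(N)
print("mean of N:", N.mean())
print("log N skewness and excess kurtosis (near 0 if ~normal):",
      stats.skew(logN), stats.kurtosis(logN))
```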

  6. Visualizing Time-Varying Distribution Data in EOS Application

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei

    2004-01-01

    In this research, we have developed several novel visualization methods for spatial probability density function data. Our focus has been on 2D spatial datasets, where each pixel is a random variable with multiple samples that are the results of experiments on that random variable. We developed novel clustering algorithms as a means to reduce the information contained in these datasets, and investigated different ways of interpreting and clustering the data.

  7. A statistical model for analyzing the rotational error of single isocenter for multiple targets technique.

    PubMed

    Chang, Jenghwa

    2017-06-01

    To develop a statistical model that incorporates the treatment uncertainty from the rotational error of the single isocenter for multiple targets technique, and calculates the extra PTV (planning target volume) margin required to compensate for this error. The random vector for modeling the setup (S) error in the three-dimensional (3D) patient coordinate system was assumed to follow a 3D normal distribution with a zero mean and standard deviations of σx, σy, σz. It was further assumed that the rotation of the clinical target volume (CTV) about the isocenter happens randomly and follows a 3D independent normal distribution with a zero mean and a uniform standard deviation of σδ. This rotation leads to a rotational random error (R), which also has a 3D independent normal distribution with a zero mean and a uniform standard deviation σR equal to the product of σδ·(π/180) and dI⇔T, the distance between the isocenter and the CTV. Both (S and R) random vectors were summed, normalized, and transformed to spherical coordinates to derive the Chi distribution with three degrees of freedom for the radial coordinate of S+R. The PTV margin was determined using the critical value of this distribution for a 0.05 significance level, so that 95% of the time the treatment target would be covered by the prescription dose. The additional PTV margin required to compensate for the rotational error was calculated as a function of σR and dI⇔T. The effect of the rotational error is more pronounced for treatments that require high accuracy/precision like stereotactic radiosurgery (SRS) or stereotactic body radiotherapy (SBRT). With a uniform 2-mm PTV margin (or σx = σy = σz = 0.715 mm), a σR = 0.328 mm will decrease the CTV coverage probability from 95.0% to 90.9%, or an additional 0.2-mm PTV margin is needed to prevent this loss of coverage. If we choose 0.2 mm as the threshold, any σR > 0.328 mm will lead to an extra PTV margin that cannot be ignored, and the maximal σδ that can be ignored is 0.45° (or 0.0079 rad) for dI⇔T = 50 mm or 0.23° (or 0.004 rad) for dI⇔T = 100 mm. The rotational error cannot be ignored for high-accuracy/-precision treatments like SRS/SBRT, particularly when the distance between the isocenter and target is large. © 2017 American Association of Physicists in Medicine.
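    A minimal sketch of the margin recipe described above, assuming isotropic setup and rotational errors and using the abstract's example values (0.715 mm setup SD, 0.328 mm rotational SD): the 95th percentile of a chi distribution with three degrees of freedom converts the combined per-axis SD into a PTV margin.

```python
import numpy as np
from scipy.stats import chi

def ptv_margin(sigma_setup, sigma_rot, coverage=0.95):
    """PTV margin: the 95th percentile of a chi distribution with 3 degrees of
    freedom, scaled by the combined per-axis SD of setup and rotational errors
    (both assumed isotropic)."""
    sigma_total = np.hypot(sigma_setup, sigma_rot)
    return chi.ppf(coverage, df=3) * sigma_total

sigma_setup = 0.715   # mm, per-axis setup SD quoted in the abstract
sigma_rot = 0.328     # mm, rotational SD threshold quoted in the abstract

print("margin, setup only:       %.2f mm" % ptv_margin(sigma_setup, 0.0))        # ~2.0 mm
print("margin, setup + rotation: %.2f mm" % ptv_margin(sigma_setup, sigma_rot))  # ~2.2 mm
```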

  8. Hurdle models for multilevel zero-inflated data via h-likelihood.

    PubMed

    Molas, Marek; Lesaffre, Emmanuel

    2010-12-30

    Count data often exhibit overdispersion. One type of overdispersion arises when there is an excess of zeros in comparison with the standard Poisson distribution. Zero-inflated Poisson and hurdle models have been proposed to perform a valid likelihood-based analysis to account for the surplus of zeros. Further, data often arise in clustered, longitudinal or multiple-membership settings. The proper analysis needs to reflect the design of a study. Typically random effects are used to account for dependencies in the data. We examine the h-likelihood estimation and inference framework for hurdle models with random effects for complex designs. We extend the h-likelihood procedures to fit hurdle models, thereby extending h-likelihood to truncated distributions. Two applications of the methodology are presented. Copyright © 2010 John Wiley & Sons, Ltd.
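    The h-likelihood random-effects machinery is not reproduced here, but the fixed-effects core of a hurdle model can be sketched: a Bernoulli part for zero versus non-zero, and a zero-truncated Poisson for the positive counts. The simulated data and starting values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

def hurdle_negloglik(params, y):
    """Hurdle Poisson: a Bernoulli 'hurdle' for y = 0 versus y > 0, and a
    zero-truncated Poisson for the positive counts."""
    p_zero = expit(params[0])      # probability of a zero (logit scale)
    lam = np.exp(params[1])        # Poisson rate of the positive part (log scale)
    ll_zero = np.log(p_zero)
    ll_pos = (np.log(1 - p_zero)
              - lam + y * np.log(lam) - gammaln(y + 1)
              - np.log1p(-np.exp(-lam)))          # truncation: divide by P(Y > 0)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

rng = np.random.default_rng(4)
y = rng.poisson(2.5, 400) * rng.binomial(1, 0.6, 400)   # toy zero-heavy counts

fit = minimize(hurdle_negloglik, x0=[0.0, 0.0], args=(y,))
print("P(zero):", expit(fit.x[0]), " positive-part rate:", np.exp(fit.x[1]))
```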

  9. Image discrimination models predict detection in fixed but not random noise

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J. Jr; Beard, B. L.; Watson, A. B. (Principal Investigator)

    1997-01-01

    By means of a two-interval forced-choice procedure, contrast detection thresholds for an aircraft positioned on a simulated airport runway scene were measured with fixed and random white-noise masks. The term fixed noise refers to a constant, or unchanging, noise pattern for each stimulus presentation. The random noise was either the same or different in the two intervals. Contrary to simple image discrimination model predictions, the same random noise condition produced greater masking than the fixed noise. This suggests that observers are unable to hold a new noisy image for comparison. Also, performance appeared limited by internal process variability rather than by external noise variability, since similar masking was obtained for both random noise types.

  10. Collective relaxation dynamics of small-world networks

    NASA Astrophysics Data System (ADS)

    Grabow, Carsten; Grosskinsky, Stefan; Kurths, Jürgen; Timme, Marc

    2015-05-01

    Complex networks exhibit a wide range of collective dynamic phenomena, including synchronization, diffusion, relaxation, and coordination processes. Their asymptotic dynamics is generically characterized by the local Jacobian, graph Laplacian, or a similar linear operator. The structure of networks with regular, small-world, and random connectivities is reasonably well understood, but their collective dynamical properties remain largely unknown. Here we present a two-stage mean-field theory to derive analytic expressions for network spectra. A single formula covers the spectrum from regular via small-world to strongly randomized topologies in Watts-Strogatz networks, explaining the simultaneous dependencies on network size N, average degree k, and topological randomness q. We present simplified analytic predictions for the second-largest and smallest eigenvalue, and numerical checks confirm our theoretical predictions for zero, small, and moderate topological randomness q, including the entire small-world regime. For large q of the order of one, we apply standard random matrix theory, thereby overarching the full range from regular to randomized network topologies. These results may contribute to our analytic and mechanistic understanding of collective relaxation phenomena of network dynamical systems.

  11. Collective relaxation dynamics of small-world networks.

    PubMed

    Grabow, Carsten; Grosskinsky, Stefan; Kurths, Jürgen; Timme, Marc

    2015-05-01

    Complex networks exhibit a wide range of collective dynamic phenomena, including synchronization, diffusion, relaxation, and coordination processes. Their asymptotic dynamics is generically characterized by the local Jacobian, graph Laplacian, or a similar linear operator. The structure of networks with regular, small-world, and random connectivities is reasonably well understood, but their collective dynamical properties remain largely unknown. Here we present a two-stage mean-field theory to derive analytic expressions for network spectra. A single formula covers the spectrum from regular via small-world to strongly randomized topologies in Watts-Strogatz networks, explaining the simultaneous dependencies on network size N, average degree k, and topological randomness q. We present simplified analytic predictions for the second-largest and smallest eigenvalue, and numerical checks confirm our theoretical predictions for zero, small, and moderate topological randomness q, including the entire small-world regime. For large q of the order of one, we apply standard random matrix theory, thereby overarching the full range from regular to randomized network topologies. These results may contribute to our analytic and mechanistic understanding of collective relaxation phenomena of network dynamical systems.
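    A small numerical check in the spirit of the paper (not its two-stage mean-field formula): build Watts-Strogatz networks at several rewiring probabilities q and compute the extreme non-trivial eigenvalues of the graph Laplacian, which govern the slowest and fastest relaxation rates of simple diffusive dynamics. Network size, degree, and q values are arbitrary choices.

```python
import numpy as np
import networkx as nx

def laplacian_extremes(n=200, k=6, q=0.1, seed=0):
    """Second-smallest and largest Laplacian eigenvalues of a Watts-Strogatz
    network; they bound the relaxation rates of diffusive dynamics on the graph."""
    g = nx.watts_strogatz_graph(n, k, q, seed=seed)
    lam = np.linalg.eigvalsh(nx.laplacian_matrix(g).toarray().astype(float))
    return lam[1], lam[-1]

for q in (0.0, 0.01, 0.1, 1.0):   # regular -> small-world -> random
    lam2, lam_max = laplacian_extremes(q=q)
    print(f"q = {q}:  lambda_2 = {lam2:.4f},  lambda_max = {lam_max:.2f}")
```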

  12. A random matrix approach to credit risk.

    PubMed

    Münnix, Michael C; Schäfer, Rudi; Guhr, Thomas

    2014-01-01

    We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.

  13. A Random Matrix Approach to Credit Risk

    PubMed Central

    Guhr, Thomas

    2014-01-01

    We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided. PMID:24853864

  14. A Comparison of Zero-Profile Devices and Artificial Cervical Disks in Patients With 2 Noncontiguous Levels of Cervical Spondylosis.

    PubMed

    Qizhi, Sun; Lei, Sun; Peijia, Li; Hanping, Zhao; Hongwei, Hu; Junsheng, Chen; Jianmin, Li

    2016-03-01

    A prospective randomized and controlled study of 30 patients with 2 noncontiguous levels of cervical spondylosis. To compare the clinical outcome between zero-profile devices and artificial cervical disks for noncontiguous cervical spondylosis. Noncontiguous cervical spondylosis is a particular degenerative disease of the cervical spine. Some controversy exists over the choice of surgical procedure and fusion levels for it, because of the viewpoint that the stress at levels adjacent to a fusion mass will increase. The increased stress will lead to adjacent segment degeneration (ASD). According to this viewpoint, the intermediate segment will bear more stress after fusion of both the superior and inferior segments. Cervical disk arthroplasty is an alternative to fusion because it preserves motion. Few comparative studies have been conducted on arthrodesis with zero-profile devices and arthroplasty with artificial cervical disks for noncontiguous cervical spondylosis. Thirty patients with 2 noncontiguous levels of cervical spondylosis were enrolled and assigned to either group A (receiving arthroplasty using artificial cervical disks) or group Z (receiving arthrodesis using zero-profile devices). The clinical outcomes were assessed by the mean operative time, blood loss, Japanese Orthopedic Association (JOA) score, Neck Dysfunction Index (NDI), cervical lordosis, fusion rate, and complications. The mean follow-up was 32.4 months. There were no significant differences between the 2 groups in blood loss, JOA score, NDI score, or cervical lordosis; only the operative time differed. The mean operative time of group A was shorter than that of group Z. Both groups demonstrated a significant increase in JOA score, NDI score, and cervical lordosis. The fusion rate was 100% at 12 months postoperatively in group Z. There was no significant difference between the 2 groups in complications except ASD. Three patients had radiologic ASD at the final follow-up in group Z, and none in group A. Both zero-profile devices and artificial cervical disks are generally effective and safe in the treatment of 2 noncontiguous levels of cervical spondylosis. However, in view of the occurrence of radiologic ASD and the operative time, we prefer artificial cervical disks if the indications are well controlled.

  15. Simulating Local Area Network Protocols with the General Purpose Simulation System (GPSS)

    DTIC Science & Technology

    1990-03-01

    [Table-of-contents and figure-list fragments from the report: frame generation and frame delivery, model artifices, model variables, simulation results, and external procedures used in the simulation; Token Ring frame generation and delivery processes; mean transfer delay vs. mean throughput. A text fragment notes that parameters assumed to be zero were replaced by the maximum values specified in the ANSI 802.3 standard (viz. &MI=6, &M2=3, &M3=17, &D1=18, &D2=3, &D4=4, &D7=3, ...).]

  16. Weibull mixture regression for marginal inference in zero-heavy continuous outcomes.

    PubMed

    Gebregziabher, Mulugeta; Voronca, Delia; Teklehaimanot, Abeba; Santa Ana, Elizabeth J

    2017-06-01

    Continuous outcomes with a preponderance of zero values are ubiquitous in data that arise from biomedical studies, for example studies of addictive disorders. This is known to lead to violations of standard assumptions in parametric inference and enhances the risk of misleading conclusions unless managed properly. Two-part models are commonly used to deal with this problem. However, standard two-part models have limitations with respect to obtaining parameter estimates that have a marginal interpretation of covariate effects, which is important in many biomedical applications. Recently, marginalized two-part models have been proposed, but their development is limited to log-normal and log-skew-normal distributions. Thus, in this paper, we propose a finite mixture approach, with Weibull mixture regression as a special case, to deal with the problem. We use an extensive simulation study to assess the performance of the proposed model in finite samples and to make comparisons with other families of models via statistical information and mean squared error criteria. We demonstrate its application on real data from a randomized controlled trial of addictive disorders. Our results show that a two-component Weibull mixture model is preferred for modeling zero-heavy continuous data when the non-zero part is simulated from a Weibull or similar distribution such as the Gamma or truncated Gaussian.
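    The marginalized mixture model itself is not reproduced here; the sketch below is only a naive two-part baseline, with a point mass at zero and a Weibull fit to the positive part, on simulated zero-heavy data (all parameter values are assumptions).

```python
import numpy as np
from scipy import stats
from scipy.special import gamma

rng = np.random.default_rng(5)

# Toy zero-heavy outcome: ~40% exact zeros, positive part Weibull-distributed.
nonzero = rng.binomial(1, 0.6, 1000)
y = nonzero * stats.weibull_min.rvs(c=1.5, scale=2.0, size=1000, random_state=rng)

# Part 1: probability of a non-zero outcome.
p_nonzero = (y > 0).mean()

# Part 2: Weibull fit to the positive values (location fixed at zero).
shape, _, scale = stats.weibull_min.fit(y[y > 0], floc=0)

# Marginal mean combines both parts: E[Y] = P(Y>0) * E[Y | Y>0].
mean_positive = scale * gamma(1 + 1 / shape)
print("P(Y>0):", p_nonzero, " Weibull shape:", shape,
      " marginal mean:", p_nonzero * mean_positive)
```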

  17. Analysis of security of optical encryption with spatially incoherent illumination technique

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Shifrina, Anna V.

    2017-03-01

    Applications of optical methods for encryption purposes have been attracting the interest of researchers for decades. The first and most popular is the double random phase encoding (DRPE) technique, and many optical encryption techniques are based on it. The main advantage of DRPE-based techniques is high security, due to the transformation of the spectrum of the image to be encrypted into a white spectrum via the first random phase mask, which allows for encrypted images with white spectra. The downsides are the necessity of a holographic registration scheme, in order to register not only the light intensity distribution but also its phase distribution, and the speckle noise occurring due to coherent illumination. These disadvantages can be eliminated by using incoherent instead of coherent illumination. In this case, phase registration no longer matters, which means that there is no need for a holographic setup, and the speckle noise is gone. This technique does not have the drawbacks inherent to coherent methods; however, as only the light intensity distribution is considered, the mean value of the image to be encrypted is always above zero, which leads to an intense zero-spatial-frequency peak in the image spectrum. Consequently, in the case of spatially incoherent illumination, the image spectrum, as well as the encryption key spectrum, cannot be white. This might be used to crack the encryption system. If the encryption key is very sparse, the encrypted image might contain parts of, or even the whole, unhidden original image. Therefore, in this paper an analysis of the security of optical encryption with spatially incoherent illumination, depending on encryption key size and density, is conducted.

  18. Experimental investigation of clogging dynamics in homogeneous porous medium

    NASA Astrophysics Data System (ADS)

    Shen, Jikang; Ni, Rui

    2017-03-01

    A 3-D refractive-index matching Lagrangian particle tracking (3D-RIM-LPT) system was developed to study the filtration and the clogging process inside a homogeneous porous medium. A small subset of particles flowing through the porous medium was dyed and tracked. As this subset was randomly chosen, its dynamics is representative of all the rest. The statistics of particle locations, number, and velocity were obtained as functions of different volumetric concentrations. It is found that in our system the clogging time decays with the particle concentration following a power law relationship. As the concentration increases, there is a transition from depth filtration to cake filtration. At high concentration, more clogged pores lead to frequent flow redirections and more transverse migrations of particles. In addition, the velocity distribution in the transverse direction is symmetrical around zero, and it is slightly more intermittent than the random Gaussian curve due to particle-particle and particle-grain interactions. In contrast, as clogging develops, the longitudinal velocity of particles along the mean flow direction peaks near zero because of many trapped particles. But at the same time, the remaining open pores will experience larger pressure and, as a result, particles through those pores tend to have larger longitudinal velocities.

  19. Estimating overall exposure effects for the clustered and censored outcome using random effect Tobit regression models.

    PubMed

    Wang, Wei; Griswold, Michael E

    2016-11-30

    The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference about overall exposure effects on the original outcome scale. The marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at the population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects, and then use the calculated difference to assess the overall exposure effect. Maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration over the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.

  20. Unpolarized emissivity with shadow and multiple reflections from random rough surfaces with the geometric optics approximation: application to Gaussian sea surfaces in the infrared band.

    PubMed

    Bourlier, Christophe

    2006-08-20

    The emissivity from a stationary random rough surface is derived by taking into account multiple reflections and the shadowing effect. The model is applied to the ocean surface. The geometric optics approximation is assumed to be valid, which means that the rough surface is modeled as a collection of facets reflecting the light locally in the specular direction. In particular, the emissivity with zero, single, and double reflections is analytically calculated, and each contribution is studied numerically by considering a 1D sea surface observed in the near-infrared band. The model is also compared with results computed from a Monte Carlo ray-tracing method.

  1. Time-variant random interval natural frequency analysis of structures

    NASA Astrophysics Data System (ADS)

    Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin

    2018-02-01

    This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the strengths of both methods in such a way that the computational cost is dramatically reduced. The presented method is thus capable of investigating the day-to-day, time-variant natural frequency of structures accurately and efficiently under intrinsic concrete creep effects with both probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples of progressively increasing complexity, in terms of both structure type and uncertainty variables, are presented to demonstrate the computational applicability, accuracy, and efficiency of the proposed method.

  2. A review on models for count data with extra zeros

    NASA Astrophysics Data System (ADS)

    Zamri, Nik Sarah Nik; Zamzuri, Zamira Hasanah

    2017-04-01

    Zero-inflated models are typically used in modelling count data with excess zeros. The extra zeros could be structural zeros or random zeros that occur by chance. These types of data are commonly found in various disciplines such as finance, insurance, biomedicine, econometrics, ecology, and the health sciences. As found in the literature, the most popular zero-inflated models are the zero-inflated Poisson and the zero-inflated negative binomial. Recently, more complex models have been developed to account for overdispersion and unobserved heterogeneity. In addition, more extended distributions are also considered in modelling data with this feature. In this paper, we review the related literature and provide a summary of recent developments in models for count data with extra zeros.

  3. Variable selection for distribution-free models for longitudinal zero-inflated count responses.

    PubMed

    Chen, Tian; Wu, Pan; Tang, Wan; Zhang, Hui; Feng, Changyong; Kowalski, Jeanne; Tu, Xin M

    2016-07-20

    Zero-inflated count outcomes arise quite often in research and practice. Parametric models such as the zero-inflated Poisson and zero-inflated negative binomial are widely used to model such responses. Like most parametric models, they are quite sensitive to departures from assumed distributions. Recently, new approaches have been proposed to provide distribution-free, or semi-parametric, alternatives. These methods extend the generalized estimating equations to provide robust inference for population mixtures defined by zero-inflated count outcomes. In this paper, we propose methods to extend smoothly clipped absolute deviation (SCAD)-based variable selection methods to these new models. Variable selection has been gaining popularity in modern clinical research studies, as determining differential treatment effects of interventions for different subgroups has become the norm, rather than the exception, in the era of patient-centered outcome research. Such moderation analysis in general creates many explanatory variables in regression analysis, and the advantages of SCAD-based methods over their traditional counterparts render them a great choice for addressing these important and timely issues in clinical research. We illustrate the proposed approach with both simulated and real study data. Copyright © 2016 John Wiley & Sons, Ltd.
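    The GEE-based selection procedure is beyond a snippet, but the SCAD penalty of Fan and Li (2001) that such methods build on is easy to write down: linear near zero, quadratically blended in the middle, and constant beyond a·λ so that large coefficients are not over-shrunk.

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty of Fan and Li (2001): lasso-like near zero, quadratic
    blending for lam < |beta| <= a*lam, constant beyond a*lam."""
    b = np.abs(beta)
    small = lam * b
    middle = (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1))
    large = lam**2 * (a + 1) / 2
    return np.where(b <= lam, small,
                    np.where(b <= a * lam, middle, large))

betas = np.linspace(-4, 4, 9)
print(scad_penalty(betas, lam=1.0))   # flat penalty for |beta| > 3.7
```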

  4. Novel Zero-Heat-Flux Deep Body Temperature Measurement in Lower Extremity Vascular and Cardiac Surgery.

    PubMed

    Mäkinen, Marja-Tellervo; Pesonen, Anne; Jousela, Irma; Päivärinta, Janne; Poikajärvi, Satu; Albäck, Anders; Salminen, Ulla-Stina; Pesonen, Eero

    2016-08-01

    The aim of this study was to compare deep body temperature obtained using a novel noninvasive continuous zero-heat-flux temperature measurement system with core temperatures obtained using conventional methods. A prospective, observational study. Operating room of a university hospital. The study comprised 15 patients undergoing vascular surgery of the lower extremities and 15 patients undergoing cardiac surgery with cardiopulmonary bypass. Zero-heat-flux thermometry on the forehead and standard core temperature measurements. Body temperature was measured using a new thermometry system (SpotOn; 3M, St. Paul, MN) on the forehead and with conventional methods in the esophagus during vascular surgery (n = 15), and in the nasopharynx and pulmonary artery during cardiac surgery (n = 15). The agreement between SpotOn and the conventional methods was assessed using the Bland-Altman random-effects approach for repeated measures. The mean difference between SpotOn and the esophageal temperature during vascular surgery was +0.08°C (95% limits of agreement -0.25 to +0.40°C). During cardiac surgery, off CPB, the mean difference between SpotOn and the pulmonary arterial temperature was -0.05°C (95% limits of agreement -0.56 to +0.47°C). Throughout cardiac surgery (on and off CPB), the mean difference between SpotOn and the nasopharyngeal temperature was -0.12°C (95% limits of agreement -0.94 to +0.71°C). Poor agreement between the SpotOn and nasopharyngeal temperatures was detected in hypothermia below approximately 32°C. According to this preliminary study, the deep body temperature measured using the zero-heat-flux system was in good agreement with standard core temperatures during lower extremity vascular and cardiac surgery. However, agreement was questionable during hypothermia below 32°C. Copyright © 2016 Elsevier Inc. All rights reserved.
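    For orientation, a minimal sketch of plain Bland-Altman limits of agreement (the study used a repeated-measures random-effects extension, which is not reproduced here); the paired temperatures below are simulated with an assumed small bias.

```python
import numpy as np

def bland_altman_limits(method_a, method_b):
    """Mean difference (bias) and 95% limits of agreement between two methods
    (simple version, one measurement per subject)."""
    diff = np.asarray(method_a) - np.asarray(method_b)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Toy paired temperatures (degrees C); values are illustrative only.
rng = np.random.default_rng(6)
core = rng.normal(36.5, 0.4, 60)
zero_heat_flux = core + rng.normal(0.08, 0.16, 60)   # assumed small bias

bias, (lo, hi) = bland_altman_limits(zero_heat_flux, core)
print(f"bias = {bias:+.2f} C, 95% limits of agreement = ({lo:+.2f}, {hi:+.2f}) C")
```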

  5. The Geometric Phase of Stock Trading.

    PubMed

    Altafini, Claudio

    2016-01-01

    Geometric phases describe how in a continuous-time dynamical system the displacement of a variable (called phase variable) can be related to other variables (shape variables) undergoing a cyclic motion, according to an area rule. The aim of this paper is to show that geometric phases can exist also for discrete-time systems, and even when the cycles in shape space have zero area. A context in which this principle can be applied is stock trading. A zero-area cycle in shape space represents the type of trading operations normally carried out by high-frequency traders (entering and exiting a position on a fast time-scale), while the phase variable represents the cash balance of a trader. Under the assumption that trading impacts stock prices, even zero-area cyclic trading operations can induce geometric phases, i.e., profits or losses, without affecting the stock quote.

  6. The Statistical Fermi Paradox

    NASA Astrophysics Data System (ADS)

    Maccone, C.

    In this paper is provided the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original and by now too simplistic Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for “Statistical Equation for Habitables”. The proof in this paper is based on the Central Limit Theorem (CLT) of Statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log-normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH, the (average) distance between any two nearby habitable planets in the Galaxy is shown to be inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D. The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in 2008. 4. A practical example is then given of how the SEH works numerically. Each of the ten random variables is uniformly distributed around its own mean value as given by Dole (1964) and a standard deviation of 10% is assumed. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ±200 million, and the average distance between any two nearby habitable planets should be about 88 light years ±40 light years. 5. The SEH results are matched against the results of the Statistical Drake Equation from reference 4. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). The average distance between any two nearby habitable planets is much smaller than the average distance between any two neighbouring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an average ET distance about 20 times larger than the average distance between any pair of adjacent habitable planets. 6. Finally, a statistical model of the Fermi Paradox is derived by applying the above results to the coral expansion model of Galactic colonization. The symbolic manipulator "Macsyma" is used to solve these difficult equations. A new random variable Tcol, representing the time needed to colonize a new planet, is introduced, which follows the lognormal distribution. Then the new quotient random variable Tcol/D is studied and its probability density function is derived by Macsyma. Finally, a linear transformation of random variables yields the overall time TGalaxy needed to colonize the whole Galaxy. We believe that our mathematical work in deriving this STATISTICAL Fermi Paradox is highly innovative and fruitful for the future.

  7. Atherosclerotic Plaque in Patients with Zero Calcium Score at Coronary Computed Tomography Angiography.

    PubMed

    Gabriel, Fabíola Santos; Gonçalves, Luiz Flávio Galvão; Melo, Enaldo Vieira de; Sousa, Antônio Carlos Sobral; Pinto, Ibraim Masciarelli Francisco; Santana, Sara Melo Macedo; Matos, Carlos José Oliveira de; Souto, Maria Júlia Silveira; Conceição, Flávio Mateus do Sacramento; Oliveira, Joselina Luzia Menezes

    2018-05-03

    In view of the high mortality from cardiovascular diseases, it has become necessary to stratify the main risk factors and to choose the correct diagnostic modality. Studies have demonstrated that a zero calcium score (CS) is characteristic of a low risk for cardiovascular events. However, reports of the prevalence of individuals with coronary atherosclerotic plaques and zero CS are conflicting in the specialized literature. To evaluate the frequency of patients with coronary atherosclerotic plaques, their degree of obstruction, and associated factors in patients with zero CS and an indication for coronary computed tomography angiography (CCTA). This is a cross-sectional, prospective study of 367 volunteers with zero CS at CCTA in four diagnostic imaging centers in the period from 2011 to 2016. A significance level of 5% and a 95% confidence interval were adopted. The frequency of atherosclerotic plaque in the coronary arteries in the 367 patients with zero CS was 9.3% (34 individuals). In this subgroup, mean age was 52 ± 10 years, 18 (52.9%) were women, and 16 (47%) had significant coronary obstructions (> 50%), with involvement of two or more segments in 4 (25%) patients. The frequency of non-obese individuals (90.6% vs 73.9%, p = 0.037) and alcohol drinkers (55.9% vs 34.8%, p = 0.015) was significantly higher in patients with atherosclerotic plaques, with an odds ratio of 3.4 for each of these variables. The frequency of atherosclerotic plaque with zero CS was relatively high, indicating that the absence of calcification does not exclude the presence of plaques, many of which are obstructive, especially in non-obese subjects and alcohol drinkers.

  8. Smooth conditional distribution function and quantiles under random censorship.

    PubMed

    Leconte, Eve; Poiraud-Casanova, Sandrine; Thomas-Agnan, Christine

    2002-09-01

    We consider a nonparametric random design regression model in which the response variable is possibly right censored. The aim of this paper is to estimate the conditional distribution function and the conditional alpha-quantile of the response variable. We restrict attention to the case where the response variable as well as the explanatory variable are unidimensional and continuous. We propose and discuss two classes of estimators which are smooth with respect to the response variable as well as to the covariate. Some simulations demonstrate that the new methods have better mean square error performances than the generalized Kaplan-Meier estimator introduced by Beran (1981) and considered in the literature by Dabrowska (1989, 1992) and Gonzalez-Manteiga and Cadarso-Suarez (1994).

  9. Random Effects: Variance Is the Spice of Life.

    PubMed

    Jupiter, Daniel C

    Covariates in regression analyses allow us to understand how independent variables of interest impact our dependent outcome variable. Often, we consider fixed effects covariates (e.g., gender or diabetes status) for which we examine subjects at each value of the covariate. We examine both men and women and, within each gender, examine both diabetic and nondiabetic patients. Occasionally, however, we consider random effects covariates for which we do not examine subjects at every value. For example, we examine patients from only a sample of hospitals and, within each hospital, examine both diabetic and nondiabetic patients. The random sampling of hospitals is in contrast to the complete coverage of all genders. In this column I explore the differences in meaning and analysis when thinking about fixed and random effects variables. Copyright © 2016 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  10. Resonant paramagnetic enhancement of the thermal and zero-point Nyquist noise

    NASA Astrophysics Data System (ADS)

    França, H. M.; Santos, R. B. B.

    1999-01-01

    The interaction between a very thin macroscopic solenoid and a single magnetic particle precessing in an external magnetic field B0 is described by taking into account the thermal and the zero-point fluctuations of stochastic electrodynamics. The inductor belongs to an RLC circuit without batteries, and the random motion of the magnetic dipole generates in the solenoid a fluctuating current Idip(t) and a fluctuating voltage εdip(t), with a spectral distribution quite different from the Nyquist noise. We show that the mean square value <Idip2> presents an enormous variation when the frequency of precession approaches the frequency of the circuit, but it is still much smaller than the Nyquist current in the circuit. However, we also show that <Idip2> can reach measurable values if the inductor is interacting with a macroscopic sample of magnetic particles (atoms or nuclei) which are close enough to its coils.

  11. Quantum Hall Effect near the Charge Neutrality Point in a Two-Dimensional Electron-Hole System

    NASA Astrophysics Data System (ADS)

    Gusev, G. M.; Olshanetsky, E. B.; Kvon, Z. D.; Mikhailov, N. N.; Dvoretsky, S. A.; Portal, J. C.

    2010-04-01

    We study the transport properties of HgTe-based quantum wells containing simultaneously electrons and holes in a magnetic field B. At the charge neutrality point (CNP), with nearly equal electron and hole densities, the resistance is found to increase very strongly with B while the Hall resistivity turns to zero. This behavior results in a wide plateau in the Hall conductivity σxy≈0 and in a minimum of the diagonal conductivity σxx at ν=νp-νn=0, where νn and νp are the electron and hole Landau level filling factors. We suggest that the transport at the CNP is determined by electron-hole “snake states” propagating along the ν=0 lines. Our observations are qualitatively similar to the quantum Hall effect in graphene as well as to transport in a random magnetic field with a zero mean value.

  12. Microstructure from ferroelastic transitions using strain pseudospin clock models in two and three dimensions: A local mean-field analysis

    NASA Astrophysics Data System (ADS)

    Vasseur, Romain; Lookman, Turab; Shenoy, Subodh R.

    2010-09-01

    We show how microstructure can arise in first-order ferroelastic structural transitions, in two and three spatial dimensions, through a local mean-field approximation of their pseudospin Hamiltonians, which include anisotropic elastic interactions. Such transitions have symmetry-selected physical strains as their NOP-component order parameters, with Landau free energies that have a single zero-strain “austenite” minimum at high temperatures, and spontaneous-strain “martensite” minima of NV structural variants at low temperatures. The total free energy also has gradient terms, and power-law anisotropic effective interactions, induced by “no-dislocation” St Venant compatibility constraints. In a reduced description, the strains at Landau minima induce temperature dependent, clocklike ZNV+1 Hamiltonians, with NOP-component strain-pseudospin vectors S⃗ pointing to NV+1 discrete values (including zero). We study elastic texturing in five such first-order structural transitions through a local mean-field approximation of their pseudospin Hamiltonians, which include the power-law interactions. As a prototype, we consider the two-variant square/rectangle transition, with a one-component pseudospin taking NV+1=3 values of S=0,±1, as in a generalized Blume-Capel model. We then consider transitions with two-component (NOP=2) pseudospins: the equilateral to centered rectangle (NV=3); the square to oblique polygon (NV=4); the triangle to oblique (NV=6) transitions; and finally the three-dimensional (3D) cubic to tetragonal transition (NV=3). The local mean-field solutions in 2D and 3D yield oriented domain-wall patterns as from continuous-variable strain dynamics, showing that the discrete-variable models capture the essential ferroelastic texturings. Other related Hamiltonians illustrate that structural transitions in materials science can be the source of interesting spin models in statistical mechanics.

  13. Detecting anomalies in CMB maps: a new method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neelakanta, Jayanth T., E-mail: jayanthtn@gmail.com

    2015-10-01

    Ever since WMAP announced its first results, different analyses have shown that there is weak evidence for several large-scale anomalies in the CMB data. While the evidence for each anomaly appears to be weak, the fact that there are multiple seemingly unrelated anomalies makes it difficult to account for them via a single statistical fluke. So, one is led to considering a combination of these anomalies. But, if we "hand-pick" the anomalies (test statistics) to consider, we are making an a posteriori choice. In this article, we propose two statistics that do not suffer from this problem. The statistics are linear and quadratic combinations of the a_ℓm's with random coefficients, and they test the null hypothesis that the a_ℓm's are independent, normally-distributed, zero-mean random variables with an m-independent variance. The motivation for considering multiple modes is this: because most physical models that lead to large-scale anomalies result in coupling multiple ℓ and m modes, the "coherence" of this coupling should get enhanced if a combination of different modes is considered. In this sense, the statistics are thus much more generic than those that have been hitherto considered in the literature. Using fiducial data, we demonstrate that the method works and discuss how it can be used with actual CMB data to make quite general statements about the incompatibility of the data with the null hypothesis.

  14. Temporal changes in randomness of bird communities across Central Europe.

    PubMed

    Renner, Swen C; Gossner, Martin M; Kahl, Tiemo; Kalko, Elisabeth K V; Weisser, Wolfgang W; Fischer, Markus; Allan, Eric

    2014-01-01

    Many studies have examined whether communities are structured by random or deterministic processes, and both are likely to play a role, but relatively few studies have attempted to quantify the degree of randomness in species composition. We quantified, for the first time, the degree of randomness in forest bird communities based on an analysis of spatial autocorrelation in three regions of Germany. The compositional dissimilarity between pairs of forest patches was regressed against the distance between them. We then calculated the y-intercept of the curve, i.e. the 'nugget', which represents the compositional dissimilarity at zero spatial distance. We therefore assume, following similar work on plant communities, that this represents the degree of randomness in species composition. We then analysed how the degree of randomness in community composition varied over time and with forest management intensity, which we expected to reduce the importance of random processes by increasing the strength of environmental drivers. We found that a high portion of the bird community composition could be explained by chance (overall mean of 0.63), implying that most of the variation in local bird community composition is driven by stochastic processes. Forest management intensity did not consistently affect the mean degree of randomness in community composition, perhaps because the bird communities were relatively insensitive to management intensity. We found a high temporal variation in the degree of randomness, which may indicate temporal variation in assembly processes and in the importance of key environmental drivers. We conclude that the degree of randomness in community composition should be considered in bird community studies, and the high values we find may indicate that bird community composition is relatively hard to predict at the regional scale.
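    A minimal sketch of the nugget idea, assuming a simple linear distance-decay model rather than the analysis actually used in the study: regress pairwise compositional dissimilarity on pairwise distance and read the intercept as the dissimilarity expected at zero distance. The toy data are illustrative.

```python
import numpy as np

def nugget_from_distance_decay(dissimilarity, distance):
    """Intercept ('nugget') of a linear regression of pairwise community
    dissimilarity on pairwise spatial distance: the expected dissimilarity at
    zero distance, interpreted here as the random component of composition."""
    slope, intercept = np.polyfit(distance, dissimilarity, deg=1)
    return intercept

# Toy pairwise data: dissimilarity rises slowly with distance plus noise.
rng = np.random.default_rng(7)
distance = rng.uniform(0, 50, 300)                 # km between forest patches
dissimilarity = 0.6 + 0.004 * distance + rng.normal(0, 0.05, 300)

print("estimated nugget:", nugget_from_distance_decay(dissimilarity, distance))
```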

  15. Biological monitoring of environmental quality: The use of developmental instability

    USGS Publications Warehouse

    Freeman, D.C.; Emlen, J.M.; Graham, J.H.; Hough, R. A.; Bannon, T.A.

    1994-01-01

    Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails.

  16. Testing concordance of instrumental variable effects in generalized linear models with application to Mendelian randomization

    PubMed Central

    Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li

    2014-01-01

    Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Built on structural mean models, there has been considerable work recently on the consistent estimation of the causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments. This has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provide valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
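    The proposed concordance test is not reproduced here; for comparison, the sketch below is the well-known two-stage least squares baseline that the article discusses, applied to toy Mendelian-randomization-style data with three simulated genetic variants and an unmeasured confounder (all names and values are hypothetical).

```python
import numpy as np

def two_stage_least_squares(y, x, Z):
    """Classic 2SLS with instruments Z: regress the exposure x on Z, then
    regress the outcome y on the fitted exposure (intercepts included)."""
    Z1 = np.column_stack([np.ones(len(x)), Z])
    x_hat = Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]     # first stage
    X1 = np.column_stack([np.ones(len(y)), x_hat])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1]        # causal slope

rng = np.random.default_rng(8)
n = 5000
Z = rng.binomial(2, 0.3, size=(n, 3))         # three genetic variants (0/1/2)
u = rng.normal(size=n)                        # unmeasured confounder
x = Z @ np.array([0.3, 0.2, 0.4]) + u + rng.normal(size=n)
y = 0.5 * x + u + rng.normal(size=n)          # true causal effect 0.5

print("2SLS estimate:", two_stage_least_squares(y, x, Z))   # close to 0.5
```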

  17. Piloting Changes to Changing Aircraft Dynamics: What Do Pilots Need to Know?

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Gregory, Irene M.

    2011-01-01

    An experiment was conducted to quantify the effects of changing dynamics on a subject's ability to track a signal in order to eventually model a pilot adapting to changing aircraft dynamics. The data will be used to identify primary aircraft dynamics variables that influence changes in the pilot's response and to produce a simplified pilot model that incorporates this relationship. Each run incorporated a different set of second-order aircraft dynamics representing the short-period transfer function pitch attitude response: damping ratio, frequency, gain, zero location, and time delay. The subject's ability to conduct the tracking task was the greatest source of root mean square tracking error variability. As for the aircraft dynamics, the factors that affected the subjects' ability to conduct the tracking were the time delay, frequency, and zero location. In addition to creating a simplified pilot model, the results of the experiment can be utilized in an advisory capacity. A situation awareness/prediction aid based on pilot behavior and aircraft dynamics may help tailor the pilot's inputs more quickly so that a PIO or an upset condition can be avoided.
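
    A minimal sketch of the kind of second-order short-period pitch-attitude dynamics described (gain, zero location, damping ratio, natural frequency, and a pure time delay) is given below; the parameter values are illustrative and the delay is applied simply by shifting the response.

```python
import numpy as np
from scipy import signal

# Illustrative short-period pitch-attitude dynamics: K*(s + z) / (s^2 + 2*zeta*wn*s + wn^2)
K, z, zeta, wn, delay = 1.0, 1.2, 0.6, 2.5, 0.2      # gain, zero, damping, frequency (rad/s), delay (s)

sys = signal.TransferFunction([K, K * z], [1.0, 2.0 * zeta * wn, wn**2])
t = np.linspace(0.0, 10.0, 1001)
t, y = signal.step(sys, T=t)

# Apply the pure time delay by shifting the response (a simple approximation)
shift = int(delay / (t[1] - t[0]))
y_delayed = np.concatenate([np.zeros(shift), y[:len(y) - shift]])
print(f"steady-state pitch response to a unit step: {y_delayed[-1]:.3f}")
```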

  18. Time to rehabilitation in the burn population: incidence of zero onset days in the UDSMR national dataset.

    PubMed

    Schneider, Jeffrey C; Tan, Wei-Han; Goldstein, Richard; Mix, Jacqueline M; Niewczyk, Paulette; Divita, Margaret A; Ryan, Colleen M; Gerrard, Paul B; Kowalske, Karen; Zafonte, Ross

    2013-01-01

    A preliminary investigation of the burn rehabilitation population found large variability in the frequency of zero onset days between facilities. Onset days is defined as the time from injury to inpatient rehabilitation admission; this variable has not been investigated in burn patients previously. This study explored whether this finding was a facility-based phenomenon or characteristic of burn inpatient rehabilitation patients. This study was a secondary analysis of Uniform Data System for Medical Rehabilitation (UDSMR) data from 2002 to 2007 examining inpatient rehabilitation characteristics among patients with burn injuries. Exclusion criteria were age less than 18 years and discharge against medical advice. Comparisons of demographic, medical, and functional data were made between facilities with a high frequency of zero onset days and facilities with a low frequency of zero onset days. A total of 4738 patients from 455 inpatient rehabilitation facilities were included. Twenty-three percent of the population exhibited zero onset days (n = 1103). Sixteen facilities contained zero onset patients; two facilities accounted for 97% of the zero onset subgroup. Facilities with a high frequency of zero onset day patients demonstrated significant differences in demographic, medical, and functional variables compared to the remainder of the study population. There were significantly more zero onset day admissions among burn patients (23%) than in other diagnostic groups (0.5-3.6%) in the Uniform Data System for Medical Rehabilitation database, but the majority (97%) came from two inpatient rehabilitation facilities. It is unexpected for patients with significant burn injury to be admitted to a rehabilitation facility on the day of injury. Future studies investigating burn rehabilitation outcomes using the Uniform Data System for Medical Rehabilitation database should exclude facilities with a high percentage of zero onset days, which are not representative of the burn inpatient rehabilitation population.

  19. Lévy/Anomalous Diffusion as a Mean-Field Theory for 3D Cloud Effects in Shortwave Radiative Transfer: Empirical Support, New Analytical Formulation, and Impact on Atmospheric Absorption

    NASA Astrophysics Data System (ADS)

    Buldyrev, S.; Davis, A.; Marshak, A.; Stanley, H. E.

    2001-12-01

    Two-stream radiation transport models, as used in all current GCM parameterization schemes, are mathematically equivalent to ``standard'' diffusion theory where the physical picture is a slow propagation of the diffuse radiation by Gaussian random walks. The space/time spread (technically, the Green function) of this diffusion process is described exactly by a Gaussian distribution; from the statistical physics viewpoint, this follows from the convergence of the sum of many (rescaled) steps between scattering events with a finite variance. This Gaussian picture follows directly from first principles (the radiative transfer equation) under the assumptions of horizontal uniformity and large optical depth, i.e., there is a homogeneous plane-parallel cloud somewhere in the column. The first-order effect of 3D variability of cloudiness, the main source of scattering, is to perturb the distribution of single steps between scatterings which, modulo the ``1-g'' rescaling, can be assumed effectively isotropic. The most natural generalization of the Gaussian distribution is the 1-parameter family of symmetric Lévy-stable distributions, because the sum of many zero-mean random variables with infinite variance, but finite moments of order q < α (0 < α < 2), converges to them. It has been shown on heuristic grounds that for these Lévy-based random walks the typical number of scatterings for transmitted light is now [(1-g)τ]^α. The appearance of a non-rational exponent is why this is referred to as ``anomalous'' diffusion. Note that standard/Gaussian diffusion is retrieved in the limit α = 2-. Lévy transport theory has been successfully used in the statistical physics literature to investigate a wide variety of systems with strongly nonlinear dynamics; these applications range from random advection in turbulent fluids to the erratic behavior of financial time-series and, most recently, self-regulating ecological systems. We will briefly survey the state-of-the-art observations that offer compelling empirical support for the Lévy/anomalous diffusion model in atmospheric radiation: (1) high-resolution spectroscopy of differential absorption in the O2 A-band from the ground; (2) temporal transient records of lightning strokes transmitted through clouds to a sensitive detector in space; and (3) the Gamma-distributions of optical depths derived from Landsat cloud scenes at 30-m resolution. We will then introduce a rigorous analytical formulation of Lévy/anomalous transport through finite media based on fractional derivatives and Sonin calculus. A remarkable result from this new theoretical development is an extremal property of the α = 1+ case (divergent mean free path), as is observed in the cloudy atmosphere. Finally, we will discuss the implications of anomalous transport theory for bulk 3D effects on the current enhanced absorption problem as well as its role as the basis of a next-generation GCM radiation parameterization.
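
    To illustrate the key ingredient, the sketch below generates symmetric Lévy-stable increments with the Chambers-Mallows-Stuck method and shows that, unlike the Gaussian case, their sample variance keeps growing with sample size (the population variance is infinite). The stability index and sample sizes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sym_stable(alpha, size, rng):
    """Symmetric alpha-stable samples via the Chambers-Mallows-Stuck method (alpha != 1)."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

alpha, n = 1.5, 1_000_000
steps = sym_stable(alpha, n, rng)          # zero-median, infinite-variance 'photon steps'

# The sample variance keeps growing with sample size instead of settling, unlike the Gaussian case
for m in (10**3, 10**4, 10**5, 10**6):
    print(f"n={m:>7}: sample variance = {steps[:m].var():.1f}")
```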

  20. Spinodals with Disorder: From Avalanches in Random Magnets to Glassy Dynamics

    NASA Astrophysics Data System (ADS)

    Nandi, Saroj Kumar; Biroli, Giulio; Tarjus, Gilles

    2016-04-01

    We revisit the phenomenon of spinodals in the presence of quenched disorder and develop a complete theory for it. We focus on the spinodal of an Ising model in a quenched random field (RFIM), which has applications in many areas, from materials to social science. By working at zero temperature in the quasistatically driven RFIM, thermal fluctuations are eliminated and one can give rigorous content to the notion of a spinodal. We show that the latter is due to the depinning and the subsequent expansion of rare droplets. We work out the associated critical behavior, which, in any finite dimension, is very different from the mean-field one: the characteristic length diverges exponentially and the thermodynamic quantities display very mild nonanalyticities, much like in a Griffiths phenomenon. From the recently established connection between the spinodal of the RFIM and glassy dynamics, our results also allow us to conclusively assess the physical content and the status of the dynamical transition predicted by the mean-field theory of glass-forming liquids.

  1. Bayesian dynamic modeling of time series of dengue disease case counts.

    PubMed

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The best model included first-order random walk time-varying coefficients for the calendar trend and for the meteorological variables. Beyond the computational challenges, interpreting the results requires a complete analysis of the dengue time series with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors for one- and two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.
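
    A minimal sketch of the data-generating idea behind a dynamic Poisson log-link model with first-order random-walk time-varying coefficients is shown below, together with the mean absolute percentage error of a naive one-week-ahead forecast. The covariate, coefficients, and series length are synthetic and are not the fitted model from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = 400

temperature = 25.0 + 3.0 * np.sin(2.0 * np.pi * np.arange(weeks) / 52.0)  # illustrative covariate

# First-order random-walk time-varying coefficient for the covariate, plus a slowly drifting trend
beta_t = np.cumsum(rng.normal(0.0, 0.01, weeks)) + 0.05
trend_t = np.cumsum(rng.normal(0.0, 0.02, weeks)) + 2.0

log_mu = trend_t + beta_t * (temperature - temperature.mean())
cases = rng.poisson(np.exp(log_mu))            # weekly dengue-like case counts

# Mean absolute percentage error of a naive one-week-ahead forecast (carry forward last week's count)
mape = np.mean(np.abs(cases[1:] - cases[:-1]) / np.maximum(cases[1:], 1)) * 100
print(f"naive one-week-ahead MAPE: {mape:.1f}%")
```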

  2. A comparison of extreme rainfall characteristics in the Brazilian Amazon derived from two gridded data sets and a national rain gauge network

    NASA Astrophysics Data System (ADS)

    Clarke, Robin T.; Bulhoes Mendes, Carlos Andre; Costa Buarque, Diogo

    2010-07-01

    Two issues of particular importance for the Amazon watershed are: whether annual maxima obtained from reanalysis and raingauge records agree well enough for the former to be useful in extending records of the latter; and whether reported trends in Amazon annual rainfall are reflected in the behavior of annual extremes in precipitation estimated from reanalyses and raingauge records. To explore these issues, three sets of daily precipitation data (1979-2001) from the Brazilian Amazon were analyzed (NCEP/NCAR and ERA-40 reanalyses, and records from the raingauge network of the Brazilian water resources agency - ANA), using the following variables: (1) mean annual maximum precipitation totals, accumulated over one, two, three and five days; (2) linear trends in these variables; (3) mean length of longest within-year "dry" spell; (4) linear trends in these variables. Comparisons between variables obtained from all three data sources showed that reanalyses underestimated time-trends and mean annual maximum precipitation (over durations of one to five days), and the correlations between reanalysis and spatially-interpolated raingauge estimates were small for these two variables. Both reanalyses over-estimated mean lengths of dry period relative to the mean length recorded by the raingauge network. Correlations between the trends calculated from all three data sources were small. Time-trends averaged over the reanalysis grid-squares, and spatially-interpolated time trends from raingauge data, were all clustered around zero. In conclusion, although the NCEP/NCAR and ERA-40 gridded data-sets may be valuable for studies of inter-annual variability in precipitation totals, they were found to be inappropriate for analysis of precipitation extremes.

  3. The Geometric Phase of Stock Trading

    PubMed Central

    2016-01-01

    Geometric phases describe how in a continuous-time dynamical system the displacement of a variable (called phase variable) can be related to other variables (shape variables) undergoing a cyclic motion, according to an area rule. The aim of this paper is to show that geometric phases can exist also for discrete-time systems, and even when the cycles in shape space have zero area. A context in which this principle can be applied is stock trading. A zero-area cycle in shape space represents the type of trading operations normally carried out by high-frequency traders (entering and exiting a position on a fast time-scale), while the phase variable represents the cash balance of a trader. Under the assumption that trading impacts stock prices, even zero-area cyclic trading operations can induce geometric phases, i.e., profits or losses, without affecting the stock quote. PMID:27556642

  4. Experimental demonstration of localized Brillouin gratings with low off-peak reflectivity established by perfect Golomb codes.

    PubMed

    Antman, Yair; Yaron, Lior; Langer, Tomi; Tur, Moshe; Levanon, Nadav; Zadok, Avi

    2013-11-15

    Dynamic Brillouin gratings (DBGs), inscribed by comodulating two writing pump waves with a perfect Golomb code, are demonstrated and characterized experimentally. Compared with pseudo-random bit sequence (PRBS) modulation of the pump waves, the Golomb code provides lower off-peak reflectivity due to the unique properties of its cyclic autocorrelation function. Golomb-coded DBGs allow the long variable delay of one-time probe waveforms with higher signal-to-noise ratios, and without averaging. As an example, the variable delay of return-to-zero, on-off keyed data at a 1 Gbit/s rate, by as much as 10 ns, is demonstrated successfully. The eye diagram of the reflected waveform remains open, whereas PRBS modulation of the pump waves results in a closed eye. The variable delay of data at 2.5 Gbit/s is reported as well, with a marginally open eye diagram. The experimental results are in good agreement with simulations.

  5. Effectiveness Trial of Community-Based I Choose Life-Africa Human Immunodeficiency Virus Prevention Program in Kenya

    PubMed Central

    Adam, Mary B.

    2014-01-01

    We measured the effectiveness of a human immunodeficiency virus (HIV) prevention program developed in Kenya and carried out among university students. A total of 182 student volunteers were randomized into an intervention group who received a 32-hour training course as HIV prevention peer educators and a control group who received no training. Repeated measures assessed HIV-related attitudes, intentions, knowledge, and behaviors four times over six months. Data were analyzed by using linear mixed models to compare the rate of change on 13 dependent variables that examined sexual risk behavior. Based on multi-level models, the slope coefficients for four variables showed reliable change in the hoped for direction: abstinence from oral, vaginal, or anal sex in the last two months, condom attitudes, HIV testing, and refusal skill. The intervention demonstrated evidence of non-zero slope coefficients in the hoped for direction on 12 of 13 dependent variables. The intervention reduced sexual risk behavior. PMID:24957544

  6. Effectiveness trial of community-based I Choose Life-Africa human immunodeficiency virus prevention program in Kenya.

    PubMed

    Adam, Mary B

    2014-09-01

    We measured the effectiveness of a human immunodeficiency virus (HIV) prevention program developed in Kenya and carried out among university students. A total of 182 student volunteers were randomized into an intervention group who received a 32-hour training course as HIV prevention peer educators and a control group who received no training. Repeated measures assessed HIV-related attitudes, intentions, knowledge, and behaviors four times over six months. Data were analyzed by using linear mixed models to compare the rate of change on 13 dependent variables that examined sexual risk behavior. Based on multi-level models, the slope coefficients for four variables showed reliable change in the hoped for direction: abstinence from oral, vaginal, or anal sex in the last two months, condom attitudes, HIV testing, and refusal skill. The intervention demonstrated evidence of non-zero slope coefficients in the hoped for direction on 12 of 13 dependent variables. The intervention reduced sexual risk behavior. © The American Society of Tropical Medicine and Hygiene.

  7. Streamflow characteristics and trends along Soldier Creek, Northeast Kansas

    USGS Publications Warehouse

    Juracek, Kyle E.

    2017-08-16

    Historical data for six selected U.S. Geological Survey streamgages along Soldier Creek in northeast Kansas were used in an assessment of streamflow characteristics and trends. This information is required by the Prairie Band Potawatomi Nation for the effective management of tribal water resources, including drought contingency planning. Streamflow data for the period of record at each streamgage were used to assess annual mean streamflow, annual mean base flow, mean monthly flow, annual peak flow, and annual minimum flow. Annual mean streamflows along Soldier Creek were characterized by substantial year-to-year variability with no pronounced long-term trends. On average, annual mean base flow accounted for about 20 percent of annual mean streamflow. Mean monthly flows followed a general seasonal pattern that included peak values in spring and low values in winter. Annual peak flows, which were characterized by considerable year-to-year variability, were most likely to occur in May and June and least likely to occur during November through February. With the exception of a weak yet statistically significant increasing trend at the Soldier Creek near Topeka, Kansas, streamgage, there were no pronounced long-term trends in annual peak flows. Annual 1-day, 30-day, and 90-day mean minimum flows were characterized by considerable year-to-year variability with no pronounced long-term trend. During an extreme drought, as was the case in the mid-1950s, there may be zero flow in Soldier Creek continuously for a period of one to several months.

  8. Trends in the Vertical Distribution of Ozone: A Comparison of Two Analyses of Ozonesonde Data

    NASA Technical Reports Server (NTRS)

    Loogan, J. A.; Megretskaia, I. A.; Miller, A. J.; Tiao, G. C.; Choi, D.; Zhang, L.; Bishop, L.; Stolarski, R.; Labow, G. J.; Hollandsworth, S. M.

    1998-01-01

    We present the results of two independent analyses of ozonesonde measurements of the vertical profile of ozone. For most of the ozonesonde stations we use data that were recently reprocessed and reevaluated to improve their quality and internal consistency. The two analyses give similar results for trends in ozone. We attribute differences in results primarily to differences in data selection criteria and in utilization of data correction factors, rather than in statistical trend models. We find significant decreases in stratospheric ozone at all stations in middle and high latitudes of the northern hemisphere from 1970 to 1996, with the largest decreases located between 12 and 21 km, and trends of -3 to -10 %/decade near 17 km. The decreases are largest at the Canadian and the most northerly Japanese station, and are smallest at the European stations, and at Wallops Island, U.S.A. The mean mid-latitude trend is largest, -7 %/decade, from 12 to 17.5 km for 1970-96. For 1980-96, the decrease is more negative by 1-2 %/decade, with a maximum trend of -9 %/decade in the lowermost stratosphere. The trends vary seasonally from about 12 to 17.5 km, with largest ozone decreases in winter and spring. Trends in tropospheric ozone are highly variable and depend on region. There are decreases or zero trends at the Canadian stations for 1970-96, and decreases of -2 to -8 %/decade for the mid-troposphere for 1980-96; the three European stations show increases for 1970-96, but trends are close to zero for two stations for 1980-96 and positive for one; there are increases in ozone for the three Japanese stations for 1970-96, but trends are either positive or zero for 1980-96; the U.S. stations show zero or slightly negative trends in tropospheric ozone after 1980. It is not possible to define reliably a mean tropospheric ozone trend for northern mid-latitudes, given the small number of stations and the large variability in trends. The integrated column trends derived from the sonde data are consistent with trends derived from both surface based and satellite measurements of the ozone column.

  9. Marginalized zero-altered models for longitudinal count data.

    PubMed

    Tabb, Loni Philip; Tchetgen, Eric J Tchetgen; Wellenius, Greg A; Coull, Brent A

    2016-10-01

    Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias.

  10. Marginalized zero-altered models for longitudinal count data

    PubMed Central

    Tabb, Loni Philip; Tchetgen, Eric J. Tchetgen; Wellenius, Greg A.; Coull, Brent A.

    2015-01-01

    Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias. PMID:27867423
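
    As a small illustration of why zero-altered (hurdle) structure matters, the sketch below simulates hurdle Poisson counts and compares the observed zero fraction with the zeros a plain Poisson fitted by its mean would predict. The hurdle probability and rate are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p_zero, lam = 20_000, 0.45, 2.5      # illustrative hurdle probability and Poisson rate

# Zero-altered (hurdle) data: structural zeros with prob p_zero, otherwise a zero-truncated Poisson
y = rng.poisson(lam, n)
while np.any(y == 0):                   # rejection step: resample zeros to get the truncated part
    idx = y == 0
    y[idx] = rng.poisson(lam, idx.sum())
y[rng.random(n) < p_zero] = 0           # overwrite a random share with structural zeros

# A plain Poisson fitted by its mean badly underpredicts the zero count
lam_hat = y.mean()
print(f"observed zero fraction : {np.mean(y == 0):.3f}")
print(f"Poisson-predicted zeros: {stats.poisson.pmf(0, lam_hat):.3f}")
```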

  11. Scale-invariant puddles in graphene: Geometric properties of electron-hole distribution at the Dirac point.

    PubMed

    Najafi, M N; Nezhadhaghighi, M Ghasemi

    2017-03-01

    We characterize the carrier density profile of the ground state of graphene in the presence of particle-particle interactions and random charged impurities at zero gate voltage. We provide a detailed analysis of the resulting spatially inhomogeneous electron gas, taking into account the particle-particle interaction and the remote Coulomb disorder on an equal footing within the Thomas-Fermi-Dirac theory. We present some general features of the carrier density probability measure of the graphene sheet. We also show that, when viewed as a random surface, the electron-hole puddles at zero chemical potential show peculiar self-similar statistical properties. Although the disorder potential is chosen to be Gaussian, we show that the charge field is non-Gaussian with unusual Kondev relations, which can be regarded as a new class of two-dimensional random-field surfaces. Using Schramm-Loewner evolution (SLE), we numerically demonstrate that the ungated graphene has conformal invariance and the random zero-charge density contours are SLE_{κ} with κ=1.8±0.2, consistent with c=-3 conformal field theory.

  12. Vector solution for the mean electromagnetic fields in a layer of random particles

    NASA Technical Reports Server (NTRS)

    Lang, R. H.; Seker, S. S.; Levine, D. M.

    1986-01-01

    The mean electromagnetic fields are found in a layer of randomly oriented particles lying over a half space. A matrix-dyadic formulation of Maxwell's equations is employed in conjunction with the Foldy-Lax approximation to obtain equations for the mean fields. A two variable perturbation procedure, valid in the limit of small fractional volume, is then used to derive uncoupled equations for the slowly varying amplitudes of the mean wave. These equations are solved to obtain explicit expressions for the mean electromagnetic fields in the slab region in the general case of arbitrarily oriented particles and arbitrary polarization of the incident radiation. Numerical examples are given for the application to remote sensing of vegetation.

  13. Periodicity in the autocorrelation function as a mechanism for regularly occurring zero crossings or extreme values of a Gaussian process.

    PubMed

    Wilson, Lorna R M; Hopcraft, Keith I

    2017-12-01

    The problem of zero crossings is of great historical prevalence and promises extensive application. The challenge is to establish precisely how the autocorrelation function or power spectrum of a one-dimensional continuous random process determines the density function of the intervals between the zero crossings of that process. This paper investigates the case where periodicities are incorporated into the autocorrelation function of a smooth process. Numerical simulations, and statistics about the number of crossings in a fixed interval, reveal that in this case the zero crossings segue between a random and deterministic point process depending on the relative time scales of the periodic and nonperiodic components of the autocorrelation function. By considering the Laplace transform of the density function, we show that incorporating correlation between successive intervals is essential to obtaining accurate results for the interval variance. The same method enables prediction of the density function tail in some regions, and we suggest approaches for extending this to cover all regions. In an ever-more complex world, the potential applications for this scale of regularity in a random process are far reaching and powerful.

  14. Periodicity in the autocorrelation function as a mechanism for regularly occurring zero crossings or extreme values of a Gaussian process

    NASA Astrophysics Data System (ADS)

    Wilson, Lorna R. M.; Hopcraft, Keith I.

    2017-12-01

    The problem of zero crossings is of great historical prevalence and promises extensive application. The challenge is to establish precisely how the autocorrelation function or power spectrum of a one-dimensional continuous random process determines the density function of the intervals between the zero crossings of that process. This paper investigates the case where periodicities are incorporated into the autocorrelation function of a smooth process. Numerical simulations, and statistics about the number of crossings in a fixed interval, reveal that in this case the zero crossings segue between a random and deterministic point process depending on the relative time scales of the periodic and nonperiodic components of the autocorrelation function. By considering the Laplace transform of the density function, we show that incorporating correlation between successive intervals is essential to obtaining accurate results for the interval variance. The same method enables prediction of the density function tail in some regions, and we suggest approaches for extending this to cover all regions. In an ever-more complex world, the potential applications for this scale of regularity in a random process are far reaching and powerful.
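
    A minimal numerical sketch of the setting is given below: a stationary Gaussian process is synthesized with a periodically modulated autocorrelation (a Gaussian envelope times a cosine, chosen for illustration), and its empirical zero-crossing rate is compared with the classical Rice-formula rate, (1/π)·sqrt(-ρ''(0)/ρ(0)). This reproduces only the mean crossing rate, not the interval density studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

lam, w0 = 1.0, 2.0 * np.pi     # envelope decay scale and modulation frequency (illustrative values)

# Rice's formula for a unit-variance process: mean zero-crossing rate = sqrt(-rho''(0)) / pi
rate_rice = np.sqrt(1.0 / lam**2 + w0**2) / np.pi

# Randomized spectral method: draw frequencies from the normalized PSD, which for
# rho(t) = exp(-t^2/(2 lam^2)) * cos(w0 t) is a pair of Gaussian bumps at +/- w0.
M = 1000                                   # number of random cosine components
signs = rng.choice([-1.0, 1.0], size=M)
omegas = signs * w0 + rng.normal(0.0, 1.0 / lam, size=M)
phases = rng.uniform(0.0, 2.0 * np.pi, size=M)

t = np.arange(0.0, 2000.0, 0.01)
x = np.zeros_like(t)
for w, p in zip(omegas, phases):           # x has the target autocorrelation by construction
    x += np.cos(w * t + p)
x *= np.sqrt(2.0 / M)

rate_sim = np.count_nonzero(np.diff(np.sign(x)) != 0) / (t[-1] - t[0])
print(f"Rice rate: {rate_rice:.3f}  simulated: {rate_sim:.3f}")
```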

  15. Simultaneous monitoring of static and dynamic intracranial pressure parameters from two separate sensors in patients with cerebral bleeds: comparison of findings.

    PubMed

    Eide, Per Kristian; Holm, Sverre; Sorteberg, Wilhelm

    2012-09-07

    We recently reported that in an experimental setting the zero pressure level of solid intracranial pressure (ICP) sensors can be altered by electrostatic discharges. Changes in the zero pressure level would alter the ICP level (mean ICP); whether spontaneous changes in mean ICP happen in clinical settings is not known. This can be addressed by comparing the level and waveform parameters of simultaneous ICP signals. To this end, we retrieved our recordings in patients with cerebral bleeds wherein the ICP had been recorded simultaneously from two different sensors. During a time period of 10 years, 17 patients with cerebral bleeds were monitored with two ICP sensors simultaneously; Sensor 1 was always a solid sensor, while Sensor 2 was a solid, a fluid, or an air-pouch sensor. The simultaneous signals were analyzed with automatic identification of the cardiac-induced ICP waves. The output was determined in consecutive 6-s time windows, both with regard to the static parameter mean ICP and the dynamic parameters (mean wave amplitude, MWA, and mean wave rise time, MWRT). Differences in mean ICP, MWA, and MWRT between the two sensors were determined. Transfer functions between the sensors were determined to evaluate how the sensors reproduce the ICP waveform. Comparing findings from two solid sensors disclosed major differences in mean ICP in 2 of 5 patients (40%), despite marginal differences in MWA, MWRT, and transfer function magnitude and phase. Qualitative assessment of trend plots of mean ICP and MWA revealed shifts and drifts of mean ICP in the clinical setting. The transfer function analysis comparing the solid sensor with either the fluid or air-pouch sensors revealed more variable transfer function magnitude and greater differences in the ICP waveform-derived indices. Simultaneous monitoring of ICP using two solid sensors may show marked differences in static ICP but nearly identical dynamic ICP waveforms. This indicates that shifts in ICP baseline pressure (sensor zero level) occur clinically; trend plots of the ICP parameters also confirm this. Solid sensors are superior to fluid and air-pouch sensors when evaluating the dynamic ICP parameters.

  16. Parametric Study of Urban-Like Topographic Statistical Moments Relevant to a Priori Modelling of Bulk Aerodynamic Parameters

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaowei; Iungo, G. Valerio; Leonardi, Stefano; Anderson, William

    2017-02-01

    For a horizontally homogeneous, neutrally stratified atmospheric boundary layer (ABL), aerodynamic roughness length, z_0, is the effective elevation at which the streamwise component of mean velocity is zero. A priori prediction of z_0 based on topographic attributes remains an open line of inquiry in planetary boundary-layer research. Urban topographies - the topic of this study - exhibit spatial heterogeneities associated with variability of building height, width, and proximity with adjacent buildings; such variability renders a priori, prognostic z_0 models appealing. Here, large-eddy simulation (LES) has been used in an extensive parametric study to characterize the ABL response (and z_0) to a range of synthetic, urban-like topographies wherein statistical moments of the topography have been systematically varied. Using LES results, we determined the hierarchical influence of topographic moments relevant to setting z_0. We demonstrate that standard deviation and skewness are important, while kurtosis is negligible. This finding is reconciled with a model recently proposed by Flack and Schultz (J Fluids Eng 132:041203-1-041203-10, 2010), who demonstrate that z_0 can be modelled with standard deviation and skewness, and two empirical coefficients (one for each moment). We find that the empirical coefficient related to skewness is not constant, but exhibits a dependence on standard deviation over certain ranges. For idealized, quasi-uniform cubic topographies and for complex, fully random urban-like topographies, we demonstrate strong performance of the generalized Flack and Schultz model against contemporary roughness correlations.

  17. Optimal allocation of testing resources for statistical simulations

    NASA Astrophysics Data System (ADS)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data on the input variables, to better characterize their probability distributions, can reduce the variance of statistical estimates. The proposed methodology determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses the multivariate t-distribution and the Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable on the output function, and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
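
    The sketch below illustrates the underlying idea of propagating input-distribution uncertainty: plausible population covariances are drawn from an inverse-Wishart, the mean conditionally from a normal (marginally a multivariate t), and the induced spread of a toy output moment shrinks as more data become available. This is a generic stand-in, not the paper's exact sampling scheme or optimization; all names and values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def output_sd(data, n_draws=2000):
    """Spread of a toy output moment induced by uncertainty in the inputs' mean and covariance."""
    n = data.shape[0]
    xbar, S = data.mean(axis=0), np.cov(data, rowvar=False)
    out = []
    for _ in range(n_draws):
        # Plausible population covariance, then the mean conditionally normal (marginally multivariate t)
        sigma = stats.invwishart(df=n - 1, scale=(n - 1) * S).rvs()
        mu = rng.multivariate_normal(xbar, sigma / n)
        out.append(mu[0] + 2.0 * mu[1])            # toy output function g(mu) = mu_1 + 2*mu_2
    return np.std(out)

true_mean = np.array([1.0, 2.0])
true_cov = np.array([[1.0, 0.3], [0.3, 2.0]])
for n in (10, 40, 160):                            # more experiments -> smaller output-moment variance
    data = rng.multivariate_normal(true_mean, true_cov, size=n)
    print(f"n={n:>3}: sd of the output moment = {output_sd(data):.3f}")
```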

  18. Comparative Evaluation of Antiplaque Efficacy of Coconut Oil Pulling and a Placebo, Among Dental College Students: A Randomized Controlled Trial

    PubMed Central

    Kulkarni, Suhas; Madupu, Padma Reddy; Doshi, Dolar; Bandari, Srikanth Reddy; Srilatha, Adepu

    2017-01-01

    Introduction: Oil pulling has been used extensively as a traditional Indian folk remedy for many years to prevent dental diseases and to strengthen teeth and gums. Aim: To compare and evaluate the antiplaque efficacy of coconut oil pulling with that of a placebo among dental students in Hyderabad city, India. Materials and Methods: A randomized controlled study was carried out among 40 dental students. Of these, 20 subjects were randomly assigned to the study group and the other 20 to the control group. Subjects in the study group were given coconut oil and the control group a placebo, and were advised to rinse for 10 minutes, once daily in the morning, for a period of seven days. Plaque levels were assessed on day zero, day three, and day seven using the Turesky-Gilmore-Glickman Modification of the Quigley-Hein Plaque Index (1970) for both groups. Results: The mean plaque scores showed a significant difference between baseline, day three, and day seven within both the study (p<0.001) and control (p<0.001) groups. Group-wise comparison revealed that, although the mean plaque scores were lower in the study group than in the control group on day three and day seven, a significant difference was noticed only on day seven. Furthermore, the mean percentage reduction in plaque scores was also significant only on day seven, with a higher mean plaque reduction in the study group (p<0.001). Conclusion: Oil pulling is effective in controlling plaque levels. PMID:29207824

  19. Electrical stimulation as a treatment intervention to improve function, edema or pain following acute lateral ankle sprains: A systematic review.

    PubMed

    Feger, Mark A; Goetschius, John; Love, Hailey; Saliba, Sue A; Hertel, Jay

    2015-11-01

    The purpose of this systematic review was to assess whether electrical stimulation (ES), when used in conjunction with a standard treatment, can reduce levels of functional impairment, edema, and pain compared to a standard treatment alone, in patients following a lateral ankle sprain. We searched PubMed, CINAHL, SportDiscus, and Medline (OVID) databases through June 2014 using the terms "ankle sprain or ankle sprains or ligament injury or ligamentous injury" and "electric stimulation or electrical stimulation or electrotherapy." Our search identified four randomized control trials, of which neuromuscular ES and high-voltage pulsed stimulation were the only two ES modalities utilized. Effect sizes and 95% confidence intervals (CI) were estimated using Cohen's d for comparison between treatment groups. Three of four effect sizes for function had 95% CI that crossed zero. Twenty-four of the thirty-two effect sizes for edema had 95% CI that crossed zero. All effect sizes for pain had 95% CI that crossed zero. Therefore, the use of ES is not recommended as a means to improve function, reduce edema, or decrease pain in the treatment of acute lateral ankle sprains. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. A new mean estimator using auxiliary variables for randomized response models

    NASA Astrophysics Data System (ADS)

    Ozgul, Nilgun; Cingi, Hulya

    2013-10-01

    Randomized response models (RRMs) are commonly used in surveys dealing with sensitive questions such as abortion, alcoholism, sexual orientation, drug taking, annual income, and tax evasion, to ensure interviewee anonymity and reduce nonresponse rates and biased responses. Starting from the pioneering work of Warner [7], many versions of RRM have been developed that can deal with quantitative responses. In this study, a new mean estimator is suggested for RRMs with quantitative responses. Its mean square error is derived, and a simulation study is performed to show the efficiency of the proposed estimator relative to other existing estimators in RRMs.
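
    As a simple illustration of quantitative randomized response, the sketch below uses an additive scrambling model: each respondent reports their true value plus a draw from a known scrambling distribution, and the population mean is recovered by subtracting the known scrambling mean. This is the generic idea only, not the estimator proposed in the paper; the income distribution and scrambling parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

income = rng.lognormal(mean=10.0, sigma=0.5, size=n)     # sensitive true values (never observed directly)

# Additive scrambling: each respondent adds a draw from a known scrambling distribution
scramble_mean, scramble_sd = 0.0, 5000.0
reported = income + rng.normal(scramble_mean, scramble_sd, size=n)

# Unbiased mean estimator: subtract the known scrambling mean from the reported average
est = reported.mean() - scramble_mean
print(f"true mean: {income.mean():,.0f}   randomized-response estimate: {est:,.0f}")
```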

  1. Atherosclerotic Plaque in Patients with Zero Calcium Score at Coronary Computed Tomography Angiography

    PubMed Central

    Gabriel, Fabíola Santos; Gonçalves, Luiz Flávio Galvão; de Melo, Enaldo Vieira; Sousa, Antônio Carlos Sobral; Pinto, Ibraim Masciarelli Francisco; Santana, Sara Melo Macedo; de Matos, Carlos José Oliveira; Souto, Maria Júlia Silveira; Conceição, Flávio Mateus do Sacramento; Oliveira, Joselina Luzia Menezes

    2018-01-01

    Background: In view of the high mortality from cardiovascular diseases, it has become necessary to stratify the main risk factors and to choose the correct diagnostic modality. Studies have demonstrated that a zero calcium score (CS) is characteristic of a low risk for cardiovascular events. However, reports of the prevalence of individuals with coronary atherosclerotic plaques and zero CS are conflicting in the specialized literature. Objective: To evaluate the frequency of patients with coronary atherosclerotic plaques, their degree of obstruction, and associated factors in patients with zero CS and an indication for coronary computed tomography angiography (CCTA). Methods: This is a cross-sectional, prospective study of 367 volunteers with zero CS at CCTA in four diagnostic imaging centers in the period from 2011 to 2016. A significance level of 5% and 95% confidence intervals were adopted. Results: The frequency of atherosclerotic plaque in the coronary arteries in the 367 patients with zero CS was 9.3% (34 individuals). In this subgroup, mean age was 52 ± 10 years, 18 (52.9%) were women, and 16 (47%) had significant coronary obstructions (> 50%), with involvement of two or more segments in 4 (25%) patients. The frequency of non-obese individuals (90.6% vs 73.9%, p = 0.037) and alcohol drinkers (55.9% vs 34.8%, p = 0.015) was significantly higher in patients with atherosclerotic plaques, with an odds ratio of 3.4 for each of these variables. Conclusions: The frequency of atherosclerotic plaque with zero CS was relatively high, indicating that the absence of calcification does not exclude the presence of plaques, many of which are obstructive, especially in non-obese subjects and alcohol drinkers. PMID:29723329

  2. Some Curious Properties and Loci Problems Associated with Cubics and Other Polynomials

    ERIC Educational Resources Information Center

    de Alwis, Amal

    2012-01-01

    The article begins with a well-known property regarding tangent lines to a cubic polynomial that has distinct, real zeros. We were then able to generalize this property to any polynomial with distinct, real zeros. We also considered a certain family of cubics with two fixed zeros and one variable zero, and explored the loci of centroids of…

  3. Modeling Zero-Inflated and Overdispersed Count Data: An Empirical Study of School Suspensions

    ERIC Educational Resources Information Center

    Desjardins, Christopher David

    2016-01-01

    The purpose of this article is to develop a statistical model that best explains variability in the number of school days suspended. Number of school days suspended is a count variable that may be zero-inflated and overdispersed relative to a Poisson model. Four models were examined: Poisson, negative binomial, Poisson hurdle, and negative…

  4. 33 CFR 154.2181 - Alternative testing program-Test requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... The CE test must check the calibrated range of each analyzer using a lower (zero) and upper (span) calibration gas ... instrument, R = reference value of the zero or high-level calibration gas introduced into the monitoring system ... [Worksheet table: zero and span readings for runs 1-3, their mean difference, and the resulting calibration error in percent.] ...

  5. 49 CFR 571.226 - Standard No. 226; Ejection Mitigation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Zero displacement plane means a vertical plane parallel to the vehicle longitudinal centerline and ... millimeters beyond the zero displacement plane. S4.2.1.1 No vehicle shall use movable glazing as the sole means ..., target locations are identified (S5.2) and the zero displacement plane location is determined (S5.3). The ...

  6. Predicting the Effects of Longitudinal Variables on Cost and Schedule Performance

    DTIC Science & Technology

    2007-03-01

    budget so that as cost growth occurs, it can be absorbed (Moore, 2003:2). This number padding is very tempting since it relieves the program...presence of a value, zero was entered for the missing variables because without any value assigned, the analysis software would ignore all data for the...program in question, reducing the already small dataset. Second, if we considered the variable in isolation, we removed the zero and left the field

  7. Effect of cinnamon on glucose control and lipid parameters.

    PubMed

    Baker, William L; Gutierrez-Williams, Gabriela; White, C Michael; Kluger, Jeffrey; Coleman, Craig I

    2008-01-01

    To perform a meta-analysis of randomized controlled trials of cinnamon to better characterize its impact on glucose and plasma lipids. A systematic literature search through July 2007 was conducted to identify randomized placebo-controlled trials of cinnamon that reported data on A1C, fasting blood glucose (FBG), or lipid parameters. The mean change in each study end point from baseline was treated as a continuous variable, and the weighted mean difference was calculated as the difference between the mean value in the treatment and control groups. A random-effects model was used. Five prospective randomized controlled trials (n = 282) were identified. Upon meta-analysis, the use of cinnamon did not significantly alter A1C, FBG, or lipid parameters. Subgroup and sensitivity analyses did not significantly change the results. Cinnamon does not appear to improve A1C, FBG, or lipid parameters in patients with type 1 or type 2 diabetes.
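
    For readers unfamiliar with the pooling step, the sketch below shows a DerSimonian-Laird random-effects calculation of a pooled weighted mean difference; the per-trial effect sizes and variances are invented for illustration and are not the cinnamon trial data.

```python
import numpy as np

# Hypothetical per-trial mean differences (e.g., change in FBG) and their variances -- not the real trial data
yi = np.array([-0.30, 0.10, -0.15, 0.05, -0.20])
vi = np.array([0.040, 0.050, 0.030, 0.060, 0.045])

# DerSimonian-Laird random-effects model
w = 1.0 / vi
y_fixed = np.sum(w * yi) / np.sum(w)
Q = np.sum(w * (yi - y_fixed) ** 2)
tau2 = max(0.0, (Q - (len(yi) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (vi + tau2)
pooled = np.sum(w_re * yi) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled WMD = {pooled:.3f}, 95% CI = ({pooled - 1.96*se:.3f}, {pooled + 1.96*se:.3f})")
```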

  8. Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density

    DOE PAGES

    Smallwood, David O.

    1997-01-01

    The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and kurtosis using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
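
    A minimal sketch of the zero memory nonlinear (ZMNL) idea is shown below: a Gaussian series with some power spectrum is pushed through the target distribution's inverse cumulative distribution function, producing the desired marginal (here a gamma, chosen arbitrarily). The mapping distorts the spectrum somewhat, which the methods reviewed in the paper are designed to handle; this sketch ignores that correction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2**16

# Gaussian series with a simple low-pass power spectrum (illustrative target)
white = rng.normal(size=n)
kernel = np.exp(-np.arange(0, 50) / 10.0)
g = np.convolve(white, kernel, mode="same")
g = (g - g.mean()) / g.std()

# ZMNL mapping: push the Gaussian series through the target inverse CDF (here a gamma distribution)
target = stats.gamma(a=2.0, scale=1.0)
y = target.ppf(stats.norm.cdf(g))

print(f"skewness of mapped series: {stats.skew(y):.2f}  (Gaussian input: {stats.skew(g):.2f})")
```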

  9. New variable selection methods for zero-inflated count data with applications to the substance abuse field

    PubMed Central

    Buu, Anne; Johnson, Norman J.; Li, Runze; Tan, Xianming

    2011-01-01

    Zero-inflated count data are very common in health surveys. This study develops new variable selection methods for the zero-inflated Poisson regression model. Our simulations demonstrate the negative consequences that arise from ignoring zero-inflation. Among the competing methods, the one-step SCAD method is recommended because it has the highest specificity, sensitivity, and exact fit, and the lowest estimation error. The design of the simulations is based on the special features of two large national databases commonly used in the alcoholism and substance abuse field, so that our findings can be easily generalized to real settings. Applications of the methodology are demonstrated by empirical analyses of data from a well-known alcohol study. PMID:21563207

  10. Comparison of Curvature Between the Zero-P Spacer and Traditional Cage and Plate After 3-Level Anterior Cervical Discectomy and Fusion: Mid-term Results.

    PubMed

    Chen, Yuanyuan; Liu, Yang; Chen, Huajiang; Cao, Peng; Yuan, Wen

    2017-10-01

    A retrospective study. To compare clinical and radiologic outcomes of 3-level anterior cervical discectomy and fusion between a zero-profile (Zero-P) spacer and a traditional plate in cases of symptomatic cervical spine spondylosis. Anterior cervical decompression and fusion is indicated for patients with anterior compression or stenosis of the spinal cord. Zero-P spacers have been used for anterior cervical interbody fusion of 1 or 2 segments. However, there is a paucity of published clinical data regarding the exact impact of the device on cervical curvature in 3-level fixation. Clinical and radiologic data of 71 patients undergoing 3-level anterior cervical discectomy and fusion from January 2010 to January 2012 were collected. A Zero-P spacer was implanted in 33 patients, and in 38 cases stabilization was accomplished using an anterior cervical plate and intervertebral cage. Patients were followed for a mean of 30.8 months (range, 24-36 mo) after surgery. Fusion rates, changes in cervical lordosis, and degeneration of adjacent segments were analyzed. Dysphagia was assessed using the Bazaz score, and clinical outcomes were analyzed using the Neck Disability Index and the Japanese Orthopedic Association scoring system. Neurological outcomes did not differ significantly between groups. Significantly less dysphagia was seen at 2- and 6-month follow-up in patients with the Zero-P implant (P<0.05); however, there was significantly less cervical lordosis and lordosis across the fusion in patients with the Zero-P implant (both P<0.05). Degenerative changes in the adjacent segments occurred in 4 patients in the Zero-P group and 6 patients in the standard-plate group (P=0.742); however, no revision surgery was done. Clinical results for the Zero-P spacer were satisfactory. The device is superior to the traditional plate in preventing postoperative dysphagia; however, it is inferior at restoring cervical lordosis. It may not provide better sagittal cervical alignment reconstruction in 3-level fixation. Prospective randomized trials with more patients and longer follow-up periods are required to confirm these observations.

  11. Designing management strategies for carbon dioxide storage and utilization under uncertainty using inexact modelling

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2017-06-01

    Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it treats random variables as bi-random variables with a normal distribution, whose mean values themselves follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which makes it convenient for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.
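
    The hierarchical sampling implied by a bi-random variable, a normal quantity whose mean is itself normally distributed, can be sketched in a few lines; all numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Bi-random emission parameter: Normal(mu, 1.0) where mu ~ Normal(50, 2.0) (illustrative units)
mu = rng.normal(50.0, 2.0, size=n)        # outer layer: the mean is itself random
x = rng.normal(mu, 1.0)                   # inner layer: the parameter given its mean

# The marginal is again normal, with variance 2^2 + 1^2
print(f"sample mean {x.mean():.2f}, sample sd {x.std():.2f} (expected sd {np.sqrt(5.0):.2f})")
```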

  12. Rotational diffusion of a molecular cat

    NASA Astrophysics Data System (ADS)

    Katz-Saporta, Ori; Efrati, Efi

    We show that a simple isolated system can perform a rotational random walk on account of internal excitations alone. We consider the classical dynamics of a ''molecular cat'': a triatomic molecule connected by three harmonic springs with non-zero rest lengths, suspended in free space. In this system, much as for falling cats, the angular momentum constraint is non-holonomic, allowing for rotations with zero overall angular momentum. The geometric nonlinearities arising from the non-zero rest lengths of the springs suffice to break integrability and lead to chaotic dynamics. The coupling of the non-integrability of the system and its non-holonomic nature results in an angular random walk of the molecule. We study the properties and dynamics of this angular motion analytically and numerically. For low-energy excitations the system displays normal-mode-like motion, while for high enough excitation energy we observe a regular random walk. In between, at intermediate energies, we observe an angular Lévy-walk-type motion associated with a fractional diffusion coefficient interpolating between the two regimes.

  13. Optimal auxiliary-covariate-based two-phase sampling design for semiparametric efficient estimation of a mean or mean difference, with application to clinical trials.

    PubMed

    Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea

    2014-03-15

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24 % in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.

  14. Optimal Auxiliary-Covariate Based Two-Phase Sampling Design for Semiparametric Efficient Estimation of a Mean or Mean Difference, with Application to Clinical Trials

    PubMed Central

    Gilbert, Peter B.; Yu, Xuesong; Rotnitzky, Andrea

    2014-01-01

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semi-parametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. Simulations are performed to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. Proofs and R code are provided. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean “importance-weighted” breadth (Y) of the T cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y, and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y∣W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method. PMID:24123289
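
    The sketch below illustrates the generic Neyman-type allocation idea behind such designs: the phase-two sampling probability is taken proportional to the conditional standard deviation of Y given W divided by the square root of the per-unit measurement cost, then scaled to a target expected sample size. This is not the paper's exact optimality formula, and the strata, standard deviations, and costs are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

w = rng.integers(0, 3, size=n)                       # cheap auxiliary marker measured in everyone
cond_sd = np.array([0.5, 1.0, 2.0])[w]               # assumed sd of Y given W (would be estimated)
cost = np.array([1.0, 1.0, 4.0])[w]                  # per-unit cost of measuring Y in each stratum

# Neyman-type allocation: phase-two sampling probability proportional to sd / sqrt(cost),
# scaled so the expected number of expensive phase-two measurements matches the target
target_size = 2000
pi = cond_sd / np.sqrt(cost)
pi *= target_size / pi.sum()
pi = np.clip(pi, 0.0, 1.0)

sampled = rng.random(n) < pi
print(f"expected phase-two sample size: {pi.sum():.0f}, realized: {sampled.sum()}")
```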

  15. Uncertainty in Random Forests: What does it mean in a spatial context?

    NASA Astrophysics Data System (ADS)

    Klump, Jens; Fouedjio, Francky

    2017-04-01

    Geochemical surveys are an important part of exploration for mineral resources and in environmental studies. The samples and chemical analyses are often laborious and difficult to obtain and therefore come at a high cost. As a consequence, these surveys are characterised by datasets with large numbers of variables but relatively few data points when compared to conventional big data problems. With more remote sensing platforms and sensor networks being deployed, large volumes of auxiliary data of the surveyed areas are becoming available. The use of these auxiliary data has the potential to improve the prediction of chemical element concentrations over the whole study area. Kriging is a well established geostatistical method for the prediction of spatial data but requires significant pre-processing and makes some basic assumptions about the underlying distribution of the data. Some machine learning algorithms, on the other hand, may require less data pre-processing and are non-parametric. In this study we used a dataset provided by Kirkwood et al. [1] to explore the potential use of Random Forest in geochemical mapping. We chose Random Forest because it is a well understood machine learning method and has the advantage that it provides us with a measure of uncertainty. By comparing Random Forest to Kriging we found that both methods produced comparable maps of estimated values for our variables of interest. Kriging outperformed Random Forest for variables of interest with relatively strong spatial correlation. The measure of uncertainty provided by Random Forest seems to be quite different to the measure of uncertainty provided by Kriging. In particular, the lack of spatial context can give misleading results in areas without ground truth data. In conclusion, our preliminary results show that the model driven approach in geostatistics gives us more reliable estimates for our target variables than Random Forest for variables with relatively strong spatial correlation. However, in cases of weak spatial correlation Random Forest, as a nonparametric method, may give the better results once we have a better understanding of the meaning of its uncertainty measures in a spatial context. References [1] Kirkwood, C., M. Cave, D. Beamish, S. Grebby, and A. Ferreira (2016), A machine learning approach to geochemical mapping, Journal of Geochemical Exploration, 163, 28-40, doi:10.1016/j.gexplo.2016.05.003.
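
    One common way to extract an uncertainty measure from a Random Forest, and a plausible reading of the measure discussed here, is the spread of the individual trees' predictions; the sketch below shows this with scikit-learn on synthetic data (the covariates and response are made up).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic 'survey': element concentration depends on two auxiliary covariates (e.g., remote sensing bands)
X = rng.uniform(0.0, 1.0, size=(300, 2))
y = 10.0 * X[:, 0] + 5.0 * np.sin(6.0 * X[:, 1]) + rng.normal(0.0, 1.0, size=300)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

X_new = rng.uniform(0.0, 1.0, size=(5, 2))
per_tree = np.stack([tree.predict(X_new) for tree in rf.estimators_])
mean_pred, spread = per_tree.mean(axis=0), per_tree.std(axis=0)
for m, s in zip(mean_pred, spread):
    print(f"prediction {m:6.2f}  +/- {s:4.2f} (per-tree spread)")
```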

  16. A zero-augmented generalized gamma regression calibration to adjust for covariate measurement error: A case of an episodically consumed dietary intake

    PubMed Central

    Agogo, George O.

    2017-01-01

    Measurement error in exposure variables is a serious impediment in epidemiological studies that relate exposures to health outcomes. In nutritional studies, interest could be in the association between long-term dietary intake and disease occurrence. Long-term intake is usually assessed with a food frequency questionnaire (FFQ), which is prone to recall bias. Measurement error in FFQ-reported intakes leads to bias in the parameter estimate that quantifies the association. To adjust for bias in the association, a calibration study is required to obtain unbiased intake measurements using a short-term instrument such as the 24-hour recall (24HR). The 24HR intakes are used as the response in regression calibration to adjust for bias in the association. For foods not consumed daily, 24HR-reported intakes are usually characterized by excess zeroes, right skewness, and heteroscedasticity, posing a serious challenge in regression calibration modeling. We proposed a zero-augmented calibration model to adjust for measurement error in reported intake, while handling excess zeroes, skewness, and heteroscedasticity simultaneously without transforming 24HR intake values. We compared the proposed calibration method with the standard method and with methods that ignore measurement error by estimating long-term intake with 24HR and FFQ-reported intakes. The comparison was done in real and simulated datasets. With the 24HR, the mean increase in mercury level per ounce of fish intake was about 0.4; with the FFQ intake, the increase was about 1.2. With both calibration methods, the mean increase was about 2.0. A similar trend was observed in the simulation study. In conclusion, the proposed calibration method performs at least as well as the standard method. PMID:27704599

  17. The probability of false positives in zero-dimensional analyses of one-dimensional kinematic, force and EMG trajectories.

    PubMed

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2016-06-14

    A false positive is the mistake of inferring an effect when none exists, and although α controls the false positive (Type I error) rate in classical hypothesis testing, a given α value is accurate only if the underlying model of randomness appropriately reflects experimentally observed variance. Hypotheses pertaining to one-dimensional (1D) (e.g. time-varying) biomechanical trajectories are most often tested using a traditional zero-dimensional (0D) Gaussian model of randomness, but variance in these datasets is clearly 1D. The purpose of this study was to determine the likelihood that analyzing smooth 1D data with a 0D model of variance will produce false positives. We first used random field theory (RFT) to predict the probability of false positives in 0D analyses. We then validated RFT predictions via numerical simulations of smooth Gaussian 1D trajectories. Results showed that, across a range of public kinematic, force/moment and EMG datasets, the median false positive rate was 0.382 and not the assumed α=0.05, even for a simple two-sample t test involving N=10 trajectories per group. The median false positive rate for experiments involving three-component vector trajectories was p=0.764. This rate increased to p=0.945 for two three-component vector trajectories, and to p=0.999 for six three-component vectors. This implies that experiments involving vector trajectories have a high probability of yielding 0D statistical significance when there is, in fact, no 1D effect. Either (a) explicit a priori identification of 0D variables or (b) adoption of 1D methods can more tightly control α. Copyright © 2016 Elsevier Ltd. All rights reserved.
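    The core result can be reproduced in miniature: simulate smooth Gaussian 1D trajectories with no true group difference, run pointwise t tests, and count how often 0D-style significance is reached somewhere along the continuum. The sketch below is an illustrative simulation of that flavour, not the paper's RFT-validated procedure; the smoothness (FWHM), sample sizes, and number of simulations are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

def smooth_trajectories(n_curves, n_points=101, fwhm=20):
    """Smooth 1D Gaussian trajectories: white noise filtered with a Gaussian
    kernel (FWHM in sampling points), rescaled to unit variance per curve."""
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    y = gaussian_filter1d(rng.normal(size=(n_curves, n_points)), sigma, axis=1)
    return y / y.std(axis=1, keepdims=True)

n_sim, alpha, n_per_group = 2000, 0.05, 10
false_pos = 0
for _ in range(n_sim):
    a = smooth_trajectories(n_per_group)
    b = smooth_trajectories(n_per_group)
    # Point-by-point two-sample t tests; a "0D-style" analysis declares an
    # effect if *any* point reaches p < alpha, despite no true 1D effect.
    _, p = ttest_ind(a, b, axis=0)
    false_pos += (p.min() < alpha)

print("empirical false positive rate:", false_pos / n_sim)  # well above 0.05
```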

  18. Rare event simulation in radiation transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kollman, Craig

    1993-10-01

    This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities is chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution.
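    The likelihood-ratio reweighting described above is easiest to see in a toy setting far simpler than neutron transport: estimating a small Gaussian tail probability by sampling from a shifted proposal and multiplying each sample by f(z)/g(z). The sketch below is only that toy illustration; the threshold and sample sizes are arbitrary.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
a, n = 4.0, 100_000                      # rare event {Z > 4}, true prob ~3.2e-5

# Plain Monte Carlo: almost every sample contributes zero.
z = rng.normal(size=n)
plain = np.mean(z > a)

# Importance sampling: draw from N(a, 1) so the event is common, then reweight
# each sample by the likelihood ratio f(z)/g(z) to keep the estimator unbiased.
z_is = rng.normal(loc=a, size=n)
lr = norm.pdf(z_is) / norm.pdf(z_is, loc=a)
is_est = np.mean((z_is > a) * lr)

print("true      :", norm.sf(a))
print("plain MC  :", plain)
print("importance:", is_est)
```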

  19. New approach application of data transformation in mean centering of ratio spectra method

    NASA Astrophysics Data System (ADS)

    Issa, Mahmoud M.; Nejem, R.'afat M.; Van Staden, Raluca Ioana Stefan; Aboul-Enein, Hassan Y.

    2015-05-01

    Most mean centering (MCR) methods are designed to be used with data sets whose values have a normal or nearly normal distribution. The errors associated with the values are also assumed to be independent and random. If the data are skewed, the results obtained may be doubtful. Most of the time, a normal distribution was assumed, and if a confidence interval included a negative value, it was cut off at zero. However, it is possible to transform the data so that at least an approximately normal distribution is attained. Taking the logarithm of each data point is one transformation frequently used. As a result, the geometric mean is considered a better measure of central tendency than the arithmetic mean. The developed MCR method using the geometric mean has been successfully applied to the analysis of a ternary mixture of aspirin (ASP), atorvastatin (ATOR) and clopidogrel (CLOP) as a model. The results obtained were statistically compared with a reported HPLC method.
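    The log-transform/geometric-mean point can be made concrete in a few lines: averaging on the log scale and back-transforming yields the geometric mean, and an interval built on the log scale cannot cross zero. The data, the 1.96 normal-approximation multiplier, and the interval construction below are illustrative assumptions, not the paper's spectral procedure.

```python
import numpy as np

x = np.array([0.8, 1.1, 1.3, 2.0, 9.5])   # right-skewed, strictly positive data

arith_mean = x.mean()
geo_mean = np.exp(np.log(x).mean())        # mean on the log scale, back-transformed

# A normal-approximation interval built on the log scale back-transforms to an
# interval for the geometric mean that can never include negative values.
log_mean, log_sd = np.log(x).mean(), np.log(x).std(ddof=1)
half_width = 1.96 * log_sd / np.sqrt(len(x))
ci = np.exp([log_mean - half_width, log_mean + half_width])

print(arith_mean, geo_mean, ci)
```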

  20. Mathematical and physical meaning of the Bell inequalities

    NASA Astrophysics Data System (ADS)

    Santos, Emilio

    2016-09-01

    It is shown that the Bell inequalities are closely related to the triangle inequalities involving distance functions amongst pairs of random variables with values {0, 1}. A hidden variables model may be defined as a mapping between a set of quantum projection operators and a set of random variables. The model is noncontextual if there is a joint probability distribution. The Bell inequalities are necessary conditions for its existence. The inequalities are most relevant when measurements are performed at space-like separation, thus showing a conflict between quantum mechanics and local realism (Bell's theorem). The relations of the Bell inequalities with contextuality, the Kochen-Specker theorem, and quantum entanglement are briefly discussed.
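    A standard way to see the connection (a textbook relation, not quoted from the paper) is to treat P(a ≠ b) as a distance between {0, 1}-valued random variables defined on a common probability space:

```latex
% Illustrative relation (standard, not quoted from the paper): for {0,1}-valued
% random variables a, b, c defined on one probability space, the "distance"
% d(a,b) = P(a \neq b) is a pseudometric, and its triangle inequality is already
% a Bell-type constraint, because the event {a \neq c} is contained in the
% union {a \neq b} \cup {b \neq c}:
\[
  d(a,b) = P(a \neq b), \qquad d(a,c) \le d(a,b) + d(b,c).
\]
% A noncontextual hidden-variables model supplies the joint distribution needed
% to define these distances for all observables simultaneously; an observed
% violation therefore excludes such a joint distribution.
```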

  1. FOG Random Drift Signal Denoising Based on the Improved AR Model and Modified Sage-Husa Adaptive Kalman Filter.

    PubMed

    Sun, Jin; Xu, Xiaosu; Liu, Yiting; Zhang, Tao; Li, Yao

    2016-07-12

    In order to reduce the influence of fiber optic gyroscope (FOG) random drift error on inertial navigation systems, an improved autoregressive (AR) model is put forward in this paper. First, based on real-time observations at each restart of the gyroscope, the model of FOG random drift can be established online. In the improved AR model, the FOG measured signal is employed instead of the zero-mean signals. Then, the modified Sage-Husa adaptive Kalman filter (SHAKF) is introduced, which can directly carry out real-time filtering on the FOG signals. Finally, static and dynamic experiments are done to verify the effectiveness. The filtering results are analyzed with the Allan variance. The analysis results show that the improved AR model has high fitting accuracy and strong adaptability, with a minimum fitting accuracy for a single noise term of 93.2%. Based on the improved AR(3) model, the SHAKF denoising method is more effective than traditional methods, with an improvement of more than 30%. The random drift error of the FOG is reduced effectively, and the precision of the FOG is improved.

  2. Dental caries status of preschool children in Hong Kong.

    PubMed

    Chu, C H; Fung, D S; Lo, E C

    1999-12-11

    To describe the dental caries status of preschool children in Hong Kong and factors which affect their caries status. 658 preschool children aged 4 to 6 years from six randomly selected kindergartens in Hong Kong were surveyed in December 1997. A questionnaire to investigate possible explanatory variables for caries status was completed by their parents. Dental caries was diagnosed according to the criteria recommended by the World Health Organization (1997). Caries experience as measured by the mean number of decayed, missing and filled primary teeth (dmft) of the 4-, 5-, and 6-year-old children were found to be 0.9, 1.8, and 3.3 respectively. Overall, 61% of the children had a zero dmft score. Children born in Mainland China had a higher mean dmft score (4.6) than those born in Hong Kong (1.4). Statistically significant correlations were found between the children's dental caries status and their oral health practices as well as their socio-economic background. Parents' education level, dental knowledge and attitudes were also associated with the children's dental caries experience. In general, the caries status of Hong Kong Chinese preschool children was similar to that of children in industrialised countries and was better than that of children in the nearby areas. However, special dental programmes should be made available to children from lower socio-economic classes and new immigrants from Mainland China because they are the high risk groups for caries in Hong Kong.

  3. Tracking problem for electromechanical system under influence of external perturbations

    NASA Astrophysics Data System (ADS)

    Kochetkov, Sergey A.; Krasnova, Svetlana A.; Utkin, Victor A.

    2017-01-01

    For electromechanical objects, new control algorithms (vortex algorithms) are developed on the basis of discontinuous functions. The distinctive feature of these algorithms is that they provide asymptotic convergence of the output variables to zero under the influence of unknown bounded disturbances of a prescribed class. The advantages of the proposed approach are demonstrated for a direct-current motor with permanent excitation. It is shown that the inner variables of the system converge to the unknown bounded disturbances and guarantee asymptotic convergence of the output variables to zero.

  4. Comparison of the ballistic contractile responses generated during microstimulation of single human motor axons with brief irregular and regular stimuli.

    PubMed

    Leitch, Michael; Macefield, Vaughan G

    2017-08-01

    Ballistic contractions are induced by brief, high-frequency (60-100 Hz) trains of action potentials in motor axons. During ramp voluntary contractions, human motoneurons exhibit significant discharge variability of ∼20%, which has been shown to be advantageous to the neuromuscular system. We hypothesized that ballistic contractions incorporating discharge variability would generate greater isometric forces than regular trains with zero variability. High-impedance tungsten microelectrodes were inserted into the human fibular nerve, and single motor axons were stimulated with both irregular and constant-frequency stimuli at mean frequencies ranging from 57.8 to 68.9 Hz. Irregular trains generated significantly greater isometric peak forces than regular trains over identical mean frequencies. The high forces generated by ballistic contractions are not based solely on high frequencies, but rather on a combination of high firing rates and discharge irregularity. It appears that irregular ballistic trains take advantage of the "catchlike property" of muscle, allowing augmentation of force. Muscle Nerve 56: 292-297, 2017. © 2016 Wiley Periodicals, Inc.

  5. Residual Defect Density in Random Disks Deposits.

    PubMed

    Topic, Nikola; Pöschel, Thorsten; Gallas, Jason A C

    2015-08-03

    We investigate the residual distribution of structural defects in very tall packings of disks deposited randomly in large channels. By performing simulations involving the sedimentation of up to 50 × 10^9 particles we find all deposits to consistently show a non-zero residual density of defects obeying a characteristic power-law as a function of the channel width. This remarkable finding corrects the widespread belief that the density of defects should vanish algebraically with growing height. A non-zero residual density of defects implies a type of long-range spatial order in the packing, as opposed to only local ordering. In addition, we find deposits of particles to involve considerably less randomness than generally presumed.

  6. The life cycles of Be viscous decretion discs: The case of ω CMa

    NASA Astrophysics Data System (ADS)

    Ghoreyshi, M. R.; Carciofi, A. C.; Rímulo, L. R.; Vieira, R. G.; Faes, D. M.; Baade, D.; Bjorkman, J. E.; Otero, S.; Rivinius, Th

    2018-06-01

    We analyzed V-band photometry of the Be star ω CMa, obtained during the last four decades, during which the star went through four complete cycles of disc formation and dissipation. The data were simulated by hydrodynamic models based on a time-dependent implementation of the viscous decretion disc (VDD) paradigm, in which a disc around a fast-spinning Be star is formed by material ejected by the star and driven to progressively larger orbits by means of viscous torques. Our simulations offer a good description of the photometric variability during phases of disc formation and dissipation, which suggests that the VDD model adequately describes the structural evolution of the disc. Furthermore, our analysis allowed us to determine the viscosity parameter α, as well as the net mass and angular momentum (AM) loss rates. We find that α is variable, ranging from 0.1 to 1.0, not only from cycle to cycle but also within a given cycle. Additionally, build-up phases usually have larger values of α than the dissipation phases. Furthermore, during dissipation the outward AM flux is not necessarily zero, meaning that ω CMa does not experience a true quiescence but, instead, switches between a high and a low AM loss rate, during which the disc quickly assumes an overall lower, but never zero, density. We confront the average AM loss rate with predictions from stellar evolution models for fast-rotating stars, and find that our measurements are smaller by more than one order of magnitude.

  7. Enhancing Multimedia Imbalanced Concept Detection Using VIMP in Random Forests.

    PubMed

    Sadiq, Saad; Yan, Yilin; Shyu, Mei-Ling; Chen, Shu-Ching; Ishwaran, Hemant

    2016-07-01

    Recent developments in social media and cloud storage have led to an exponential growth in the amount of multimedia data, which increases the complexity of managing, storing, indexing, and retrieving information from such big data. Many current content-based concept detection approaches fall short of successfully bridging the semantic gap. To solve this problem, a multi-stage random forest framework is proposed to generate predictor variables based on multivariate regressions using variable importance (VIMP). By fine-tuning the forests and significantly reducing the predictor variables, the concept detection scores are evaluated when the concept of interest is rare and imbalanced, i.e., having little collaboration with other high-level concepts. In classical multivariate statistics, estimating the value of one coordinate from the other coordinates standardizes the covariates and depends on the variance of the correlations rather than on the mean. Thus, conditional dependence on the data being normally distributed is eliminated. Experimental results demonstrate that the proposed framework outperforms the compared approaches in terms of the Mean Average Precision (MAP) values.

  8. Heterogeneity in the Strehler-Mildvan general theory of mortality and aging.

    PubMed

    Zheng, Hui; Yang, Yang; Land, Kenneth C

    2011-02-01

    This study examines and further develops the classic Strehler-Mildvan (SM) general theory of mortality and aging. Three predictions from the SM theory are tested by examining the age dependence of mortality patterns for 42 countries (including developed and developing countries) over the period 1955-2003. By applying finite mixture regression models, principal component analysis, and random-effects panel regression models, we find that (1) the negative correlation between the initial adulthood mortality rate and the rate of increase in mortality with age derived in the SM theory exists but is not constant; (2) within the SM framework, the implied age of expected zero vitality (expected maximum survival age) also is variable over time; (3) longevity trajectories are not homogeneous among the countries; (4) Central American and Southeast Asian countries have higher expected age of zero vitality than other countries in spite of relatively disadvantageous national ecological systems; (5) within the group of Central American and Southeast Asian countries, a more disadvantageous national ecological system is associated with a higher expected age of zero vitality; and (6) larger agricultural and food productivities, higher labor participation rates, higher percentages of population living in urban areas, and larger GDP per capita and GDP per unit of energy use are important beneficial national ecological system factors that can promote survival. These findings indicate that the SM theory needs to be generalized to incorporate heterogeneity among human populations.

  9. Modelling wildland fire propagation by tracking random fronts

    NASA Astrophysics Data System (ADS)

    Pagnini, G.; Mentrelli, A.

    2014-08-01

    Wildland fire propagation is studied in the literature by two alternative approaches, namely the reaction-diffusion equation and the level-set method. These two approaches are considered alternatives to each other because the solution of the reaction-diffusion equation is generally a continuous smooth function that has an exponential decay, and it is not zero in an infinite domain, while the level-set method, which is a front tracking technique, generates a sharp function that is not zero inside a compact domain. However, these two approaches can indeed be considered complementary and reconciled. Turbulent hot-air transport and fire spotting are phenomena with a random nature and they are extremely important in wildland fire propagation. Consequently, the fire front gets a random character, too; hence, a tracking method for random fronts is needed. In particular, the level-set contour is randomised here according to the probability density function of the interface particle displacement. Actually, when the level-set method is developed for tracking a front interface with a random motion, the resulting averaged process emerges to be governed by an evolution equation of the reaction-diffusion type. In this reconciled approach, the rate of spread of the fire keeps the same key and characterising role that is typical of the level-set approach. The resulting model emerges to be suitable for simulating effects due to turbulent convection, such as fire flank and backing fire, the faster fire spread being because of the actions by hot-air pre-heating and by ember landing, and also due to the fire overcoming a fire-break zone, which is a case not resolved by models based on the level-set method. Moreover, from the proposed formulation, a correction follows for the formula of the rate of spread which is due to the mean jump length of firebrands in the downwind direction for the leeward sector of the fireline contour. The presented study constitutes a proof of concept, and it needs to be subjected to a future validation.

  10. Impact of including or excluding both-armed zero-event studies on using standard meta-analysis methods for rare event outcome: a simulation study

    PubMed Central

    Cheng, Ji; Pullenayegum, Eleanor; Marshall, John K; Thabane, Lehana

    2016-01-01

    Objectives There is no consensus on whether studies with no observed events in the treatment and control arms, the so-called both-armed zero-event studies, should be included in a meta-analysis of randomised controlled trials (RCTs). Current analytic approaches handled them differently depending on the choice of effect measures and authors' discretion. Our objective is to evaluate the impact of including or excluding both-armed zero-event (BA0E) studies in meta-analysis of RCTs with rare outcome events through a simulation study. Method We simulated 2500 data sets for different scenarios varying the parameters of baseline event rate, treatment effect and number of patients in each trial, and between-study variance. We evaluated the performance of commonly used pooling methods in classical meta-analysis—namely, Peto, Mantel-Haenszel with fixed-effects and random-effects models, and inverse variance method with fixed-effects and random-effects models—using bias, root mean square error, length of 95% CI and coverage. Results The overall performance of the approaches of including or excluding BA0E studies in meta-analysis varied according to the magnitude of true treatment effect. Including BA0E studies introduced very little bias, decreased mean square error, narrowed the 95% CI and increased the coverage when no true treatment effect existed. However, when a true treatment effect existed, the estimates from the approach of excluding BA0E studies led to smaller bias than including them. Among all evaluated methods, the Peto method excluding BA0E studies gave the least biased results when a true treatment effect existed. Conclusions We recommend including BA0E studies when treatment effects are unlikely, but excluding them when there is a decisive treatment effect. Providing results of including and excluding BA0E studies to assess the robustness of the pooled estimated effect is a sensible way to communicate the results of a meta-analysis when the treatment effects are unclear. PMID:27531725

  11. On Probability Domains IV

    NASA Astrophysics Data System (ADS)

    Frič, Roman; Papčo, Martin

    2017-12-01

    Stressing a categorical approach, we continue our study of fuzzified domains of probability, in which classical random events are replaced by measurable fuzzy random events. In operational probability theory (S. Bugajski) classical random variables are replaced by statistical maps (generalized distribution maps induced by random variables) and in fuzzy probability theory (S. Gudder) the central role is played by observables (maps between probability domains). We show that to each of the two generalized probability theories there corresponds a suitable category and the two resulting categories are dually equivalent. Statistical maps and observables become morphisms. A statistical map can send a degenerated (pure) state to a non-degenerated one —a quantum phenomenon and, dually, an observable can map a crisp random event to a genuine fuzzy random event —a fuzzy phenomenon. The dual equivalence means that the operational probability theory and the fuzzy probability theory coincide and the resulting generalized probability theory has two dual aspects: quantum and fuzzy. We close with some notes on products and coproducts in the dual categories.

  12. Bayesian dynamic modeling of time series of dengue disease case counts

    PubMed Central

    López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-01-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model’s short-term performance for predicting dengue cases. The methodology shows dynamic Poisson log link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov Chain Monte Carlo simulations for parameter estimation, and deviance information criterion statistic (DIC) for model selection. We assessed the short-term predictive performance of the selected final model, at several time points within the study period using the mean absolute percentage error. The results showed the best model including first-order random walk time-varying coefficients for calendar trend and first-order random walk time-varying coefficients for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage errors at one or two weeks out-of-sample predictions for most prediction points, associated with low volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health. PMID:28671941

  13. 24-hour glucose profiles on diets varying in protein content and glycemic index.

    PubMed

    van Baak, Marleen A

    2014-08-04

    Evidence is increasing that the postprandial state is an important factor contributing to the risk of chronic diseases. Not only mean glycemia, but also glycemic variability has been implicated in this effect. In this exploratory study, we measured 24-h glucose profiles in 25 overweight participants in a long-term diet intervention study (DIOGENES study on Diet, Obesity and Genes), which had been randomized to four different diet groups consuming diets varying in protein content and glycemic index. In addition, we compared 24-h glucose profiles in a more controlled fashion, where nine other subjects followed in random order the same four diets differing in carbohydrate content by 10 energy% and glycemic index by 20 units during three days. Meals were provided in the lab and had to be eaten at fixed times during the day. No differences in mean glucose concentration or glucose variability (SD) were found between diet groups in the DIOGENES study. In the more controlled lab study, mean 24-h glucose concentrations were also not different. Glucose variability (SD and CONGA1), however, was lower on the diet combining a lower carbohydrate content and GI compared to the diet combining a higher carbohydrate content and GI. These data suggest that diets with moderate differences in carbohydrate content and GI do not affect mean 24-h or daytime glucose concentrations, but may result in differences in the variability of the glucose level in healthy normal weight and overweight individuals.

  14. A Probabilistic Design Method Applied to Smart Composite Structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1995-01-01

    A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.

  15. 40 CFR 610.11 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... (17) “Data fleet” means a fleet of automobiles tested at “zero device-miles” in “baseline.... (19) “Zero device-miles” means the period of time between retrofit installation and the accumulation...” means the engineering analysis performed by EPA prior to testing prescribed by the Administrator based...

  16. Logistic quantile regression provides improved estimates for bounded avian counts: A case study of California Spotted Owl fledgling production

    USGS Publications Warehouse

    Cade, Brian S.; Noon, Barry R.; Scherer, Rick D.; Keane, John J.

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical conditional distribution of a bounded discrete random variable. The logistic quantile regression model requires that counts are randomly jittered to a continuous random variable, logit transformed to bound them between specified lower and upper values, then estimated in conventional linear quantile regression, repeating the 3 steps and averaging estimates. Back-transformation to the original discrete scale relies on the fact that quantiles are equivariant to monotonic transformations. We demonstrate this statistical procedure by modeling 20 years of California Spotted Owl fledgling production (0−3 per territory) on the Lassen National Forest, California, USA, as related to climate, demographic, and landscape habitat characteristics at territories. Spotted Owl fledgling counts increased nonlinearly with decreasing precipitation in the early nesting period, in the winter prior to nesting, and in the prior growing season; with increasing minimum temperatures in the early nesting period; with adult compared to subadult parents; when there was no fledgling production in the prior year; and when percentage of the landscape surrounding nesting sites (202 ha) with trees ≥25 m height increased. Changes in production were primarily driven by changes in the proportion of territories with 2 or 3 fledglings. Average variances of the discrete cumulative distributions of the estimated fledgling counts indicated that temporal changes in climate and parent age class explained 18% of the annual variance in owl fledgling production, which was 34% of the total variance. Prior fledgling production explained as much of the variance in the fledgling counts as climate, parent age class, and landscape habitat predictors. Our logistic quantile regression model can be used for any discrete response variables with fixed upper and lower bounds.
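    The jitter/logit/back-transform recipe described above can be sketched with statsmodels' linear quantile regression; the synthetic data, the single covariate, the number of jitter repetitions, and the use of a floor on the back-transformed quantile are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Synthetic stand-in for fledgling counts bounded on 0..3, with one covariate.
n, lower, upper = 400, 0, 3
x = rng.normal(size=n)
y = rng.binomial(upper, np.clip(0.4 + 0.15 * x, 0.05, 0.95))   # counts in {0,...,3}
X = sm.add_constant(x)

def logistic_quantile_fit(y, X, tau, n_jitter=20):
    """Cade-style logistic quantile regression for a bounded count: jitter to a
    continuous value, logit-transform to an unbounded scale, fit linear
    quantile regression, and average over repeated random jitters."""
    coefs = []
    for _ in range(n_jitter):
        yj = y + rng.uniform(1e-6, 1 - 1e-6, size=len(y))      # random jitter
        z = np.log((yj - lower) / (upper + 1 - yj))             # logit transform
        coefs.append(np.asarray(sm.QuantReg(z, X).fit(q=tau).params))
    return np.mean(coefs, axis=0)

beta = logistic_quantile_fit(y, X, tau=0.9)

# Back-transform predicted 0.9 quantiles to the bounded count scale; quantiles
# are equivariant to monotone transformations, so this step is valid.
x_new = sm.add_constant(np.array([-1.0, 0.0, 1.0]))
z_hat = x_new @ beta
q_hat = lower + (upper + 1 - lower) / (1.0 + np.exp(-z_hat))
print(np.floor(q_hat))   # estimated conditional 0.9 quantiles of the count
```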

  17. Climate and life-history evolution in evening primroses (Oenothera, Onagraceae): a phylogenetic comparative analysis.

    PubMed

    Evans, Margaret E K; Hearn, David J; Hahn, William J; Spangle, Jennifer M; Venable, D Lawrence

    2005-09-01

    Evolutionary ecologists have long sought to understand the conditions under which perennial (iteroparous) versus annual (semelparous) plant life histories are favored. We evaluated the idea that aridity and variation in the length of droughts should favor the evolution of an annual life history, both by decreasing adult survival and by increasing the potential for high seedling survival via reduced plant cover. We calculated phylogenetically independent contrasts of climate with respect to life history in a clade of winter-establishing evening primroses (sections Anogra and Kleinia; Oenothera; Onagraceae), which includes seven annuals, 12 perennials, and two variable taxa. Climate variables were quantified from long-term records at weather stations near collection localities. To explicitly account for phylogenetic uncertainty, contrasts were calculated on a random sample of phylogenetic trees from the posterior distribution of a Bayesian analysis of DNA sequence data. Statements of association are based on comparing the per-tree mean contrast, which has a null expectation of zero, to a set of per-tree mean contrasts calculated on the same trees, after randomizing the climate data. As predicted, increased annual aridity, increased annual potential evapotranspiration, and decreased annual precipitation were associated with transitions to the annual habit, but these trends were not significantly different from the null pattern. Transitions to the annual habit were not significantly associated with increases in one measure of aridity in summer nor with increased summer drought, but they were associated with significantly increased maximum summer temperatures. In winter, increased aridity and decreased precipitation were significantly associated with transitions to the annual habit. Changes in life history were not significantly associated with changes in the coefficient of variation of precipitation, either on an annual or seasonal (summer vs. winter) basis. Though we cannot attribute causality on the basis of a correlational, historical study, our results are consistent with the idea that increased heat and drought at certain times of the year favor the evolution of the annual habit. Increased heat in summer may cause adult survival to decline, while increased aridity and decreased precipitation in the season of seedling recruitment (winter) may favor a drought-avoiding, short-lived annual strategy. Not all of the predicted patterns were observed: the capability for drought-induced dormancy may preclude change in habit in response to summer drought in our study group.

  18. Origin and implications of zero degeneracy in networks spectra.

    PubMed

    Yadav, Alok; Jalan, Sarika

    2015-04-01

    The spectra of many real-world networks exhibit properties which are different from those of random networks generated using various models. One such property is the existence of a very high degeneracy at the zero eigenvalue. In this work, we provide all the possible reasons behind the occurrence of the zero degeneracy in the network spectra, namely, complete and partial duplications, as well as their implications. The power-law degree sequence and the preferential attachment are the properties which enhance the occurrence of such duplications and hence lead to the zero degeneracy. A comparison of the zero degeneracy in protein-protein interaction networks of six different species and in their corresponding model networks indicates the importance of the degree sequences and the power-law exponent for the occurrence of zero degeneracy.

  19. A new approach used to explore associations of current Ambrosia pollen levels with current and past meteorological elements

    NASA Astrophysics Data System (ADS)

    Matyasovszky, István; Makra, László; Csépe, Zoltán; Deák, Áron József; Pál-Molnár, Elemér; Fülöp, Andrea; Tusnády, Gábor

    2015-09-01

    The paper examines the sensitivity of daily airborne Ambrosia (ragweed) pollen levels of a current pollen season not only on daily values of meteorological variables during this season but also on the past meteorological conditions. The results obtained from a 19-year data set including daily ragweed pollen counts and ten daily meteorological variables are evaluated with special focus on the interactions between the phyto-physiological processes and the meteorological elements. Instead of a Pearson correlation measuring the strength of the linear relationship between two random variables, a generalised correlation that measures every kind of relationship between random vectors was used. These latter correlations between arrays of daily values of the ten meteorological elements and the array of daily ragweed pollen concentrations during the current pollen season were calculated. For the current pollen season, the six most important variables are two temperature variables (mean and minimum temperatures), two humidity variables (dew point depression and rainfall) and two variables characterising the mixing of the air (wind speed and the height of the planetary boundary layer). The six most important meteorological variables before the current pollen season contain four temperature variables (mean, maximum, minimum temperatures and soil temperature) and two variables that characterise large-scale weather patterns (sea level pressure and the height of the planetary boundary layer). Key periods of the past meteorological variables before the current pollen season have been identified. The importance of this kind of analysis is that a knowledge of the past meteorological conditions may contribute to a better prediction of the upcoming pollen season.

  20. A new approach used to explore associations of current Ambrosia pollen levels with current and past meteorological elements.

    PubMed

    Matyasovszky, István; Makra, László; Csépe, Zoltán; Deák, Áron József; Pál-Molnár, Elemér; Fülöp, Andrea; Tusnády, Gábor

    2015-09-01

    The paper examines the sensitivity of daily airborne Ambrosia (ragweed) pollen levels of a current pollen season not only on daily values of meteorological variables during this season but also on the past meteorological conditions. The results obtained from a 19-year data set including daily ragweed pollen counts and ten daily meteorological variables are evaluated with special focus on the interactions between the phyto-physiological processes and the meteorological elements. Instead of a Pearson correlation measuring the strength of the linear relationship between two random variables, a generalised correlation that measures every kind of relationship between random vectors was used. These latter correlations between arrays of daily values of the ten meteorological elements and the array of daily ragweed pollen concentrations during the current pollen season were calculated. For the current pollen season, the six most important variables are two temperature variables (mean and minimum temperatures), two humidity variables (dew point depression and rainfall) and two variables characterising the mixing of the air (wind speed and the height of the planetary boundary layer). The six most important meteorological variables before the current pollen season contain four temperature variables (mean, maximum, minimum temperatures and soil temperature) and two variables that characterise large-scale weather patterns (sea level pressure and the height of the planetary boundary layer). Key periods of the past meteorological variables before the current pollen season have been identified. The importance of this kind of analysis is that a knowledge of the past meteorological conditions may contribute to a better prediction of the upcoming pollen season.

  1. An Investigation Into the Effects of Frequency Response Function Estimators on Model Updating

    NASA Astrophysics Data System (ADS)

    Ratcliffe, M. J.; Lieven, N. A. J.

    1999-03-01

    Model updating is a very active research field, in which significant effort has been invested in recent years. Model updating methodologies are invariably successful when used on noise-free simulated data, but tend to be unpredictable when presented with real experimental data that are—unavoidably—corrupted with uncorrelated noise content. In the development and validation of model-updating strategies, a random zero-mean Gaussian variable is added to simulated test data to tax the updating routines more fully. This paper proposes a more sophisticated model for experimental measurement noise, and this is used in conjunction with several different frequency response function estimators, from the classical H1 and H2 to more refined estimators that purport to be unbiased. Finite-element model case studies, in conjunction with a genuine experimental test, suggest that the proposed noise model is a more realistic representation of experimental noise phenomena. The choice of estimator is shown to have a significant influence on the viability of the FRF sensitivity method. These test cases find that the use of the H2 estimator for model updating purposes is contraindicated, and that there is no advantage to be gained by using the sophisticated estimators over the classical H1 estimator.
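    For reference, the classical estimators being compared are H1 = Sxy/Sxx (biased downward by noise on the input channel) and H2 = Syy/Syx (biased upward by noise on the output channel). The sketch below computes both for a simulated single-input, single-output system with noise added to each channel; the filter standing in for the structure and the noise levels are arbitrary assumptions, not the paper's test setup.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)
fs, n = 1024, 2**16

# Single-degree-of-freedom-style system excited by a random input; measurement
# noise is added to both channels, since the interaction between the noise
# model and the FRF estimator is the point at issue.
x = rng.normal(size=n)                                   # input force
b, a = signal.butter(2, 0.1)                             # stand-in dynamics
y = signal.lfilter(b, a, x)                              # true response
x_meas = x + 0.1 * rng.normal(size=n)                    # noisy measured input
y_meas = y + 0.1 * rng.normal(size=n)                    # noisy measured output

nperseg = 1024
f, Pxx = signal.welch(x_meas, fs=fs, nperseg=nperseg)        # input auto-spectrum
_, Pyy = signal.welch(y_meas, fs=fs, nperseg=nperseg)        # output auto-spectrum
_, Pxy = signal.csd(x_meas, y_meas, fs=fs, nperseg=nperseg)  # cross-spectrum

H1 = Pxy / Pxx           # classical H1: pulled down by input-channel noise
H2 = Pyy / np.conj(Pxy)  # classical H2: pushed up by output-channel noise
print(np.abs(H1[:5]), np.abs(H2[:5]))
```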

  2. Testing the effectiveness of in-home behavioral economics strategies to increase vegetable intake, liking, and variety among children residing in households that receive food assistance.

    PubMed

    Leak, Tashara M; Swenson, Alison; Vickers, Zata; Mann, Traci; Mykerezi, Elton; Redden, Joseph P; Rendahl, Aaron; Reicks, Marla

    2015-01-01

    To test the effectiveness of behavioral economics strategies for increasing vegetable intake, variety, and liking among children residing in homes receiving food assistance. A randomized controlled trial with data collected at baseline, once weekly for 6 weeks, and at study conclusion. Family homes. Families with a child (9-12 years) will be recruited through community organizations and randomly assigned to an intervention (n = 36) or control (n = 10) group. The intervention group will incorporate a new behavioral economics strategy during home dinner meal occasions each week for 6 weeks. Strategies are simple and low-cost. The primary dependent variable will be the child's dinner meal vegetable consumption, based on weekly reports by caregivers. Fixed independent variables will include the strategy and week of strategy implementation. Secondary dependent variables will include vegetable liking and variety of vegetables consumed based on data collected at baseline and study conclusion. Mean vegetable intake for each strategy across families will be compared using a mixed-model analysis of variance with a random effect for child. Additionally, overall mean changes in vegetable consumption, variety, and liking will be compared between intervention and control groups. Copyright © 2015 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  3. Multinomial model and zero-inflated gamma model to study time spent on leisure time physical activity: an example of ELSA-Brasil.

    PubMed

    Nobre, Aline Araújo; Carvalho, Marilia Sá; Griep, Rosane Härter; Fonseca, Maria de Jesus Mendes da; Melo, Enirtes Caetano Prates; Santos, Itamar de Souza; Chor, Dora

    2017-08-17

    To compare two methodological approaches: the multinomial model and the zero-inflated gamma model, evaluating the factors associated with the practice and amount of time spent on leisure time physical activity. Data collected from 14,823 baseline participants in the Longitudinal Study of Adult Health (ELSA-Brasil - Estudo Longitudinal de Saúde do Adulto ) have been analysed. Regular leisure time physical activity has been measured using the leisure time physical activity module of the International Physical Activity Questionnaire. The explanatory variables considered were gender, age, education level, and annual per capita family income. The main advantage of the zero-inflated gamma model over the multinomial model is that it estimates mean time (minutes per week) spent on leisure time physical activity. For example, on average, men spent 28 minutes/week longer on leisure time physical activity than women did. The most sedentary groups were young women with low education level and income. The zero-inflated gamma model, which is rarely used in epidemiological studies, can give more appropriate answers in several situations. In our case, we have obtained important information on the main determinants of the duration of leisure time physical activity. This information can help guide efforts towards the most vulnerable groups since physical inactivity is associated with different diseases and even premature death.
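    The zero-inflated (two-part) gamma idea can be illustrated without covariates: model the probability of any activity and the gamma-distributed duration among the active separately, then combine them for the overall mean minutes per week. The sketch below uses synthetic data and an intercept-only version of the model; the paper's analysis additionally regresses both parts on gender, age, education level, and income.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Synthetic weekly leisure-time physical activity (minutes): many exact zeros
# plus right-skewed positive durations, mimicking the structure in the abstract.
n = 5000
active = rng.random(n) < 0.6                       # 60% report any activity
minutes = np.where(active, rng.gamma(shape=1.5, scale=80.0, size=n), 0.0)

# Two-part ("zero-inflated gamma") summary: P(Y > 0) and the gamma part of
# Y | Y > 0 are modelled separately, then combined for the overall mean.
p_any = np.mean(minutes > 0)
positives = minutes[minutes > 0]
shape, loc, scale = stats.gamma.fit(positives, floc=0)   # gamma fit to positives
mean_if_active = shape * scale

overall_mean = p_any * mean_if_active
print(f"P(any activity) = {p_any:.2f}")
print(f"mean minutes | active = {mean_if_active:.1f}")
print(f"overall mean = {overall_mean:.1f} (empirical {minutes.mean():.1f})")
```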

  4. Rightfulness of Summation Cut-Offs in the Albedo Problem with Gaussian Fluctuations of the Density of Scatterers

    NASA Astrophysics Data System (ADS)

    Selim, M. M.; Bezák, V.

    2003-06-01

    The one-dimensional version of the radiative transfer problem (i.e. the so-called rod model) is analysed with a Gaussian random extinction function σ(x). Then the optical length X = ∫_0^L σ(x) dx is a Gaussian random variable. The transmission and reflection coefficients, T(X) and R(X), are taken as infinite series. When these series (and also the series representing T^2(X), R^2(X), R(X)T(X), etc.) are averaged, term by term, according to the Gaussian statistics, the series become divergent after averaging. As was shown in a former paper by the authors (in Acta Physica Slovaca (2003)), a rectification can be managed when a `modified' Gaussian probability density function is used, equal to zero for X < 0 and proportional to the standard Gaussian probability density for X > 0. In the present paper, the authors put forward an alternative, showing that if the m.s.r. of X is sufficiently small in comparison with the mean value of X, the standard Gaussian averaging works well provided that the summation in the series representing the variable T^(m-j)(X)R^j(X) (m = 1, 2, ..., j = 1, ..., m) is truncated at a well-chosen finite term. The authors exemplify their analysis by some numerical calculations.

  5. Evaluating collective significance of climatic trends: A comparison of methods on synthetic data

    NASA Astrophysics Data System (ADS)

    Huth, Radan; Dubrovský, Martin

    2017-04-01

    The common approach to determine whether climatic trends are significantly different from zero is to conduct individual (local) tests at each single site (station or gridpoint). Whether the number of sites where the trends are significantly non-zero can or cannot occur by chance is almost never evaluated in trend studies. That is, collective (global) significance of trends is ignored. We compare three approaches to evaluating collective statistical significance of trends at a network of sites, using the following statistics: (i) the number of successful local tests (a successful test means here a test in which the null hypothesis of no trend is rejected); this is a standard way of assessing collective significance in various applications in atmospheric sciences; (ii) the smallest p-value among the local tests (Walker test); and (iii) the counts of positive and negative trends regardless of their magnitudes and local significance. The third approach is a new procedure that we propose; the rationale behind it is that it is reasonable to assume that the prevalence of one sign of trends at individual sites is indicative of a high confidence in the trend not being zero, regardless of the (in)significance of individual local trends. A potentially large amount of information contained in trends that are not locally significant, which are typically deemed irrelevant and neglected, is thus not lost and is retained in the analysis. In this contribution we examine the feasibility of the proposed way of significance testing on synthetic data, produced by a multi-site stochastic generator, and compare it with the two other ways of assessing collective significance, which are well established now. The synthetic dataset, mimicking annual mean temperature on an array of stations (or gridpoints), is constructed assuming a given statistical structure characterized by (i) spatial separation (density of the station network), (ii) local variance, (iii) temporal and spatial autocorrelations, and (iv) the trend magnitude. The probabilistic distributions of the three test statistics (null distributions) and critical values of the tests are determined from multiple realizations of the synthetic dataset, in which no trend is imposed at each site (that is, any trend is a result of random fluctuations only). The procedure is then evaluated by determining the type II error (the probability of failing to detect a true trend) in the presence of a trend with a known magnitude, for which the synthetic dataset with an imposed spatially uniform non-zero trend is used. A sensitivity analysis is conducted for various combinations of the trend magnitude and spatial autocorrelation.
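    The proposed sign-count statistic is simplest to see in the independent-sites case, where the number of positive local trends is Binomial(n, 1/2) under the no-trend null; with spatial correlation the null distribution must instead be simulated, which is exactly what the multi-site stochastic generator is for. The sketch below is that simplified illustration on arbitrary synthetic data, not the study's generator.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Toy data: 40 sites, 60 years of annual mean temperature with a small common
# trend plus independent noise (no spatial correlation, for clarity).
n_sites, n_years, trend = 40, 60, 0.005
years = np.arange(n_years)
data = trend * years + rng.normal(scale=1.0, size=(n_sites, n_years))

# Local least-squares trend at each site (most will not be locally significant).
slopes = np.polyfit(years, data.T, deg=1)[0]

# Collective significance from the signs alone: under the no-trend null with
# independent sites, the number of positive slopes is Binomial(n_sites, 1/2).
# With spatially correlated sites, the null distribution would have to come
# from multiple realizations of a multi-site stochastic generator instead.
k_positive = int(np.sum(slopes > 0))
p_global = stats.binomtest(k_positive, n_sites, p=0.5, alternative='greater').pvalue
print(k_positive, "positive slopes out of", n_sites, "-> global p =", p_global)
```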

  6. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Coroneos, Rula; Patnaik, Surya N.

    2011-01-01

    A stochastic optimization methodology (SDO) has been developed to design airframe structural components made of metallic and composite materials. The design method accommodates uncertainties in load, strength, and material properties that are defined by distribution functions with mean values and standard deviations. A response parameter, like a failure mode, has become a function of reliability. The primitive variables like thermomechanical loads, material properties, and failure theories, as well as variables like depth of beam or thickness of a membrane, are considered random parameters with specified distribution functions defined by mean values and standard deviations.

  7. Simultaneous monitoring of static and dynamic intracranial pressure parameters from two separate sensors in patients with cerebral bleeds: comparison of findings

    PubMed Central

    2012-01-01

    Background We recently reported that in an experimental setting the zero pressure level of solid intracranial pressure (ICP) sensors can be altered by electrostatics discharges. Changes in the zero pressure level would alter the ICP level (mean ICP); whether spontaneous changes in mean ICP happen in clinical settings is not known. This can be addressed by comparing the ICP parameters level and waveform of simultaneous ICP signals. To this end, we retrieved our recordings in patients with cerebral bleeds wherein the ICP had been recorded simultaneously from two different sensors. Materials and Methods: During a time period of 10 years, 17 patients with cerebral bleeds were monitored with two ICP sensors simultaneously; sensor 1 was always a solid sensor while Sensor 2 was a solid -, a fluid - or an air-pouch sensor. The simultaneous signals were analyzed with automatic identification of the cardiac induced ICP waves. The output was determined in consecutive 6-s time windows, both with regard to the static parameter mean ICP and the dynamic parameters (mean wave amplitude, MWA, and mean wave rise time, MWRT). Differences in mean ICP, MWA and MWRT between the two sensors were determined. Transfer functions between the sensors were determined to evaluate how sensors reproduce the ICP waveform. Results Comparing findings in two solid sensors disclosed major differences in mean ICP in 2 of 5 patients (40%), despite marginal differences in MWA, MWRT, and linear phase magnitude and phase. Qualitative assessment of trend plots of mean ICP and MWA revealed shifts and drifts of mean ICP in the clinical setting. The transfer function analysis comparing the solid sensor with either the fluid or air-pouch sensors revealed more variable transfer function magnitude and greater differences in the ICP waveform derived indices. Conclusions Simultaneous monitoring of ICP using two solid sensors may show marked differences in static ICP but close to identity in dynamic ICP waveforms. This indicates that shifts in ICP baseline pressure (sensor zero level) occur clinically; trend plots of the ICP parameters also confirm this. Solid sensors are superior to fluid – and air pouch sensors when evaluating the dynamic ICP parameters. PMID:22958653

  8. Leveraging prognostic baseline variables to gain precision in randomized trials

    PubMed Central

    Colantuoni, Elizabeth; Rosenblum, Michael

    2015-01-01

    We focus on estimating the average treatment effect in a randomized trial. If baseline variables are correlated with the outcome, then appropriately adjusting for these variables can improve precision. An example is the analysis of covariance (ANCOVA) estimator, which applies when the outcome is continuous, the quantity of interest is the difference in mean outcomes comparing treatment versus control, and a linear model with only main effects is used. ANCOVA is guaranteed to be at least as precise as the standard unadjusted estimator, asymptotically, under no parametric model assumptions and also is locally semiparametric efficient. Recently, several estimators have been developed that extend these desirable properties to more general settings that allow any real-valued outcome (e.g., binary or count), contrasts other than the difference in mean outcomes (such as the relative risk), and estimators based on a large class of generalized linear models (including logistic regression). To the best of our knowledge, we give the first simulation study in the context of randomized trials that compares these estimators. Furthermore, our simulations are not based on parametric models; instead, our simulations are based on resampling data from completed randomized trials in stroke and HIV in order to assess estimator performance in realistic scenarios. We provide practical guidance on when these estimators are likely to provide substantial precision gains and describe a quick assessment method that allows clinical investigators to determine whether these estimators could be useful in their specific trial contexts. PMID:25872751
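    The precision gain from ANCOVA-type adjustment is easy to demonstrate by simulation: regress the outcome on treatment alone and then on treatment plus the prognostic baseline variable, and compare the standard errors of the treatment coefficient. The data-generating model below is an arbitrary illustration, not one of the stroke or HIV trials resampled in the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)

# Simulated trial: a baseline variable strongly predicts the outcome, and
# treatment adds a constant effect of 1.0.
n = 400
baseline = rng.normal(size=n)
treat = rng.integers(0, 2, size=n)
outcome = 1.0 * treat + 2.0 * baseline + rng.normal(size=n)

# Unadjusted: difference in means via regression on treatment only.
unadj = sm.OLS(outcome, sm.add_constant(treat)).fit()

# ANCOVA: adjust for the prognostic baseline variable (main effects only).
X = sm.add_constant(np.column_stack([treat, baseline]))
ancova = sm.OLS(outcome, X).fit()

print("unadjusted effect %.2f (SE %.3f)" % (unadj.params[1], unadj.bse[1]))
print("ANCOVA effect     %.2f (SE %.3f)" % (ancova.params[1], ancova.bse[1]))
```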

  9. Synthesis, Characterization and Reactivity of Nanostructured Zero-Valent Iron Particles for Degradation of Azo Dyes

    NASA Astrophysics Data System (ADS)

    Mikhailov, Ivan; Levina, Vera; Leybo, Denis; Masov, Vsevolod; Tagirov, Marat; Kuznetsov, Denis

    Nanostructured zero-valent iron (NSZVI) particles were synthesized by the method of ferric ion reduction with sodium borohydride with subsequent drying and passivation at room temperature in technical grade nitrogen. The obtained sample was characterized by means of X-ray powder diffraction, scanning electron microscopy, transmission electron microscopy and dynamic light scattering studies. The prepared NSZVI particles represent 100-200nm aggregates, which consist of 20-30nm iron nanoparticles in zero-valent oxidation state covered by thin oxide shell. The reactivity of the NSZVI sample, as the removal efficiency of refractory azo dyes, was investigated in this study. Two azo dye compounds, namely, orange G and methyl orange, are commonly detected in waste water of textile production. Experimental variables such as NSZVI dosage, initial dye concentration and solution pH were investigated. The kinetic rates of degradation of both dyes by NSZVI increased with the decrease of solution pH from 10 to 3 and with the increase of NSZVI dosage, but decreased with the increase of initial dye concentration. The removal efficiencies achieved for both orange G and methyl orange were higher than 90% after 80min of treatment.

  10. Morinda citrifolia (Noni) as an Anti-Inflammatory Treatment in Women with Primary Dysmenorrhoea: A Randomised Double-Blind Placebo-Controlled Trial.

    PubMed

    Fletcher, H M; Dawkins, J; Rattray, C; Wharfe, G; Reid, M; Gordon-Strachan, G

    2013-01-01

    Introduction. Noni (Morinda citrifolia) has been used for many years as an anti-inflammatory agent. We tested the efficacy of Noni in women with dysmenorrhea. Method. We did a prospective randomized double-blind placebo-controlled trial in 100 university students aged 18 years and older, followed over three menstrual cycles. Patients were invited to participate and randomly assigned to receive 400 mg Noni capsules or placebo. They were assessed for baseline demographic variables such as age, parity, and BMI. They were also assessed before and after treatment, for pain, menstrual blood loss, and laboratory variables: ESR, hemoglobin, and packed cell volume. Results. Of the 1027 women screened, 100 eligible women were randomized. Of the women completing the study, 42 had been randomized to Noni and 38 to placebo. There were no significant differences in any of the variables at randomization. There were also no significant differences in mean bleeding score or pain score at randomization. Both bleeding and pain scores gradually improved in both groups as the women were observed over three menstrual cycles; however, the improvement was not significantly different in the Noni group when compared to the controls. Conclusion. Noni did not show a reduction in menstrual pain or bleeding when compared to placebo.

  11. Morinda citrifolia (Noni) as an Anti-Inflammatory Treatment in Women with Primary Dysmenorrhoea: A Randomised Double-Blind Placebo-Controlled Trial

    PubMed Central

    Fletcher, H. M.; Dawkins, J.; Rattray, C.; Wharfe, G.; Reid, M.; Gordon-Strachan, G.

    2013-01-01

    Introduction. Noni (Morinda citrifolia) has been used for many years as an anti-inflammatory agent. We tested the efficacy of Noni in women with dysmenorrhea. Method. We did a prospective randomized double-blind placebo-controlled trial in 100 university students aged 18 years and older, followed over three menstrual cycles. Patients were invited to participate and randomly assigned to receive 400 mg Noni capsules or placebo. They were assessed for baseline demographic variables such as age, parity, and BMI. They were also assessed before and after treatment, for pain, menstrual blood loss, and laboratory variables: ESR, hemoglobin, and packed cell volume. Results. Of the 1027 women screened, 100 eligible women were randomized. Of the women completing the study, 42 had been randomized to Noni and 38 to placebo. There were no significant differences in any of the variables at randomization. There were also no significant differences in mean bleeding score or pain score at randomization. Both bleeding and pain scores gradually improved in both groups as the women were observed over three menstrual cycles; however, the improvement was not significantly different in the Noni group when compared to the controls. Conclusion. Noni did not show a reduction in menstrual pain or bleeding when compared to placebo. PMID:23431314

  12. Obtaining orthotropic elasticity tensor using entries zeroing method.

    NASA Astrophysics Data System (ADS)

    Gierlach, Bartosz; Danek, Tomasz

    2017-04-01

    A generally anisotropic elasticity tensor obtained from measurements can be represented by a tensor belonging to one of eight material symmetry classes. Knowledge of the symmetry class and orientation is helpful for describing the physical properties of a medium. For each non-trivial symmetry class except isotropy this problem is nonlinear. A common method of obtaining an effective tensor is choosing its non-trivial symmetry class and minimizing the Frobenius norm between the measured and effective tensor in the same coordinate system. A global optimization algorithm has to be used to determine the best rotation of the tensor. In this contribution, we propose a new approach to obtaining the optimal tensor, under the assumption that it is orthotropic (or at least has a shape similar to the orthotropic one). In the orthotropic form of the tensor, 24 out of 36 entries are zeros. The idea is to minimize the sum of squared entries which are supposed to be equal to zero through a rotation calculated with an optimization algorithm - in this case the Particle Swarm Optimization (PSO) algorithm. Quaternions were used to parametrize rotations in 3D space to improve computational efficiency. In order to avoid a choice of local minima we apply PSO several times, and only if we obtain similar results the third time do we consider the value correct and finish the computations. To analyze the obtained results the Monte Carlo method was used. After thousands of single runs of the PSO optimization, we obtained values of the quaternion parts and plotted them. The points concentrate in several regions of the graph, following a regular pattern. This suggests the existence of a more complex symmetry in the analyzed tensor. Then thousands of realizations of a generally anisotropic tensor were generated - each tensor entry was replaced with a random value drawn from a normal distribution with mean equal to the measured tensor entry and standard deviation equal to that of the measurement. Each of these tensors was the subject of PSO-based optimization delivering the quaternion for the optimal rotation. Computations were parallelized with OpenMP to decrease the computational time, which enables different tensors to be processed by different threads. As a result, the distributions of the rotated tensor entry values were obtained. For the entries which were to be zeroed we observe almost normal distributions with mean equal to zero, or sums of two normal distributions with means of opposite sign. Non-zero entries show different distributions with two or three maxima. Analysis of the obtained results shows that the described method produces consistent values of the quaternions used to rotate the tensors. Despite a less complex target function in the optimization process compared to the common approach, the entries zeroing method provides results which can be applied to obtain an orthotropic tensor with good reliability. A modification of the method could also produce a tool for obtaining effective tensors belonging to other symmetry classes. This research was supported by the Polish National Science Center under contract No. DEC-2013/11/B/ST10/0472.
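
    A minimal sketch of the entries-zeroing idea follows, under stated assumptions: the "measured" tensor below is synthetic, and scipy's general-purpose minimizer with random restarts stands in for the Particle Swarm Optimization used by the authors; the quaternion parametrization and the 24 zero entries of the orthotropic Voigt matrix follow the description above.

```python
# Sketch only: minimize the sum of squared "should-be-zero" Voigt entries over
# rotations parametrized by a quaternion. PSO is replaced here by scipy.optimize
# with random restarts, and the test tensor is synthetic, not measured data.
import numpy as np
from scipy.optimize import minimize

VOIGT = {(0,0):0, (1,1):1, (2,2):2, (1,2):3, (2,1):3, (0,2):4, (2,0):4, (0,1):5, (1,0):5}
PAIRS = [(0,0), (1,1), (2,2), (1,2), (0,2), (0,1)]
# Upper-triangle Voigt entries that vanish for an orthotropic tensor
# (12 of them, i.e. 24 of the 36 entries once symmetry is counted).
ZERO_ENTRIES = [(i, j) for i in range(5) for j in range(max(i + 1, 3), 6)]

def quat_to_rot(q):
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def rotate(C, R):
    return np.einsum('ia,jb,kc,ld,abcd->ijkl', R, R, R, R, C)

def to_voigt(C):
    M = np.empty((6, 6))
    for I, (i, j) in enumerate(PAIRS):
        for J, (k, l) in enumerate(PAIRS):
            M[I, J] = C[i, j, k, l]
    return M

def objective(q, C):
    M = to_voigt(rotate(C, quat_to_rot(q)))
    return sum(M[i, j] ** 2 for i, j in ZERO_ENTRIES)

def full_tensor(voigt6):
    C = np.empty((3, 3, 3, 3))
    for i in range(3):
        for j in range(3):
            for k in range(3):
                for l in range(3):
                    C[i, j, k, l] = voigt6[VOIGT[(i, j)], VOIGT[(k, l)]]
    return C

# Synthetic "measurement": an orthotropic stiffness rotated away from its natural frame.
ortho = np.diag([230.0, 180.0, 150.0, 40.0, 50.0, 60.0])
ortho[0, 1] = ortho[1, 0] = 70.0; ortho[0, 2] = ortho[2, 0] = 60.0; ortho[1, 2] = ortho[2, 1] = 55.0
C_measured = rotate(full_tensor(ortho), quat_to_rot(np.array([0.9, 0.2, -0.3, 0.1])))

rng = np.random.default_rng(1)
best = min((minimize(objective, rng.normal(size=4), args=(C_measured,)) for _ in range(10)),
           key=lambda r: r.fun)
print("residual sum of squared 'zero' entries:", best.fun)
```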

  13. A randomized pilot study comparing zero-calorie alternate-day fasting to daily caloric restriction in adults with obesity

    PubMed Central

    Catenacci, Victoria A.; Pan, Zhaoxing; Ostendorf, Danielle; Brannon, Sarah; Gozansky, Wendolyn S.; Mattson, Mark P.; Martin, Bronwen; MacLean, Paul S.; Melanson, Edward L.; Donahoo, William Troy

    2016-01-01

    Objective To evaluate the safety and tolerability of alternate-day fasting (ADF) and to compare changes in weight, body composition, lipids, and insulin sensitivity index (Si) to those produced by a standard weight loss diet, moderate daily caloric restriction (CR). Methods Adults with obesity (BMI ≥30 kg/m2, age 18-55) were randomized to either zero-calorie ADF (n=14) or CR (-400 kcal/day, n=12) for 8 weeks. Outcomes were measured at the end of the 8-week intervention and after 24 weeks of unsupervised follow-up. Results No adverse effects were attributed to ADF, and 93% completed the 8-week ADF protocol. At 8 weeks, ADF achieved a 376 kcal/day greater energy deficit; however, there were no significant between-group differences in change in weight (mean±SE; ADF -8.2±0.9 kg, CR -7.1±1.0 kg), body composition, lipids, or Si. After 24 weeks of unsupervised follow-up, there were no significant differences in weight regain; however, changes from baseline in % fat mass and lean mass were more favorable in ADF. Conclusions ADF is a safe and tolerable approach to weight loss. ADF produced similar changes in weight, body composition, lipids and Si at 8 weeks and did not appear to increase risk for weight regain 24 weeks after completing the intervention. PMID:27569118

  14. Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach.

    PubMed

    Mohammadi, Tayeb; Kheiri, Soleiman; Sedehi, Morteza

    2016-01-01

    Recognizing the factors affecting the number of blood donations and the number of blood deferrals has a major impact on blood transfusion. There is a positive correlation between the variables "number of blood donation" and "number of blood deferral": as the number of returns for donation increases, so does the number of blood deferrals. On the other hand, due to the fact that many donors never return to donate, there is an extra zero frequency for both of the above-mentioned variables. In this study, in order to account for the correlation and to explain the frequency of the excessive zeros, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donations and the number of blood deferrals. The data were analyzed using the Bayesian approach, applying noninformative priors, in the presence and absence of covariates. The parameters of the model (the correlation, the zero-inflation parameter, and the regression coefficients) were estimated through MCMC simulation. Finally, the double-Poisson, bivariate Poisson, and bivariate zero-inflated Poisson models were fitted to the data and compared using the deviance information criterion (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models.
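
    A minimal sketch, under stated assumptions, of one common construction of the bivariate zero-inflated Poisson used above: a common-shock bivariate Poisson plus an extra point mass at (0, 0). The parameter values are illustrative and the Bayesian MCMC fitting step is not reproduced here.

```python
# Simulate pairs (donations, deferrals) from a bivariate zero-inflated Poisson built
# from a shared Poisson component (inducing positive correlation) and a structural-zero
# class of donors who never return. All parameter values are illustrative assumptions.
import numpy as np

def simulate_bzip(n, lam1, lam2, lam0, p_zero, rng):
    structural_zero = rng.random(n) < p_zero          # donors who never return
    x0 = rng.poisson(lam0, n)                         # shared component -> correlation
    y1 = np.where(structural_zero, 0, rng.poisson(lam1, n) + x0)
    y2 = np.where(structural_zero, 0, rng.poisson(lam2, n) + x0)
    return y1, y2

rng = np.random.default_rng(42)
y1, y2 = simulate_bzip(10_000, lam1=2.0, lam2=0.5, lam0=0.3, p_zero=0.4, rng=rng)
print("share of (0,0) pairs:", np.mean((y1 == 0) & (y2 == 0)))
print("correlation:", np.corrcoef(y1, y2)[0, 1])
```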

  15. Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach

    PubMed Central

    Mohammadi, Tayeb; Sedehi, Morteza

    2016-01-01

    Recognizing the factors affecting the number of blood donations and the number of blood deferrals has a major impact on blood transfusion. There is a positive correlation between the variables “number of blood donation” and “number of blood deferral”: as the number of returns for donation increases, so does the number of blood deferrals. On the other hand, due to the fact that many donors never return to donate, there is an extra zero frequency for both of the above-mentioned variables. In this study, in order to account for the correlation and to explain the frequency of the excessive zeros, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donations and the number of blood deferrals. The data were analyzed using the Bayesian approach, applying noninformative priors, in the presence and absence of covariates. The parameters of the model (the correlation, the zero-inflation parameter, and the regression coefficients) were estimated through MCMC simulation. Finally, the double-Poisson, bivariate Poisson, and bivariate zero-inflated Poisson models were fitted to the data and compared using the deviance information criterion (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models. PMID:27703493

  16. Criticality of the mean-field spin-boson model: boson state truncation and its scaling analysis

    NASA Astrophysics Data System (ADS)

    Hou, Y.-H.; Tong, N.-H.

    2010-11-01

    The spin-boson model has nontrivial quantum phase transitions at zero temperature induced by the spin-boson coupling. The bosonic numerical renormalization group (BNRG) study of the critical exponents β and δ of this model is hampered by the effects of boson Hilbert space truncation. Here we analyze the mean-field spin-boson model to figure out the scaling behavior of the magnetization under the cutoff of boson states N_b. We find that the truncation is a strongly relevant operator with respect to the Gaussian fixed point in 0 < s < 1/2 and incurs the deviation of the exponents from the classical values. The magnetization at zero bias near the critical point is described by a generalized homogeneous function (GHF) of two variables, τ = α - α_c and x = 1/N_b. The universal function has a double-power form and the powers are obtained analytically as well as numerically. Similarly, m(α = α_c) is found to be a GHF of γ and x. In the regime s > 1/2, the truncation produces no effect. Implications of these findings for the BNRG study are discussed.
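
    For readers unfamiliar with the term, a generalized homogeneous function of two variables is defined by the scaling relation sketched below; the exponent symbols a_tau and a_x are generic placeholders, not the specific values derived in the paper.

```latex
% Generic definition of a generalized homogeneous function (GHF) of two variables:
% there exist exponents a_\tau and a_x such that, for every \lambda > 0,
m\!\left(\lambda^{a_\tau}\tau,\;\lambda^{a_x}x\right) \;=\; \lambda\, m(\tau, x),
\qquad \tau = \alpha - \alpha_c, \quad x = 1/N_b .
```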

  17. 40 CFR Appendix D to Part 52 - Determination of Sulfur Dioxide Emissions From Stationary Sources by Continuous Monitors

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... confidence interval using Equations D-1 and D-2. Report the zero drift as the sum of the absolute mean value... confidence interval using Equations D-1 and D-2. Report the zero drift as the sum of the absolute mean and... operation when the pollutant concentration at the time for the measurement is zero. 1.6Calibration Drift...

  18. 40 CFR Appendix D to Part 52 - Determination of Sulfur Dioxide Emissions From Stationary Sources by Continuous Monitors

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... confidence interval using Equations D-1 and D-2. Report the zero drift as the sum of the absolute mean value... confidence interval using Equations D-1 and D-2. Report the zero drift as the sum of the absolute mean and... operation when the pollutant concentration at the time for the measurement is zero. 1.6Calibration Drift...

  19. 40 CFR Appendix D to Part 52 - Determination of Sulfur Dioxide Emissions From Stationary Sources by Continuous Monitors

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... confidence interval using Equations D-1 and D-2. Report the zero drift as the sum of the absolute mean value... confidence interval using Equations D-1 and D-2. Report the zero drift as the sum of the absolute mean and... operation when the pollutant concentration at the time for the measurement is zero. 1.6Calibration Drift...

  20. Conceptualizing and Testing Random Indirect Effects and Moderated Mediation in Multilevel Models: New Procedures and Recommendations

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Preacher, Kristopher J.; Gil, Karen M.

    2006-01-01

    The authors propose new procedures for evaluating direct, indirect, and total effects in multilevel models when all relevant variables are measured at Level 1 and all effects are random. Formulas are provided for the mean and variance of the indirect and total effects and for the sampling variances of the average indirect and total effects.…

  1. Cost Benefit Analysis of a Utility Scale Waste-to-Energy/Concentrating Solar Power Hybrid Facility at Fort Bliss

    DTIC Science & Technology

    2012-06-01

    installations for Energy, Waste, and Water. This means Fort Bliss will strive to become Net Zero Energy, Net Zero Waste , and Net Zero Water in the coming...years. Net Zero Energy requires Fort Bliss to produce as much energy on-installation as it consumes annually. Net Zero Waste aims to reduce, reuse...become Net Zero Energy and Net Zero Waste by 2020. A WtE facility actually goes well beyond Fort Bliss’ Net Zero Energy mission. That mission

  2. Counseling Outcomes from 1990 to 2008 for School-Age Youth with Depression: A Meta-Analysis

    ERIC Educational Resources Information Center

    Erford, Bradley T.; Erford, Breann M.; Lattanzi, Gina; Weller, Janet; Schein, Hallie; Wolf, Emily; Hughes, Meredith; Darrow, Jenna; Savin-Murphy, Janet; Peacock, Elizabeth

    2011-01-01

    Clinical trials exploring the effectiveness of counseling and psychotherapy in treatment of depression in school-age youth composed this meta-analysis. Results were synthesized using a random effects model for mean difference and mean gain effect size estimates. No effects of moderating variables were evident. Counseling and psychotherapy are…

  3. SETI and SEH (Statistical Equation for Habitables)

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-01-01

    The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book "Habitable planets for man" (1964). In this paper, we first provide the statistical generalization of the original and by now too simplistic Dole equation. In other words, a product of ten positive numbers is now turned into the product of ten positive random variables. This we call the SEH, an acronym standing for "Statistical Equation for Habitables". The mathematical structure of the SEH is then derived. The proof is based on the central limit theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that the new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the lognormal distribution. By construction, the mean value of this lognormal distribution is the total number of habitable planets as given by the statistical Dole equation. But now we also derive the standard deviation, the mode, the median and all the moments of this new lognormal NHab random variable. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. An application of our SEH then follows. The (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cube root of NHab. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies in 2008. Data Enrichment Principle. It should be noticed that ANY positive number of random variables in the SEH is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor becomes known to scientists. This capability to make room for more future factors in the SEH we call the "Data Enrichment Principle", and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. A practical example is then given of how our SEH works numerically. We work out in detail the case where each of the ten random variables is uniformly distributed around its own mean value as given by Dole back in 1964 and has an assumed standard deviation of 10%. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ± 200 million, and the average distance between any two nearby habitable planets should be about 88 ± 40 light years. Finally, we match our SEH results against the results of the Statistical Drake Equation that we introduced in our 2008 IAC presentation.
As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). And the average distance between any two nearby habitable planets turns out to be much smaller than the average distance between any two neighboring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an ET average distance about 20 times higher than the average distance between any couple of adjacent habitable planets.
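
    The CLT step that leads to the lognormal law can be summarized in one line; the notation below (ten factors X_i) follows the abstract, and mu and sigma^2 simply denote the limiting mean and variance of the log.

```latex
% Product of independent positive factors -> sum of logs -> approximately Gaussian log
% -> approximately lognormal product:
N_{\mathrm{Hab}} = \prod_{i=1}^{10} X_i
\quad\Longrightarrow\quad
\ln N_{\mathrm{Hab}} = \sum_{i=1}^{10} \ln X_i
\;\approx\; \mathcal{N}(\mu, \sigma^{2})
\quad\Longrightarrow\quad
N_{\mathrm{Hab}} \sim \mathrm{Lognormal}(\mu, \sigma^{2}).
```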

  4. Characteristics of wind waves in shallow tidal basins and how they affect bed shear stress, bottom erosion, and the morphodynamic evolution of coupled marsh and mudflat landforms

    NASA Astrophysics Data System (ADS)

    Tommasini, Laura; Carniello, Luca; Goodwin, Guillaume; Mudd, Simon M.; Matticchio, Bruno; D'Alpaos, Andrea

    2017-04-01

    Wind-wave induced erosion is one of the main processes controlling the morphodynamic evolution of shallow tidal basins, because wind waves promote the erosion of subtidal platforms, tidal flats and salt marshes. Our study considered zero-, one- and two-dimensional wave models. First, we analyzed the relations between wave parameters, depth and bed shear stress with constant and variable wave period, considering two zero-dimensional models based on the Young and Verhagen (1996) and Carniello et al. (2005, 2011) approaches. The first is an empirical method that computes the wave height and the variable wave period from wind velocity, fetch and water depth. The second is based on the solution of the wave action conservation equation; we used this second approach to compute the bottom shear stress and wave height, considering variable and constant (T = 2 s) wave periods. Second, we compared the wave spectral model SWAN with a fully coupled Wind-Wave Tidal Model applied to a 1D rectangular domain. These models describe both the growth and propagation of wind waves. Finally, we applied the two-dimensional Wind Wave Tidal Model (WWTM) to six different configurations of the Venice lagoon considering the same boundary conditions, and we evaluated the spatial variation of mean wave power density. The analysis with zero-dimensional models shows that the effects of the different model assumptions on the wave period and on the wave height computation cannot be neglected. In particular, the relationships between bottom shear stress and water depth have different shapes. Two results emerge: first, the differences are larger for small depths; second, the maximum values reached with the Young and Verhagen (1996) approach are greater than the maximum values obtained with the WWTM approach. The results obtained with the two-dimensional models suggest that the wave height differs particularly for small fetch, which could be due to the different formulation of the wave period. Finally, the application of WWTM to the entire lagoon basin underlines an increase of the mean power density over the last four centuries, in particular in the central-southern part of the lagoon between the Chioggia and Malamocco inlets.

  5. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    PubMed

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust with regard to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real-life cases.
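
    A schematic of the kind of objective described above, written here only to contrast it with the classical LS-SVM; the exact formulation, weighting scheme and constraints used in the paper may differ, and gamma_1, gamma_2 are assumed trade-off parameters.

```latex
% Schematic only: penalize both the (non-zero) mean and the variance of the errors
% e_i = y_i - w^{T}\varphi(x_i) - b, with \bar{e} their sample mean,
\min_{w,\,b}\;\; \tfrac{1}{2}\lVert w\rVert^{2}
\;+\; \gamma_{1}\,\bar{e}^{\,2}
\;+\; \gamma_{2}\,\frac{1}{N}\sum_{i=1}^{N}\left(e_i-\bar{e}\right)^{2},
% whereas the classical LS-SVM minimizes \tfrac{1}{2}\lVert w\rVert^{2} + \tfrac{\gamma}{2}\sum_i e_i^{2}.
```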

  6. Small violations of Bell inequalities for multipartite pure random states

    NASA Astrophysics Data System (ADS)

    Drumond, Raphael C.; Duarte, Cristhiano; Oliveira, Roberto I.

    2018-05-01

    For any finite number of parts, measurements, and outcomes in a Bell scenario, we estimate the probability of random N-qudit pure states to substantially violate any Bell inequality with uniformly bounded coefficients. We prove that under some conditions on the local dimension, the probability to find any significant amount of violation goes to zero exponentially fast as the number of parts goes to infinity. In addition, we also prove that if the number of parts is at least 3, this probability also goes to zero as the local Hilbert space dimension goes to infinity.

  7. The variable charge of andisols as affected by nanoparticles of rock phosphate and phosphate solubilizing bacteria

    NASA Astrophysics Data System (ADS)

    Arifin, M.; Nurlaeny, N.; Devnita, R.; Fitriatin, B. N.; Sandrawati, A.; Supriatna, Y.

    2018-02-01

    Andisols have great potential as agricultural land; however, they have high phosphorus retention, variable charge characteristics and a high value of zero net charge (pH0). The research aimed to study the effects of nanoparticles of rock phosphate (NPRP) and biofertilizer (phosphate solubilizing bacteria/PSB) on soil pH, pH0 (zero point of charge, ZPC) and organic-C in one subgroup of Andisols, namely Acrudoxic Durudands, Ciater Region, West Java. The research was conducted from October 2016 to February 2017 in the Soil Physics Laboratory and the Laboratory of Soil Chemistry and Fertility, Soil Science Department, Faculty of Agriculture, Universitas Padjadjaran. The experiment used a completely randomized factorial design consisting of two factors and three replications. The first factor was the nanoparticle rock phosphate dose (0, 25, 50 and 75 g/1 kg soil) and the second factor was the biofertilizer dose (1 g/1 kg soil and without biofertilizer). There were 8 treatment combinations with 3 replications, giving 24 experimental plots. The results showed that, in general, NPRP and biofertilizer decreased soil pH throughout the incubation periods. There is an interaction between nanoparticles of rock phosphate and biofertilizer in decreasing pH0 in the first month of incubation, but after the 4-month incubation period pH0 increased in the NPRP treatments. The interaction between 75 g nanoparticles of rock phosphate and 1 g biofertilizer/1 kg soil in the fourth month of incubation decreased soil organic-C to 3.35%.

  8. 40 CFR 180.5 - Zero tolerances.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Zero tolerances. 180.5 Section 180.5... EXEMPTIONS FOR PESTICIDE CHEMICAL RESIDUES IN FOOD Definitions and Interpretative Regulations § 180.5 Zero tolerances. A zero tolerance means that no amount of the pesticide chemical may remain on the raw...

  9. 40 CFR 180.5 - Zero tolerances.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Zero tolerances. 180.5 Section 180.5... EXEMPTIONS FOR PESTICIDE CHEMICAL RESIDUES IN FOOD Definitions and Interpretative Regulations § 180.5 Zero tolerances. A zero tolerance means that no amount of the pesticide chemical may remain on the raw...

  10. 40 CFR 180.5 - Zero tolerances.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Zero tolerances. 180.5 Section 180.5... EXEMPTIONS FOR PESTICIDE CHEMICAL RESIDUES IN FOOD Definitions and Interpretative Regulations § 180.5 Zero tolerances. A zero tolerance means that no amount of the pesticide chemical may remain on the raw...

  11. Phenomenological picture of fluctuations in branching random walks

    NASA Astrophysics Data System (ADS)

    Mueller, A. H.; Munier, S.

    2014-10-01

    We propose a picture of the fluctuations in branching random walks, which leads to predictions for the distribution of a random variable that characterizes the position of the bulk of the particles. We also interpret the 1/√t correction to the average position of the rightmost particle of a branching random walk for large times t ≫ 1, computed by Ebert and Van Saarloos, as fluctuations on top of the mean-field approximation of this process with a Brunet-Derrida cutoff at the tip that simulates discreteness. Our analytical formulas successfully compare to numerical simulations of a particular model of a branching random walk.

  12. A probabilistic model of a porous heat exchanger

    NASA Technical Reports Server (NTRS)

    Agrawal, O. P.; Lin, X. A.

    1995-01-01

    This paper presents a probabilistic one-dimensional finite element model for heat transfer processes in porous heat exchangers. The Galerkin approach is used to develop the finite element matrices. Some of the submatrices are asymmetric due to the presence of the flow term. The Neumann expansion is used to write the temperature distribution as a series of random variables, and the expectation operator is applied to obtain the mean and deviation statistics. To demonstrate the feasibility of the formulation, a one-dimensional model of heat transfer phenomenon in superfluid flow through a porous media is considered. Results of this formulation agree well with the Monte-Carlo simulations and the analytical solutions. Although the numerical experiments are confined to parametric random variables, a formulation is presented to account for the random spatial variations.
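
    The Neumann-expansion step mentioned above can be sketched as follows; the operator notation (K_0 for the deterministic part, ΔK for the random perturbation) is assumed here, not taken from the paper.

```latex
% Neumann series for the response of a randomly perturbed system K = K_0 + \Delta K
% (valid when the spectral radius of K_0^{-1}\Delta K is below one):
u = (K_0 + \Delta K)^{-1} f
  = \sum_{k=0}^{\infty} \left(-K_0^{-1}\Delta K\right)^{k} K_0^{-1} f ,
% so mean and deviation statistics follow by applying the expectation operator to a
% truncated series in the random variables entering \Delta K.
```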

  13. Helical tomotherapy setup variations in canine nasal tumor patients immobilized with a bite block.

    PubMed

    Kubicek, Lyndsay N; Seo, Songwon; Chappell, Richard J; Jeraj, Robert; Forrest, Lisa J

    2012-01-01

    The purpose of our study was to compare setup variation in four degrees of freedom (vertical, longitudinal, lateral, and roll) between canine nasal tumor patients immobilized with a mattress and bite block, versus a mattress alone. Our secondary aim was to define a clinical target volume (CTV) to planning target volume (PTV) expansion margin based on our mean systematic error values associated with nasal tumor patients immobilized by a mattress and bite block. We evaluated six parameters for setup corrections: systematic error, random error, patient-to-patient variation in systematic errors, the magnitude of patient-specific random errors (root mean square [RMS]), distance error, and the variation of setup corrections from zero shift. The variations in all parameters were statistically smaller in the group immobilized by a mattress and bite block. The mean setup corrections in the mattress and bite block group ranged from 0.91 mm to 1.59 mm for the translational errors and was 0.5° for roll. Although most veterinary radiation facilities do not have access to image-guided radiotherapy (IGRT), we identified a need for more rigid fixation, established the value of adding IGRT to veterinary radiation therapy, and defined the CTV-PTV setup error margin for canine nasal tumor patients immobilized in a mattress and bite block. © 2012 Veterinary Radiology & Ultrasound.

  14. Random attractor of non-autonomous stochastic Boussinesq lattice system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Min, E-mail: zhaomin1223@126.com; Zhou, Shengfan, E-mail: zhoushengfan@yahoo.com

    2015-09-15

    In this paper, we first consider the existence of a tempered random attractor for a second-order non-autonomous stochastic lattice dynamical system of nonlinear Boussinesq equations affected by time-dependent coupling coefficients, deterministic forces and multiplicative white noise. Then, we establish the upper semicontinuity of the random attractors as the intensity of the noise approaches zero.

  15. Permutation modulation for quantization and information reconciliation in CV-QKD systems

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar

    2017-08-01

    This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples (the Gaussian variable is zero-mean, which is de facto the case) tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal to Noise Ratio (SNR) and exacerbating the problem. Here we propose to use Permutation Modulation (PM) as a means of quantization of Gaussian vectors at Alice and Bob over a d-dimensional space with d ≫ 1. The goal is to achieve the necessary coding efficiency to extend the achievable range of continuous variable QKD by quantizing over larger and larger dimensions. A fractional bit rate per sample is easily achieved using PM at very reasonable computational cost. Order statistics are used extensively throughout the development, from generation of the seed vector in PM to analysis of error rates associated with the signs of the Gaussian samples at Alice and Bob as a function of the magnitude of the samples observed at Bob.
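
    A minimal sketch of Permutation Modulation quantization of a zero-mean Gaussian vector, under assumptions: the two-level seed vector below is illustrative (a real design would optimize it), and the error-correction and privacy-amplification layers of the CV-QKD protocol are not shown.

```python
# Quantize a Gaussian vector to the permutation of a fixed seed vector whose rank
# order matches the input; for a sorted seed this is the nearest PM codeword in
# Euclidean distance. Seed choice and dimension are illustrative assumptions.
import numpy as np
from math import lgamma

def pm_quantize(x, seed_sorted):
    """Return the permutation of seed_sorted with the same rank order as x."""
    q = np.empty_like(seed_sorted)
    q[np.argsort(x)] = seed_sorted      # smallest seed value goes where x is smallest, etc.
    return q

d = 16
rng = np.random.default_rng(7)
x = rng.normal(size=d)                                   # zero-mean Gaussian samples
seed = np.sort(np.concatenate([-np.ones(d // 2), np.ones(d // 2)]))  # d/2 copies of -1 and +1
codeword = pm_quantize(x, seed)

# Rate of this code: log2 of the number of distinct permutations, per sample.
rate_per_sample = (lgamma(d + 1) - 2 * lgamma(d // 2 + 1)) / np.log(2) / d
print("codeword:", codeword)
print("rate: %.3f bits/sample" % rate_per_sample)        # fractional bit rate, as in the abstract
```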

  16. Effect of a stress management program on subjects with neck pain: A pilot randomized controlled trial.

    PubMed

    Metikaridis, T Damianos; Hadjipavlou, Alexander; Artemiadis, Artemios; Chrousos, George; Darviri, Christina

    2016-05-20

    Studies have shown that stress is implicated in the cause of neck pain (NP). The purpose of this study is to examine the effect of a simple, zero-cost stress management program on patients suffering from NP. This was a parallel-group randomized clinical trial. People suffering from chronic non-specific NP were randomly assigned to an eight-week stress management program (N = 28), including diaphragmatic breathing and progressive muscle relaxation, or to a no-intervention control condition (N = 25). Self-report measures were used for the evaluation of various variables at the beginning and at the end of the eight-week monitoring period. Descriptive and inferential statistical methods were used for the analysis. At the end of the monitoring period, the intervention group showed a statistically significant reduction in stress and anxiety (p = 0.03, p = 0.01), reported stress-related symptoms (p = 0.003), percentage of disability due to NP (p = 0.000) and NP intensity (p = 0.002). At the same time, daily routine satisfaction levels were elevated (p = 0.019). No statistically significant difference was observed in cortisol measurements. Stress management has positive effects on NP patients.

  17. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that the method of evaluating the geometric mean suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are carried out with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the method of using the geometric mean. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
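
    A minimal sketch of the thermodynamic (power-posterior) idea on a toy conjugate-normal model whose marginal likelihood is known in closed form; because the power posterior is Gaussian here, it is sampled directly instead of with Metropolis steps, and all model settings are illustrative assumptions rather than the groundwater application.

```python
# Thermodynamic integration: log m(y) = integral over beta in [0,1] of E_beta[log likelihood],
# where the power posterior at beta is proportional to likelihood^beta * prior.
# Toy model: y_i ~ N(theta, sigma^2) with known sigma, prior theta ~ N(0, tau^2).
import numpy as np

rng = np.random.default_rng(0)
sigma, tau, n = 1.0, 3.0, 50
y = rng.normal(loc=1.5, scale=sigma, size=n)          # synthetic observations

def log_lik(theta):
    """Log-likelihood of the data for each value in a vector of theta samples."""
    return (-0.5 * n * np.log(2 * np.pi * sigma**2)
            - np.sum((y[None, :] - theta[:, None])**2, axis=1) / (2 * sigma**2))

betas = np.linspace(0.0, 1.0, 21)
e_loglik = []
for beta in betas:
    prec = beta * n / sigma**2 + 1 / tau**2           # power posterior is Gaussian here
    mu = beta * np.sum(y) / sigma**2 / prec
    theta = rng.normal(mu, 1 / np.sqrt(prec), size=5000)
    e_loglik.append(log_lik(theta).mean())
e_loglik = np.array(e_loglik)
log_ml_ti = np.sum(np.diff(betas) * (e_loglik[:-1] + e_loglik[1:]) / 2)   # trapezoid rule

# Exact log marginal likelihood of this conjugate model, for comparison.
a = n / sigma**2 + 1 / tau**2
b = np.sum(y) / sigma**2
log_ml_exact = (-0.5 * n * np.log(2 * np.pi * sigma**2) - 0.5 * np.log(2 * np.pi * tau**2)
                + 0.5 * np.log(2 * np.pi / a) - np.sum(y**2) / (2 * sigma**2) + b**2 / (2 * a))

print("thermodynamic integration: %.3f  exact: %.3f" % (log_ml_ti, log_ml_exact))
```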

  18. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    PubMed

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus, if computable, scatterplots of the conditionally independent empirical Bayes predictors from a MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.
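
    A minimal sketch of the simpler "separate models" variant discussed above (not the joint MGLMM): fit a univariate mixed model per outcome, extract the empirical Bayes random intercepts, and associate them in a second stage. The data, variable names and random-intercept-only structure are assumptions.

```python
# Two longitudinal outcomes with correlated subject-level effects; stage 1 fits a
# random-intercept model per outcome, stage 2 correlates the empirical Bayes predictors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_subj, n_obs = 60, 8
subj = np.repeat(np.arange(n_subj), n_obs)
time = np.tile(np.arange(n_obs), n_subj)
u = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n_subj)  # correlated latent effects
y1 = 1.0 + 0.2 * time + u[subj, 0] + rng.normal(scale=1.0, size=subj.size)
y2 = -0.5 + 0.1 * time + u[subj, 1] + rng.normal(scale=1.0, size=subj.size)
df = pd.DataFrame({"subject": subj, "time": time, "y1": y1, "y2": y2})

# Stage 1: separate mixed models, extracting the empirical Bayes random intercepts.
eb = {}
for outcome in ["y1", "y2"]:
    fit = smf.mixedlm(f"{outcome} ~ time", df, groups=df["subject"]).fit()
    eb[outcome] = np.array([re.iloc[0] for re in fit.random_effects.values()])

# Stage 2: associate the EB predictors across outcomes (here, a simple correlation).
print("second-stage correlation of EB intercepts: %.2f" % np.corrcoef(eb["y1"], eb["y2"])[0, 1])
```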

  19. Characterization and Simulation of Gunfire with Wavelets

    DOE PAGES

    Smallwood, David O.

    1999-01-01

    Gunfire is used as an example to show how the wavelet transform can be used to characterize and simulate nonstationary random events when an ensemble of events is available. The structural response to nearby firing of a high-firing rate gun has been characterized in several ways as a nonstationary random process. The current paper will explore a method to describe the nonstationary random process using a wavelet transform. The gunfire record is broken up into a sequence of transient waveforms each representing the response to the firing of a single round. A wavelet transform is performed on each of these records. The gunfire is simulated by generating realizations of records of a single-round firing by computing an inverse wavelet transform from Gaussian random coefficients with the same mean and standard deviation as those estimated from the previously analyzed gunfire record. The individual records are assembled into a realization of many rounds firing. A second-order correction of the probability density function is accomplished with a zero memory nonlinear function. The method is straightforward, easy to implement, and produces a simulated record much like the measured gunfire record.
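
    A minimal sketch of the wavelet-based characterize-and-simulate procedure, using PyWavelets and a synthetic ensemble of decaying noise bursts in place of measured single-round gunfire records; the second-order probability-density correction with a zero-memory nonlinearity is not included.

```python
# Characterize an ensemble of single-round transients by per-coefficient wavelet
# statistics, then simulate new rounds by inverse-transforming Gaussian coefficients
# with the same mean and standard deviation. The ensemble here is synthetic.
import numpy as np
import pywt

rng = np.random.default_rng(11)
n_rounds, n_samp = 32, 256
t = np.arange(n_samp)
ensemble = np.exp(-t / 40.0) * rng.normal(size=(n_rounds, n_samp))   # stand-in transients

coeffs = [pywt.wavedec(rec, "db4", level=4) for rec in ensemble]
n_levels = len(coeffs[0])
mean = [np.mean([c[k] for c in coeffs], axis=0) for k in range(n_levels)]
std = [np.std([c[k] for c in coeffs], axis=0) for k in range(n_levels)]

def simulate_round():
    """One simulated single-round record from Gaussian wavelet coefficients."""
    new_coeffs = [rng.normal(mean[k], std[k]) for k in range(n_levels)]
    return pywt.waverec(new_coeffs, "db4")[:n_samp]

# Assemble a multi-round realization by concatenating simulated single-round records.
simulated_burst = np.concatenate([simulate_round() for _ in range(10)])
print(simulated_burst.shape)
```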

  20. Comparison of ventilator-associated pneumonia (VAP) rates between different ICUs: Implications of a zero VAP rate.

    PubMed

    Sundar, Krishna M; Nielsen, David; Sperry, Paul

    2012-02-01

    Ventilator-associated pneumonia (VAP) is associated with significant morbidity and mortality. Measures to reduce the incidence of VAP have resulted in institutions reporting a zero or near-zero VAP rates. The implications of zero VAP rates are unclear. This study was done to compare outcomes between two intensive care units (ICU) with one of them reporting a zero VAP rate. This study retrospectively compared VAP rates between two ICUs: Utah Valley Regional Medical Center (UVRMC) with 25 ICU beds and American Fork Hospital (AFH) with 9 ICU beds. Both facilities are under the same management and attended by a single group of intensivists. Both ICUs have similar nursing and respiratory staffing patterns. Both ICUs use the same intensive care program for reduction of VAP rates. ICU outcomes between AFH (reporting zero VAP rate) and UVRMC (VAP rate of 2.41/1000 ventilator days) were compared for the years 2007-2008. UVRMC VAP rates during 2007 and 2008 were 2.31/1000 ventilator days and 2.5/1000 ventilator days respectively compared to a zero VAP rate at AFH. The total days of ventilation, mean days of ventilation per patient and mean duration of ICU stay per patient was higher in the UVRMC group as compared to AFH ICU group. There was no significant difference in mean age and APACHE II score between ICU patients at UVRMC and AFH. There was no statistical difference in rates of VAP and mortality between UVRMC and AFH. During comparisons of VAP rate between institutions, a zero VAP rate needs to be considered in the context of overall ventilator days, mean durations of ventilator stay and ICU mortality. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. Finite-sample corrected generalized estimating equation of population average treatment effects in stepped wedge cluster randomized trials.

    PubMed

    Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B

    2017-04-01

    Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo, and it is logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and avoid parametric assumptions about error distributions, therefore, providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equation for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.

  2. A New Approach to Extreme Value Estimation Applicable to a Wide Variety of Random Variables

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.

    1997-01-01

    Designing reliable structures requires an estimate of the maximum and minimum values (i.e., strength and load) that may be encountered in service. Yet designs based on very extreme values (to ensure safety) can result in extra material usage and hence, uneconomic systems. In aerospace applications, severe over-design cannot be tolerated, making it almost mandatory to design closer to the assumed limits of the design random variables. The issue then is predicting extreme values that are practical, i.e., neither too conservative nor non-conservative. Obtaining design values by employing safety factors is well known to often result in overly conservative designs. Safety factor values have historically been selected rather arbitrarily, often lacking a sound rational basis. To answer the question of how safe a design needs to be has led design theorists to probabilistic and statistical methods. The so-called three-sigma approach is one such method and has been described as the first step in utilizing information about the data dispersion. However, this method is based on the assumption that the random variable is dispersed symmetrically about the mean and is essentially limited to normally distributed random variables. Use of this method can therefore result in unsafe or overly conservative design allowables if the common assumption of normality is incorrect.
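
    A quick numerical illustration of the normality caveat (not taken from the paper): the mean-minus-three-sigma design value corresponds to very different tail probabilities for a symmetric variable and for a skewed one with the same mean and standard deviation; both distributions below are illustrative.

```python
# Compare the probability of falling below the "mean - 3 sigma" design value for a
# normal variable and a right-skewed (lognormal) variable with similar mean and SD.
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000
normal = rng.normal(10.0, 2.0, n)                        # symmetric strength variable
skewed = rng.lognormal(mean=2.28, sigma=0.2, size=n)     # skewed, similar mean/SD (illustrative)

for name, x in [("normal", normal), ("skewed", skewed)]:
    design = x.mean() - 3 * x.std()
    print(f"{name:7s}: mean={x.mean():6.2f} sd={x.std():5.2f} "
          f"P(X < mean-3sd) = {np.mean(x < design):.5f}")
```

    The two tail probabilities differ by orders of magnitude, which is exactly why a three-sigma allowable can be either over- or under-conservative when the normality assumption fails.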

  3. Two approximations of the present value distribution of a disability annuity

    NASA Astrophysics Data System (ADS)

    Spreeuw, Jaap

    2006-02-01

    The distribution function of the present value of a cash flow can be approximated by means of a distribution function of a random variable, which is also the present value of a sequence of payments, but with a simpler structure. The corresponding random variable has the same expectation as the random variable corresponding to the original distribution function and is a stochastic upper bound of convex order. A sharper upper bound can be obtained if more information about the risk is available. In this paper, it will be shown that such an approach can be adopted for disability annuities (also known as income protection policies) in a three state model under Markov assumptions. Benefits are payable during any spell of disability whilst premiums are only due whenever the insured is healthy. The quality of the two approximations is investigated by comparing the distributions obtained with the one derived from the algorithm presented in the paper by Hesselager and Norberg [Insurance Math. Econom. 18 (1996) 35-42].

  4. Zero Tolerance: Advantages and Disadvantages. Research Brief

    ERIC Educational Resources Information Center

    Walker, Karen

    2009-01-01

    What are the positives and negatives of zero tolerance? What should be considered when examining a school's program? Although there are no definitive definitions of zero tolerance, two commonly used ones are as follows: "Zero tolerance means that a school will automatically and severely punish a student for a variety of infractions" (American Bar…

  5. Predicting Blood Lactate Concentration and Oxygen Uptake from sEMG Data during Fatiguing Cycling Exercise.

    PubMed

    Ražanskas, Petras; Verikas, Antanas; Olsson, Charlotte; Viberg, Per-Arne

    2015-08-19

    This article presents a study of the relationship between electromyographic (EMG) signals from vastus lateralis, rectus femoris, biceps femoris and semitendinosus muscles, collected during fatiguing cycling exercises, and other physiological measurements, such as blood lactate concentration and oxygen consumption. In contrast to the usual practice of picking one particular characteristic of the signal, e.g., the median or mean frequency, multiple variables were used to obtain a thorough characterization of EMG signals in the spectral domain. Based on these variables, linear and non-linear (random forest) models were built to predict blood lactate concentration and oxygen consumption. The results showed that mean and median frequencies are sub-optimal choices for predicting these physiological quantities in dynamic exercises, as they did not exhibit significant changes over the course of our protocol and only weakly correlated with blood lactate concentration or oxygen uptake. Instead, the root mean square of the original signal and backward difference, as well as parameters describing the tails of the EMG power distribution were the most important variables for these models. Coefficients of determination ranging from R^2 = 0.77 to R^2 = 0.98 (for blood lactate) and from R^2 = 0.81 to R^2 = 0.97 (for oxygen uptake) were obtained when using random forest regressors.
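
    A minimal sketch of the kind of random-forest regression and cross-validated R^2 reported above, with synthetic data standing in for the EMG spectral variables; the feature set and data-generating process are assumptions, not the study's data.

```python
# Random-forest regression of a continuous physiological target on several spectral-type
# features, scored by cross-validated R^2. Features and target are simulated stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 400
X = rng.normal(size=(n, 6))          # e.g. RMS, backward-difference RMS, spectral-tail parameters
lactate = (2.0 + 1.5 * X[:, 0] + 0.8 * X[:, 1] ** 2
           + 0.5 * X[:, 2] * X[:, 3] + rng.normal(scale=0.5, size=n))

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, lactate, cv=5, scoring="r2")
print("cross-validated R^2: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```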

  6. The Turbulent/Non-Turbulent Interface Bounding a Far-Wake

    NASA Technical Reports Server (NTRS)

    Bisset, David K.; Hunt, Julian C. R.; Rogers, Michael M.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    The velocity fields of a turbulent wake behind a flat plate obtained from the direct numerical simulations of Moser et al. are used to study the structure of the flow in the intermittent zone where there are, alternately, regions of fully turbulent flow and non-turbulent velocity fluctuations either side of a thin randomly moving interface. Comparisons are made with a wake that is 'forced' by amplifying initial velocity fluctuations. There is also a random temperature field T in the flow; T varies between constant values of 0.0 and 1.0 on the sides of the wake. The value of the Reynolds number based on the centreplane mean velocity defect and halfwidth b of the wake is Re approx. = 2000. It is found that the thickness of the continuous interface is about equal to 0.07b, whereas the amplitude of fluctuations of the instantaneous interface displacement y(sub I)(t) is an order of magnitude larger, being about 0.5b. This explains why the mean statistics of vorticity in the intermittent zone can be calculated in terms of the probability distribution of y(sub I) and the instantaneous discontinuity in vorticity across the interface. When plotted as functions of y - y(sub I), the conditional mean velocity (U) and temperature (T) profiles show sharp jumps Delta(U) and Delta(T) at the interface adjacent to a thick zone where (U) and (T) vary much more slowly. Statistics for the vorticity and velocity variances, available in such detail only from DNS data, show how streamwise and spanwise components of vorticity are generated by vortex stretching in the bulges of the interface. Flow fields around the interface, analyzed in terms of the local streamline pattern, confirm previous results that the advancement of the vortical interface into the irrotational flow is driven by large-scale eddy motion. It is argued that because this is an inviscid mechanism the entrainment process is not sensitive to the value of Re, and that small-scale nibbling only plays a subsidiary role. While mean Reynolds stresses decrease gradually in the intermittent zone, conditional stresses are found to decrease sharply towards zero at the interface. Using one-point turbulence models applied to either unconditional or conditional statistics for the turbulent region and then averaged, the entrainment rate E(sub b) would, if calculated exactly, be zero. But if computed with standard computational methods, E(sub b) would be non-zero because of numerical diffusion. It is concluded that the current practice in statistical models of approximating entrainment by a diffusion process is computationally arbitrary and physically incorrect. An analysis shows how E(sub b) is related to Delta(U) and the jump in shear stress at the interface, and correspondingly to Delta(T) and the heat flux.

  7. Zero-Bounded Limits as a Special Case of the Squeeze Theorem for Evaluating Single-Variable and Multivariable Limits

    ERIC Educational Resources Information Center

    Gkioulekas, Eleftherios

    2013-01-01

    Many limits, typically taught as examples of applying the "squeeze" theorem, can be evaluated more easily using the proposed zero-bounded limit theorem. The theorem applies to functions defined as a product of a factor going to zero and a factor that remains bounded in some neighborhood of the limit. This technique is immensely useful…
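
    Two standard textbook instances of the argument (not necessarily the article's own examples): a factor that goes to zero multiplied by a factor that stays bounded near the limit point.

```latex
% Single-variable and multivariable examples of the zero-bounded limit argument:
\lim_{x \to 0} x \sin\!\frac{1}{x} = 0
\quad\text{since } \Bigl|x \sin\tfrac{1}{x}\Bigr| \le |x| \to 0 ;
\qquad
\lim_{(x,y)\to(0,0)} \frac{x^{2}y}{x^{2}+y^{2}} = 0
\quad\text{since } \Bigl|\tfrac{x^{2}y}{x^{2}+y^{2}}\Bigr| \le |y| \to 0 .
```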

  8. The impact of comorbidities on productivity loss in asthma patients.

    PubMed

    Ehteshami-Afshar, Solmaz; FitzGerald, J Mark; Carlsten, Christopher; Tavakoli, Hamid; Rousseau, Roxanne; Tan, Wan Cheng; Rolf, J Douglass; Sadatsafavi, Mohsen

    2016-08-26

    Health-related productivity loss is an important, yet overlooked, component of the economic burden of disease in asthma patients of working age. We aimed to evaluate the effect of comorbidities on productivity loss among adult asthma patients. In a random sample of employed adults with asthma, we measured comorbidities using a validated self-administered comorbidity questionnaire (SCQ), as well as productivity loss, including absenteeism and presenteeism, using validated instruments. Productivity loss was measured in 2010 Canadian dollars ($). We used a two-part regression model to estimate the adjusted difference in productivity loss across levels of comorbidity, controlling for potential confounding variables. A total of 284 adults with a mean age of 47.8 years (SD 11.8) were included (68% women). The mean SCQ score was 2.47 (SD 2.97, range 0-15) and the average productivity loss was $317.5 per week (SD $858.8). A one-unit increase in the SCQ score was associated with a 14% increase in the odds of reporting productivity loss (95% CI 1.02-1.28), and a 9.0% increase in productivity loss among those who reported any loss of productivity (95% CI 1.01-1.18). A person with a SCQ score of 15 had almost $1000 per week more productivity loss than a patient with a SCQ of zero. Our study deepens the evidence base on the burden of asthma by demonstrating that comorbidities substantially decrease productivity in working asthma patients. Asthma management strategies must be cognizant of the role of comorbidities to properly incorporate the effect of comorbidity and productivity loss in estimating the benefit of disease management strategies.
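
    A minimal sketch of a two-part model of the kind described: a logistic part for reporting any productivity loss and a linear part for the (log) amount among those with a positive loss. The data, coefficients and helper function below are simulated and hypothetical; the published estimates came from survey data with additional covariates.

```python
# Two-part model: part 1 models P(loss > 0) with logistic regression, part 2 models
# E[log loss | loss > 0] with OLS; expected loss combines both parts. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 284
scq = rng.integers(0, 16, size=n).astype(float)                     # comorbidity score 0-15
any_loss = rng.random(n) < 1 / (1 + np.exp(-(-0.5 + 0.13 * scq)))
log_loss = np.where(any_loss, 4.5 + 0.09 * scq + rng.normal(scale=1.0, size=n), np.nan)

X = sm.add_constant(scq)
part1 = sm.Logit(any_loss.astype(float), X).fit(disp=0)             # P(loss > 0)
part2 = sm.OLS(log_loss[any_loss], X[any_loss]).fit()               # E[log loss | loss > 0]

def expected_loss(s):
    """Expected weekly loss at SCQ score s: probability part times lognormal back-transform."""
    p = 1 / (1 + np.exp(-(part1.params @ [1.0, s])))
    return p * np.exp(part2.params @ [1.0, s] + 0.5 * part2.mse_resid)

print("expected weekly loss at SCQ=0 vs SCQ=15: %.0f vs %.0f" % (expected_loss(0), expected_loss(15)))
```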

  9. Stochastic-analytic approach to the calculation of multiply scattered lidar returns

    NASA Astrophysics Data System (ADS)

    Gillespie, D. T.

    1985-08-01

    The problem of calculating the nth-order backscattered power of a laser firing short pulses at time zero into a homogeneous cloud with specified scattering and absorption parameters is discussed. In the problem, backscattered power is measured at any time greater than zero by a small receiver colocated with the laser and fitted with a forward-looking conical baffle. Theoretical calculations are made on the premise that the laser pulse is composed of propagating photons which are scattered and absorbed by the cloud particles in a probabilistic manner. The effect of polarization was not taken into account in the calculations. An exact formula is derived for the backscattered power, based on direct physical arguments together with a rigorous analysis of random variables. It is shown that, for values of n greater than or equal to 2, the obtained formula is a well-behaved (3n-4)-dimensional integral. The computational feasibility of the integral formula is demonstrated for a model cloud of isotropically scattering particles. An analytical formula is obtained for a value of n = 2, and a Monte Carlo program was used to obtain numerical results for values of n = 3, ..., 6.

  10. Two zero-flow pressure intercepts exist in autoregulating isolated skeletal muscle.

    PubMed

    Braakman, R; Sipkema, P; Westerhof, N

    1990-06-01

    The autoregulating vascular bed of the isolated canine extensor digitorum longus muscle was investigated for the possible existence of two positive zero-flow pressure axis intercepts, a tone-dependent one and a tone-independent one. An isolated preparation, perfused with autologous blood, was used to exclude effects of collateral flow and nervous and humoral regulation while autoregulation was left intact [mean autoregulatory gain 0.50 +/- 0.24 (SD)]. In a first series of experiments, the steady-state (zero flow) pressure axis intercept [mean 8.9 +/- 2.6 (SD) mmHg, tone independent] and the instantaneous (zero flow) pressure axis intercept [mean 28.5 +/- 9.9 (SD) mmHg, tone dependent] were determined as a function of venous pressure (range: 0-45 mmHg) and were independent of venous pressure until the venous pressure exceeded their respective values. Beyond this point the relations between the venous pressure and the steady-state and instantaneous pressure axis intercept followed the line of identity. The findings agree with the predictions of the vascular waterfall model. In a second series it was shown by means of administration of vasoactive drugs that the instantaneous pressure axis intercept is tone dependent, whereas the steady-state pressure axis intercept is not. It is concluded that there is a (proximal) tone-dependent zero-flow pressure at the arteriolar level and a (distal) tone-independent zero-flow pressure at the venous level.

  11. Variable versus conventional lung protective mechanical ventilation during open abdominal surgery: study protocol for a randomized controlled trial.

    PubMed

    Spieth, Peter M; Güldner, Andreas; Uhlig, Christopher; Bluth, Thomas; Kiss, Thomas; Schultz, Marcus J; Pelosi, Paolo; Koch, Thea; Gama de Abreu, Marcelo

    2014-05-02

    General anesthesia usually requires mechanical ventilation, which is traditionally accomplished with constant tidal volumes in volume- or pressure-controlled modes. Experimental studies suggest that the use of variable tidal volumes (variable ventilation) recruits lung tissue, improves pulmonary function and reduces systemic inflammatory response. However, it is currently not known whether patients undergoing open abdominal surgery might benefit from intraoperative variable ventilation. The PROtective VARiable ventilation trial ('PROVAR') is a single center, randomized controlled trial enrolling 50 patients planned for open abdominal surgery expected to last longer than 3 hours. PROVAR compares conventional (non-variable) lung protective ventilation (CV) with variable lung protective ventilation (VV) regarding pulmonary function and inflammatory response. The primary endpoint of the study is the forced vital capacity on the first postoperative day. Secondary endpoints include further lung function tests, plasma cytokine levels, spatial distribution of ventilation assessed by means of electrical impedance tomography and postoperative pulmonary complications. We hypothesize that VV improves lung function and reduces systemic inflammatory response compared to CV in patients receiving mechanical ventilation during general anesthesia for open abdominal surgery longer than 3 hours. PROVAR is the first randomized controlled trial aiming at the intra- and postoperative effects of VV on lung function. This study may help to define the role of VV during general anesthesia requiring mechanical ventilation. Clinicaltrials.gov NCT01683578 (registered on September 3, 2012).

  12. Ranking of patient and surgeons' perspectives for endpoints in randomized controlled trials--lessons learned from the POVATI trial [ISRCTN 60734227].

    PubMed

    Fischer, Lars; Deckert, Andreas; Diener, Markus K; Zimmermann, Johannes B; Büchler, Markus W; Seiler, Christoph M

    2011-10-01

    Surgical trials focus mainly on mortality and morbidity rates, which may not be the most important endpoints from the patient's perspective. The expectations and needs of patients enrolled in clinical trials can be analyzed using a procedure called ranking. Within the Postsurgical Pain Outcome of Vertical and Transverse Abdominal Incision randomized trial (POVATI), the perspectives of participating patients and surgeons were assessed as well as the influence of the surgical intervention on patients' needs. All included patients of the POVATI trial were asked preoperatively and postoperatively to rank predetermined outcome variables concerning the upcoming surgical procedure (e.g., pain, complication, cosmetic result) hierarchically according to their importance. Preoperatively, the surgeons were asked to do the same. One hundred eighty-two out of 200 randomized patients (71 females, 111 males; mean age 59 years) returned the ranking questionnaire preoperatively and 152 patients (67 females, 85 males; mean age 60 years) on the day of discharge. There were no differences between the two groups with respect to the distribution of ranking variables (p > 0.05). Thirty-five surgeons (7 residents, 6 fellows, and 22 consultants) completed the same ranking questionnaire. The four most important ranking variables, in order, for both patients and surgeons were death, avoidance of postoperative complications, avoidance of intraoperative complications, and pain. Surgeons ranked the variable "cosmetic result" as significantly more important than patients did (p = 0.034, Fisher's exact test). Patients and surgeons did not differ in ranking predetermined outcomes in the POVATI trial. Only the variable "cosmetic result" is significantly more important from the surgeon's than from the patient's perspective. Ranking of outcomes might be a beneficial tool and can be a proper addition to RCTs.

  13. [Computer-assisted education in problem-solving in neurology; a randomized educational study].

    PubMed

    Weverling, G J; Stam, J; ten Cate, T J; van Crevel, H

    1996-02-24

    To determine the effect of computer-based medical teaching (CBMT) as a supplementary method to teach clinical problem-solving during the clerkship in neurology. Randomized controlled blinded study. Academic Medical Centre, Amsterdam, the Netherlands. 103 Students were assigned at random to a group with access to CBMT and a control group. CBMT consisted of 20 computer-simulated patients with neurological diseases, and was permanently available during five weeks to students in the CBMT group. The ability to recognize and solve neurological problems was assessed with two free-response tests, scored by two blinded observers. The CBMT students scored significantly better on the test related to the CBMT cases (mean score 7.5 on a zero to 10 point scale; control group 6.2; p < 0.001). There was no significant difference on the control test not related to the problems practised with CBMT. CBMT can be an effective method for teaching clinical problem-solving, when used as a supplementary teaching facility during a clinical clerkship. The increased ability to solve problems learned by CBMT had no demonstrable effect on the performance with other neurological problems.

  14. Using Multigroup-Multiphase Latent State-Trait Models to Study Treatment-Induced Changes in Intra-Individual State Variability: An Application to Smokers' Affect.

    PubMed

    Geiser, Christian; Griffin, Daniel; Shiffman, Saul

    2016-01-01

    Sometimes, researchers are interested in whether an intervention, experimental manipulation, or other treatment causes changes in intra-individual state variability. The authors show how multigroup-multiphase latent state-trait (MG-MP-LST) models can be used to examine treatment effects with regard to both mean differences and differences in state variability. The approach is illustrated based on a randomized controlled trial in which N = 338 smokers were randomly assigned to nicotine replacement therapy (NRT) vs. placebo prior to quitting smoking. We found that post quitting, smokers in both the NRT and placebo group had significantly reduced intra-individual affect state variability with respect to the affect items calm and content relative to the pre-quitting phase. This reduction in state variability did not differ between the NRT and placebo groups, indicating that quitting smoking may lead to a stabilization of individuals' affect states regardless of whether or not individuals receive NRT.

  15. Using Multigroup-Multiphase Latent State-Trait Models to Study Treatment-Induced Changes in Intra-Individual State Variability: An Application to Smokers' Affect

    PubMed Central

    Geiser, Christian; Griffin, Daniel; Shiffman, Saul

    2016-01-01

    Sometimes, researchers are interested in whether an intervention, experimental manipulation, or other treatment causes changes in intra-individual state variability. The authors show how multigroup-multiphase latent state-trait (MG-MP-LST) models can be used to examine treatment effects with regard to both mean differences and differences in state variability. The approach is illustrated based on a randomized controlled trial in which N = 338 smokers were randomly assigned to nicotine replacement therapy (NRT) vs. placebo prior to quitting smoking. We found that post quitting, smokers in both the NRT and placebo group had significantly reduced intra-individual affect state variability with respect to the affect items calm and content relative to the pre-quitting phase. This reduction in state variability did not differ between the NRT and placebo groups, indicating that quitting smoking may lead to a stabilization of individuals' affect states regardless of whether or not individuals receive NRT. PMID:27499744

  16. Zero-crossing statistics for non-Markovian time series.

    PubMed

    Nyberg, Markus; Lizana, Ludvig; Ambjörnsson, Tobias

    2018-03-01

    In applications spanning from image analysis and speech recognition to energy dissipation in turbulence and time-to-failure of fatigued materials, researchers and engineers want to calculate how often a stochastic observable crosses a specific level, such as zero. At first glance this problem looks simple, but it is in fact theoretically very challenging, and therefore few exact results exist. One exception is the celebrated Rice formula that gives the mean number of zero crossings in a fixed time interval of a zero-mean Gaussian stationary process. In this study we use the so-called independent interval approximation to go beyond Rice's result and derive analytic expressions for all higher-order zero-crossing cumulants and moments. Our results agree well with simulations for the non-Markovian autoregressive model.

  17. Zero-crossing statistics for non-Markovian time series

    NASA Astrophysics Data System (ADS)

    Nyberg, Markus; Lizana, Ludvig; Ambjörnsson, Tobias

    2018-03-01

    In applications spanning from image analysis and speech recognition to energy dissipation in turbulence and time-to-failure of fatigued materials, researchers and engineers want to calculate how often a stochastic observable crosses a specific level, such as zero. At first glance this problem looks simple, but it is in fact theoretically very challenging, and therefore few exact results exist. One exception is the celebrated Rice formula that gives the mean number of zero crossings in a fixed time interval of a zero-mean Gaussian stationary process. In this study we use the so-called independent interval approximation to go beyond Rice's result and derive analytic expressions for all higher-order zero-crossing cumulants and moments. Our results agree well with simulations for the non-Markovian autoregressive model.
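
    As a small numerical companion to the mean zero-crossing count discussed above, the sketch below simulates a zero-mean stationary Gaussian AR(1) sequence and compares its empirical crossing rate with the exact discrete-time Gaussian result, P(sign change between consecutive samples) = arccos(rho_1)/pi. This illustrates only the mean crossing rate, not the higher-order cumulants derived in the paper; the AR coefficient and sample size are arbitrary.

```python
# Zero-crossing rate of a zero-mean stationary Gaussian AR(1) sequence,
# compared with the exact orthant-probability result arccos(rho_1)/pi.
import numpy as np

rng = np.random.default_rng(2)
a = 0.7                      # AR(1) coefficient, so lag-1 autocorrelation rho_1 = a
n = 200_000
x = np.empty(n)
x[0] = rng.normal(0, 1 / np.sqrt(1 - a**2))   # draw from the stationary distribution
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal()

empirical_rate = np.mean(np.sign(x[:-1]) != np.sign(x[1:]))
exact_rate = np.arccos(a) / np.pi
print(f"empirical crossing rate per step:       {empirical_rate:.4f}")
print(f"exact Gaussian result arccos(rho_1)/pi: {exact_rate:.4f}")
```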

  18. Pilot Study on the Applicability of Variance Reduction Techniques to the Simulation of a Stochastic Combat Model

    DTIC Science & Technology

    1987-09-01

    inverse transform method to obtain unit-mean exponential random variables, where Vi is the jth random number in the sequence of a stream of uniform random...numbers. The inverse transform method is discussed in the simulation textbooks listed in the reference section of this thesis. X(b,c,d) = - P(b,c,d...Defender ,C * P(b,c,d) We again use the inverse transform method to obtain the conditions for an interim event to occur and to induce the change in
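
    The snippet above references the inverse transform method for generating unit-mean exponential random variables; a minimal sketch of that standard technique (not the thesis's combat-model code) is given below.

```python
# Inverse transform method: if U ~ Uniform(0,1), then X = -ln(1 - U) is
# exponentially distributed with unit mean, since P(X > x) = e^(-x).
import numpy as np

rng = np.random.default_rng(3)
u = rng.random(100_000)        # stream of uniform random numbers
x = -np.log(1.0 - u)           # unit-mean exponential variates (1-U avoids log(0))

print("sample mean (should be near 1):    ", x.mean())
print("sample variance (should be near 1):", x.var())
```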

  19. Variable selection for zero-inflated and overdispersed data with application to health care demand in Germany

    PubMed Central

    Wang, Zhu; Shuangge, Ma; Wang, Ching-Yun

    2017-01-01

    In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD) and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only has more accurate or at least comparable estimation, but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. PMID:26059498
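
    The data features that motivate the ZINB model, excess zeros and overdispersion, can be generated with a few lines of NumPy, as sketched below. The sketch only simulates zero-inflated counts and contrasts them with a Poisson benchmark; it does not reproduce the paper's penalized EM estimation, which is implemented in the authors' mpath R package, and all parameter values are made up.

```python
# Simulate zero-inflated negative binomial (ZINB) counts and show the two
# hallmarks the paper addresses: excess zeros and overdispersion.
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
pi_zero = 0.3          # probability of a structural zero
mu, theta = 3.0, 1.5   # NB mean and dispersion (size) parameters

# NB as a gamma-Poisson mixture: lambda ~ Gamma(theta, mu/theta), y ~ Poisson(lambda)
lam = rng.gamma(shape=theta, scale=mu / theta, size=n)
y_nb = rng.poisson(lam)
structural_zero = rng.random(n) < pi_zero
y = np.where(structural_zero, 0, y_nb)

print("observed zero fraction:             ", np.mean(y == 0))
print("zero fraction of Poisson, same mean:", np.exp(-y.mean()))
print("mean:", y.mean(), " variance:", y.var())   # variance >> mean: overdispersion
```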

  20. Characterizing pixel and point patterns with a hyperuniformity disorder length

    NASA Astrophysics Data System (ADS)

    Chieco, A. T.; Dreyfus, R.; Durian, D. J.

    2017-09-01

    We introduce the concept of a "hyperuniformity disorder length" h that controls the variance of volume fraction fluctuations for randomly placed windows of fixed size. In particular, fluctuations are determined by the average number of particles within a distance h from the boundary of the window. We first compute special expectations and bounds in d dimensions, and then illustrate the range of behavior of h versus window size L by analyzing several different types of simulated two-dimensional pixel patterns—where particle positions are stored as a binary digital image in which pixels have value zero if empty and one if they contain a particle. The first are random binomial patterns, where pixels are randomly flipped from zero to one with probability equal to area fraction. These have long-ranged density fluctuations, and simulations confirm the exact result h = L/2. Next we consider vacancy patterns, where a fraction f of particles on a lattice are randomly removed. These also display long-range density fluctuations, but with h = (L/2)(f/d) for small f, and h = L/2 for f → 1. And finally, for a hyperuniform system with no long-range density fluctuations, we consider "Einstein patterns," where each particle is independently displaced from a lattice site by a Gaussian-distributed amount. For these, at large L, h approaches a constant equal to about half the root-mean-square displacement in each dimension. Then we turn to gray-scale pixel patterns that represent simulated arrangements of polydisperse particles, where the volume of a particle is encoded in the value of its central pixel. And we discuss the continuum limit of point patterns, where pixel size vanishes. In general, we thus propose to quantify particle configurations not just by the scaling of the density fluctuation spectrum but rather by the real-space spectrum of h(L) versus L. We call this approach "hyperuniformity disorder length spectroscopy".

  1. Characterizing pixel and point patterns with a hyperuniformity disorder length.

    PubMed

    Chieco, A T; Dreyfus, R; Durian, D J

    2017-09-01

    We introduce the concept of a "hyperuniformity disorder length" h that controls the variance of volume fraction fluctuations for randomly placed windows of fixed size. In particular, fluctuations are determined by the average number of particles within a distance h from the boundary of the window. We first compute special expectations and bounds in d dimensions, and then illustrate the range of behavior of h versus window size L by analyzing several different types of simulated two-dimensional pixel patterns-where particle positions are stored as a binary digital image in which pixels have value zero if empty and one if they contain a particle. The first are random binomial patterns, where pixels are randomly flipped from zero to one with probability equal to area fraction. These have long-ranged density fluctuations, and simulations confirm the exact result h=L/2. Next we consider vacancy patterns, where a fraction f of particles on a lattice are randomly removed. These also display long-range density fluctuations, but with h=(L/2)(f/d) for small f, and h=L/2 for f→1. And finally, for a hyperuniform system with no long-range density fluctuations, we consider "Einstein patterns," where each particle is independently displaced from a lattice site by a Gaussian-distributed amount. For these, at large L,h approaches a constant equal to about half the root-mean-square displacement in each dimension. Then we turn to gray-scale pixel patterns that represent simulated arrangements of polydisperse particles, where the volume of a particle is encoded in the value of its central pixel. And we discuss the continuum limit of point patterns, where pixel size vanishes. In general, we thus propose to quantify particle configurations not just by the scaling of the density fluctuation spectrum but rather by the real-space spectrum of h(L) versus L. We call this approach "hyperuniformity disorder length spectroscopy".
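
    The window-variance measurement behind the analysis above can be sketched compactly: count particles in randomly placed square windows of side L for a random binomial pixel pattern and for an "Einstein pattern" (lattice sites with independent Gaussian displacements), and compare how the count variance grows with L. This shows only the qualitative area-law versus boundary-law contrast; the paper's normalization and the conversion to the disorder length h are not reproduced, and the pattern sizes below are arbitrary.

```python
# Count-in-window variance for two 2D point patterns:
#  - binomial pattern: long-range fluctuations, variance grows like L^2
#  - Einstein pattern: hyperuniform, variance grows like L (boundary-dominated)
import numpy as np

rng = np.random.default_rng(5)
side, phi = 200, 0.1

# binomial pixel pattern: each pixel occupied independently with probability phi
occupied = rng.random((side, side)) < phi
xb, yb = np.nonzero(occupied)
pts_binom = np.column_stack([xb, yb]).astype(float)

# Einstein pattern: unit lattice plus independent Gaussian displacements
gx, gy = np.meshgrid(np.arange(side, dtype=float), np.arange(side, dtype=float))
pts_einstein = np.column_stack([gx.ravel(), gy.ravel()]) + rng.normal(0, 0.5, (side * side, 2))

def count_variance(points, L, n_windows=400):
    """Variance of the number of points falling in random L x L windows."""
    corners = rng.uniform(0, side - L, size=(n_windows, 2))
    counts = np.empty(n_windows)
    for i, (x0, y0) in enumerate(corners):
        inside = ((points[:, 0] >= x0) & (points[:, 0] < x0 + L) &
                  (points[:, 1] >= y0) & (points[:, 1] < y0 + L))
        counts[i] = inside.sum()
    return counts.var()

for L in (5, 10, 20, 40):
    print(L, count_variance(pts_binom, L), count_variance(pts_einstein, L))
```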

  2. Nonlinear probabilistic finite element models of laminated composite shells

    NASA Technical Reports Server (NTRS)

    Engelstad, S. P.; Reddy, J. N.

    1993-01-01

    A probabilistic finite element analysis procedure for laminated composite shells has been developed. A total Lagrangian finite element formulation, employing a degenerated 3-D laminated composite shell with the full Green-Lagrange strains and first-order shear deformable kinematics, forms the modeling foundation. The first-order second-moment technique for probabilistic finite element analysis of random fields is employed and results are presented in the form of mean and variance of the structural response. The effects of material nonlinearity are included through the use of a rate-independent anisotropic plasticity formulation with the macroscopic point of view. Both ply-level and micromechanics-level random variables can be selected, the latter by means of the Aboudi micromechanics model. A number of sample problems are solved to verify the accuracy of the procedures developed and to quantify the variability of certain material type/structure combinations. Experimental data is compared in many cases, and the Monte Carlo simulation method is used to check the probabilistic results. In general, the procedure is quite effective in modeling the mean and variance response of the linear and nonlinear behavior of laminated composite shells.
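
    The first-order second-moment step mentioned above can be shown in a generic sketch: propagate the mean and covariance of random inputs through a response function via its gradient at the mean, then compare with Monte Carlo. The response function, means, and covariances below are arbitrary stand-ins, not the laminated-shell model.

```python
# First-order second-moment (FOSM) propagation: for a response g(X) with inputs
# X ~ (mean mu, covariance C), approximate
#   E[g] ~ g(mu)   and   Var[g] ~ grad_g(mu)^T C grad_g(mu).
import numpy as np

def g(x):                                   # stand-in response function
    return x[0] ** 2 * np.sin(x[1]) + 3.0 * x[2]

mu = np.array([1.0, 0.5, 2.0])
C = np.diag([0.04, 0.01, 0.09])             # input covariance (independent inputs)

eps = 1e-6                                  # finite-difference gradient at the mean
grad = np.array([(g(mu + eps * e) - g(mu - eps * e)) / (2 * eps) for e in np.eye(3)])

mean_fosm = g(mu)
var_fosm = grad @ C @ grad

rng = np.random.default_rng(6)              # Monte Carlo check
samples = rng.multivariate_normal(mu, C, size=100_000)
# same formula as g, evaluated in vectorized form over all samples
g_vals = samples[:, 0] ** 2 * np.sin(samples[:, 1]) + 3.0 * samples[:, 2]

print("FOSM mean, variance:       ", mean_fosm, var_fosm)
print("Monte Carlo mean, variance:", g_vals.mean(), g_vals.var())
```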

  3. Parameter estimation method and updating of regional prediction equations for ungaged sites in the desert region of California

    USGS Publications Warehouse

    Barth, Nancy A.; Veilleux, Andrea G.

    2012-01-01

    The U.S. Geological Survey (USGS) is currently updating at-site flood frequency estimates for USGS streamflow-gaging stations in the desert region of California. The at-site flood-frequency analysis is complicated by short record lengths (less than 20 years is common) and numerous zero flows/low outliers at many sites. Estimates of the three parameters (mean, standard deviation, and skew) required for fitting the log Pearson Type 3 (LP3) distribution are likely to be highly unreliable based on the limited and heavily censored at-site data. In a generalization of the recommendations in Bulletin 17B, a regional analysis was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the LP3 distribution. A regional skew value of zero from a previously published report was used with a new estimated mean squared error (MSE) of 0.20. A weighted least squares (WLS) regression method was used to develop both a regional standard deviation and a mean model based on annual peak-discharge data for 33 USGS stations throughout California’s desert region. At-site standard deviation and mean values were determined by using an expected moments algorithm (EMA) method for fitting the LP3 distribution to the logarithms of annual peak-discharge data. Additionally, a multiple Grubbs-Beck (MGB) test, a generalization of the test recommended in Bulletin 17B, was used for detecting multiple potentially influential low outliers in a flood series. The WLS regression found that no basin characteristics could explain the variability of standard deviation. Consequently, a constant regional standard deviation model was selected, resulting in a log-space value of 0.91 with a MSE of 0.03 log units. Yet drainage area was found to be statistically significant at explaining the site-to-site variability in mean. The linear WLS regional mean model based on drainage area had a pseudo-R² of 51 percent and a MSE of 0.32 log units. The regional parameter estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent.
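
    For background on the log-Pearson Type 3 fitting referenced above, a minimal at-site, method-of-moments sketch in the Bulletin 17B spirit is shown below: fit the mean, standard deviation, and skew of the log-transformed annual peaks and read quantiles from a Pearson Type III distribution. The peak values are made up, and the report's EMA fitting, Multiple Grubbs-Beck screening, and regional WLS models are not reproduced.

```python
# At-site log-Pearson Type 3 (LP3) fit by method of moments on log10 peaks,
# then discharge quantiles for selected annual exceedance probabilities (AEP).
import numpy as np
from scipy import stats

peaks = np.array([120., 340., 55., 980., 410., 75., 1500., 260., 640., 90.,
                  305., 720., 48., 830., 150., 400., 1100., 210., 95., 560.])
logq = np.log10(peaks)

mean, std = logq.mean(), logq.std(ddof=1)
skew = stats.skew(logq, bias=False)              # at-site (station) skew

aep = np.array([0.50, 0.10, 0.02, 0.01])         # 50-, 10-, 2-, 1-percent AEP
log_quantiles = stats.pearson3.ppf(1 - aep, skew, loc=mean, scale=std)
for p, q in zip(aep, 10.0 ** log_quantiles):
    print(f"{p * 100:4.0f}% AEP discharge estimate: {q:8.1f}")
```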

  4. 40 CFR 91.314 - Analyzer accuracy and specifications.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... (3) Zero drift. The analyzer zero-response drift during a one-hour period must be less than two percent of full-scale chart deflection on the lowest range used. The zero-response is defined as the mean... calibration or span gas. (2) Noise. The analyzer peak-to-peak response to zero and calibration or span gases...

  5. Amplitude- and rise-time-compensated filters

    DOEpatents

    Nowlin, Charles H.

    1984-01-01

    An amplitude-compensated rise-time-compensated filter for a pulse time-of-occurrence (TOOC) measurement system is disclosed. The filter converts an input pulse, having the characteristics of random amplitudes and random, non-zero rise times, to a bipolar output pulse wherein the output pulse has a zero-crossing time that is independent of the rise time and amplitude of the input pulse. The filter differentiates the input pulse, along the linear leading edge of the input pulse, and subtracts therefrom a pulse fractionally proportional to the input pulse. The filter of the present invention can use discrete circuit components and avoids the use of delay lines.
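
    The key property claimed above, a zero-crossing time independent of pulse amplitude and rise time on a linear leading edge, can be checked numerically: differentiate the pulse and subtract a fixed fraction k of it, then locate the zero crossing. The pulse shape and the value of k below are illustrative, not taken from the patent.

```python
# For a pulse with linear leading edge x(t) = A * t / t_rise, the filtered signal
# y(t) = dx/dt - k * x(t) = (A / t_rise) * (1 - k * t) crosses zero at t = 1/k,
# independent of the amplitude A and the rise time t_rise.
import numpy as np

def zero_crossing_time(A, t_rise, k=2.0, dt=1e-4):
    t = np.arange(0.0, t_rise, dt)
    x = A * t / t_rise                        # linear leading edge of the pulse
    y = np.gradient(x, dt) - k * x            # differentiate, subtract fraction k
    i = np.where(np.diff(np.sign(y)) < 0)[0][0]
    return t[i]

for A, t_rise in [(1.0, 1.0), (5.0, 1.0), (1.0, 2.0), (7.0, 3.0)]:
    print(f"A = {A}, t_rise = {t_rise}: zero crossing at t = {zero_crossing_time(A, t_rise):.3f}")
```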

  6. Phase transition in nonuniform Josephson arrays: Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Lozovik, Yu. E.; Pomirchy, L. M.

    1994-01-01

    A disordered 2D system with Josephson interactions is considered. The disordered XY model describes granular films, Josephson arrays, etc. Two types of disorder are analyzed: (1) a randomly diluted system, in which the Josephson coupling constants J_ij are equal to J with probability p or zero (bond percolation problem); (2) coupling constants J_ij that are positive and distributed randomly and uniformly in some interval either including the vicinity of zero or apart from it. These systems are simulated by the Monte Carlo method. The behaviour of the potential energy, specific heat, phase correlation function and helicity modulus is analyzed. The phase diagram of the diluted system in the T_c-p plane is obtained.
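
    A minimal Metropolis Monte Carlo sketch of disorder type (1) above (a bond-diluted 2D XY model with couplings equal to J = 1 with probability p, else zero) is given below. The lattice size, temperature, proposal width, and sweep count are illustrative and far smaller than in a production simulation, and only a crude finite-size order parameter is printed rather than the quantities analyzed in the paper.

```python
# Metropolis Monte Carlo for a bond-diluted 2D XY model with periodic boundaries:
#   E = -sum_<ij> J_ij cos(theta_i - theta_j),  J_ij = 1 with probability p, else 0.
import numpy as np

rng = np.random.default_rng(7)
L, p, T, sweeps = 16, 0.7, 0.5, 400

theta = rng.uniform(0, 2 * np.pi, (L, L))
Jx = (rng.random((L, L)) < p).astype(float)   # bond to the right neighbour
Jy = (rng.random((L, L)) < p).astype(float)   # bond to the neighbour below

def local_energy(th, i, j, angle):
    """Energy of site (i, j) with a trial angle, summed over its four bonds."""
    e = -Jx[i, j] * np.cos(angle - th[i, (j + 1) % L])
    e -= Jx[i, (j - 1) % L] * np.cos(angle - th[i, (j - 1) % L])
    e -= Jy[i, j] * np.cos(angle - th[(i + 1) % L, j])
    e -= Jy[(i - 1) % L, j] * np.cos(angle - th[(i - 1) % L, j])
    return e

for _ in range(sweeps):
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        new_angle = theta[i, j] + rng.uniform(-0.5, 0.5)
        dE = local_energy(theta, i, j, new_angle) - local_energy(theta, i, j, theta[i, j])
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            theta[i, j] = new_angle

mx, my = np.cos(theta).mean(), np.sin(theta).mean()
print("finite-size magnetisation per spin:", np.hypot(mx, my))
```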

  7. Vapor concentration monitor

    DOEpatents

    Bayly, John G.; Booth, Ronald J.

    1977-01-01

    An apparatus for monitoring the concentration of a vapor, such as heavy water, having at least one narrow bandwidth in its absorption spectrum, in a sample gas such as air. The air is drawn into a chamber in which the vapor content is measured by means of its radiation absorption spectrum. High sensitivity is obtained by modulating the wavelength at a relatively high frequency without changing its optical path, while high stability against zero drift is obtained by the low frequency interchange of the sample gas to be monitored and of a reference sample. The variable HDO background due to natural humidity is automatically corrected.

  8. Atomic motion from the mean square displacement in a monatomic liquid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wallace, Duane C.; De Lorenzi-Venneri, Giulia; Chisolm, Eric D.

    V-T theory is constructed in the many-body Hamiltonian formulation, and is being developed as a novel approach to liquid dynamics theory. In this theory the liquid atomic motion consists of two contributions, normal mode vibrations in a single representative potential energy valley, and transits, which carry the system across boundaries between valleys. The mean square displacement time correlation function (the MSD) is a direct measure of the atomic motion, and our goal is to determine if the V-T formalism can produce a physically sensible account of this motion. We employ molecular dynamics (MD) data for a system representing liquid Na, and find the motion evolves in three successive time intervals: on the first 'vibrational' interval, the vibrational motion alone gives a highly accurate account of the MD data; on the second 'crossover' interval, the vibrational MSD saturates to a constant while the transit motion builds up from zero; on the third 'random walk' interval, the transit motion produces a purely diffusive random walk of the vibrational equilibrium positions. Furthermore, this motional evolution agrees with, and adds refinement to, the MSD atomic motion as described by current liquid dynamics theories.

  9. Atomic motion from the mean square displacement in a monatomic liquid

    DOE PAGES

    Wallace, Duane C.; De Lorenzi-Venneri, Giulia; Chisolm, Eric D.

    2016-04-08

    V-T theory is constructed in the many-body Hamiltonian formulation, and is being developed as a novel approach to liquid dynamics theory. In this theory the liquid atomic motion consists of two contributions, normal mode vibrations in a single representative potential energy valley, and transits, which carry the system across boundaries between valleys. The mean square displacement time correlation function (the MSD) is a direct measure of the atomic motion, and our goal is to determine if the V-T formalism can produce a physically sensible account of this motion. We employ molecular dynamics (MD) data for a system representing liquid Na, and find the motion evolves in three successive time intervals: on the first 'vibrational' interval, the vibrational motion alone gives a highly accurate account of the MD data; on the second 'crossover' interval, the vibrational MSD saturates to a constant while the transit motion builds up from zero; on the third 'random walk' interval, the transit motion produces a purely diffusive random walk of the vibrational equilibrium positions. Furthermore, this motional evolution agrees with, and adds refinement to, the MSD atomic motion as described by current liquid dynamics theories.

  10. Physical Unclonable Function Hardware Keys Utilizing Kirchhoff-Law Secure Key Exchange and Noise-Based Logic

    NASA Astrophysics Data System (ADS)

    Kish, Laszlo B.; Kwan, Chiman

    Weak unclonable function (PUF) encryption key means that the manufacturer of the hardware can clone the key but not anybody else. Strong unclonable function (PUF) encryption key means that even the manufacturer of the hardware is unable to clone the key. In this paper, first we introduce an "ultra" strong PUF with intrinsic dynamical randomness, which is not only unclonable but also gets renewed to an independent key (with fresh randomness) during each use via the unconditionally secure key exchange. The solution utilizes the Kirchhoff-law-Johnson-noise (KLJN) method for dynamical key renewal and a one-time-pad secure key for the challenge/response process. The secure key is stored in a flash memory on the chip to provide tamper-resistance and nonvolatile storage with zero power requirements in standby mode. Simplified PUF keys are shown: a strong PUF utilizing KLJN protocol during the first run and noise-based logic (NBL) hyperspace vector string verification method for the challenge/response during the rest of its life or until it is re-initialized. Finally, the simplest PUF utilizes NBL without KLJN thus it can be cloned by the manufacturer but not by anybody else.

  11. Regression Analysis with Dummy Variables: Use and Interpretation.

    ERIC Educational Resources Information Center

    Hinkle, Dennis E.; Oliver, J. Dale

    1986-01-01

    Multiple regression analysis (MRA) may be used when both continuous and categorical variables are included as independent research variables. The use of MRA with categorical variables involves dummy coding, that is, assigning zeros and ones to levels of categorical variables. Caution is urged in results interpretation. (Author/CH)
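
    The dummy-coding step described above can be sketched in a few lines: the categorical variable is converted to zero/one indicator columns, one level is dropped as the reference category, and the indicators enter the regression alongside the continuous predictors. The data, variable names, and effects below are hypothetical; as the summary cautions, the indicator coefficients must be interpreted relative to the omitted reference level.

```python
# Dummy (indicator) coding of a categorical predictor for multiple regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 200
df = pd.DataFrame({
    "hours": rng.normal(10, 2, n),                    # continuous predictor
    "group": rng.choice(["A", "B", "C"], size=n),     # categorical predictor
})
effect = df["group"].map({"A": 0.0, "B": 2.0, "C": 5.0})
df["score"] = 3.0 + 0.5 * df["hours"] + effect + rng.normal(0, 1, n)

# drop_first=True makes group A the reference category (coded as all zeros)
X = pd.get_dummies(df[["hours", "group"]], columns=["group"], drop_first=True)
X = sm.add_constant(X.astype(float))
fit = sm.OLS(df["score"], X).fit()
print(fit.params)   # group_B and group_C are differences from reference group A
```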

  12. Essays on parametric and nonparametric modeling and estimation with applications to energy economics

    NASA Astrophysics Data System (ADS)

    Gao, Weiyu

    My dissertation research is composed of two parts: a theoretical part on semiparametric efficient estimation and an applied part in energy economics under different dynamic settings. The essays are related in terms of their applications as well as the way in which models are constructed and estimated. In the first essay, efficient estimation of the partially linear model is studied. We work out the efficient score functions and efficiency bounds under four stochastic restrictions---independence, conditional symmetry, conditional zero mean, and partially conditional zero mean. A feasible efficient estimation method for the linear part of the model is developed based on the efficient score. A battery of specification tests that allows for choosing between the alternative assumptions is provided. A Monte Carlo simulation is also conducted. The second essay presents a dynamic optimization model for a stylized oilfield resembling the largest developed light oil field in Saudi Arabia, Ghawar. We use data from different sources to estimate the oil production cost function and the revenue function. We pay particular attention to the dynamic aspect of the oil production by employing petroleum-engineering software to simulate the interaction between control variables and reservoir state variables. Optimal solutions are studied under different scenarios to account for the possible changes in the exogenous variables and the uncertainty about the forecasts. The third essay examines the effect of oil price volatility on the level of innovation displayed by the U.S. economy. A measure of innovation is calculated by decomposing an output-based Malmquist index. We also construct a nonparametric measure for oil price volatility. Technical change and oil price volatility are then placed in a VAR system with oil price and a variable indicative of monetary policy. The system is estimated and analyzed for significant relationships. We find that oil price volatility displays a significant negative effect on innovation. A key point of this analysis lies in the fact that we impose no functional forms for technologies and the methods employed keep technical assumptions to a minimum.

  13. Evaluation of the Use of Zero-Augmented Regression Techniques to Model Incidence of Campylobacter Infections in FoodNet.

    PubMed

    Tremblay, Marlène; Crim, Stacy M; Cole, Dana J; Hoekstra, Robert M; Henao, Olga L; Döpfer, Dörte

    2017-10-01

    The Foodborne Diseases Active Surveillance Network (FoodNet) is currently using a negative binomial (NB) regression model to estimate temporal changes in the incidence of Campylobacter infection. FoodNet active surveillance in 483 counties collected data on 40,212 Campylobacter cases between years 2004 and 2011. We explored models that disaggregated these data to allow us to account for demographic, geographic, and seasonal factors when examining changes in incidence of Campylobacter infection. We hypothesized that modeling structural zeros and including demographic variables would increase the fit of FoodNet's Campylobacter incidence regression models. Five different models were compared: NB without demographic covariates, NB with demographic covariates, hurdle NB with covariates in the count component only, hurdle NB with covariates in both zero and count components, and zero-inflated NB with covariates in the count component only. Of the models evaluated, the nonzero-augmented NB model with demographic variables provided the best fit. Results suggest that even though zero inflation was not present at this level, individualizing the level of aggregation and using different model structures and predictors per site might be required to correctly distinguish between structural and observational zeros and account for risk factors that vary geographically.

  14. Clinical Malaria Transmission Trends and Its Association with Climatic Variables in Tubu Village, Botswana: A Retrospective Analysis.

    PubMed

    Chirebvu, Elijah; Chimbari, Moses John; Ngwenya, Barbara Ntombi; Sartorius, Benn

    2016-01-01

    Good knowledge on the interactions between climatic variables and malaria can be very useful for predicting outbreaks and preparedness interventions. We investigated clinical malaria transmission patterns and its temporal relationship with climatic variables in Tubu village, Botswana. A 5-year retrospective time series data analysis was conducted to determine the transmission patterns of clinical malaria cases at Tubu Health Post and its relationship with rainfall, flood discharge, flood extent, mean minimum, maximum and average temperatures. Data was obtained from clinical records and respective institutions for the period July 2005 to June 2010, presented graphically and analysed using the Univariate ANOVA and Pearson cross-correlation coefficient tests. Peak malaria season occurred between October and May with the highest cumulative incidence of clinical malaria cases being recorded in February. Most of the cases were individuals aged >5 years. Associations between the incidence of clinical malaria cases and several factors were strong at lag periods of 1 month; rainfall (r = 0.417), mean minimum temperature (r = 0.537), mean average temperature (r = 0.493); and at lag period of 6 months for flood extent (r = 0.467) and zero month for flood discharge (r = 0.497). The effect of mean maximum temperature was strongest at 2-month lag period (r = 0.328). Although malaria transmission patterns varied from year to year the trends were similar to those observed in sub-Saharan Africa. Age group >5 years experienced the greatest burden of clinical malaria probably due to the effects of the national malaria elimination programme. Rainfall, flood discharge and extent, mean minimum and mean average temperatures showed some correlation with the incidence of clinical malaria cases.
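
    The lagged correlations reported above can be computed with a short sketch like the one below, which correlates a monthly case series with a climate series shifted by 0 to 6 months. The two series are simulated stand-ins, not the Tubu Health Post data.

```python
# Pearson correlation between monthly case counts and a climate variable at
# lags of 0-6 months (the climate series leads the cases).
import numpy as np

rng = np.random.default_rng(9)
months = 60
rainfall = rng.gamma(2.0, 40.0, months)
# simulated cases respond (noisily) to rainfall one month earlier
cases = 0.3 * np.roll(rainfall, 1) + rng.normal(0, 10, months)

for lag in range(7):
    r = np.corrcoef(rainfall[:months - lag], cases[lag:])[0, 1]
    print(f"lag {lag} month(s): r = {r:.2f}")
```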

  15. Energy density and variability in abundance of pigeon guillemot prey: Support for the quality-variability trade-off hypothesis

    USGS Publications Warehouse

    Litzow, Michael A.; Piatt, John F.; Abookire, Alisa A.; Robards, Martin D.

    2004-01-01

    1. The quality-variability trade-off hypothesis predicts that (i) energy density (kJ g-1) and spatial-temporal variability in abundance are positively correlated in nearshore marine fishes; and (ii) prey selection by a nearshore piscivore, the pigeon guillemot (Cepphus columba Pallas), is negatively affected by variability in abundance. 2. We tested these predictions with data from a 4-year study that measured fish abundance with beach seines and pigeon guillemot prey utilization with visual identification of chick meals. 3. The first prediction was supported. Pearson's correlation showed that fishes with higher energy density were more variable on seasonal (r = 0.71) and annual (r = 0.66) time scales. Higher energy density fishes were also more abundant overall (r = 0.85) and more patchy at a scale of 10s of km (r = 0.77). 4. Prey utilization by pigeon guillemots was strongly non-random. Relative preference, defined as the difference between log-ratio transformed proportions of individual prey taxa in chick diets and beach seine catches, was significantly different from zero for seven of the eight main prey categories. 5. The second prediction was also supported. We used principal component analysis (PCA) to summarize variability in correlated prey characteristics (energy density, availability and variability in abundance). Two PCA scores explained 32% of observed variability in pigeon guillemot prey utilization. Seasonal variability in abundance was negatively weighted by these PCA scores, providing evidence of risk-averse selection. Prey availability, energy density and km-scale variability in abundance were positively weighted. 6. Trophic interactions are known to create variability in resource distribution in other systems. We propose that links between resource quality and the strength of trophic interactions may produce resource quality-variability trade-offs.

  16. On the probability of violations of Fourier's law for heat flow in small systems observed for short times

    NASA Astrophysics Data System (ADS)

    Evans, Denis J.; Searles, Debra J.; Williams, Stephen R.

    2010-01-01

    We study the statistical mechanics of thermal conduction in a classical many-body system that is in contact with two thermal reservoirs maintained at different temperatures. The ratio of the probabilities, that when observed for a finite time, the time averaged heat flux flows in and against the direction required by Fourier's Law for heat flow, is derived from first principles. This result is obtained using the transient fluctuation theorem. We show that the argument of that theorem, namely, the dissipation function is, close to equilibrium, equal to a microscopic expression for the entropy production. We also prove that if transient time correlation functions of smooth zero mean variables decay to zero at long times, the system will relax to a unique nonequilibrium steady state, and for this state, the thermal conductivity must be positive. Our expressions are tested using nonequilibrium molecular dynamics simulations of heat flow between thermostated walls.

  17. Opening of DNA chain due to force applied on different locations.

    PubMed

    Singh, Amar; Modi, Tushar; Singh, Navin

    2016-09-01

    We consider a homogeneous DNA molecule and investigate the effect of the location of an applied force on the unzipping profile of the molecule. How the critical force varies as a function of the chain length, or number of base pairs, is the objective of this study. In general, the ratio of the critical force applied at the middle of the chain to that applied at one of the ends is two. Our study shows that this ratio depends on the length of the chain. This means that a force applied at a point can be experienced by a section of the chain. Beyond a certain length, the base pairs have no information about the applied force. When the chain length is shorter than this length, the ratio may vary. Only when the chain length exceeds a critical length is this ratio found to be two. Based on the de Gennes formulation, we developed a method to calculate these forces at zero temperature. The exact results at zero temperature match numerical calculations.

  18. Quantitation of Bone Growth Rate Variability in Rats Exposed to Micro-(near zero G) and Macrogravity (2G)

    NASA Technical Reports Server (NTRS)

    Bromage, Timothy G.; Doty, Stephen B.; Smolyar, Igor; Holton, Emily

    1997-01-01

    Our stated primary objective is to quantify the growth rate variability of rat lamellar bone exposed to micro- (near zero G: e.g., Cosmos 1887 & 2044; SLS-1 & SLS-2) and macrogravity (2G). The primary significance of the proposed work is that an elegant method will be established that unequivocally characterizes the morphological consequences of gravitational factors on developing bone. The integrity of this objective depends upon our successful preparation of thin sections suitable for imaging individual bone lamellae, and our imaging and quantitation of growth rate variability in populations of lamellae from individual bone samples.

  19. Errors of five-day mean surface wind and temperature conditions due to inadequate sampling

    NASA Technical Reports Server (NTRS)

    Legler, David M.

    1991-01-01

    Surface meteorological reports of wind components, wind speed, air temperature, and sea-surface temperature from buoys located in equatorial and midlatitude regions are used in a simulation of random sampling to determine errors of the calculated means due to inadequate sampling. Subsampling the data with several different sample sizes leads to estimates of the accuracy of the subsampled means. The number N of random observations needed to compute mean winds with chosen accuracies of 0.5 (N_0.5) and 1.0 (N_1.0) m/s and mean air and sea surface temperatures with chosen accuracies of 0.1 (N_0.1) and 0.2 (N_0.2) C were calculated for each 5-day and 30-day period in the buoy datasets. Mean values of N for the various accuracies and datasets are given. A second-order polynomial relation is established between N and the variability of the data record. This relationship demonstrates that for the same accuracy, N increases as the variability of the data record increases. The relationship is also independent of the data source. Volunteer-observing ship data do not satisfy the recommended minimum number of observations for obtaining 0.5 m/s and 0.2 C accuracy for most locations. The effect of having remotely sensed data is discussed.
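
    The dependence of the required sample size on record variability follows from the standard error of the mean, sigma/sqrt(N); a small sketch of the subsampling experiment and the corresponding theoretical N is shown below with a made-up record.

```python
# How many random observations N are needed so the subsampled mean is within a
# chosen accuracy of the full-record mean? Illustrative synthetic record only.
import numpy as np

rng = np.random.default_rng(10)
record = rng.normal(7.0, 3.0, 2000)       # stand-in 5-day record of wind speed (m/s)
true_mean = record.mean()
accuracy = 0.5                            # desired accuracy (m/s)

# theory: standard error = sigma / sqrt(N), so N ~ (1.96 * sigma / accuracy)^2
sigma = record.std(ddof=1)
print("theoretical N for ~95% of subsampled means within 0.5 m/s:",
      int(np.ceil((1.96 * sigma / accuracy) ** 2)))

# empirical check by repeated random subsampling
for N in (25, 50, 100, 150, 200):
    errs = np.abs([rng.choice(record, N, replace=False).mean() - true_mean
                   for _ in range(2000)])
    print(f"N = {N:3d}: fraction of subsample means within {accuracy} m/s = "
          f"{np.mean(errs <= accuracy):.2f}")
```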

  20. Seabed mapping and characterization of sediment variability using the usSEABED data base

    USGS Publications Warehouse

    Goff, J.A.; Jenkins, C.J.; Jeffress, Williams S.

    2008-01-01

    We present a methodology for statistical analysis of randomly located marine sediment point data, and apply it to the US continental shelf portions of usSEABED mean grain size records. The usSEABED database, like many modern, large environmental datasets, is heterogeneous and interdisciplinary. We statistically test the database as a source of mean grain size data, and from it provide a first examination of regional seafloor sediment variability across the entire US continental shelf. Data derived from laboratory analyses ("extracted") and from word-based descriptions ("parsed") are treated separately, and they are compared statistically and deterministically. Data records are selected for spatial analysis by their location within sample regions: polygonal areas defined in ArcGIS chosen by geography, water depth, and data sufficiency. We derive isotropic, binned semivariograms from the data, and invert these for estimates of noise variance, field variance, and decorrelation distance. The highly erratic nature of the semivariograms is a result both of the random locations of the data and of the high level of data uncertainty (noise). This decorrelates the data covariance matrix for the inversion, and largely prevents robust estimation of the fractal dimension. Our comparison of the extracted and parsed mean grain size data demonstrates important differences between the two. In particular, extracted measurements generally produce finer mean grain sizes, lower noise variance, and lower field variance than parsed values. Such relationships can be used to derive a regionally dependent conversion factor between the two. Our analysis of sample regions on the US continental shelf revealed considerable geographic variability in the estimated statistical parameters of field variance and decorrelation distance. Some regional relationships are evident, and overall there is a tendency for field variance to be higher where the average mean grain size is finer grained. Surprisingly, parsed and extracted noise magnitudes correlate with each other, which may indicate that some portion of the data variability that we identify as "noise" is caused by real grain size variability at very short scales. Our analyses demonstrate that by applying a bias-correction proxy, usSEABED data can be used to generate reliable interpolated maps of regional mean grain size and sediment character. 
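
    The binned, isotropic semivariogram construction described above can be sketched in a few lines: for every pair of sample locations, compute the squared difference of the measured value, group pairs by separation distance, and take half the mean within each bin. The random locations and grain-size stand-in values below are simulated, and the inversion for noise variance, field variance, and decorrelation distance is not reproduced.

```python
# Binned isotropic empirical semivariogram for randomly located point data:
#   gamma(h) = 0.5 * mean{ (z_i - z_j)^2 : distance(x_i, x_j) in the bin around h }.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(11)
n = 400
xy = rng.uniform(0, 100, (n, 2))                   # sample locations (km)
# a smooth "field" plus uncorrelated noise as a stand-in for mean grain size
z = np.sin(xy[:, 0] / 15.0) + 0.5 * np.cos(xy[:, 1] / 20.0) + rng.normal(0, 0.3, n)

d = pdist(xy)                                      # pairwise distances
dz2 = pdist(z[:, None], metric="sqeuclidean")      # pairwise (z_i - z_j)^2

bins = np.arange(0.0, 60.0, 5.0)
which = np.digitize(d, bins)
for b in range(1, len(bins)):
    sel = which == b
    if sel.any():
        print(f"h in [{bins[b - 1]:4.1f}, {bins[b]:4.1f}) km: "
              f"gamma = {0.5 * dz2[sel].mean():.3f}  (pairs = {sel.sum()})")
```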

  1. Lévy/Anomalous Diffusion as a Mean-Field Theory for 3D Cloud Effects in SW-RT: Empirical Support, New Analytical Formulation, and Impact on Atmospheric Absorption

    NASA Astrophysics Data System (ADS)

    Pfeilsticker, K.; Davis, A.; Marshak, A.; Suszcynsky, D. M.; Buldryrev, S.; Barker, H.

    2001-12-01

    2-stream RT models, as used in all current GCMs, are mathematically equivalent to standard diffusion theory where the physical picture is a slow propagation of the diffuse radiation by Gaussian random walks. In other words, after the conventional van de Hulst rescaling by 1/(1-g) in R3 and also by (1-g) in t, solar photons follow convoluted fractal trajectories in the atmosphere. For instance, we know that transmitted light is typically scattered about (1-g)τ² times while reflected light is scattered on average about τ times, where τ is the optical depth of the column. The space/time spread of this diffusion process is described exactly by a Gaussian distribution; from the statistical physics viewpoint, this follows from the convergence of the sum of many (rescaled) steps between scattering events with a finite variance. This Gaussian picture follows directly from first principles (the RT equation) under the assumptions of horizontal uniformity and large optical depth, i.e., there is a homogeneous plane-parallel cloud somewhere in the column. The first-order effect of 3D variability of cloudiness, the main source of scattering, is to perturb the distribution of single steps between scatterings which, modulo the '1-g' rescaling, can be assumed effectively isotropic. The most natural generalization of the Gaussian distribution is the 1-parameter family of symmetric Lévy-stable distributions because the sum of many zero-mean random variables with infinite variance, but finite moments of order q < α (0 < α < 2), converges to them. It has been shown on heuristic grounds that for these Lévy-based random walks the typical number of scatterings is now (1-g)τ^α for transmitted light. The appearance of a non-rational exponent is why this is referred to as anomalous diffusion. Note that standard/Gaussian diffusion is retrieved in the limit α = 2-. Lévy transport theory has been successfully used in statistical physics to investigate a wide variety of systems with strongly nonlinear dynamics; these applications range from random advection in turbulent fluids to the erratic behavior of financial time-series and, most recently, self-regulating ecological systems. We will briefly survey the state-of-the-art observations that offer compelling empirical support for the Lévy/anomalous diffusion model in atmospheric radiation: (1) high-resolution spectroscopy of differential absorption in the O2 A-band from ground; (2) temporal transient records of lightning strokes transmitted through clouds to a sensitive detector in space; and (3) the Gamma-distributions of optical depths derived from Landsat cloud scenes at 30-m resolution. We will then introduce a rigorous analytical formulation of anomalous transport through finite media based on fractional derivatives and Sonin calculus. A remarkable result from this new theoretical development is an extremal property of the α = 1+ case (divergent mean-free-path), as is observed in the cloudy atmosphere. Finally, we will discuss the implications of anomalous transport theory for bulk 3D effects on the current enhanced absorption problem as well as its role as the basis of a next-generation GCM RT parameterization.

  2. Free variable selection QSPR study to predict 19F chemical shifts of some fluorinated organic compounds using Random Forest and RBF-PLS methods

    NASA Astrophysics Data System (ADS)

    Goudarzi, Nasser

    2016-04-01

    In this work, two new and powerful chemometrics methods are applied for the modeling and prediction of the 19F chemical shift values of some fluorinated organic compounds. The radial basis function-partial least squares (RBF-PLS) and random forest (RF) methods are employed to construct the models to predict the 19F chemical shifts. In this study, we did not use any separate variable selection method, since the RF method can serve as both a variable selection and a modeling technique. Effects of the important parameters affecting the RF prediction power, such as the number of trees (nt) and the number of randomly selected variables to split each node (m), were investigated. The root-mean-square errors of prediction (RMSEP) for the training set and the prediction set for the RBF-PLS and RF models were 44.70, 23.86, 29.77, and 23.69, respectively. Also, the correlation coefficients of the prediction set for the RBF-PLS and RF models were 0.8684 and 0.9313, respectively. The results obtained reveal that the RF model can be used as a powerful chemometrics tool for quantitative structure-property relationship (QSPR) studies.
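
    A generic sketch of the Random Forest modeling step described above is given below using scikit-learn and simulated descriptor data (the study's calculated molecular descriptors and measured 19F shifts are not reproduced); the n_estimators and max_features arguments play the roles of the paper's nt and m parameters.

```python
# Random Forest regression as a QSPR-style model: fit descriptors -> property,
# report RMSEP on a held-out prediction set and the built-in feature importances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(12)
n, n_desc = 300, 10
X = rng.normal(size=(n, n_desc))                            # simulated descriptors
y = 20 * X[:, 0] - 5 * X[:, 2] ** 2 + rng.normal(0, 2, n)   # simulated property

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestRegressor(n_estimators=500,   # nt: number of trees
                           max_features=3,     # m: variables tried at each split
                           random_state=0)
rf.fit(X_tr, y_tr)

rmsep = np.sqrt(np.mean((rf.predict(X_te) - y_te) ** 2))
print("RMSEP on the prediction set:", round(rmsep, 2))
print("descriptor importances:", np.round(rf.feature_importances_, 3))
```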

  3. Volcanoes Distribution in Linear Segmentation of Mariana Arc

    NASA Astrophysics Data System (ADS)

    Andikagumi, H.; Macpherson, C.; McCaffrey, K. J. W.

    2016-12-01

    A new method has been developed to better describe the distribution pattern of volcanoes within the Mariana Arc. A previous study assumed that the distribution of volcanoes in the Mariana Arc is described by a small circle distribution, which reflects the melting processes in a curved subduction zone. The small circle fit to the dataset used in that study, comprising 12 mainly subaerial volcanoes from the Smithsonian Institution Global Volcanism Program, was reassessed by us to have a root-mean-square misfit of 2.5 km. The same method applied to a more complete dataset from Baker et al. (2008), consisting of 37 subaerial and submarine volcanoes, resulted in an 8.4 km misfit. However, using the Hough Transform method on the larger dataset, lower misfits of great circle segments were achieved (3.1 and 3.0 km) for two possible segment combinations. The results indicate that the distribution of volcanoes in the Mariana Arc is better described by a great circle pattern, instead of a small circle. Variogram and cross-variogram analysis of volcano spacing and volume shows that there is spatial correlation between volcanoes between 420 and 500 km, which corresponds to the maximum segmentation lengths from the Hough Transform (320 km). Further analysis of volcano spacing by the coefficient of variation (Cv) shows a tendency toward non-random distribution, as the Cv values are closer to zero than to one. These distributions are inferred to be associated with the development of normal faults at the back arc, as their Cv values also tend towards zero. To analyse whether volcano spacing is random or not, Cv values were simulated using a Monte Carlo method with random input. Only for the southernmost segment could we reject the null hypothesis that volcanoes are randomly spaced, at the 95% confidence level with an estimated probability of 0.007. This result shows that such regularity in volcano spacing rarely arises by chance, so controlling factors at the lithospheric scale should be analysed with a different approach (not a random number generator). The Sunda Arc, which has been reported to have en echelon segmentation and a larger number of volcanoes, will be studied further to understand the particular influence of the upper plate on volcano distribution.
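
    A sketch of the coefficient-of-variation spacing test used above: compute Cv = standard deviation / mean of the along-arc nearest-neighbour spacings, then compare it against a Monte Carlo null distribution built from the same number of points placed uniformly at random along the segment. The volcano positions below are hypothetical, not the Mariana or Sunda data.

```python
# Coefficient of variation (Cv) of spacings along a line segment, with a Monte
# Carlo null distribution from uniformly random points. Cv near zero indicates
# regular spacing; Cv near one is consistent with random spacing.
import numpy as np

rng = np.random.default_rng(13)

def spacing_cv(positions):
    gaps = np.diff(np.sort(positions))
    return gaps.std(ddof=1) / gaps.mean()

# hypothetical along-arc volcano positions (km) on a 320 km segment
observed = np.array([12.0, 48.0, 90.0, 128.0, 171.0, 208.0, 252.0, 291.0])
cv_obs = spacing_cv(observed)

segment_length, n_volc, n_sim = 320.0, len(observed), 10_000
cv_null = np.array([spacing_cv(rng.uniform(0, segment_length, n_volc))
                    for _ in range(n_sim)])
p_value = np.mean(cv_null <= cv_obs)   # chance of spacing this regular under randomness

print(f"observed Cv = {cv_obs:.2f}")
print(f"estimated probability of Cv <= observed under random spacing: {p_value:.3f}")
```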

  4. On the fluctuations of sums of independent random variables.

    PubMed

    Feller, W

    1969-07-01

    If X(1), X(2), ... are independent random variables with zero expectation and finite variances, the cumulative sums S(n) are, on the average, of the order of magnitude s(n), where s(n)² = E(S(n)²). The occasional maxima of the ratios S(n)/s(n) are surprisingly large and the problem is to estimate the extent of their probable fluctuations. Specifically, let S*(n) = (S(n) - b(n))/a(n), where {a(n)} and {b(n)} are two numerical sequences. For any interval I, denote by p(I) the probability that the event S*(n) ∈ I occurs for infinitely many n. Under mild conditions on {a(n)} and {b(n)}, it is shown that p(I) equals 0 or 1 according as a certain series converges or diverges. To obtain the upper limit of S(n)/a(n), one has to set b(n) = ± ε a(n), but finer results are obtained with smaller b(n). No assumptions concerning the underlying distributions are made; the criteria explain structurally which features of {X(n)} affect the fluctuations, but for concrete results something about P{S(n) > a(n)} must be known. For example, a complete solution is possible when the X(n) are normal, replacing the classical law of the iterated logarithm. Further concrete estimates may be obtained by combining the new criteria with some recently developed limit theorems.

  5. Exploring the effects of roadway characteristics on the frequency and severity of head-on crashes: case studies from Malaysian federal roads.

    PubMed

    Hosseinpour, Mehdi; Yahaya, Ahmad Shukri; Sadullah, Ahmad Farhan

    2014-01-01

    Head-on crashes are among the most severe collision types and of great concern to road safety authorities. Therefore, it justifies more efforts to reduce both the frequency and severity of this collision type. To this end, it is necessary to first identify factors associating with the crash occurrence. This can be done by developing crash prediction models that relate crash outcomes to a set of contributing factors. This study intends to identify the factors affecting both the frequency and severity of head-on crashes that occurred on 448 segments of five federal roads in Malaysia. Data on road characteristics and crash history were collected on the study segments during a 4-year period between 2007 and 2010. The frequency of head-on crashes were fitted by developing and comparing seven count-data models including Poisson, standard negative binomial (NB), random-effect negative binomial, hurdle Poisson, hurdle negative binomial, zero-inflated Poisson, and zero-inflated negative binomial models. To model crash severity, a random-effect generalized ordered probit model (REGOPM) was used given a head-on crash had occurred. With respect to the crash frequency, the random-effect negative binomial (RENB) model was found to outperform the other models according to goodness of fit measures. Based on the results of the model, the variables horizontal curvature, terrain type, heavy-vehicle traffic, and access points were found to be positively related to the frequency of head-on crashes, while posted speed limit and shoulder width decreased the crash frequency. With regard to the crash severity, the results of REGOPM showed that horizontal curvature, paved shoulder width, terrain type, and side friction were associated with more severe crashes, whereas land use, access points, and presence of median reduced the probability of severe crashes. Based on the results of this study, some potential countermeasures were proposed to minimize the risk of head-on crashes. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. A random effects meta-analysis model with Box-Cox transformation.

    PubMed

    Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D

    2017-07-19

    In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise an overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model. The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining robustness of traditional meta-analysis results against skewness in the observed treatment effect estimates. Further critical evaluation of the method is needed.
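
    A rough sketch of the two ingredients combined above is given below: a Box-Cox transformation of the observed (positive-valued) treatment effect estimates, followed by a conventional DerSimonian-Laird random-effects summary on the transformed scale. It only conveys the flavour of the approach; the paper's Bayesian estimation, back-transformed summaries, and prediction intervals are not reproduced, and the effect estimates and variances are made up.

```python
# Box-Cox transform of observed study effect estimates, then a standard
# DerSimonian-Laird random-effects pooled estimate on the transformed scale.
# Box-Cox requires positive values; the delta method rescales the variances.
import numpy as np
from scipy import stats

y = np.array([1.8, 2.4, 0.9, 3.6, 1.2, 5.1, 2.0, 1.5])          # effect estimates
v = np.array([0.10, 0.15, 0.08, 0.30, 0.12, 0.50, 0.09, 0.11])  # within-study variances

y_t, lam = stats.boxcox(y)            # transformed effects and estimated lambda
v_t = v * (y ** (lam - 1)) ** 2       # delta-method variances on the new scale

w = 1.0 / v_t                         # DerSimonian-Laird between-study variance
fixed_mean = np.sum(w * y_t) / w.sum()
q = np.sum(w * (y_t - fixed_mean) ** 2)
tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - np.sum(w ** 2) / w.sum()))

w_star = 1.0 / (v_t + tau2)           # random-effects weights and pooled mean
pooled_t = np.sum(w_star * y_t) / w_star.sum()

print("Box-Cox lambda:", round(lam, 2))
print("tau^2 on the transformed scale:", round(tau2, 3))
print("pooled mean on the transformed scale:", round(pooled_t, 3))
```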

  7. Stochastic analysis of unsaturated steady flows above the water table

    NASA Astrophysics Data System (ADS)

    Severino, Gerardo; Scarfato, Maddalena; Comegna, Alessandro

    2017-08-01

    Steady flow takes place in a three-dimensional partially saturated porous medium where, due to their spatial variability, the saturated conductivity Ks and the relative conductivity Kr are modeled as random space functions (RSF)s. As a consequence, the flow variables (FVs), i.e., pressure head and specific flux, are also RSFs. The focus of the present paper is to quantify the uncertainty of the FVs above the water table. The simple expressions (most of which in closed form) of the second-order moments pertaining to the FVs allow one to follow the transitional behavior from the zone close to the water table (where the FVs are nonstationary) to their far-field limit (where the FVs become stationary RSFs). In particular, it is shown how the stationary limits (and the distance from the water table at which stationarity is attained) depend upon the statistical structure of the RSFs Ks and Kr and the infiltrating rate. The mean pressure head ⟨Ψ⟩ has also been computed, and it is expressed as ⟨Ψ⟩ = Ψ0(1 + ψ), where ψ is a characteristic heterogeneity function which modifies the zero-order approximation Ψ0 of the pressure head (valid for a vadose zone of uniform soil properties) to account for the spatial variability of Ks and Kr. Two asymptotic limits, i.e., close to (near field) and away from (far field) the water table, are derived in a very general manner, whereas the transitional behavior of ψ between the near/far field can be determined after specifying the shape of the various input soil properties. Besides the theoretical interest, results of the present paper are also useful for practical purposes. Indeed, the model is tested against real data, and in particular it is shown how, for the specific case study, it is possible to grasp the behavior of the FVs within an environment (i.e., the vadose zone close to the water table) which is generally very difficult to access by direct inspection.

  8. Effect of Vitamin E on Oxaliplatin-induced Peripheral Neuropathy Prevention: A Randomized Controlled Trial.

    PubMed

    Salehi, Zeinab; Roayaei, Mahnaz

    2015-01-01

    Peripheral neuropathy is one of the most important limitations of oxaliplatin-based regimens, which are the standard for the treatment of colorectal cancer. Evidence has shown that Vitamin E may be protective in chemotherapy-induced peripheral neuropathy. The aim of this study is to evaluate the effect of Vitamin E administration on prevention of oxaliplatin-induced peripheral neuropathy in patients with colorectal cancer. This was a prospective randomized, controlled clinical trial. Patients with colorectal cancer scheduled to receive oxaliplatin-based regimens were enrolled in this study. Enrolled patients were randomized into two groups. The first group received Vitamin E at a dose of 400 mg daily and the second group was observed, until after the sixth course of the oxaliplatin regimen. For assessment of oxaliplatin-induced peripheral neuropathy, we used the symptom experience diary questionnaire, completed at baseline and after the sixth course of chemotherapy. Only patients with a score of zero at baseline were eligible for this study. Thirty-two patients were randomized to the Vitamin E group and 33 to the control group. There was no difference in the mean peripheral neuropathy score changes (after - before) between the two groups after the sixth course of the oxaliplatin-based regimen (mean difference [after - before] of Vitamin E group = 6.37 ± 2.85, control group = 6.57 ± 2.94; P = 0.78). Peripheral neuropathy scores were significantly increased after the intervention compared with baseline in each group (P < 0.001). The results from this trial demonstrate a lack of benefit for Vitamin E in preventing oxaliplatin-induced peripheral neuropathy.

  9. Locking of correlated neural activity to ongoing oscillations

    PubMed Central

    Helias, Moritz

    2017-01-01

    Population-wide oscillations are ubiquitously observed in mesoscopic signals of cortical activity. In these network states a global oscillatory cycle modulates the propensity of neurons to fire. Synchronous activation of neurons has been hypothesized to be a separate channel of information processing in the brain. A salient question is therefore whether and how oscillations interact with spike synchrony, and to what extent these channels can be considered separate. Experiments indeed showed that correlated spiking co-modulates with the static firing rate and is also tightly locked to the phase of beta-oscillations. While the dependence of correlations on the mean rate is well understood in feed-forward networks, it remains unclear why and by which mechanisms correlations tightly lock to an oscillatory cycle. We here demonstrate that such correlated activation of pairs of neurons is qualitatively explained by periodically-driven random networks. We identify the mechanisms by which covariances depend on a driving periodic stimulus. Mean-field theory combined with linear response theory yields closed-form expressions for the cyclostationary mean activities and pairwise zero-time-lag covariances of binary recurrent random networks. Two distinct mechanisms cause time-dependent covariances: the modulation of the susceptibility of single neurons (via the external input and network feedback) and the time-varying variances of single unit activities. For some parameters, the effectively inhibitory recurrent feedback leads to resonant covariances even if mean activities show non-resonant behavior. Our analytical results open the question of time-modulated synchronous activity to a quantitative analysis. PMID:28604771

  10. Turbofan noise generation. Volume 1: Analysis

    NASA Astrophysics Data System (ADS)

    Ventres, C. S.; Theobald, M. A.; Mark, W. D.

    1982-07-01

    Computer programs were developed which calculate the in-duct acoustic modes excited by a fan/stator stage operating at subsonic tip speed. Three noise source mechanisms are included: (1) sound generated by the rotor blades interacting with turbulence ingested into, or generated within, the inlet duct; (2) sound generated by the stator vanes interacting with the turbulent wakes of the rotor blades; and (3) sound generated by the stator vanes interacting with the mean velocity deficit wakes of the rotor blades. The fan/stator stage is modeled as an ensemble of blades and vanes of zero camber and thickness enclosed within an infinite hard-walled annular duct. Turbulence drawn into or generated within the inlet duct is modeled as nonhomogeneous and anisotropic random fluid motion, superimposed upon a uniform axial mean flow, and convected with that flow. Equations for the duct mode amplitudes, or expected values of the amplitudes, are derived.

  11. Turbofan noise generation. Volume 1: Analysis

    NASA Technical Reports Server (NTRS)

    Ventres, C. S.; Theobald, M. A.; Mark, W. D.

    1982-01-01

    Computer programs were developed which calculate the in-duct acoustic modes excited by a fan/stator stage operating at subsonic tip speed. Three noise source mechanisms are included: (1) sound generated by the rotor blades interacting with turbulence ingested into, or generated within, the inlet duct; (2) sound generated by the stator vanes interacting with the turbulent wakes of the rotor blades; and (3) sound generated by the stator vanes interacting with the mean velocity deficit wakes of the rotor blades. The fan/stator stage is modeled as an ensemble of blades and vanes of zero camber and thickness enclosed within an infinite hard-walled annular duct. Turbulence drawn into or generated within the inlet duct is modeled as nonhomogeneous and anisotropic random fluid motion, superimposed upon a uniform axial mean flow, and convected with that flow. Equations for the duct mode amplitudes, or expected values of the amplitudes, are derived.

  12. Accounting for crustal magnetization in models of the core magnetic field

    NASA Technical Reports Server (NTRS)

    Jackson, Andrew

    1990-01-01

    The problem of determining the magnetic field originating in the earth's core in the presence of remanent and induced magnetization is considered. The effect of remanent magnetization in the crust on satellite measurements of the core magnetic field is investigated. The crust is modelled as a zero-mean stationary Gaussian random process using an idea proposed by Parker (1988). It is shown that the matrix of second-order statistics is proportional to the Gram matrix, which depends only on the inner products of the appropriate Green's functions, and that at a typical satellite altitude of 400 km the data are correlated out to an angular separation of approximately 15 deg. Accurate and efficient means of calculating the matrix elements are given. It is shown that the variance of measurements of the radial component of a magnetic field due to the crust is expected to be approximately twice that in horizontal components.

  13. Effect of Early ≤ 3 Mets (Metabolic Equivalent of Tasks) of Physical Activity on Patient's Outcome after Cardiac Surgery.

    PubMed

    Tariq, Muhammad Iqbal; Khan, Asif Ali; Khalid, Zara; Farheen, Hania; Siddiqi, Furqan Ahmed; Amjad, Imran

    2017-08-01

    To determine the effect of ≤3 Mets (Metabolic Equivalent of Tasks) of physical activity on the zero postoperative day for improving hemodynamic and respiratory parameters of patients after cardiac surgeries. Randomized control trial. BARMWTHospital, Rawalpindi, from March to August 2015. A randomized controlled trial was conducted on 174 CABG and valvular heart disease patients undergoing cardiac surgical procedures. After selection of the sample via non-probability purposive sampling, patients were randomly allocated into an interventional group (n=87) and a control group (n=87). The treatment protocol for the experimental group was ≤3 Mets of physical activity, i.e. chest physiotherapy, sitting over the edge of the bed, standing, and sitting on a chair at the bedside, on the zero postoperative day, whereas the control group was treated with conventional treatment on the first postoperative day. Pre- and post-treatment assessment was done in the control and interventional groups on both the zero and first postoperative days. Data were analyzed on SPSS version 21. The patients' mean age was 51.86 ±13.76 years. The male to female ratio was 132:42. Statistically significant differences in respiratory rate and SpO2 (p=0.000 and 0.000, respectively) were found between the two groups. Among ABGs, PCO2 and pH showed significant differences with p-values of 0.039 and <0.001, respectively. No significant differences were observed between the two groups regarding electrolytes (Na+, K+, Cl-; p-values of 0.361, 0.575 and 0.120, respectively) and creatinine (p=0.783). Marked improvement in oxygen saturation and dyspnea and a fall in systolic BP were seen in the interventional group. A reduction in the length of ICU stay was also observed among interventional group patients when the frequency and percentage of total stay were compared with the control group. Early physical activity (≤3 METS) after cardiac surgeries prevents respiratory complications through improvement in dyspnea, respiratory rate, and oxygen saturation.

  14. [Fire behavior of Mongolian oak leaves fuel-bed under no-wind and zero-slope conditions. I. Factors affecting fire spread rate and modeling].

    PubMed

    Jin, Sen; Liu, Bo-Fei; Di, Xue-Ying; Chu, Teng-Fei; Zhang, Ji-Li

    2012-01-01

    To understand the fire behavior of Mongolian oak leaf fuel-beds under field conditions, leaves from a secondary Mongolian oak forest at the Northeast Forestry University experimental forest farm were collected and brought into the laboratory to construct fuel-beds with varied loading, height, and moisture content, and a total of 100 experimental fires were burned under no-wind and zero-slope conditions. It was observed that the fire spread rate of the fuel-beds was less than 0.5 m x min(-1). Fuel-bed loading, height, and moisture content all had significant effects on the fire spread rate. The effect of fuel-bed moisture content on fire spread was not significantly related to fuel-bed loading and height, but the effect of fuel-bed height was related to the fuel-bed loading. The packing ratio of the fuel-beds had less effect on the fire spread rate. Taking the fuel-bed loading, height, and moisture content as predictive variables, a prediction model for the fire spread rate of Mongolian oak leaf fuel-beds was established, which could explain 83% of the variance of the fire spread rate, with a mean absolute error of 0.04 m x min(-1) and a mean relative error of less than 17%.

  15. A Conceptual Framework for Representing Human Behavior Characteristics in a System of Systems Agent-Based Survivability Simulation

    DTIC Science & Technology

    2010-11-22

    fuzzy matrix converges to a “zero-one” matrix. The values of “0” and “1” simply mean that two edges of the network with “1” have a crisp connectivity (and…

  16. Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies

    NASA Astrophysics Data System (ADS)

    Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj

    2016-04-01

    In climate simulations, the impacts of the sub-grid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the sub-grid variability in a computationally inexpensive manner. This presentation shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition, by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a non-zero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference PD Williams, NJ Howe, JM Gregory, RS Smith, and MM Joshi (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, under revision.

  17. Computer Analysis of 400 HZ Aircraft Electrical Generator Test Data.

    DTIC Science & Technology

    1980-06-01

    [Figures: Data Acquisition System; Voltage Waveform with Data Points; Zero Crossover Interpolation; Numerical…] …difference between successive positive-sloped zero crossovers of the waveform. However, the exact time of zero crossover is not known. This is because…data sampling and the generator output are not synchronized. This unsynchronization means that data points which correspond with an exact zero crossover…

  18. Speech Enhancement Using Gaussian Scale Mixture Models

    PubMed Central

    Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.

    2011-01-01

    This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows these to be treated as two random variables, both to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM and Bayesian inference was used to compute the posterior signal distribution. Because exact inference of this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided a higher signal-to-noise ratio (SNR), and those reconstructed from the estimated log-spectra produced a lower word recognition error rate because the log-spectra fit the inputs to the recognizer better. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139
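
    A minimal sketch of the Gaussian scale mixture idea described above, assuming illustrative mixture parameters rather than anything trained from speech: the log-variance is drawn from a two-component Gaussian mixture and the frequency coefficient is zero-mean Gaussian given it, which produces the heavy tails characteristic of a GSMM.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # Two-component Gaussian mixture for the log-spectra (the "scale" variable)
        comp = rng.integers(0, 2, size=n)
        log_spec = np.where(comp == 0, rng.normal(-1.0, 0.5, n), rng.normal(1.5, 0.5, n))

        # Frequency coefficient: zero-mean Gaussian with variance exp(log_spec)
        x = rng.normal(0.0, np.sqrt(np.exp(log_spec)))

        print("sample mean ~ 0:", round(x.mean(), 3))
        print("excess kurtosis > 0 (heavier-tailed than Gaussian):",
              round(np.mean(((x - x.mean()) / x.std()) ** 4) - 3, 2))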

  19. Dental erosion prevalence and associated risk indicators among preschool children in Athens, Greece.

    PubMed

    Mantonanaki, Magdalini; Koletsi-Kounari, Haroula; Mamai-Homata, Eleni; Papaioannou, William

    2013-03-01

    The aims of the study were to investigate dental erosion prevalence, distribution and severity in Greek preschool children attending public kindergartens in the prefecture of Attica, Greece, and to determine the effect of dental caries, oral hygiene level, socio-economic factors, dental behavior, erosion-related medication and chronic illness. A random and stratified sample of 605 Greek preschool children was clinically examined for dental erosion using the Basic Erosive Wear Examination Index (BEWE). Dental caries (dmfs) and the Simplified Debris Index were also recorded. The data concerning possible risk indicators were derived from a questionnaire. A zero-inflated Poisson regression was fitted to test the predictive effects of the independent variables on dental erosion. The prevalence of dental erosion was 78.8%, and the mean ± SE of the BEWE index was 3.64 ± 0.15. High monthly family income was positively related to BEWE cumulative scores [RR = 1.204 (1.016-1.427)], while high maternal education level [RR = 0.872 (0.771-0.986)] and poor oral hygiene level [DI-s, RR = 0.584 (0.450-0.756)] showed a negative association. Dental erosion is a common oral disease in Greek preschool children in Attica, related to oral hygiene and socio-economic factors. Programs aimed at erosion prevention should begin at an early age for all children.
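
    For readers unfamiliar with the zero-inflated Poisson model used above, a minimal sketch of its probability mass function follows; the parameter values are hypothetical and unrelated to the study's fitted regression.

        import numpy as np
        from scipy import stats

        def zip_pmf(y, mu, pi):
            """Zero-inflated Poisson pmf: a point mass at zero with probability pi,
            mixed with a Poisson(mu) with probability 1 - pi. Illustrative only."""
            pois = stats.poisson.pmf(y, mu)
            return np.where(y == 0, pi + (1 - pi) * pois, (1 - pi) * pois)

        y = np.arange(6)
        print(zip_pmf(y, mu=3.6, pi=0.2).round(3))   # hypothetical BEWE-like counts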

  20. Hyperbolic heat conduction, effective temperature, and third law for nonequilibrium systems with heat flux

    NASA Astrophysics Data System (ADS)

    Sobolev, S. L.

    2018-02-01

    Some analogies between different nonequilibrium heat conduction models, particularly random walk, the discrete variable model, and the Boltzmann transport equation with the single relaxation time approximation, have been discussed. We show that, under an assumption of a finite value of the heat carrier velocity, these models lead to the hyperbolic heat conduction equation and the modified Fourier law with relaxation term. Corresponding effective temperature and entropy have been introduced and analyzed. It has been demonstrated that the effective temperature, defined as a geometric mean of the kinetic temperatures of the heat carriers moving in opposite directions, acts as a criterion for thermalization and is a nonlinear function of the kinetic temperature and heat flux. It is shown that, under highly nonequilibrium conditions when the heat flux tends to its maximum possible value, the effective temperature, heat capacity, and local entropy go to zero even at a nonzero equilibrium temperature. This provides a possible generalization of the third law to nonequilibrium situations. Analogies and differences between the proposed effective temperature and some other definitions of a temperature in nonequilibrium state, particularly for active systems, disordered semiconductors under electric field, and adiabatic gas flow, have been shown and discussed. Illustrative examples of the behavior of the effective temperature and entropy during nonequilibrium heat conduction in a monatomic gas and a strong shockwave have been analyzed.
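
    One toy way to see how a geometric-mean effective temperature can vanish at maximal heat flux, under an assumed two-stream split of the carrier temperatures (an illustration only, not necessarily the paper's exact relations):

        import numpy as np

        # Assumed two-stream picture: carriers moving in +x and -x directions have
        # kinetic temperatures T*(1 + q/q_max) and T*(1 - q/q_max); the effective
        # temperature as their geometric mean then vanishes as q -> q_max.
        T, q_max = 300.0, 1.0
        for q in (0.0, 0.5, 0.9, 0.99, 1.0):
            T_eff = np.sqrt(T * (1 + q / q_max) * T * (1 - q / q_max))
            print(f"q/q_max = {q:4.2f}  T_eff = {T_eff:7.2f} K")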

  1. The statistics of Pearce element diagrams and the Chayes closure problem

    NASA Astrophysics Data System (ADS)

    Nicholls, J.

    1988-05-01

    Pearce element ratios are defined as having a constituent in their denominator that is conserved in a system undergoing change. The presence of a conserved element in the denominator simplifies the statistics of such ratios and renders them subject to statistical tests, especially tests of significance of the correlation coefficient between Pearce element ratios. Pearce element ratio diagrams provide unambiguous tests of petrologic hypotheses because they are based on the stoichiometry of rock-forming minerals. There are three ways to recognize a conserved element: 1. The petrologic behavior of the element can be used to select conserved ones. They are usually the incompatible elements. 2. The ratio of two conserved elements will be constant in a comagmatic suite. 3. An element ratio diagram that is not constructed with a conserved element in the denominator will have a trend with a near zero intercept. The last two criteria can be tested statistically. The significance of the slope, intercept and correlation coefficient can be tested by estimating the probability of obtaining the observed values from a random population of arrays. This population of arrays must satisfy two criteria: 1. The population must contain at least one array that has the means and variances of the array of analytical data for the rock suite. 2. Arrays with the means and variances of the data must not be so abundant in the population that nearly every array selected at random has the properties of the data. The population of random closed arrays can be obtained from a population of open arrays whose elements are randomly selected from probability distributions. The means and variances of these probability distributions are themselves selected from probability distributions which have means and variances equal to a hypothetical open array that would give the means and variances of the data on closure. This hypothetical open array is called the Chayes array. Alternatively, the population of random closed arrays can be drawn from the compositional space available to rock-forming processes. The minerals comprising the available space can be described with one additive component per mineral phase and a small number of exchange components. This space is called Thompson space. Statistics based on either space lead to the conclusion that Pearce element ratios are statistically valid and that Pearce element diagrams depict the processes that create chemical inhomogeneities in igneous rock suites.

  2. Asymptotic analysis of the density of states in random matrix models associated with a slowly decaying weight

    NASA Astrophysics Data System (ADS)

    Kuijlaars, A. B. J.

    2001-08-01

    The asymptotic behavior of polynomials that are orthogonal with respect to a slowly decaying weight is very different from the asymptotic behavior of polynomials that are orthogonal with respect to a Freud-type weight. While the latter has been extensively studied, much less is known about the former. Following an earlier investigation into the zero behavior, we study here the asymptotics of the density of states in a unitary ensemble of random matrices with a slowly decaying weight. This measure is also naturally connected with the orthogonal polynomials. It is shown that, after suitable rescaling, the weak limit is the same as the weak limit of the rescaled zeros.

  3. Proceedings of the Symposium on the Interface of Computer Science and Statistics (17th) Held in Lexington, Kentucky on 17-19 March 1985.

    DTIC Science & Technology

    1986-03-04

    satisfied, but the availability of the machinery will entice developments to appear over the next few years. TABULATION AND DISPLAY To gain access to… independent, identically distributed random variables with mean 0 … the optimal forecast (the conditional mean of YT+1 given Y1, …, YT) …

  4. Planetarium instructional efficacy: A research synthesis

    NASA Astrophysics Data System (ADS)

    Brazell, Bruce D.

    The purpose of the current study was to explore the instructional effectiveness of the planetarium in astronomy education using meta-analysis. A review of the literature revealed 46 studies related to planetarium efficacy. However, only 19 of the studies satisfied selection criteria for inclusion in the meta-analysis. Selected studies were then subjected to coding procedures, which extracted information such as subject characteristics, experimental design, and outcome measures. From these data, 24 effect sizes were calculated in the area of student achievement and five effect sizes were determined in the area of student attitudes using reported statistical information. Mean effect sizes were calculated for both the achievement and the attitude distributions. Additionally, each effect size distribution was subjected to homogeneity analysis. The attitude distribution was found to be homogeneous with a mean effect size of -0.09, which was not significant, p = .2535. The achievement distribution was found to be heterogeneous with a statistically significant mean effect size of +0.28, p < .05. Since the achievement distribution was heterogeneous, the analog to the ANOVA procedure was employed to explore variability in this distribution in terms of the coded variables. The analog to the ANOVA procedure revealed that the variability introduced by the coded variables did not fully explain the variability in the achievement distribution beyond subject-level sampling error under a fixed effects model. Therefore, a random effects model analysis was performed which resulted in a mean effect size of +0.18, which was not significant, p = .2363. However, a large random effect variance component was determined indicating that the differences between studies were systematic and yet to be revealed. The findings of this meta-analysis showed that the planetarium has been an effective instructional tool in astronomy education in terms of student achievement. However, the meta-analysis revealed that the planetarium has not been a very effective tool for improving student attitudes towards astronomy.
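
    A minimal DerSimonian-Laird style random-effects sketch, on hypothetical effect sizes and variances rather than the study's coded data, shows the quantities referred to above (the homogeneity statistic Q, the between-study variance, and the random-effects mean):

        import numpy as np

        d = np.array([0.5, 0.1, 0.4, -0.2, 0.7, 0.3])        # hypothetical effect sizes
        v = np.array([0.04, 0.05, 0.03, 0.06, 0.08, 0.05])   # their sampling variances

        w = 1.0 / v
        d_fixed = np.sum(w * d) / np.sum(w)
        Q = np.sum(w * (d - d_fixed) ** 2)                   # homogeneity statistic
        k = len(d)
        tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

        w_re = 1.0 / (v + tau2)                              # random-effects weights
        d_re = np.sum(w_re * d) / np.sum(w_re)
        se_re = np.sqrt(1.0 / np.sum(w_re))
        print(f"Q = {Q:.2f}, tau^2 = {tau2:.3f}, RE mean = {d_re:.2f} +/- {1.96 * se_re:.2f}")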

  5. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case where the rate is a random variable with a probability density function of the form c x^k (1 - x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  6. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  7. Estimation of the lower and upper bounds on the probability of failure using subset simulation and random set theory

    NASA Astrophysics Data System (ADS)

    Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.

    2018-02-01

    Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.

  8. Tangent linear super-parameterization: attributable, decomposable moist processes for tropical variability studies

    NASA Astrophysics Data System (ADS)

    Mapes, B. E.; Kelly, P.; Song, S.; Hu, I. K.; Kuang, Z.

    2015-12-01

    An economical 10-layer global primitive equation solver is driven by time-independent forcing terms, derived from a training process, to produce a realistic eddying basic state with a tracer q trained to act like water vapor mixing ratio. Within this basic state, linearized anomaly moist physics in the column are applied in the form of a 20x20 matrix. The control matrix was derived from the results of Kuang (2010, 2012), who fitted a linear response function from a cloud resolving model in a state of deep convecting equilibrium. By editing this matrix in physical space and eigenspace, scaling and clipping its action, and optionally adding terms for processes that do not conserve moist static energy (radiation, surface fluxes), we can decompose and explain the model's diverse moist process coupled variability. Rectified effects of this variability on the general circulation and climate, even in strictly zero-mean centered anomaly physics cases, are also sometimes surprising.

  9. Rational group decision making: A random field Ising model at T = 0

    NASA Astrophysics Data System (ADS)

    Galam, Serge

    1997-02-01

    A modified version of a finite random field Ising ferromagnetic model in an external magnetic field at zero temperature is presented to describe group decision making. Fields may have a non-zero average. A postulate of minimum inter-individual conflicts is assumed. Interactions then produce a group polarization along one particular choice, which is however randomly selected. A small external social pressure is shown to have a drastic effect on the polarization. Individual biases related to personal backgrounds, cultural values and past experiences are introduced via quenched local competing fields. They are shown to be instrumental in generating a larger spectrum of collective new choices beyond the initial ones. In particular, compromise is found to result from the existence of individual competing biases. Conflict is shown to weaken group polarization. The model yields new psychosociological insights about consensus and compromise in groups.
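
    A toy zero-temperature dynamics in this spirit can be sketched as follows, with illustrative parameters rather than the paper's formulation: each agent repeatedly aligns with its local field, composed of the mean group opinion, a small social pressure h, and a quenched individual bias.

        import numpy as np

        rng = np.random.default_rng(3)
        N, J, h = 200, 1.0, 0.05
        b = rng.normal(0.0, 0.8, N)          # individual biases (quenched disorder)
        s = rng.choice([-1, 1], N)           # initial opinions

        # Random asynchronous single-spin updates until a local energy minimum is reached
        for _ in range(50 * N):
            i = rng.integers(N)
            local = J * (s.sum() - s[i]) / N + h + b[i]
            s[i] = 1 if local >= 0 else -1

        print("group polarization m =", round(s.mean(), 2))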

  10. VARIABLE CHARGE SOILS: MINERALOGY AND CHEMISTRY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Ranst, Eric; Qafoku, Nikolla; Noble, Andrew

    2016-09-19

    Soils rich in particles with amphoteric surface properties in the Oxisols, Ultisols, Alfisols, Spodosols and Andisols orders (1) are considered to be variable charge soils (2) (Table 1). The term “variable charge” is used to describe organic and inorganic soil constituents with reactive surface groups whose charge varies with pH and ionic concentration and composition of the soil solution. Such groups are the surface carboxyl, phenolic and amino functional groups of organic materials in soils, and surface hydroxyl groups of Fe and Al oxides, allophane and imogolite. The hydroxyl surface groups are also present on edges of some phyllosilicate minerals such as kaolinite, mica, and hydroxyl-interlayered vermiculite. The variable charge is developed on the surface groups as a result of adsorption or desorption of ions that are constituents of the solid phase, i.e., H+, and the adsorption or desorption of solid-unlike ions that are not constituents of the solid phase. Highly weathered soils and subsoils (e.g., Oxisols and some Ultisols, Alfisols and Andisols) may undergo isoelectric weathering and reach a “zero net charge” stage during their development. They usually have a slightly acidic to acidic soil solution pH, which is close to either the point of zero net charge (PZNC) (3) or the point of zero salt effect (PZSE) (3). They are characterized by high abundances of minerals with a point of zero net proton charge (PZNPC) (3) at neutral and slightly basic pHs; the most important being Fe and Al oxides and allophane. Under acidic conditions, the surfaces of these minerals are net positively charged. In contrast, the surfaces of permanent charge phyllosilicates are negatively charged regardless of ambient conditions. Variable charge soils therefore, are heterogeneous charge systems.

  11. Variable selection for zero-inflated and overdispersed data with application to health care demand in Germany.

    PubMed

    Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun

    2015-09-01

    In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider maximum likelihood function plus a penalty including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinated descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only has more accurate or at least comparable estimation, but also is more robust than the traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. The Safety Zone, 2000.

    ERIC Educational Resources Information Center

    Fiscus, James W., Ed.; Pollack, Ira, Ed.

    2000-01-01

    This publication is concerned with how to keep schools safe. The spring 2000 issue "Zero Tolerance: Effective Policy or Display of Administrative Machismo?" (James W. Fiscus) discusses how difficult it is to determine just what zero tolerance means and reminds readers that schools were required to pass zero tolerance rules to remain eligible for…

  13. Methods for determining the internal thrust of scramjet engine modules from experimental data

    NASA Technical Reports Server (NTRS)

    Voland, Randall T.

    1990-01-01

    Methods for calculating the zero-fuel internal drag of scramjet engine modules from experimental measurements are presented. These methods include two control-volume approaches, and a pressure and skin-friction integration. The three calculation techniques are applied to experimental data taken during tests of a version of the NASA parametric scramjet. The methods agree to within seven percent of the mean value of zero-fuel internal drag, even though several simplifying assumptions are made in the analysis. The mean zero-fuel internal drag coefficient for this particular engine is calculated to be 0.150. The zero-fuel internal drag coefficient, when combined with the change in engine axial force with and without fuel, defines the internal thrust of the engine.

  14. Divided dosing reduces prednisolone-induced hyperglycaemia and glycaemic variability: a randomized trial after kidney transplantation.

    PubMed

    Yates, Christopher J; Fourlanos, Spiros; Colman, Peter G; Cohney, Solomon J

    2014-03-01

    Prednisolone is a major risk factor for hyperglycaemia and new-onset diabetes after transplantation. Uncontrolled observational data suggest that divided dosing may reduce requirements for hypoglycaemic agents. This study aims to compare the glycaemic effects of divided twice daily (BD) and once daily (QD) prednisolone. Twenty-two kidney transplant recipients without diabetes were randomized to BD or QD prednisolone. Three weeks post-transplant, a continuous glucose monitor (iPro2(®) Medtronic) was applied for 5 days, with subjects continuing their initial prednisolone regimen (Days 1-2) before crossover to the alternative regimen. Mean glucose, peak glucose, nadir glucose, exposure to hyperglycaemia (glucose ≥7.8 mmol/L) and glycaemic variability were assessed. The mean ± standard deviation (SD) age of subjects was 50 ± 10 years and 77% were male. Median (interquartile range) daily prednisolone dose was 25 (20, 25) mg. BD prednisolone was associated with decreased mean glucose (mean 7.9 ± 1.7 versus 8.1 ± 2.3 mmol/L, P < 0.001), peak glucose [median 10.4 (9.5, 11.4) versus 11.4 (10.3, 13.4) mmol/L, P < 0.001] and exposure to hyperglycaemia [median 25.5 (14.6, 30.3) versus 40.4 (33.2, 51.2) mmol/L/h, P = 0.003]. Median glucose peaked between 14:55-15:05 h with BD and 15:25-15:30 h with QD. Median glycaemic variability scores were decreased with BD: SD (1.1 versus 1.9, P < 0.001), mean amplitude of glycaemic excursion (1.5 versus 2.2, P = 0.001), continuous overlapping net glycaemic action-1 (CONGA-1; 1.0 versus 1.2, P = 0.039), CONGA-2 (1.2 versus 1.4, P = 0.008) and J-index (25 versus 31, P = 0.003). Split prednisolone dosing reduces glycaemic variability and hyperglycaemia early post-kidney transplant.

  15. Investigation of spectral analysis techniques for randomly sampled velocimetry data

    NASA Technical Reports Server (NTRS)

    Sree, Dave

    1993-01-01

    It is well known that laser velocimetry (LV) generates individual realization velocity data that are randomly or unevenly sampled in time. Spectral analysis of such data to obtain the turbulence spectra, and hence turbulence scale information, requires special techniques. The 'slotting' technique of Mayo et al., also described by Roberts and Ajmani, and the 'direct transform' method of Gaster and Roberts are well known in the LV community. The slotting technique is computationally faster than the direct transform method. There are practical limitations, however, as to how high in frequency an accurate estimate can be made for a given mean sampling rate. These high frequency estimates are important in obtaining the microscale information of the turbulence structure. It was found from previous studies that reliable spectral estimates can be made up to about the mean sampling frequency (mean data rate) or less. If the data were evenly sampled, the frequency range would be half the sampling frequency (i.e. up to the Nyquist frequency); otherwise, an aliasing problem would occur. The mean data rate and the sample size (total number of points) basically limit the frequency range. Also, there are large variabilities or errors associated with the high frequency estimates from randomly sampled signals. Roberts and Ajmani proposed certain pre-filtering techniques to reduce these variabilities, but at the cost of the low frequency estimates. The prefiltering acts as a high-pass filter. Further, Shapiro and Silverman showed theoretically that, for Poisson sampled signals, it is possible to obtain alias-free spectral estimates far beyond the mean sampling frequency. But the question is, how far? During his tenure under the 1993 NASA-ASEE Summer Faculty Fellowship Program, the author found from his studies of spectral analysis techniques for randomly sampled signals that the spectral estimates can be enhanced or improved up to about 4-5 times the mean sampling frequency by using a suitable prefiltering technique. This increased bandwidth, however, comes at the cost of the lower frequency estimates. The studies further showed that large data sets of the order of 100,000 points or more, high data rates, and Poisson sampling are very crucial for obtaining reliable spectral estimates from randomly sampled data, such as LV data. Some of the results of the current study are presented.
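
    A minimal sketch of the slotting idea mentioned above, applied to a synthetic randomly sampled signal (the signal, slot width, and record length are assumptions for illustration): products of sample pairs are accumulated into lag slots and averaged to estimate the autocorrelation, from which a spectrum could then be computed.

        import numpy as np

        rng = np.random.default_rng(7)
        T, rate = 10.0, 200.0                      # record length [s], mean data rate [Hz]
        t = np.sort(rng.uniform(0.0, T, int(rate * T)))
        u = np.sin(2 * np.pi * 5.0 * t) + 0.3 * rng.normal(size=t.size)  # velocity samples
        u -= u.mean()

        dtau, n_slots = 1e-3, 400                  # slot width [s] and number of lag slots
        acf = np.zeros(n_slots)
        cnt = np.zeros(n_slots)
        for i in range(t.size):
            lags = t[i:] - t[i]
            k = (lags / dtau + 0.5).astype(int)    # slot index for each pair lag
            keep = k < n_slots
            np.add.at(acf, k[keep], u[i] * u[i:][keep])
            np.add.at(cnt, k[keep], 1)

        R = np.divide(acf, cnt, out=np.zeros_like(acf), where=cnt > 0)  # slotted autocorrelation
        print("R(0) ~ variance:", round(R[0], 3), "sample variance:", round(u.var(), 3))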

  16. Process Improvement at the Aircraft Intermediate Maintenance Detachment (AIMD) at Naval Air Station Whidbey Island

    DTIC Science & Technology

    2006-12-01

    the goal of achieving zero waste is impractical. Thus, the concept of Lean has to be slightly modified to adjust for the uncertainty and variability...personnel are qualified as Black or Green belts, this may become an issue for them down the road. 2. Criticism Two The goal of Lean is to achieve “ Zero ... Waste ,” therefore, how can the military achieve Lean in such a vast area of uncertainty and variability? Under the environment that DoD operates in

  17. Quasi-cylindrical theory of wing-body interference at supersonic speeds and comparison with experiment

    NASA Technical Reports Server (NTRS)

    Nielsen, Jack N

    1955-01-01

    A theoretical method is presented for calculating the flow field about wing-body combinations employing bodies deviating only slightly in shape from a circular cylinder. The method is applied to the calculation of the pressure field acting between a circular cylindrical body and a rectangular wing. The case of zero body angle of attack and variable wing incidence is considered as well as the case of zero wing incidence and variable body angle of attack. An experiment was performed especially for the purpose of checking the calculative examples.

  18. Examining solutions to missing data in longitudinal nursing research.

    PubMed

    Roberts, Mary B; Sullivan, Mary C; Winchester, Suzy B

    2017-04-01

    Longitudinal studies are highly valuable in pediatrics because they provide useful data about developmental patterns of child health and behavior over time. When data are missing, the value of the research is impacted. The study's purpose was to (1) introduce a three-step approach to assess and address missing data and (2) illustrate this approach using categorical and continuous-level variables from a longitudinal study of premature infants. A three-step approach with simulations was followed to assess the amount and pattern of missing data and to determine the most appropriate imputation method for the missing data. Patterns of missingness were Missing Completely at Random, Missing at Random, and Not Missing at Random. Missing continuous-level data were imputed using mean replacement, stochastic regression, multiple imputation, and fully conditional specification (FCS). Missing categorical-level data were imputed using last value carried forward, hot-decking, stochastic regression, and FCS. Simulations were used to evaluate these imputation methods under different patterns of missingness at different levels of missing data. The rate of missingness was 16-23% for continuous variables and 1-28% for categorical variables. FCS imputation provided the least difference in mean and standard deviation estimates for continuous measures. FCS imputation was acceptable for categorical measures. Results obtained through simulation reinforced and confirmed these findings. Significant investments are made in the collection of longitudinal data. The prudent handling of missing data can protect these investments and potentially improve the scientific information contained in pediatric longitudinal studies. © 2017 Wiley Periodicals, Inc.
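
    As an illustration of the fully conditional specification idea favoured above, the sketch below uses scikit-learn's IterativeImputer, which provides a chained-equations style imputation; the synthetic data and missingness rate are assumptions, and this is not the authors' exact simulation procedure.

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer

        # Hypothetical continuous-level data with ~20% values missing completely at random
        rng = np.random.default_rng(42)
        X = rng.multivariate_normal([0, 0, 0], [[1, .6, .3], [.6, 1, .5], [.3, .5, 1]], 500)
        mask = rng.random(X.shape) < 0.2
        X_miss = np.where(mask, np.nan, X)

        X_imp = IterativeImputer(random_state=0).fit_transform(X_miss)
        print("column means (true vs imputed):", X.mean(0).round(2), X_imp.mean(0).round(2))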

  19. Variability in Spatially and Temporally Resolved Emissions and Hydrocarbon Source Fingerprints for Oil and Gas Sources in Shale Gas Production Regions.

    PubMed

    Allen, David T; Cardoso-Saldaña, Felipe J; Kimura, Yosuke

    2017-10-17

    A gridded inventory for emissions of methane, ethane, propane, and butanes from oil and gas sources in the Barnett Shale production region has been developed. This inventory extends previous spatially resolved inventories of emissions by characterizing the overall variability in emission magnitudes and the composition of emissions at an hourly time resolution. The inventory is divided into continuous and intermittent emission sources. Sources are defined as continuous if hourly averaged emissions are greater than zero in every hour; otherwise, they are classified as intermittent. In the Barnett Shale, intermittent sources accounted for 14-30% of the mean emissions for methane and 10-34% for ethane, leading to spatial and temporal variability in the location of hourly emissions. The combined variability due to intermittent sources and variability in emission factors can lead to wide confidence intervals in the magnitude and composition of time and location-specific emission inventories; therefore, including temporal and spatial variability in emission inventories is important when reconciling inventories and observations. Comparisons of individual aircraft measurement flights conducted in the Barnett Shale region versus the estimated emission rates for each flight from the emission inventory indicate agreement within the expected variability of the emission inventory for all flights for methane and for all but one flight for ethane.

  20. Effect of chiral symmetry on chaotic scattering from Majorana zero modes.

    PubMed

    Schomerus, H; Marciani, M; Beenakker, C W J

    2015-04-24

    In many of the experimental systems that may host Majorana zero modes, a so-called chiral symmetry exists that protects overlapping zero modes from splitting up. This symmetry is operative in a superconducting nanowire that is narrower than the spin-orbit scattering length, and at the Dirac point of a superconductor-topological insulator heterostructure. Here we show that chiral symmetry strongly modifies the dynamical and spectral properties of a chaotic scatterer, even if it binds only a single zero mode. These properties are quantified by the Wigner-Smith time-delay matrix Q=-iℏS^{†}dS/dE, the Hermitian energy derivative of the scattering matrix, related to the density of states by ρ=(2πℏ)^{-1}TrQ. We compute the probability distribution of Q and ρ, dependent on the number ν of Majorana zero modes, in the chiral ensembles of random-matrix theory. Chiral symmetry is essential for a significant ν dependence.
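
    A toy numerical check of the definition Q = -iℏS†dS/dE quoted above, for a scalar Breit-Wigner S-matrix with illustrative parameters (unrelated to the chiral random-matrix ensembles of the paper):

        import numpy as np

        # Single Breit-Wigner resonance, hbar = 1
        E0, Gamma, hbar = 0.0, 0.2, 1.0
        S = lambda E: (E - E0 - 1j * Gamma / 2) / (E - E0 + 1j * Gamma / 2)

        E, dE = 0.05, 1e-6
        dSdE = (S(E + dE) - S(E - dE)) / (2 * dE)          # finite-difference derivative
        Q_num = (-1j * hbar * np.conj(S(E)) * dSdE).real
        Q_exact = hbar * Gamma / ((E - E0) ** 2 + Gamma ** 2 / 4)
        print(round(Q_num, 4), round(Q_exact, 4))          # density of states rho = Q/(2*pi*hbar)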

  1. Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.

    PubMed

    Hougaard, P; Lee, M L; Whitmore, G A

    1997-12-01

    Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
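
    A quick sketch of the gamma-mixed Poisson construction described above (the inverse Gaussian mixture considered in the paper would be handled analogously); all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(5)
        n, shape, scale = 100_000, 2.0, 1.5          # gamma mean = shape * scale = 3.0

        lam = rng.gamma(shape, scale, n)             # random rate per subject
        y = rng.poisson(lam)                         # gamma-Poisson = negative binomial

        print("mean:", round(y.mean(), 2), "variance:", round(y.var(), 2))
        # A plain Poisson would give variance ~ mean; here variance ~ mean + mean**2/shape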

  2. Quantum random bit generation using energy fluctuations in stimulated Raman scattering.

    PubMed

    Bustard, Philip J; England, Duncan G; Nunn, Josh; Moffatt, Doug; Spanner, Michael; Lausten, Rune; Sussman, Benjamin J

    2013-12-02

    Random number sequences are a critical resource in modern information processing systems, with applications in cryptography, numerical simulation, and data sampling. We introduce a quantum random number generator based on the measurement of pulse energy quantum fluctuations in Stokes light generated by spontaneously-initiated stimulated Raman scattering. Bright Stokes pulse energy fluctuations up to five times the mean energy are measured with fast photodiodes and converted to unbiased random binary strings. Since the pulse energy is a continuous variable, multiple bits can be extracted from a single measurement. Our approach can be generalized to a wide range of Raman active materials; here we demonstrate a prototype using the optical phonon line in bulk diamond.
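
    A rough sketch of turning continuous pulse-energy samples into approximately unbiased bits; the exponential energy distribution below is a stand-in for the measured Stokes statistics, and the threshold/debiasing steps are generic choices, not necessarily the authors' extraction procedure.

        import numpy as np

        rng = np.random.default_rng(11)
        energies = rng.exponential(scale=1.0, size=10_000)   # stand-in energy fluctuations

        bits = (energies > np.median(energies)).astype(int)  # median threshold -> ~50/50
        # Von Neumann debiasing on non-overlapping pairs removes residual bias
        pairs = bits[: len(bits) // 2 * 2].reshape(-1, 2)
        unbiased = pairs[pairs[:, 0] != pairs[:, 1], 0]
        print("fraction of ones:", round(unbiased.mean(), 3))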

  3. A meta-analytic review of the relationship between family accommodation and OCD symptom severity.

    PubMed

    Strauss, Clara; Hale, Lucy; Stobie, Blake

    2015-06-01

    Accommodation of obsessive compulsive disorder (OCD) symptoms by family members is common. This paper presents a systematic meta-analytic review on family accommodation and OCD symptom severity. Fourteen studies investigating the relationship between family accommodation and OCD symptoms were selected. The medium effect size of the relationship between family accommodation and OCD symptom severity was significant (r = .35; 95% CI: .23 to .47), based on a Hunter-Schmidt random effects model with a total of 849 participants. Although there was some evidence of publication bias, Rosenthal's fail-safe N suggested that 596 studies with zero effect would be needed to reduce the mean effect size to non-significant. Findings are discussed in the context of the limitations of the studies, and in particular the reliance on cross-sectional designs which impede causal conclusions. Future research to evaluate a family accommodation intervention in a randomized controlled design and using mediation analysis to explore change mechanisms is called for. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. EM Adaptive LASSO—A Multilocus Modeling Strategy for Detecting SNPs Associated with Zero-inflated Count Phenotypes

    PubMed Central

    Mallick, Himel; Tiwari, Hemant K.

    2016-01-01

    Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain an enormous number of zeroes due to the presence of excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely encountered in practice. PMID:27066062

  5. EM Adaptive LASSO-A Multilocus Modeling Strategy for Detecting SNPs Associated with Zero-inflated Count Phenotypes.

    PubMed

    Mallick, Himel; Tiwari, Hemant K

    2016-01-01

    Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain an enormous number of zeroes due to the presence of excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely encountered in practice.

  6. A large-scale study of the random variability of a coding sequence: a study on the CFTR gene.

    PubMed

    Modiano, Guido; Bombieri, Cristina; Ciminelli, Bianca Maria; Belpinati, Francesca; Giorgi, Silvia; Georges, Marie des; Scotet, Virginie; Pompei, Fiorenza; Ciccacci, Cinzia; Guittard, Caroline; Audrézet, Marie Pierre; Begnini, Angela; Toepfer, Michael; Macek, Milan; Ferec, Claude; Claustres, Mireille; Pignatti, Pier Franco

    2005-02-01

    Coding single nucleotide substitutions (cSNSs) have been studied on hundreds of genes using small samples (n_g ≈ 100-150 genes). In the present investigation, a large random European population sample (average n_g ≈ 1500) was studied for a single gene, the CFTR (Cystic Fibrosis Transmembrane conductance Regulator). The nonsynonymous (NS) substitutions exhibited, in accordance with previous reports, a mean probability of being polymorphic (q > 0.005), much lower than that of the synonymous (S) substitutions, but they showed a similar rate of subpolymorphic (q < 0.005) variability. This indicates that, in autosomal genes that may have harmful recessive alleles (nonduplicated genes with important functions), genetic drift overwhelms selection in the subpolymorphic range of variability, making disadvantageous alleles behave as neutral. These results imply that the majority of the subpolymorphic nonsynonymous alleles of these genes are selectively negative or even pathogenic.

  7. Identification of Genes Involved in Breast Cancer Metastasis by Integrating Protein-Protein Interaction Information with Expression Data.

    PubMed

    Tian, Xin; Xin, Mingyuan; Luo, Jian; Liu, Mingyao; Jiang, Zhenran

    2017-02-01

    The selection of relevant genes for breast cancer metastasis is critical for the treatment and prognosis of cancer patients. Although much effort has been devoted to gene selection procedures by use of different statistical analysis methods or computational techniques, the interpretation of the variables in the resulting survival models has been limited so far. This article proposes a new Random Forest (RF)-based algorithm to identify important variables highly related to breast cancer metastasis, which is based on the importance scores of two variable selection algorithms: the mean decrease Gini (MDG) criterion of Random Forest and the GeneRank algorithm with protein-protein interaction (PPI) information. The new gene selection algorithm is called PPIRF. The improved prediction accuracy fully illustrated the reliability and high interpretability of the gene list selected by the PPIRF approach.
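
    As a rough illustration of combining a mean-decrease-Gini ranking with a network-derived score, the sketch below uses scikit-learn's Random Forest impurity importances together with a placeholder vector standing in for GeneRank output; the synthetic data, the GeneRank stand-in, and the equal weighting are assumptions, not the PPIRF algorithm itself.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy expression matrix: 200 samples x 50 "genes", binary metastasis label.
X, y = make_classification(n_samples=200, n_features=50,
                           n_informative=8, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X, y)
mdg = rf.feature_importances_            # mean decrease in Gini impurity

# Stand-in for GeneRank scores derived from a PPI network (hypothetical here):
# in the paper these come from propagating expression evidence over the PPI graph.
rng = np.random.default_rng(1)
generank = rng.random(50)

# Combine the two rankings; the 0.5/0.5 weighting is purely illustrative.
combined = 0.5 * mdg / mdg.sum() + 0.5 * generank / generank.sum()
top_genes = np.argsort(combined)[::-1][:10]
print("top candidate gene indices:", top_genes)
```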

  8. Optimizing a Sensor Network with Data from Hazard Mapping Demonstrated in a Heavy-Vehicle Manufacturing Facility.

    PubMed

    Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A

    2018-05-28

    To design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations to capture temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than RMC with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with random-removal order (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using an optimal-removal were not statistically different than random-removal when averaged over the entire facility. No statistical difference was observed for optimal- and random-removal methods for RMCs that were less variable in time and space than PNCs. Optimized removal performed better than random-removal in preserving high temporal variability and accuracy of hazard map for PNC, but not for the more spatially homogeneous RMC. These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.
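
    Kriging is closely related to Gaussian-process regression, so the idea of an optimal removal order can be sketched with scikit-learn: repeatedly drop the sensor whose removal degrades map-wide prediction precision the least. The synthetic coordinates, concentrations, kernel, and the purely variance-based greedy criterion below are assumptions; the study's own criterion also accounts for temporal variability and uses repeated mapping events.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
locs = rng.uniform(0, 100, size=(80, 2))       # 80 candidate sensor locations (metres)
conc = np.sin(locs[:, 0] / 15.0) + 0.1 * rng.normal(size=80)   # synthetic concentration surrogate
grid = rng.uniform(0, 100, size=(400, 2))      # points where the hazard map is predicted

def mean_prediction_std(keep_idx):
    """Average prediction uncertainty of the map when only keep_idx sensors remain."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0) + WhiteKernel(0.01),
                                  optimizer=None, normalize_y=True)
    gp.fit(locs[keep_idx], conc[keep_idx])
    _, std = gp.predict(grid, return_std=True)
    return std.mean()

# Greedy removal order: repeatedly drop the sensor whose removal
# increases map-wide prediction uncertainty the least.
keep = list(range(len(locs)))
removal_order = []
while len(keep) > 60:
    scores = [mean_prediction_std([j for j in keep if j != i]) for i in keep]
    drop = keep[int(np.argmin(scores))]
    removal_order.append(drop)
    keep.remove(drop)
print("first sensors to remove:", removal_order)
```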

  9. An Integrated Method to Analyze Farm Vulnerability to Climatic and Economic Variability According to Farm Configurations and Farmers' Adaptations.

    PubMed

    Martin, Guillaume; Magne, Marie-Angélina; Cristobal, Magali San

    2017-01-01

    The need to adapt to decrease farm vulnerability to adverse contextual events has been extensively discussed on a theoretical basis. We developed an integrated and operational method to assess farm vulnerability to multiple and interacting contextual changes and explain how this vulnerability can best be reduced according to farm configurations and farmers' technical adaptations over time. Our method considers farm vulnerability as a function of the raw measurements of vulnerability variables (e.g., economic efficiency of production), the slope of the linear regression of these measurements over time, and the residuals of this linear regression. The last two are extracted from linear mixed models considering a random regression coefficient (an intercept common to all farms), a global trend (a slope common to all farms), a random deviation from the general mean for each farm, and a random deviation from the general trend for each farm. Among all possible combinations, the lowest farm vulnerability is obtained through a combination of high values of measurements, a stable or increasing trend and low variability for all vulnerability variables considered. Our method enables relating the measurements, trends and residuals of vulnerability variables to explanatory variables that illustrate farm exposure to climatic and economic variability, initial farm configurations and farmers' technical adaptations over time. We applied our method to 19 cattle (beef, dairy, and mixed) farms over the period 2008-2013. Selected vulnerability variables, i.e., farm productivity and economic efficiency, varied greatly among cattle farms and across years, with means ranging from 43.0 to 270.0 kg protein/ha and 29.4-66.0% efficiency, respectively. No farm had a high level, stable or increasing trend and low residuals for both farm productivity and economic efficiency of production. Thus, the least vulnerable farms represented a compromise among measurement value, trend, and variability of both performances. No specific combination of farmers' practices emerged for reducing cattle farm vulnerability to climatic and economic variability. In the least vulnerable farms, the practices implemented (stocking rate, input use…) were more consistent with the objective of developing the properties targeted (efficiency, robustness…). Our method can be used to support farmers with sector-specific and local insights about most promising farm adaptations.
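
    The random-regression structure described above (a common intercept and trend plus per-farm random deviations from both) can be fitted with statsmodels' MixedLM, as in the hedged sketch below; the column names and the simulated numbers are hypothetical, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per farm and year.
rng = np.random.default_rng(0)
farms, years = 19, np.arange(2008, 2014)
df = pd.DataFrame([(f, y) for f in range(farms) for y in years],
                  columns=["farm", "year"])
df["t"] = df["year"] - 2008
df["efficiency"] = (45 + rng.normal(0, 8, farms)[df["farm"]]                  # farm-level deviation
                    + (0.5 + rng.normal(0, 0.4, farms)[df["farm"]]) * df["t"]  # farm-specific trend
                    + rng.normal(0, 2, len(df)))                               # residual variability

# Random intercept and random slope on time, grouped by farm.
model = smf.mixedlm("efficiency ~ t", df, groups=df["farm"], re_formula="~t")
fit = model.fit()
print(fit.summary())
# fit.random_effects gives each farm's deviation from the common intercept and trend;
# fit.resid gives the residuals used as the variability component of vulnerability.
```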

  10. An Integrated Method to Analyze Farm Vulnerability to Climatic and Economic Variability According to Farm Configurations and Farmers’ Adaptations

    PubMed Central

    Martin, Guillaume; Magne, Marie-Angélina; Cristobal, Magali San

    2017-01-01

    The need to adapt to decrease farm vulnerability to adverse contextual events has been extensively discussed on a theoretical basis. We developed an integrated and operational method to assess farm vulnerability to multiple and interacting contextual changes and explain how this vulnerability can best be reduced according to farm configurations and farmers’ technical adaptations over time. Our method considers farm vulnerability as a function of the raw measurements of vulnerability variables (e.g., economic efficiency of production), the slope of the linear regression of these measurements over time, and the residuals of this linear regression. The last two are extracted from linear mixed models considering a random regression coefficient (an intercept common to all farms), a global trend (a slope common to all farms), a random deviation from the general mean for each farm, and a random deviation from the general trend for each farm. Among all possible combinations, the lowest farm vulnerability is obtained through a combination of high values of measurements, a stable or increasing trend and low variability for all vulnerability variables considered. Our method enables relating the measurements, trends and residuals of vulnerability variables to explanatory variables that illustrate farm exposure to climatic and economic variability, initial farm configurations and farmers’ technical adaptations over time. We applied our method to 19 cattle (beef, dairy, and mixed) farms over the period 2008–2013. Selected vulnerability variables, i.e., farm productivity and economic efficiency, varied greatly among cattle farms and across years, with means ranging from 43.0 to 270.0 kg protein/ha and 29.4–66.0% efficiency, respectively. No farm had a high level, stable or increasing trend and low residuals for both farm productivity and economic efficiency of production. Thus, the least vulnerable farms represented a compromise among measurement value, trend, and variability of both performances. No specific combination of farmers’ practices emerged for reducing cattle farm vulnerability to climatic and economic variability. In the least vulnerable farms, the practices implemented (stocking rate, input use…) were more consistent with the objective of developing the properties targeted (efficiency, robustness…). Our method can be used to support farmers with sector-specific and local insights about most promising farm adaptations. PMID:28900435

  11. Randomized clinical trial in healthy individuals on the effect of viscous fiber blend on glucose tolerance when incorporated in capsules or into the carbohydrate or fat component of the meal.

    PubMed

    Jenkins, Alexandra L; Morgan, Linda M; Bishop, Jacqueline; Jovanovski, Elena; Vuksan, Vladimir

    2014-01-01

    Addition of viscous fiber to foods has been shown to significantly reduce postprandial glucose excursions. However, palatability issues and the variability in effectiveness due to different methods of administration in food limit its use. This study explores the effectiveness of a viscous fiber blend (VFB) in lowering postprandial glycemia using different methods of incorporation. Two acute, randomized, controlled studies were undertaken: Study 1: Twelve healthy individuals (mean ± SD, age: 36 ± 13 years, body mass index [BMI]: 27 ± 4 kg/m²) consumed 8 different breakfasts. All meals consisted of 50 g of available carbohydrate from white bread (WB) and 10 g margarine. Zero, 1, 2, or 4 g of the VFB was baked into WB or mixed with the margarine. Study 2: Thirteen healthy individuals (mean ± SD, age: 39 ± 17 years, BMI: 25 ± 5 kg/m²) consumed 6 test meals, consisting of 50 g of available carbohydrate from WB. Six capsules containing either cornstarch or VFB were taken at 4 different time points during the glucose tolerance test. After obtaining a fasting finger-prick blood sample, volunteers consumed the test meal over a 10-minute period. Additional blood samples were taken at 15, 30, 45, 60, 90, and 120 minutes from the start of the meal. For study 2, an additional fasting sample was obtained at -30 minutes. Study 1: Irrespective of VFB dose, glucose levels were lower at 30 and 45 minutes when VFB was mixed into the margarine compared to the control (p < 0.05). Incremental areas under the curve were significantly lower compared to control when 4 g of VFB was mixed into the margarine. Study 2: There was no effect of the VFB on postprandial glucose levels when administered in capsules. Incorporation of VFB into margarine was more effective in lowering postprandial glycemia than when the VFB was baked into bread, and there was no effect when it was given in capsules.

  12. Controlled assessment of the efficacy of occlusal stabilization splints on sleep bruxism.

    PubMed

    van der Zaag, Jacques; Lobbezoo, Frank; Wicks, Darrel J; Visscher, Corine M; Hamburger, Hans L; Naeije, Machiel

    2005-01-01

    To assess the efficacy of occlusal stabilization splints in the management of sleep bruxism (SB) in a double-blind, parallel, controlled, randomized clinical trial. Twenty-one participants were randomly assigned to an occlusal splint group (n = 11; mean age = 34.2 +/- 13.1 years) or a palatal splint (ie, an acrylic palatal coverage) group (n = 10; mean age = 34.9 +/- 11.2 years). Two polysomnographic recordings that included bilateral masseter electromyographic activity were made: one prior to treatment, the other after a treatment period of 4 weeks. The number of bruxism episodes per hour of sleep (Epi/h), the number of bursts per hour (Bur/h), and the bruxism time index (ie, the percentage of total sleep time spent bruxing) were established as outcome variables at a 10% maximum voluntary contraction threshold level. A general linear model was used to test both the effects between splint groups and within the treatment phase as well as their interaction for each outcome variable. Neither occlusal stabilization splints nor palatal splints had an influence on the SB outcome variables or on the sleep variables measured on a group level. In individual cases, variable outcomes were found: Some patients had an increase (33% to 48% of the cases), while others showed no change (33% to 48%) or a decrease (19% to 29%) in SB outcome variables. The absence of significant group effects of splints in the management of SB indicates that caution is required when splints are indicated, apart from their role in the protection against dental wear. The application of splints should therefore be considered at the individual patient level.

  13. A statistical approach for evaluating the effectiveness of heartworm preventive drugs: what does 100% efficacy really mean?

    PubMed

    Vidyashankar, Anand N; Jimenez Castro, Pablo D; Kaplan, Ray M

    2017-11-09

    Initial studies of heartworm preventive drugs all yielded an observed efficacy of 100% with a single dose, and based on these data the US Food and Drug Administration (FDA) required all products to meet this standard for approval. Those initial studies, however, were based on just a few strains of parasites, and therefore were not representative of the full assortment of circulating biotypes. This issue has come to light in recent years, where it has become common for studies to yield less than 100% efficacy. This has changed the landscape for the testing of new products because heartworm efficacy studies lack the statistical power to conclude that finding zero worms is different from finding a few worms. To address this issue, we developed a novel statistical model, based on a hierarchical modeling and parametric bootstrap approach that provides new insights to assess multiple sources of variability encountered in heartworm drug efficacy studies. Using the newly established metrics we performed both data simulations and analyzed actual experimental data. Our results suggest that an important source of modeling variability arises from variability in the parasite establishment rate between dogs; not accounting for this can overestimate the efficacy in more than 40% of cases. We provide strong evidence that ZoeMo-2012 and JYD-34, which both were established from the same source dog, have differing levels of susceptibility to moxidectin. In addition, we provide strong evidence that the differences in efficacy seen in two published studies using the MP3 strain were not due to randomness, and thus must be biological in nature. Our results demonstrate how statistical modeling can improve the interpretation of data from heartworm efficacy studies by providing a means to identify the true efficacy range based on the observed data. Importantly, these new insights should help to inform regulators on how to move forward in establishing new statistically and scientifically valid requirements for efficacy in the registration of new heartworm preventative products. Furthermore, our results provide strong evidence that heartworm 'strains' can change their susceptibility phenotype over short periods of time, providing further evidence that a wide diversity of susceptibility phenotypes exists among naturally circulating biotypes of D. immitis.
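
    A stripped-down version of the hierarchical-plus-parametric-bootstrap idea is sketched below: dog-to-dog variability in the establishment rate is drawn from a Beta distribution, worm counts are Binomial, and the observed efficacy is bootstrapped under the model. All distributions and parameter values are assumptions chosen for illustration, not those estimated in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_study(n_treated=8, n_control=8, larvae=50,
                   estab_a=6.0, estab_b=4.0, true_efficacy=0.97):
    """One simulated heartworm trial with dog-level establishment variability."""
    p_control = rng.beta(estab_a, estab_b, n_control)            # establishment rate per control dog
    p_treated = rng.beta(estab_a, estab_b, n_treated) * (1 - true_efficacy)
    worms_c = rng.binomial(larvae, p_control)
    worms_t = rng.binomial(larvae, p_treated)
    return 1.0 - worms_t.mean() / worms_c.mean()                 # observed efficacy

# Parametric bootstrap: distribution of the observed efficacy under the assumed model.
boot = np.array([simulate_study() for _ in range(5000)])
print("mean observed efficacy: %.4f" % boot.mean())
print("95%% interval: (%.4f, %.4f)" % tuple(np.percentile(boot, [2.5, 97.5])))
print("share of simulated trials with zero worms in all treated dogs: %.4f"
      % np.mean(boot == 1.0))
```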

  14. Stochastic analysis of uncertain thermal parameters for random thermal regime of frozen soil around a single freezing pipe

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Zhou, Guoqing; Wang, Jianzhou; Zhou, Lei

    2018-03-01

    The artificial ground freezing method (AGF) is widely used in civil and mining engineering, and the thermal regime of frozen soil around the freezing pipe affects the safety of design and construction. The thermal parameters can be truly random due to heterogeneity of the soil properties, which lead to the randomness of thermal regime of frozen soil around the freezing pipe. The purpose of this paper is to study the one-dimensional (1D) random thermal regime problem on the basis of a stochastic analysis model and the Monte Carlo (MC) method. Considering the uncertain thermal parameters of frozen soil as random variables, stochastic processes and random fields, the corresponding stochastic thermal regime of frozen soil around a single freezing pipe are obtained and analyzed. Taking the variability of each stochastic parameter into account individually, the influences of each stochastic thermal parameter on stochastic thermal regime are investigated. The results show that the mean temperatures of frozen soil around the single freezing pipe with three analogy method are the same while the standard deviations are different. The distributions of standard deviation have a great difference at different radial coordinate location and the larger standard deviations are mainly at the phase change area. The computed data with random variable method and stochastic process method have a great difference from the measured data while the computed data with random field method well agree with the measured data. Each uncertain thermal parameter has a different effect on the standard deviation of frozen soil temperature around the single freezing pipe. These results can provide a theoretical basis for the design and construction of AGF.

  15. A statistical methodology for estimating transport parameters: Theory and applications to one-dimensional advective-dispersive systems

    USGS Publications Warehouse

    Wagner, Brian J.; Gorelick, Steven M.

    1986-01-01

    A simulation nonlinear multiple-regression methodology for estimating parameters that characterize the transport of contaminants is developed and demonstrated. Finite difference contaminant transport simulation is combined with a nonlinear weighted least squares multiple-regression procedure. The technique provides optimal parameter estimates and gives statistics for assessing the reliability of these estimates under certain general assumptions about the distributions of the random measurement errors. Monte Carlo analysis is used to estimate parameter reliability for a hypothetical homogeneous soil column for which concentration data contain large random measurement errors. The value of data collected spatially versus data collected temporally was investigated for estimation of velocity, dispersion coefficient, effective porosity, first-order decay rate, and zero-order production. The use of spatial data gave estimates that were 2–3 times more reliable than estimates based on temporal data for all parameters except velocity. Comparison of estimated linear and nonlinear confidence intervals based upon Monte Carlo analysis showed that the linear approximation is poor for dispersion coefficient and zero-order production coefficient when data are collected over time. In addition, examples demonstrate transport parameter estimation for two real one-dimensional systems. First, the longitudinal dispersivity and effective porosity of an unsaturated soil are estimated using laboratory column data. We compare the reliability of estimates based upon data from individual laboratory experiments versus estimates based upon pooled data from several experiments. Second, the simulation nonlinear regression procedure is extended to include an additional governing equation that describes delayed storage during contaminant transport. The model is applied to analyze the trends, variability, and interrelationship of parameters in a mountain stream in northern California.
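
    The nonlinear least-squares step can be illustrated with SciPy on a standard one-dimensional advection-dispersion breakthrough approximation (the first Ogata-Banks term) in place of the paper's finite-difference simulation; the synthetic data, noise level, and parameter values below are assumptions. The covariance returned by curve_fit is the basis of the linearized confidence intervals discussed above.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

x_obs = 0.5                                  # observation point along the column (m)

def breakthrough(t, v, D):
    """First-term Ogata-Banks approximation for a continuous step input, C/C0."""
    return 0.5 * erfc((x_obs - v * t) / (2.0 * np.sqrt(D * t)))

# Synthetic temporal data with large random measurement errors, as in the Monte Carlo study.
rng = np.random.default_rng(3)
t = np.linspace(0.05, 5.0, 40)               # days
true_v, true_D = 0.4, 0.02                   # m/day, m^2/day
c_obs = breakthrough(t, true_v, true_D) + rng.normal(0, 0.05, t.size)

popt, pcov = curve_fit(breakthrough, t, c_obs, p0=[0.3, 0.05],
                       bounds=([1e-3, 1e-4], [5.0, 1.0]))
perr = np.sqrt(np.diag(pcov))                # linearized standard errors
print("v = %.3f +/- %.3f, D = %.4f +/- %.4f" % (popt[0], perr[0], popt[1], perr[1]))
```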

  16. Applying the zero-inflated Poisson model with random effects to detect abnormal rises in school absenteeism indicating infectious diseases outbreak.

    PubMed

    Song, X X; Zhao, Q; Tao, T; Zhou, C M; Diwan, V K; Xu, B

    2018-05-30

    Records of absenteeism from primary schools are valuable data for infectious diseases surveillance. However, the analysis of the absenteeism is complicated by the data features of clustering at zero, non-independence and overdispersion. This study aimed to generate an appropriate model to handle the absenteeism data collected in a European Commission granted project for infectious disease surveillance in rural China and to evaluate the validity and timeliness of the resulting model for early warnings of infectious disease outbreak. Four steps were taken: (1) building a 'well-fitting' model by the zero-inflated Poisson model with random effects (ZIP-RE) using the absenteeism data from the first implementation year; (2) applying the resulting model to predict the 'expected' number of absenteeism events in the second implementation year; (3) computing the differences between the observations and the expected values (O-E values) to generate an alternative series of data; (4) evaluating the early warning validity and timeliness of the observational data and model-based O-E values via the EARS-3C algorithms with regard to the detection of real cluster events. The results indicate that ZIP-RE and its corresponding O-E values could improve the detection of aberrations, reduce the false-positive signals and are applicable to the zero-inflated data.
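
    The downstream step of the approach, forming observed-minus-expected (O-E) values and screening them for aberrations, can be sketched as follows. The moving-baseline rule used here is a simplified stand-in in the spirit of the EARS C-family detectors, not their exact implementation, and the expected counts would in practice come from the fitted ZIP-RE model rather than a constant.

```python
import numpy as np

def oe_alerts(observed, expected, baseline=7, threshold=3.0):
    """Flag days whose O-E value exceeds mean + threshold*sd of the previous `baseline` days."""
    oe = np.asarray(observed, float) - np.asarray(expected, float)
    alerts = np.zeros(len(oe), dtype=bool)
    for t in range(baseline, len(oe)):
        window = oe[t - baseline:t]
        sd = window.std(ddof=1)
        if sd > 0 and oe[t] > window.mean() + threshold * sd:
            alerts[t] = True
    return alerts

rng = np.random.default_rng(0)
expected = np.full(60, 1.2)                     # model-based expected absences per day
observed = rng.poisson(expected)
observed[45:48] += rng.poisson(6, 3)            # injected outbreak-like cluster
print("alert days:", np.flatnonzero(oe_alerts(observed, expected)))
```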

  17. Effect of music therapy on the anxiety levels and pregnancy rate of women undergoing in vitro fertilization-embryo transfer: A randomized controlled trial.

    PubMed

    Aba, Yilda Arzu; Avci, Dilek; Guzel, Yilmaz; Ozcelik, Semanur Kumral; Gurtekin, Basak

    2017-08-01

    The aim of this study was to determine the effect of music therapy on the anxiety levels and pregnancy rates of women who underwent in vitro fertilization-embryo transfer. This prospective randomized controlled trial was conducted with 186 infertile women who presented to the In Vitro Fertilization Unit at the American Hospital in Turkey between April 2015 and April 2016. The infertile women who met the inclusion criteria were assigned to the music therapy group or the standard therapy group through block randomization. The study data were collected using the Personal Information Form, and State-Trait Anxiety Inventory. Early treatment success was determined by serum beta human chorionic gonadotrophin levels seven or ten days after the luteal day zero. For the analysis, descriptive statistics, chi-square test, Fisher's exact test, independent sample t-test were used. After the embryo transfer, the mean state anxiety scores decreased in both groups, and the mean trait anxiety score decreased in the music therapy group; however, the difference was not statistically significant (p>0.05). Clinical pregnancy rates did not differ between the music (48.3%) and standard (46.4%) therapy groups. After the two sessions of music therapy, state and trait anxiety levels decreased and pregnancy rates increased, but the difference was not significant. Therefore, larger sample sizes and more sessions are needed to evaluate whether music therapy has an effect on clinical outcomes. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty

    DOE PAGES

    Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.

    2016-09-12

    Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
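
    For readers unfamiliar with variance-based indices, the plain-NumPy sketch below estimates first-order Sobol indices for a toy linear model with a known answer, using a Saltelli-type pick-freeze estimator; it shows the quantity that DSA generalizes by letting the amount of uncertainty reduction vary. The model and sample sizes are assumptions.

```python
import numpy as np

def model(x):
    # Toy model with known first-order indices: S = [4, 1, 0.25] / 5.25.
    return 2.0 * x[:, 0] + 1.0 * x[:, 1] + 0.5 * x[:, 2]

rng = np.random.default_rng(0)
N, k = 200_000, 3
A = rng.standard_normal((N, k))
B = rng.standard_normal((N, k))
fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))

S = np.empty(k)
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                              # A with column i taken from B
    S[i] = np.mean(fB * (model(ABi) - fA)) / var_y   # Saltelli-type first-order estimator
print(np.round(S, 3))                                # approx [0.762, 0.190, 0.048]
```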

  19. Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.

    Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.

  20. Evaluation of seasonal and spatial variations of lumped water balance model sensitivity to precipitation data errors

    NASA Astrophysics Data System (ADS)

    Xu, Chong-yu; Tunemar, Liselotte; Chen, Yongqin David; Singh, V. P.

    2006-06-01

    Sensitivity of hydrological models to input data errors has been reported in the literature for particular models on a single or a few catchments. A more important issue, i.e. how a model's response to input data errors changes as catchment conditions change, has not been addressed previously. This study investigates the seasonal and spatial effects of precipitation data errors on the performance of conceptual hydrological models. For this study, a monthly conceptual water balance model, NOPEX-6, was applied to 26 catchments in the Mälaren basin in Central Sweden. Both systematic and random errors were considered. For the systematic errors, 5-15% of mean monthly precipitation values were added to the original precipitation to form the corrupted input scenarios. Random values were generated by Monte Carlo simulation and were assumed to be (1) independent between months, and (2) distributed according to a Gaussian law of zero mean and constant standard deviation, taken as 5, 10, 15, 20, and 25% of the mean monthly standard deviation of precipitation. The results show that the response of the model parameters and model performance depends, among other things, on the type of the error, the magnitude of the error, physical characteristics of the catchment, and the season of the year. In particular, the model appears less sensitive to the random error than to the systematic error. The catchments with smaller values of runoff coefficients were more influenced by input data errors than were the catchments with higher values. Dry months were more sensitive to precipitation errors than were wet months. Recalibration of the model with erroneous data compensated in part for the data errors by altering the model parameters.
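
    The random-error experiment can be mimicked in a few lines: perturb monthly precipitation with independent zero-mean Gaussian noise whose standard deviation is a fixed fraction of the monthly standard deviation, and propagate each realization through the model. The sketch below uses a deliberately trivial runoff-coefficient model as a stand-in for NOPEX-6, and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
months = 120
precip = rng.gamma(shape=3.0, scale=20.0, size=months)     # synthetic monthly precipitation (mm)

def toy_runoff(p, coeff=0.35):
    # Stand-in for the water-balance model: a constant runoff coefficient.
    return coeff * p

q_ref = toy_runoff(precip)

# Random errors: independent, Gaussian, zero mean, sd = 5-25% of the monthly sd of precipitation.
for frac in (0.05, 0.10, 0.15, 0.20, 0.25):
    errs = []
    for _ in range(500):                                    # Monte Carlo realisations
        noisy = precip + rng.normal(0.0, frac * precip.std(), months)
        errs.append(np.mean(np.abs(toy_runoff(np.clip(noisy, 0, None)) - q_ref)))
    print("noise sd = %2d%% -> mean abs runoff error %.2f mm" % (int(frac * 100), np.mean(errs)))

# Systematic errors (adding 5-15% of the mean) would be applied in the same loop,
# but without re-drawing random values.
```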

  1. Lean, Mean and Green: An Affordable Net Zero School

    ERIC Educational Resources Information Center

    Stanfield, Kenneth

    2010-01-01

    From its conception, Richardsville Elementary was designed to be an affordable net zero facility. The design team explored numerous energy saving strategies to dramatically reduce energy consumption. By reducing energy use to 19.31 kBtus annually, the net zero goal could be realized through the implementation of a solar array capable of producing…

  2. 49 CFR 571.226 - Standard No. 226; Ejection Mitigation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... to enter the passenger compartment area in an up-right position. Zero displacement plane means, a... headform must not displace more than 100 millimeters beyond the zero displacement plane. S4.2.1.1No vehicle... these procedures, target locations are identified (S5.2) and the zero displacement plane location is...

  3. 49 CFR 571.226 - Standard No. 226; Ejection Mitigation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... to enter the passenger compartment area in an up-right position. Zero displacement plane means, a... headform must not displace more than 100 millimeters beyond the zero displacement plane. S4.2.1.1No vehicle... these procedures, target locations are identified (S5.2) and the zero displacement plane location is...

  4. Validation of zero-order feedback strategies for medium range air-to-air interception in a horizontal plane

    NASA Technical Reports Server (NTRS)

    Shinar, J.

    1982-01-01

    A zero order feedback solution of a variable speed interception game between two aircraft in the horizontal plane, obtained by using the method of forced singular perturbation (FSP), is compared with the exact open loop solution. The comparison indicates that for initial distances of separation larger than eight turning radii of the evader, the accuracy of the feedback approximation is better than one percent. The result validates the zero order FSP approximation for medium range air combat analysis.

  5. Magnetic zero-modes, vortices and Cartan geometry

    NASA Astrophysics Data System (ADS)

    Ross, Calum; Schroers, Bernd J.

    2018-04-01

    We exhibit a close relation between vortex configurations on the 2-sphere and magnetic zero-modes of the Dirac operator on R^3 which obey an additional nonlinear equation. We show that both are best understood in terms of the geometry induced on the 3-sphere via pull-back of the round geometry with bundle maps of the Hopf fibration. We use this viewpoint to deduce a manifestly smooth formula for square-integrable magnetic zero-modes in terms of two homogeneous polynomials in two complex variables.

  6. Characterization and evaluation of controls on post-fire streamflow response across western US watersheds

    NASA Astrophysics Data System (ADS)

    Saxe, Samuel; Hogue, Terri S.; Hay, Lauren

    2018-02-01

    This research investigates the impact of wildfires on watershed flow regimes, specifically focusing on evaluation of fire events within specified hydroclimatic regions in the western United States, and evaluating the impact of climate and geophysical variables on response. Eighty-two watersheds were identified with at least 10 years of continuous pre-fire daily streamflow records and 5 years of continuous post-fire daily flow records. Percent change in annual runoff ratio, low flows, high flows, peak flows, number of zero flow days, baseflow index, and Richards-Baker flashiness index were calculated for each watershed using pre- and post-fire periods. Independent variables were identified for each watershed and fire event, including topographic, vegetation, climate, burn severity, percent area burned, and soils data. Results show that low flows, high flows, and peak flows increase in the first 2 years following a wildfire and decrease over time. Relative response was used to scale response variables with the respective percent area of watershed burned in order to compare regional differences in watershed response. To account for variability in precipitation events, runoff ratio was used to compare runoff directly to PRISM precipitation estimates. To account for regional differences in climate patterns, watersheds were divided into nine regions, or clusters, through k-means clustering using climate data, and regression models were produced for watersheds grouped by total area burned. Watersheds in Cluster 9 (eastern California, western Nevada, Oregon) demonstrate a small negative response to observed flow regimes after fire. Cluster 8 watersheds (coastal California) display the greatest flow responses, typically within the first year following wildfire. Most other watersheds show a positive mean relative response. In addition, simple regression models show low correlation between percent watershed burned and streamflow response, implying that other watershed factors strongly influence response. Spearman correlation identified NDVI, aridity index, percent of a watershed's precipitation that falls as rain, and slope as being positively correlated with post-fire streamflow response. This metric also suggested a negative correlation between response and the soil erodibility factor, watershed area, and percent low burn severity. Regression models identified only moderate burn severity and watershed area as being consistently positively/negatively correlated, respectively, with response. The random forest model identified only slope and percent area burned as significant watershed parameters controlling response. Results will help inform post-fire runoff management decisions by helping to identify expected changes to flow regimes, as well as facilitate parameterization for model application in burned watersheds.

  7. Percolation Laws of a Fractal Fracture-Pore Double Medium

    NASA Astrophysics Data System (ADS)

    Zhao, Yangsheng; Feng, Zengchao; Lv, Zhaoxing; Zhao, Dong; Liang, Weiguo

    2016-12-01

    The fracture-pore double porosity medium is one of the most common media in nature, for example, rock mass in strata. Fracture has a more significant effect on fluid flow than a pore in a fracture-pore double porosity medium. Hence, the fracture effect on percolation should be considered when studying the percolation phenomenon in porous media. In this paper, based on the fractal distribution law, three-dimensional (3D) fracture surfaces, and two-dimensional (2D) fracture traces in rock mass, the locations of fracture surfaces or traces are determined using a random function of uniform distribution. Pores are superimposed to build a fractal fracture-pore double medium. Numerical experiments were performed to show percolation phenomena in the fracture-pore double medium. The percolation threshold can be determined from three independent variables (porosity n, fracture fractal dimension D, and initial value of fracture number N0). Once any two are determined, the percolation probability exists at a critical point with the remaining parameter changing. When the initial value of the fracture number is greater than zero, the percolation threshold in the fracture-pore medium is much smaller than that in a pore medium. When the fracture number equals zero, the fracture-pore medium degenerates to a pore medium, and both percolation thresholds are the same.

  8. Hardware-in-the-loop grid simulator system and method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, John Curtiss; Collins, Edward Randolph; Rigas, Nikolaos

    A hardware-in-the-loop (HIL) electrical grid simulation system and method that combines a reactive divider with a variable frequency converter to better mimic and control expected and unexpected parameters in an electrical grid. The invention provides grid simulation in a manner to allow improved testing of variable power generators, such as wind turbines, and their operation once interconnected with an electrical grid in multiple countries. The system further comprises an improved variable fault reactance (reactive divider) capable of providing a variable fault reactance power output to control a voltage profile, therein creating an arbitrary recovery voltage. The system further comprises an improved isolation transformer designed to isolate zero-sequence current from either a primary or secondary winding in a transformer or pass the zero-sequence current from a primary to a secondary winding.

  9. Modeling number of claims and prediction of total claim amount

    NASA Astrophysics Data System (ADS)

    Acar, Aslıhan Şentürk; Karabey, Uǧur

    2017-07-01

    In this study we focus on the annual number of claims in a private health insurance data set belonging to a local insurance company in Turkey. In addition to the Poisson and negative binomial models, the zero-inflated Poisson and zero-inflated negative binomial models are used to model the number of claims in order to take into account excess zeros. To investigate the impact of different distributional assumptions for the number of claims on the prediction of the total claim amount, predictive performances of the candidate models are compared by using root mean square error (RMSE) and mean absolute error (MAE) criteria.
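
    A self-contained way to reproduce this kind of comparison is sketched below: a Poisson GLM fitted with statsmodels, a zero-inflated Poisson fitted by direct likelihood maximization with SciPy, and RMSE/MAE computed for each. The simulated rating factors and the constant inflation probability are assumptions, not the insurer's data or the authors' exact specification.

```python
import numpy as np
import statsmodels.api as sm
from scipy.optimize import minimize
from scipy.special import gammaln, expit

rng = np.random.default_rng(0)
n = 4000
X = sm.add_constant(rng.normal(size=(n, 2)))                # intercept + two rating factors
mu_true = np.exp(X @ np.array([0.2, 0.5, -0.3]))
y = np.where(rng.random(n) < 0.4, 0, rng.poisson(mu_true))  # 40% structural zeros

# Ordinary Poisson GLM.
pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()

# Zero-inflated Poisson: constant inflation probability, log link for the count mean.
def zip_nll(params):
    beta, gamma0 = params[:-1], params[-1]
    mu, pi = np.exp(X @ beta), expit(gamma0)
    ll0 = np.log(pi + (1 - pi) * np.exp(-mu))
    llpos = np.log1p(-pi) + y * np.log(mu) - mu - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll0, llpos))

res = minimize(zip_nll, x0=np.zeros(X.shape[1] + 1), method="BFGS")
beta_hat, pi_hat = res.x[:-1], expit(res.x[-1])
zip_pred = (1 - pi_hat) * np.exp(X @ beta_hat)              # E[y] under the ZIP model

for name, pred in [("Poisson", pois.fittedvalues), ("ZIP", zip_pred)]:
    rmse = np.sqrt(np.mean((y - pred) ** 2))
    mae = np.mean(np.abs(y - pred))
    print(f"{name}: RMSE={rmse:.3f}  MAE={mae:.3f}")
```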

  10. Divergence instability of pipes conveying fluid with uncertain flow velocity

    NASA Astrophysics Data System (ADS)

    Rahmati, Mehdi; Mirdamadi, Hamid Reza; Goli, Sareh

    2018-02-01

    This article investigates the probabilistic stability of pipes conveying fluid with stochastic flow velocity in the time domain. The study focuses on the effects of randomness in the flow velocity on the stability of pipes conveying fluid, whereas most previous research has focused only on the influence of deterministic parameters on system stability. The Euler-Bernoulli beam and plug flow theory are employed to model the pipe structure and the internal flow, respectively. In addition, the flow velocity is considered as a stationary random process with Gaussian distribution. The stochastic averaging method and Routh's stability criterion are then used to investigate the stability conditions of the system. Consequently, the effects of boundary conditions, viscoelastic damping, mass ratio, and elastic foundation on the stability regions are discussed. Results show that the critical mean flow velocity decreases with increasing power spectral density (PSD) of the random velocity. Moreover, as the PSD increases from zero, the effects of boundary condition type and the presence of an elastic foundation diminish, while the influences of viscoelastic damping and mass ratio can increase. Finally, regression analysis is used to develop design equations and facilitate further analyses for design purposes.

  11. Sufficient condition for finite-time singularity and tendency towards self-similarity in a high-symmetry flow

    NASA Astrophysics Data System (ADS)

    Ng, C. S.; Bhattacharjee, A.

    A highly symmetric Euler flow, first proposed by Kida (1985) and recently simulated by Boratav and Pelz (1994), is considered. It is found that the fourth-order spatial derivative of the pressure at the origin, p_xxxx, is most probably positive. It is demonstrated that if p_xxxx grows fast enough, there must be a finite-time singularity (FTS). For a random energy spectrum E(k) ∝ k^(-v), an FTS can occur if the spectral index v < 3. Furthermore, a positive p_xxxx has the dynamical consequence of reducing the third derivative of the velocity at the origin, u_xxx. Since the expectation value of u_xxx is zero for a random distribution of energy, an ever-decreasing u_xxx means that the Kida flow has an intrinsic tendency to deviate from a random state. By assuming that u_xxx reaches the minimum value for a given spectral profile, the velocity and pressure are found to have locally self-similar forms similar in shape to those found in numerical simulations. Such a quasi-self-similar solution relaxes the requirement for an FTS to v < 6. A special self-similar solution that satisfies Kelvin's circulation theorem and exhibits an FTS is found for v = 2.

  12. δ-exceedance records and random adaptive walks

    NASA Astrophysics Data System (ADS)

    Park, Su-Chan; Krug, Joachim

    2016-08-01

    We study a modified record process where the kth record in a series of independent and identically distributed random variables is defined recursively through the condition Y_k > Y_{k-1} - δ_{k-1}, with a deterministic sequence δ_k > 0 called the handicap. For constant δ_k ≡ δ and exponentially distributed random variables it has been shown in previous work that the process displays a phase transition as a function of δ between a normal phase where the mean record value increases indefinitely and a stationary phase where the mean record value remains bounded and a finite fraction of all entries are records (Park et al 2015 Phys. Rev. E 91 042707). Here we explore the behavior for general probability distributions and decreasing and increasing sequences δ_k, focusing in particular on the case when δ_k matches the typical spacing between subsequent records in the underlying simple record process without handicap. We find that a continuous phase transition occurs only in the exponential case, but a novel kind of first order transition emerges when δ_k is increasing. The problem is partly motivated by the dynamics of evolutionary adaptation in biological fitness landscapes, where δ_k corresponds to the change of the deterministic fitness component after k mutational steps. The results for the record process are used to compute the mean number of steps that a population performs in such a landscape before being trapped at a local fitness maximum.
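
    The process is easy to simulate directly, which is a useful complement to the analytical results: the sketch below tracks the current record value for exponential variables and a constant handicap δ and reports the fraction of entries that qualify as records. The sample size and δ values are arbitrary, and no claim is made about the location of the critical point.

```python
import numpy as np

def record_fraction(delta, n=200_000, seed=0):
    """Simulate the handicapped record process and return the fraction of entries
    that qualify as records under the rule  X_t > R - delta."""
    rng = np.random.default_rng(seed)
    x = rng.exponential(1.0, n)
    record_value, n_records = x[0], 1
    for xt in x[1:]:
        if xt > record_value - delta:      # delta-exceedance condition
            record_value = xt
            n_records += 1
    return n_records / n

for delta in (0.0, 0.5, 1.0, 2.0):
    print(f"delta = {delta:3.1f}: record fraction = {record_fraction(delta):.4f}")
# delta = 0 recovers the classical record process, where the expected number of
# records among n entries grows only like log(n), so the fraction tends to zero.
```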

  13. Nonlinear system guidance in the presence of transmission zero dynamics

    NASA Technical Reports Server (NTRS)

    Meyer, G.; Hunt, L. R.; Su, R.

    1995-01-01

    An iterative procedure is proposed for computing the commanded state trajectories and controls that guide a possibly multiaxis, time-varying, nonlinear system with transmission zero dynamics through a given arbitrary sequence of control points. The procedure is initialized by the system inverse with the transmission zero effects nulled out. Then the 'steady state' solution of the perturbation model with the transmission zero dynamics intact is computed and used to correct the initial zero-free solution. Both time domain and frequency domain methods are presented for computing the steady state solutions of the possibly nonminimum phase transmission zero dynamics. The procedure is illustrated by means of linear and nonlinear examples.

  14. Large Deviations: Advanced Probability for Undergrads

    ERIC Educational Resources Information Center

    Rolls, David A.

    2007-01-01

    In the branch of probability called "large deviations," rates of convergence (e.g. of the sample mean) are considered. The theory makes use of the moment generating function. So, particularly for sums of independent and identically distributed random variables, the theory can be made accessible to senior undergraduates after a first course in…

  15. Generating variable and random schedules of reinforcement using Microsoft Excel macros.

    PubMed

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values.
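
    The same schedule values can be generated outside Excel; the Python sketch below (not the article's VBA macros) shows the defining property of a random-interval schedule, a constant probability of reinforcement availability per unit time, which amounts to drawing exponential inter-reinforcement intervals, alongside one simple variable-interval recipe. The linear VI progression is an assumption; published schedules often use other progressions, such as Fleshler-Hoffman.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_interval_schedule(mean_interval_s, n_values):
    """Random-interval (RI) schedule: constant probability of reinforcement becoming
    available per second, i.e. exponentially distributed inter-reinforcement times."""
    return rng.exponential(mean_interval_s, n_values)

def variable_interval_schedule(mean_interval_s, n_values):
    """One simple variable-interval (VI) recipe: evenly spaced values around the
    mean, presented in shuffled order (other progressions are also used)."""
    values = np.linspace(0.5 * mean_interval_s, 1.5 * mean_interval_s, n_values)
    rng.shuffle(values)
    return values

ri = random_interval_schedule(30.0, 10)    # e.g. an RI 30-s schedule
vi = variable_interval_schedule(30.0, 10)  # e.g. a VI 30-s schedule
print(np.round(ri, 1), "mean:", round(ri.mean(), 1))
print(np.round(vi, 1), "mean:", round(vi.mean(), 1))
```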

  16. Net Influence of an Internally Generated Quasi-biennial Oscillation on Modelled Stratospheric Climate and Chemistry

    NASA Technical Reports Server (NTRS)

    Hurwitz, Margaret M.; Oman, Luke David; Newman, Paul A.; Song, InSun

    2013-01-01

    A Goddard Earth Observing System Chemistry- Climate Model (GEOSCCM) simulation with strong tropical non-orographic gravity wave drag (GWD) is compared to an otherwise identical simulation with near-zero tropical non-orographic GWD. The GEOSCCM generates a quasibiennial oscillation (QBO) zonal wind signal in response to a tropical peak in GWD that resembles the zonal and climatological mean precipitation field. The modelled QBO has a frequency and amplitude that closely resembles observations. As expected, the modelled QBO improves the simulation of tropical zonal winds and enhances tropical and subtropical stratospheric variability. Also, inclusion of the QBO slows the meridional overturning circulation, resulting in a generally older stratospheric mean age of air. Slowing of the overturning circulation, changes in stratospheric temperature and enhanced subtropical mixing all affect the annual mean distributions of ozone, methane and nitrous oxide. Furthermore, the modelled QBO enhances polar stratospheric variability in winter. Because tropical zonal winds are easterly in the simulation without a QBO, there is a relative increase in tropical zonal winds in the simulation with a QBO. Extratropical differences between the simulations with and without a QBO thus reflect the westerly shift in tropical zonal winds: a relative strengthening of the polar stratospheric jet, polar stratospheric cooling and a weak reduction in Arctic lower stratospheric ozone.

  17. Non-fixation for Conservative Stochastic Dynamics on the Line

    NASA Astrophysics Data System (ADS)

    Basu, Riddhipratim; Ganguly, Shirshendu; Hoffman, Christopher

    2018-03-01

    We consider activated random walk (ARW), a model which generalizes the stochastic sandpile, one of the canonical examples of self-organized criticality. Informally, ARW is a particle system on Z with mass conservation. One starts with a mass density μ > 0 of initially active particles, each of which performs a symmetric random walk at rate one and falls asleep at rate λ > 0. Sleepy particles become active on coming in contact with other active particles. We investigate the question of fixation/non-fixation of the process and show that for small enough λ the critical mass density for fixation is strictly less than one. Moreover, the critical density goes to zero as λ tends to zero. This settles a long-standing open question.

  18. Effect of a School-Based Supervised Tooth Brushing Program In Mexico City: A Cluster Randomized Intervention.

    PubMed

    Borges-Yáñez, S Aída; Castrejón-Pérez, Roberto Carlos; Camacho, María Esther Irigoyen

    Large-scale school-based programs effectively provide health education and preventive strategies. SaludARTE is a school-based program, including supervised tooth brushing, implemented in 51 elementary schools in Mexico City. To assess the three-month efficacy of supervised tooth brushing in reducing dental plaque, gingival inflammation, and bleeding on probing in schoolchildren participating in SaludARTE. This was a pragmatic cluster randomized intervention, with two parallel branches. Four randomly selected schools participating in SaludARTE (n=200) and one control school, which did not participate in the program (CG) (n=50), were assessed. Clusters were not randomly allocated to intervention. The main outcomes were as follows: mean percentage gingival units with no inflammation, dental surfaces with no dental plaque, and gingival margins with no bleeding. The independent variable was supervised tooth brushing at school once a day after a meal. Guardians and children responded to a questionnaire on sociodemographic and oral hygiene practices, and children were examined dentally. Mean percentage differences were compared (baseline and follow-up). A total of 75% of guardians from the intervention group (IG) and 77% from the CG answered the questionnaire. Of these, 89.3% were women, with a mean age of 36.9±8.5 years. No differences in sociodemographic variables were observed between groups, and 151 children from the IG and 35 from the CG were examined at baseline and follow-up. Mean percentage differences for plaque-free surfaces (8.8±28.5%) and healthy gingival units (23.3%±23.2%) were significantly higher in the IG. The school-supervised tooth brushing program is effective in improving oral hygiene and had a greater impact on plaque and gingivitis than on gingival bleeding. It is necessary to reinforce the oral health education component of the program.

  19. Design of Probabilistic Random Forests with Applications to Anticancer Drug Sensitivity Prediction

    PubMed Central

    Rahman, Raziur; Haider, Saad; Ghosh, Souparno; Pal, Ranadip

    2015-01-01

    Random forests consisting of an ensemble of regression trees with equal weights are frequently used for design of predictive models. In this article, we consider an extension of the methodology by representing the regression trees in the form of probabilistic trees and analyzing the nature of heteroscedasticity. The probabilistic tree representation allows for analytical computation of confidence intervals (CIs), and the tree weight optimization is expected to provide stricter CIs with comparable performance in mean error. We approached the ensemble of probabilistic trees’ prediction from the perspectives of a mixture distribution and as a weighted sum of correlated random variables. We applied our methodology to the drug sensitivity prediction problem on synthetic and cancer cell line encyclopedia dataset and illustrated that tree weights can be selected to reduce the average length of the CI without increase in mean error. PMID:27081304

  20. Experimental study on all-fiber-based unidimensional continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Wang, Xuyang; Liu, Wenyuan; Wang, Pu; Li, Yongmin

    2017-06-01

    We experimentally demonstrated an all-fiber-based unidimensional continuous-variable quantum key distribution (CV QKD) protocol and analyzed its security under collective attack in realistic conditions. A pulsed balanced homodyne detector, which could not be accessed by eavesdroppers, with phase-insensitive efficiency and electronic noise, was considered. Furthermore, a modulation method and an improved relative phase-locking technique with one amplitude modulator and one phase modulator were designed. The relative phase could be locked precisely with a standard deviation of 0.5° and a mean of almost zero. Secret key bit rates of 5.4 kbps and 700 bps were achieved for transmission fiber lengths of 30 and 50 km, respectively. The protocol, which simplified the CV QKD system and reduced the cost, displayed a performance comparable to that of a symmetrical counterpart under realistic conditions. It is expected that the developed protocol can facilitate the practical application of the CV QKD.

  1. Random Predictor Models for Rigorous Uncertainty Quantification: Part 2

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictors Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameter, thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.

  2. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictors Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.

  3. Galaxy formation

    PubMed Central

    Peebles, P. J. E.

    1998-01-01

    It is argued that within the standard Big Bang cosmological model the bulk of the mass of the luminous parts of the large galaxies likely had been assembled by redshift z ∼ 10. Galaxy assembly this early would be difficult to fit in the widely discussed adiabatic cold dark matter model for structure formation, but it could agree with an isocurvature version in which the cold dark matter is the remnant of a massive scalar field frozen (or squeezed) from quantum fluctuations during inflation. The squeezed field fluctuations would be Gaussian with zero mean, and the distribution of the field mass therefore would be the square of a random Gaussian process. This offers a possibly interesting new direction for the numerical exploration of models for cosmic structure formation. PMID:9419326
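
    The statistical construction mentioned at the end, a density given by the square of a Gaussian random process, can be illustrated in a few lines: smooth white noise to obtain a correlated zero-mean Gaussian field and square it. The grid size and smoothing scale below are arbitrary, and the sketch says nothing about the cosmological dynamics.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
white = rng.standard_normal((256, 256))
phi = gaussian_filter(white, sigma=4.0)     # correlated Gaussian field from smoothed white noise
phi -= phi.mean()                           # enforce zero mean

rho = phi ** 2                              # density proportional to the squared field
skew = float(((rho - rho.mean()) ** 3).mean() / rho.std() ** 3)
print("field mean ~ 0:", round(float(phi.mean()), 6))
print("density skewness (positive, chi-square-like):", round(skew, 2))
# Squaring a zero-mean Gaussian field yields a non-negative, positively skewed
# density, the statistical construction sketched in the abstract.
```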

  4. Continuous-variable phase estimation with unitary and random linear disturbance

    NASA Astrophysics Data System (ADS)

    Delgado de Souza, Douglas; Genoni, Marco G.; Kim, M. S.

    2014-10-01

    We address the problem of continuous-variable quantum phase estimation in the presence of linear disturbance at the Hamiltonian level by means of Gaussian probe states. In particular we discuss both unitary and random disturbance by considering the parameter which characterizes the unwanted linear term present in the Hamiltonian as fixed (unitary disturbance) or random with a given probability distribution (random disturbance). We derive the optimal input Gaussian states at fixed energy, maximizing the quantum Fisher information over the squeezing angle and the squeezing energy fraction, and we discuss the scaling of the quantum Fisher information in terms of the output number of photons, n_out. We observe that, in the case of unitary disturbance, the optimal state is a squeezed vacuum state and the quadratic scaling is conserved. As regards the random disturbance, we observe that the optimal squeezing fraction may not be equal to one and, for any nonzero value of the noise parameter, the quantum Fisher information scales linearly with the average number of photons. Finally, we discuss the performance of homodyne measurement by comparing the achievable precision with the ultimate limit imposed by the quantum Cramér-Rao bound.

  5. Describing temporal variability of the mean Estonian precipitation series in climate time scale

    NASA Astrophysics Data System (ADS)

    Post, P.; Kärner, O.

    2009-04-01

    The applicability of random walk type models to represent the temporal variability of various atmospheric temperature series has been successfully demonstrated recently (e.g. Kärner, 2002). The main problem in temperature modelling is connected to the scale break in the generally self-similar air temperature anomaly series (Kärner, 2005). The break separates short-range strong non-stationarity from a nearly stationary longer-range variability region. This indicates that several geophysical time series show short-range non-stationary behaviour and stationary behaviour in the longer range (Davis et al., 1996). In order to model series like that, the choice of time step appears to be crucial. To characterize the long-range variability we can neglect the short-range non-stationary fluctuations, provided that we are able to model the long-range tendencies properly. The structure function (Monin and Yaglom, 1975) was used to determine an approximate segregation line between the short and the long scale in terms of modelling. The longer scale can be called the climate scale, because such models are applicable at scales over some decades. In order to get rid of the short-range fluctuations in daily series, the variability can be examined using a sufficiently long time step. In the present paper, we show that the same philosophy is useful for finding a model to represent the climate-scale temporal variability of the Estonian daily mean precipitation amount series over 45 years (1961-2005). Temporal variability of the obtained daily time series is examined by means of an autoregressive integrated moving average (ARIMA) family model of type (0,1,1). This model is applicable for simulating daily precipitation if an appropriate time step is selected that enables us to neglect the short-range non-stationary fluctuations. A considerably longer time step than one day (30 days) is used in the current paper to model the precipitation time series variability. Each ARIMA (0,1,1) model can be interpreted as consisting of a random walk in a noisy environment (Box and Jenkins, 1976). The fitted model appears to be weakly non-stationary, which gives us the possibility of using a stationary approximation if only the noise component of that sum of white noise and random walk is exploited. We get a convenient routine to generate a stationary precipitation climatology with reasonable accuracy, since the noise component variance is much larger than the dispersion of the random walk generator. This interpretation emphasizes the dominating role of a random component in the precipitation series. The result is understandable given the small territory of Estonia, which is situated in the mid-latitude cyclone track. References: Box, G.E.P. and G.M. Jenkins 1976: Time Series Analysis, Forecasting and Control (revised edn.), Holden Day, San Francisco, CA, 575 pp. Davis, A., Marshak, A., Wiscombe, W. and R. Cahalan 1996: Multifractal characterizations of intermittency in nonstationary geophysical signals and fields. In G. Trevino et al. (eds) Current Topics in Nonstationarity Analysis. World Scientific, Singapore, 97-158. Kärner, O. 2002: On nonstationarity and antipersistency in global temperature series. J. Geophys. Res. D107; doi:10.1029/2001JD002024. Kärner, O. 2005: Some examples on negative feedback in the Earth climate system. Centr. European J. Phys. 3; 190-208. Monin, A.S. and A.M. Yaglom 1975: Statistical Fluid Mechanics, Vol. 2: Mechanics of Turbulence, MIT Press, Boston, Mass., 886 pp.
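
    A minimal Python sketch of the modelling step described above (the synthetic data and the statsmodels ARIMA implementation are assumptions made here, not the authors' code): aggregate a daily series to a roughly 30-day time step and fit an ARIMA(0,1,1) model, whose MA(1) coefficient indicates how strongly the noise term dominates the random-walk component.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(2)
      days = pd.date_range("1961-01-01", "2005-12-31", freq="D")
      daily = pd.Series(rng.gamma(shape=0.7, scale=3.0, size=days.size), index=days)  # synthetic precipitation-like series

      agg = daily.resample("30D").sum()        # ~30-day time step suppresses short-range fluctuations
      fit = ARIMA(agg, order=(0, 1, 1)).fit()  # ARIMA(0,1,1) = random walk observed in white noise
      print("MA(1) coefficient  :", round(fit.params["ma.L1"], 3))
      print("innovation variance:", round(fit.params["sigma2"], 3))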

  6. A randomized, controlled clinical trial of four anti-dandruff shampoos.

    PubMed

    Rapaport, M

    1981-01-01

    A total of 199 patients were selected for a comparison of the anti-dandruff efficacy of four shampoos. After a 2-week lead-in (all patients used only Johnson's Baby Shampoo twice weekly) the patients were randomly assigned to Selsun Blue, Head & Shoulders, Flex, or Tegrin. The test preparation, which was unknown to the observer, was used twice weekly for 4 weeks. Loose and adherent dandruff were each rated on a scale of 0 to 20 (absent to severe) at the end of the lead-in (when a total score of 15 was required) and at each week of the study. The mean total pre-study score for all subjects was 19.5. At the end of the study the mean improvement scores were: 16.2 (Selsun Blue), 14.6 (Head & Shoulders), 13.5 (Flex), and 13.1 (Tegrin). The improvement was significantly greater (p less than 0.05) on Selsun Blue than on any of the other shampoos. At the end of the study significantly (p less than 0.05) more patients had total scores of zero on Selsun Blue (15) than on Tegrin or Head & Shoulders. In addition, the rate of improvement was significantly (p less than 0.05) faster with Selsun Blue than with any of the other test preparations.

  7. Time-variant Lagrangian transport formulation reduces aggregation bias of water and solute mean travel time in heterogeneous catchments

    NASA Astrophysics Data System (ADS)

    Danesh-Yazdi, Mohammad; Botter, Gianluca; Foufoula-Georgiou, Efi

    2017-05-01

    Lack of hydro-bio-chemical data at subcatchment scales necessitates adopting an aggregated system approach for estimating water and solute transport properties, such as residence and travel time distributions, at the catchment scale. In this work, we show that within-catchment spatial heterogeneity, as expressed in spatially variable discharge-storage relationships, can be appropriately encapsulated within a lumped time-varying stochastic Lagrangian formulation of transport. This time (variability) for space (heterogeneity) substitution yields mean travel times (MTTs) that are not significantly biased to the aggregation of spatial heterogeneity. Despite the significant variability of MTT at small spatial scales, there exists a characteristic scale above which the MTT is not impacted by the aggregation of spatial heterogeneity. Extensive simulations of randomly generated river networks reveal that the ratio between the characteristic scale and the mean incremental area is on average independent of river network topology and the spatial arrangement of incremental areas.

  8. Distributed Synchronization in Networks of Agent Systems With Nonlinearities and Random Switchings.

    PubMed

    Tang, Yang; Gao, Huijun; Zou, Wei; Kurths, Jürgen

    2013-02-01

    In this paper, the distributed synchronization problem of networks of agent systems with controllers and nonlinearities subject to Bernoulli switchings is investigated. Controllers and adaptive updating laws injected in each vertex of networks depend on the state information of its neighborhood. Three sets of Bernoulli stochastic variables are introduced to describe the occurrence probabilities of distributed adaptive controllers, updating laws and nonlinearities, respectively. By the Lyapunov functions method, we show that the distributed synchronization of networks composed of agent systems with multiple randomly occurring nonlinearities, multiple randomly occurring controllers, and multiple randomly occurring updating laws can be achieved in mean square under certain criteria. The conditions derived in this paper can be solved by semi-definite programming. Moreover, by mathematical analysis, we find that the coupling strength, the probabilities of the Bernoulli stochastic variables, and the form of nonlinearities have great impacts on the convergence speed and the terminal control strength. The synchronization criteria and the observed phenomena are demonstrated by several numerical simulation examples. In addition, the advantage of distributed adaptive controllers over conventional adaptive controllers is illustrated.

  9. THE EFFECT OF HORMONE THERAPY ON MEAN BLOOD PRESSURE AND VISIT-TO-VISIT BLOOD PRESSURE VARIABILITY IN POSTMENOPAUSAL WOMEN: RESULTS FROM THE WOMEN’S HEALTH INITIATIVE RANDOMIZED CONTROLLED TRIALS

    PubMed Central

    Shimbo, Daichi; Wang, Lu; Lamonte, Michael J.; Allison, Matthew; Wellenius, Gregory A.; Bavry, Anthony A.; Martin, Lisa W.; Aragaki, Aaron; Newman, Jonathan D.; Swica, Yael; Rossouw, Jacques E.; Manson, JoAnn E.; Wassertheil-Smoller, Sylvia

    2014-01-01

    Objectives Mean and visit-to-visit variability (VVV) of blood pressure are associated with an increased cardiovascular disease risk. We examined the effect of hormone therapy on mean and VVV of blood pressure in postmenopausal women from the Women’s Health Initiative (WHI) randomized controlled trials. Methods Blood pressure was measured at baseline and annually in the two WHI hormone therapy trials in which 10,739 and 16,608 postmenopausal women were randomized to conjugated equine estrogens (CEE, 0.625 mg/day) or placebo, and CEE plus medroxyprogesterone acetate (MPA, 2.5 mg/day) or placebo, respectively. Results At the first annual visit (Year 1), mean systolic blood pressure was 1.04 mmHg (95% CI 0.58, 1.50) and 1.35 mmHg (95% CI 0.99, 1.72) higher in the CEE and CEE+MPA arms respectively compared to corresponding placebos. These effects remained stable after Year 1. CEE also increased VVV of systolic blood pressure (ratio of VVV in CEE vs. placebo, 1.03, P<0.001), whereas CEE+MPA did not (ratio of VVV in CEE+MPA vs. placebo, 1.01, P=0.20). After accounting for study drug adherence, the effects of CEE and CEE+MPA on mean systolic blood pressure increased at Year 1, and the differences in the CEE and CEE+MPA arms vs. placebos also continued to increase after Year 1. Further, both CEE and CEE+MPA significantly increased VVV of systolic blood pressure (ratio of VVV in CEE vs. placebo, 1.04, P<0.001; ratio of VVV in CEE+MPA vs. placebo, 1.05, P<0.001). Conclusions Among postmenopausal women, CEE and CEE+MPA at conventional doses increased mean and VVV of systolic blood pressure. PMID:24991872

  10. Effect of Vitamin E on Oxaliplatin-induced Peripheral Neuropathy Prevention: A Randomized Controlled Trial

    PubMed Central

    Salehi, Zeinab; Roayaei, Mahnaz

    2015-01-01

    Background: Peripheral neuropathy is one of the most important limitations of the oxaliplatin-based regimen, which is the standard for the treatment of colorectal cancer. Evidence has shown that Vitamin E may be protective in chemotherapy-induced peripheral neuropathy. The aim of this study is to evaluate the effect of Vitamin E administration on prevention of oxaliplatin-induced peripheral neuropathy in patients with colorectal cancer. Methods: This was a prospective randomized, controlled clinical trial. Patients with colorectal cancer and scheduled to receive oxaliplatin-based regimens were enrolled in this study. Enrolled patients were randomized into two groups. The first group received Vitamin E at a dose of 400 mg daily and the second group was observed, until after the sixth course of the oxaliplatin regimen. For oxaliplatin-induced peripheral neuropathy assessment, we used the symptom experience diary questionnaire, which was completed at baseline and after the sixth course of chemotherapy. Only patients with a score of zero at baseline were eligible for this study. Results: Thirty-two patients were randomized to the Vitamin E group and 33 to the control group. There was no difference in the mean peripheral neuropathy score changes (after − before) between the two groups after the sixth course of the oxaliplatin-based regimen (mean difference [after − before] of Vitamin E group = 6.37 ± 2.85, control group = 6.57 ± 2.94; P = 0.78). Peripheral neuropathy scores were significantly increased after intervention compared with baseline in each group (P < 0.001). Conclusions: The results from this current trial demonstrate a lack of benefit for Vitamin E in preventing oxaliplatin-induced peripheral neuropathy. PMID:26682028

  11. Statistical optics

    NASA Astrophysics Data System (ADS)

    Goodman, J. W.

    This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.

  12. Craig's XY distribution and the statistics of Lagrangian power in two-dimensional turbulence

    NASA Astrophysics Data System (ADS)

    Bandi, Mahesh M.; Connaughton, Colm

    2008-03-01

    We examine the probability distribution function (PDF) of the energy injection rate (power) in numerical simulations of stationary two-dimensional (2D) turbulence in the Lagrangian frame. The simulation is designed to mimic an electromagnetically driven fluid layer, a well-documented system for generating 2D turbulence in the laboratory. In our simulations, the forcing and velocity fields are close to Gaussian. On the other hand, the measured PDF of injected power is very sharply peaked at zero, suggestive of a singularity there, with tails which are exponential but asymmetric. Large positive fluctuations are more probable than large negative fluctuations. It is this asymmetry of the tails which leads to a net positive mean value for the energy input despite the most probable value being zero. The main features of the power distribution are well described by Craig’s XY distribution for the PDF of the product of two correlated normal variables. We show that the power distribution should exhibit a logarithmic singularity at zero and decay exponentially for large absolute values of the power. We calculate the asymptotic behavior and express the asymmetry of the tails in terms of the correlation coefficient of the force and velocity. We compare the measured PDFs with the theoretical calculations and briefly discuss how the power PDF might change with other forcing mechanisms.
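
    A short Monte Carlo aside (written for this compilation, with an assumed correlation coefficient of 0.3 and unit variances) reproduces the qualitative features described above: the product of two correlated zero-mean normals is sharply peaked at zero, has asymmetric tails, and has a positive mean equal to the correlation.

      import numpy as np

      rng = np.random.default_rng(3)
      rho = 0.3                                   # force-velocity correlation (assumed value)
      cov = [[1.0, rho], [rho, 1.0]]
      f, v = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000).T
      p = f * v                                   # injected power ~ product of two correlated normals

      print("mean power (should be ~rho):", round(p.mean(), 3))
      print("P(p > 2) :", round((p > 2).mean(), 4))
      print("P(p < -2):", round((p < -2).mean(), 4))   # smaller: the negative tail is lighter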

  13. Craig's XY distribution and the statistics of Lagrangian power in two-dimensional turbulence.

    PubMed

    Bandi, Mahesh M; Connaughton, Colm

    2008-03-01

    We examine the probability distribution function (PDF) of the energy injection rate (power) in numerical simulations of stationary two-dimensional (2D) turbulence in the Lagrangian frame. The simulation is designed to mimic an electromagnetically driven fluid layer, a well-documented system for generating 2D turbulence in the laboratory. In our simulations, the forcing and velocity fields are close to Gaussian. On the other hand, the measured PDF of injected power is very sharply peaked at zero, suggestive of a singularity there, with tails which are exponential but asymmetric. Large positive fluctuations are more probable than large negative fluctuations. It is this asymmetry of the tails which leads to a net positive mean value for the energy input despite the most probable value being zero. The main features of the power distribution are well described by Craig's XY distribution for the PDF of the product of two correlated normal variables. We show that the power distribution should exhibit a logarithmic singularity at zero and decay exponentially for large absolute values of the power. We calculate the asymptotic behavior and express the asymmetry of the tails in terms of the correlation coefficient of the force and velocity. We compare the measured PDFs with the theoretical calculations and briefly discuss how the power PDF might change with other forcing mechanisms.

  14. Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros

    PubMed Central

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values. PMID:18595286
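
    The same schedule values can be generated outside Excel; the short Python sketch below (function names and parameter values are illustrative, not taken from the article) uses the fact that random-ratio and random-interval schedules keep the reinforcement probability constant per response or per unit time, so their values can be drawn from geometric and exponential distributions.

      import numpy as np

      rng = np.random.default_rng(4)

      def random_ratio(mean_ratio, n):
          # responses required before reinforcement; constant probability 1/mean_ratio per response
          return rng.geometric(1.0 / mean_ratio, size=n)

      def random_interval(mean_seconds, n):
          # waiting times with a constant hazard of 1/mean_seconds
          return rng.exponential(mean_seconds, size=n)

      print("RR10 values  :", random_ratio(10, 5))
      print("RI30-s values:", np.round(random_interval(30.0, 5), 1))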

  15. Random Matrix Theory and Elliptic Curves

    DTIC Science & Technology

    2014-11-24

    ELLIPTIC CURVES AND THEIR L-FUNCTIONS ... points on that curve. Counting rational points on curves is a field with a rich ... deficiency of zeros near the origin of the histograms in Figure 1. While as d becomes large this discretization becomes smaller and has less and less effect ... order of 30), the regular oscillations seen at the origin become dominated by fluctuations of an arithmetic origin, influenced by zeros of the Riemann

  16. Staggered chiral random matrix theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, James C.

    2011-02-01

    We present a random matrix theory for the staggered lattice QCD Dirac operator. The staggered random matrix theory is equivalent to the zero-momentum limit of the staggered chiral Lagrangian and includes all taste breaking terms at their leading order. This is an extension of previous work which only included some of the taste breaking terms. We will also present some results for the taste breaking contributions to the partition function and the Dirac eigenvalues.

  17. Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.

    PubMed

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

    In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the solutions which help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the minimum standard errors. The results showed that the mixed zero-inflated Poisson model provided the best fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in the longitudinal count data.
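
    For readers who want to experiment with this model class, the sketch below fits a plain (non-mixed) zero-inflated Poisson model with statsmodels on simulated data; the covariates, their effects, and the zero-inflation fraction are invented, and the compound Poisson random effects used in the paper are not included because statsmodels does not provide them out of the box.

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.discrete.count_model import ZeroInflatedPoisson

      rng = np.random.default_rng(5)
      n = 500
      age = rng.uniform(20, 70, n)
      genotype3 = rng.integers(0, 2, n)
      mu = np.exp(0.02 * age + 0.5 * genotype3)
      y = np.where(rng.uniform(size=n) < 0.3, 0, rng.poisson(mu))   # ~30% structural zeros

      X = sm.add_constant(np.column_stack([age, genotype3]))
      zip_fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=0)
      pois_fit = sm.Poisson(y, X).fit(disp=0)
      print("ZIP AIC    :", round(zip_fit.aic, 1))
      print("Poisson AIC:", round(pois_fit.aic, 1))   # the zero-inflated fit should win here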

  18. Statistical auditing and randomness test of lotto k/N-type games

    NASA Astrophysics Data System (ADS)

    Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Rapallo, F.; Scalas, E.

    2008-11-01

    One of the most popular lottery games worldwide is the so-called “lotto k/N”. It considers N numbers 1,2,…,N from which k are drawn randomly, without replacement. A player selects k or more numbers and the first prize is shared amongst those players whose selected numbers match all of the k randomly drawn. Exact rules may vary in different countries. In this paper, mean values and covariances for the random variables representing the numbers drawn from this kind of game are presented, with the aim of using them to audit statistically the consistency of a given sample of historical results with theoretical values coming from a hypergeometric statistical model. The method can be adapted to test pseudorandom number generators.
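
    The theoretical means and covariances used for such audits follow from sampling without replacement; the Python sketch below (a 6/49 game is assumed here for illustration) checks the closed-form values E[X_i] = k/N and Cov(X_i, X_j) = -k(N-k)/(N^2(N-1)) for the draw indicators against a simulation.

      import numpy as np

      rng = np.random.default_rng(6)
      N, k, draws = 49, 6, 100_000

      ind = np.zeros((draws, N), dtype=int)
      for t in range(draws):
          ind[t, rng.choice(N, size=k, replace=False)] = 1   # one lotto draw

      print("simulated E[X_i]:", round(ind.mean(), 4), " theory:", round(k / N, 4))
      emp_cov = np.cov(ind[:, 0], ind[:, 1])[0, 1]
      theory_cov = -k * (N - k) / (N**2 * (N - 1))
      print("simulated Cov(X_1, X_2):", round(emp_cov, 5), " theory:", round(theory_cov, 5))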

  19. A self-learning camera for the validation of highly variable and pseudorandom patterns

    NASA Astrophysics Data System (ADS)

    Kelley, Michael

    2004-05-01

    Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To be able to address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance and the ability to address applications of irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural network based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing, without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer, enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.

  20. Cost-utility analysis of stenting versus endarterectomy in the International Carotid Stenting Study.

    PubMed

    Morris, Stephen; Patel, Nishma V; Dobson, Joanna; Featherstone, Roland L; Richards, Toby; Luengo-Fernandez, Ramon; Rothwell, Peter M; Brown, Martin M

    2016-06-01

    The International Carotid Stenting Study was a multicenter randomized trial in which patients with symptomatic carotid artery stenosis were randomly allocated to treatment by carotid stenting or endarterectomy. Economic evidence comparing these treatments is limited and inconsistent. We compared the cost-effectiveness of stenting versus endarterectomy using International Carotid Stenting Study data. We performed a cost-utility analysis estimating mean costs and quality-adjusted life years per patient for both treatments over a five-year time horizon based on resource use data and utility values collected in the trial. Costs of managing stroke events were estimated using individual patient data from a UK population-based study (Oxford Vascular Study). Mean costs per patient (95% CI) were US$10,477 ($9669 to $11,285) in the stenting group (N = 853) and $9669 ($8835 to $10,504) in the endarterectomy group (N = 857). There were no differences in mean quality-adjusted life years per patient (3.247 (3.160 to 3.333) and 3.228 (3.150 to 3.306), respectively). There were no differences in adjusted costs between groups (mean incremental costs for stenting versus endarterectomy $736 (95% CI -$353 to $1826)) or adjusted outcomes (mean quality-adjusted life years gained -0.010 (95% CI -0.117 to 0.097)). The incremental net monetary benefit for stenting versus endarterectomy was not significantly different from zero at the maximum willingness to pay for a quality-adjusted life year commonly used in the UK. Sensitivity analyses showed little uncertainty in these findings. Economic considerations should not affect whether patients with symptomatic carotid stenosis undergo stenting or endarterectomy. © 2016 World Stroke Organization.

  1. On the Asymmetric Zero-Range in the Rarefaction Fan

    NASA Astrophysics Data System (ADS)

    Gonçalves, Patrícia

    2014-02-01

    We consider one-dimensional asymmetric zero-range processes starting from a step decreasing profile leading, in the hydrodynamic limit, to the rarefaction fan of the associated hydrodynamic equation. Under that initial condition, and for totally asymmetric jumps, we show that the weighted sum of joint probabilities for second class particles sharing the same site is convergent and we compute its limit. For partially asymmetric jumps, we derive the Law of Large Numbers for a second class particle, under the initial configuration in which all positive sites are empty, all negative sites are occupied with infinitely many first class particles and there is a single second class particle at the origin. Moreover, we prove that among the infinite characteristics emanating from the position of the second class particle it picks randomly one of them. The randomness is given in terms of the weak solution of the hydrodynamic equation, through some sort of renormalization function. By coupling the constant-rate totally asymmetric zero-range with the totally asymmetric simple exclusion, we derive limiting laws for more general initial conditions.

  2. A zero waste vision for industrial networks in Europe.

    PubMed

    Curran, T; Williams, I D

    2012-03-15

    'ZeroWIN' (Towards Zero Waste in Industrial Networks--www.zerowin.eu) is a five year project running 2009-2014, funded by the EC under the 7th Framework Programme. Project ZeroWIN envisions industrial networks that have eliminated the wasteful consumption of resources. Zero waste is a unifying concept for a range of measures aimed at eliminating waste and challenging old ways of thinking. Aiming for zero waste will mean viewing waste as a potential resource with value to be realised, rather than as a problem to be dealt with. The ZeroWIN project will investigate and demonstrate how existing approaches and tools can be improved and combined to best effect in an industrial network, and how innovative technologies can contribute to achieving the zero waste vision. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Comparing statistical methods for analyzing skewed longitudinal count data with many zeros: an example of smoking cessation.

    PubMed

    Xie, Haiyi; Tao, Jill; McHugo, Gregory J; Drake, Robert E

    2013-07-01

    Count data with skewness and many zeros are common in substance abuse and addiction research. Zero-adjusting models, especially zero-inflated models, have become increasingly popular in analyzing this type of data. This paper reviews and compares five mixed-effects Poisson family models commonly used to analyze count data with a high proportion of zeros by analyzing a longitudinal outcome: number of smoking quit attempts from the New Hampshire Dual Disorders Study. The findings of our study indicated that count data with many zeros do not necessarily require zero-inflated or other zero-adjusting models. For rare event counts or count data with small means, a simpler model such as the negative binomial model may provide a better fit. Copyright © 2013 Elsevier Inc. All rights reserved.
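
    The kind of comparison reported above can be reproduced in a few lines; the sketch below (simulated over-dispersed counts and the statsmodels Poisson and negative binomial classes are assumptions of this illustration, not the study's data or code) shows how a negative binomial fit can beat a Poisson fit on zero-heavy counts even without a zero-inflation component.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n = 400
      x = rng.normal(size=n)
      mu = np.exp(0.2 + 0.4 * x)
      y = rng.negative_binomial(1.0, 1.0 / (1.0 + mu))   # over-dispersed counts with many zeros

      X = sm.add_constant(x)
      print("Poisson AIC          :", round(sm.Poisson(y, X).fit(disp=0).aic, 1))
      print("negative binomial AIC:", round(sm.NegativeBinomial(y, X).fit(disp=0).aic, 1))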

  4. Knotted optical vortices in exact solutions to Maxwell's equations

    NASA Astrophysics Data System (ADS)

    de Klerk, Albertus J. J. M.; van der Veen, Roland I.; Dalhuisen, Jan Willem; Bouwmeester, Dirk

    2017-05-01

    We construct a family of exact solutions to Maxwell's equations in which the points of zero intensity form knotted lines topologically equivalent to a given but arbitrary algebraic link. These lines of zero intensity, more commonly referred to as optical vortices, and their topology are preserved as time evolves and the fields have finite energy. To derive explicit expressions for these new electromagnetic fields that satisfy the nullness property, we make use of the Bateman variables for the Hopf field as well as complex polynomials in two variables whose zero sets give rise to algebraic links. The class of algebraic links includes not only all torus knots and links thereof, but also more intricate cable knots. While the unknot has been considered before, the solutions presented here show that more general knotted structures can also arise as optical vortices in exact solutions to Maxwell's equations.

  5. Noniterative computation of infimum in H(infinity) optimisation for plants with invariant zeros on the j(omega)-axis

    NASA Technical Reports Server (NTRS)

    Chen, B. M.; Saber, A.

    1993-01-01

    A simple and noniterative procedure for the computation of the exact value of the infimum in the singular H(infinity)-optimization problem is presented, as a continuation of our earlier work. Our problem formulation is general and we do not place any restrictions in the finite and infinite zero structures of the system, and the direct feedthrough terms between the control input and the controlled output variables and between the disturbance input and the measurement output variables. Our method is applicable to a class of singular H(infinity)-optimization problems for which the transfer functions from the control input to the controlled output and from the disturbance input to the measurement output satisfy certain geometric conditions. In particular, the paper extends the result of earlier work by allowing these two transfer functions to have invariant zeros on the j(omega) axis.

  6. The diversity and unity of reactor noise theory

    NASA Astrophysics Data System (ADS)

    Kuang, Zhifeng

    The study of reactor noise theory concerns questions about cause and effect relationships, and utilisation of random noise in nuclear reactor systems. The diversity of reactor noise theory arises from the variety of noise sources, the various mathematical treatments applied and various practical purposes. The neutron noise in zero- energy systems arises from the fluctuations in the number of neutrons per fission, the time between nuclear events, and the type of reactions. It can be used to evaluate system parameters. The mathematical treatment is based on the master equation of stochastic branching processes. The noise in power reactor systems is given rise by random processes of technological origin such as vibration of mechanical parts, boiling of the coolant, fluctuations of temperature and pressure. It can be used to monitor reactor behaviour with the possibility of detecting malfunctions at an early stage. The mathematical treatment is based on the Langevin equation. The unity of reactor noise theory arises from the fact that useful information from noise is embedded in the second moments of random variables, which lends the possibility of building up a unified mathematical description and analysis of the various reactor noise sources. Exploring such possibilities is the main subject among the three major topics reported in this thesis. The first subject is within the zero power noise in steady media, and we reported on the extension of the existing theory to more general cases. In Paper I, by use of the master equation approach, we have derived the most general Feynman- and Rossi-alpha formulae so far by taking the full joint statistics of the prompt and all the six groups of delayed neutron precursors, and a multiple emission source into account. The involved problems are solved with a combination of effective analytical techniques and symbolic algebra codes (Mathematica). Paper II gives a numerical evaluation of these formulae. An assessment of the contribution of the terms that are novel as compared to the traditional formulae has been made. The second subject treats a problem in power reactor noise with the Langevin formalism. With a very few exceptions, in all previous work the diffusion approximation was used. In order to extend the treatment to transport theory, in Paper III, we introduced a novel method, i.e. Padé approximation via Lanczos algorithm to calculate the transfer function of a finite slab reactor described by one-group transport equation. It was found that the local-global decomposition of the neutron noise, formerly only reproduced in at least 2- group theory, can be reconstructed. We have also showed the existence of a boundary layer of the neutron noise close to the boundary. Finally, we have explored the possibility of building up a unified theory to account for the coexistence of zero power and power reactor noise in a system. In Paper IV, a unified description of the neutron noise is given by the use of backward master equations in a model where the cross section fluctuations are given as a simple binary pseudorandom process. The general solution contains both the zero power and power reactor noise concurrently, and they can be extracted individually as limiting cases of the general solution. It justified the separate treatments of zero power and power reactor noise. The result was extended to the case including one group of delayed neutron precursors in Paper V.

  7. Predicting active-layer soil thickness using topographic variables at a small watershed scale

    PubMed Central

    Li, Aidi; Tan, Xing; Wu, Wei; Liu, Hongbin; Zhu, Jie

    2017-01-01

    Knowledge about the spatial distribution of active-layer (AL) soil thickness is indispensable for ecological modeling, precision agriculture, and land resource management. However, it is difficult to obtain the details on AL soil thickness by using conventional soil survey method. In this research, the objective is to investigate the possibility and accuracy of mapping the spatial distribution of AL soil thickness through random forest (RF) model by using terrain variables at a small watershed scale. A total of 1113 soil samples collected from the slope fields were randomly divided into calibration (770 soil samples) and validation (343 soil samples) sets. Seven terrain variables including elevation, aspect, relative slope position, valley depth, flow path length, slope height, and topographic wetness index were derived from a digital elevation map (30 m). The RF model was compared with multiple linear regression (MLR), geographically weighted regression (GWR) and support vector machines (SVM) approaches based on the validation set. Model performance was evaluated by precision criteria of mean error (ME), mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2). Comparative results showed that RF outperformed MLR, GWR and SVM models. The RF gave better values of ME (0.39 cm), MAE (7.09 cm), and RMSE (10.85 cm) and higher R2 (62%). The sensitivity analysis demonstrated that the DEM had less uncertainty than the AL soil thickness. The outcome of the RF model indicated that elevation, flow path length and valley depth were the most important factors affecting the AL soil thickness variability across the watershed. These results demonstrated the RF model is a promising method for predicting spatial distribution of AL soil thickness using terrain parameters. PMID:28877196
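
    The modelling workflow described (a random forest on terrain covariates, evaluated with ME, MAE, RMSE, and R2 on a hold-out set) can be sketched as follows; the data here are simulated stand-ins, not the study's soil samples, and scikit-learn is assumed.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

      rng = np.random.default_rng(8)
      n = 1113
      terrain = rng.normal(size=(n, 7))   # stand-ins for elevation, aspect, valley depth, etc.
      thickness = 40 + 10 * terrain[:, 0] - 5 * terrain[:, 2] + rng.normal(0, 8, n)

      X_cal, X_val, y_cal, y_val = train_test_split(terrain, thickness, test_size=343, random_state=0)
      rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_cal, y_cal)
      pred = rf.predict(X_val)

      print("ME  :", round(float(np.mean(pred - y_val)), 2))
      print("MAE :", round(mean_absolute_error(y_val, pred), 2))
      print("RMSE:", round(mean_squared_error(y_val, pred) ** 0.5, 2))
      print("R2  :", round(r2_score(y_val, pred), 2))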

  8. The withholding of test results as a means of assessing the effectiveness of treatment in test-positive persons.

    PubMed

    Weiss, Noel S

    2013-04-01

    In recent years, a number of studies have achieved randomization of patients to alternative management strategies by blinding some patients (and their providers of medical care) to the results of tests that guide such strategies. Although this research approach has the potential to be a powerful means of measuring treatment effectiveness, the interpretation of the results may not be straightforward if the treatment received by test-positive persons is variable or not well documented, or if the analysis is not restricted to outcomes in test-positive persons. Studies in which the test results are withheld at random may face ethical issues that, to date, have received little discussion. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Effect of study design on the reported effect of cardiac resynchronization therapy (CRT) on quantitative physiological measures: stratified meta-analysis in narrow-QRS heart failure and implications for planning future studies.

    PubMed

    Jabbour, Richard J; Shun-Shin, Matthew J; Finegold, Judith A; Afzal Sohaib, S M; Cook, Christopher; Nijjer, Sukhjinder S; Whinnett, Zachary I; Manisty, Charlotte H; Brugada, Josep; Francis, Darrel P

    2015-01-06

    Biventricular pacing (CRT) shows clear benefits in heart failure with wide QRS, but results in narrow QRS have appeared conflicting. We tested the hypothesis that study design might have influenced findings. We identified all reports of CRT-P/D therapy in subjects with narrow QRS reporting effects on continuous physiological variables. Twelve studies (2074 patients) met these criteria. Studies were stratified by presence of bias-resistance steps: the presence of a randomized control arm over a single arm, and blinded outcome measurement. Change in each endpoint was quantified using a standardized effect size (Cohen's d). We conducted separate meta-analyses for each variable in turn, stratified by trial quality. In non-randomized, non-blinded studies, the majority of variables (10 of 12, 83%) showed significant improvement, ranging from a standardized mean effect size of +1.57 (95%CI +0.43 to +2.7) for ejection fraction to +2.87 (+1.78 to +3.95) for NYHA class. In the randomized, non-blinded study, only 3 out of 6 variables (50%) showed improvement. For the randomized blinded studies, 0 out of 9 variables (0%) showed benefit, ranging from -0.04 (-0.31 to +0.22) for ejection fraction to -0.1 (-0.73 to +0.53) for 6-minute walk test. Differences in degrees of resistance to bias, rather than choice of endpoint, explain the variation between studies of CRT in narrow-QRS heart failure addressing physiological variables. When bias-resistance features are implemented, it becomes clear that these patients do not improve in any tested physiological variable. Guidance from studies without careful planning to resist bias may be far less useful than commonly perceived. © 2015 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  10. Transcranial Random Noise Stimulation of Visual Cortex: Stochastic Resonance Enhances Central Mechanisms of Perception.

    PubMed

    van der Groen, Onno; Wenderoth, Nicole

    2016-05-11

    Random noise enhances the detectability of weak signals in nonlinear systems, a phenomenon known as stochastic resonance (SR). Though counterintuitive at first, SR has been demonstrated in a variety of naturally occurring processes, including human perception, where it has been shown that adding noise directly to weak visual, tactile, or auditory stimuli enhances detection performance. These results indicate that random noise can push subthreshold receptor potentials across the transfer threshold, causing action potentials in an otherwise silent afference. Despite the wealth of evidence demonstrating SR for noise added to a stimulus, relatively few studies have explored whether or not noise added directly to cortical networks enhances sensory detection. Here we administered transcranial random noise stimulation (tRNS; 100-640 Hz zero-mean Gaussian white noise) to the occipital region of human participants. For increasing tRNS intensities (ranging from 0 to 1.5 mA), the detection accuracy of a visual stimulus changed according to an inverted-U-shaped function, typical of the SR phenomenon. When the optimal level of noise was added to visual cortex, detection performance improved significantly relative to a zero noise condition (9.7 ± 4.6%) and to a similar extent as optimal noise added to the visual stimuli (11.2 ± 4.7%). Our results demonstrate that adding noise to cortical networks can improve human behavior and that tRNS is an appropriate tool to exploit this mechanism. Our findings suggest that neural processing at the network level exhibits nonlinear system properties that are sensitive to the stochastic resonance phenomenon and highlight the usefulness of tRNS as a tool to modulate human behavior. Since tRNS can be applied to all cortical areas, exploiting the SR phenomenon is not restricted to the perceptual domain, but can be used for other functions that depend on nonlinear neural dynamics (e.g., decision making, task switching, response inhibition, and many other processes). This will open new avenues for using tRNS to investigate brain function and enhance the behavior of healthy individuals or patients. Copyright © 2016 the authors.

  11. Signals of Opportunity Navigation Using Wi-Fi Signals

    DTIC Science & Technology

    2011-03-24

    Identifier ... MVM Mean Value Method ... SDM Scaled Differential ... the mean value (MVM) and scaled differential (SDM) methods. An error was logged if the UI correlation algorithm identified a packet index that did ... Notable from this graph is that a window of 50 packets appears to provide zero errors for MVM and near zero errors for SDM. Also notable is that a

  12. Examining Solutions to Missing Data in Longitudinal Nursing Research

    PubMed Central

    Roberts, Mary B.; Sullivan, Mary C.; Winchester, Suzy B.

    2017-01-01

    Purpose Longitudinal studies are highly valuable in pediatrics because they provide useful data about developmental patterns of child health and behavior over time. When data are missing, the value of the research is impacted. The study’s purpose was to: (1) introduce a 3-step approach to assess and address missing data; (2) illustrate this approach using categorical and continuous level variables from a longitudinal study of premature infants. Methods A three-step approach with simulations was followed to assess the amount and pattern of missing data and to determine the most appropriate imputation method for the missing data. Patterns of missingness were Missing Completely at Random, Missing at Random, and Not Missing at Random. Missing continuous-level data were imputed using mean replacement, stochastic regression, multiple imputation, and fully conditional specification. Missing categorical-level data were imputed using last value carried forward, hot-decking, stochastic regression, and fully conditional specification. Simulations were used to evaluate these imputation methods under different patterns of missingness at different levels of missing data. Results The rate of missingness was 16–23% for continuous variables and 1–28% for categorical variables. Fully conditional specification imputation provided the least difference in mean and standard deviation estimates for continuous measures. Fully conditional specification imputation was acceptable for categorical measures. Results obtained through simulation reinforced and confirmed these findings. Practice Implications Significant investments are made in the collection of longitudinal data. The prudent handling of missing data can protect these investments and potentially improve the scientific information contained in pediatric longitudinal studies. PMID:28425202
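
    One of the imputation methods named above, fully conditional specification (chained equations), can be sketched for continuous variables with scikit-learn's IterativeImputer; the toy data, missingness rate, and correlation structure below are assumptions made for the illustration.

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer

      rng = np.random.default_rng(9)
      cov = [[1.0, 0.6, 0.3], [0.6, 1.0, 0.4], [0.3, 0.4, 1.0]]
      X = rng.multivariate_normal([0.0, 0.0, 0.0], cov, size=300)
      mask = rng.uniform(size=X.shape) < 0.2        # ~20% missing completely at random
      X_missing = np.where(mask, np.nan, X)

      X_imputed = IterativeImputer(max_iter=20, random_state=0).fit_transform(X_missing)
      print("column means, complete data:", X.mean(axis=0).round(2))
      print("column means, after FCS    :", X_imputed.mean(axis=0).round(2))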

  13. Comparison of Digital 12-Lead ECG and Digital 12-Lead Holter ECG Recordings in Healthy Male Subjects: Results from a Randomized, Double-Blinded, Placebo-Controlled Clinical Trial.

    PubMed

    Wang, Duolao; Bakhai, Ameet; Arezina, Radivoj; Täubel, Jörg

    2016-11-01

    Electrocardiogram (ECG) variability is greatly affected by the ECG recording method. This study aims to compare Holter and standard ECG recording methods in terms of central locations and variations of ECG data. We used the ECG data from a double-blinded, placebo-controlled, randomized clinical trial and used a mixed model approach to assess the agreement between the two methods in central locations and variations of eight ECG parameters (Heart Rate, PR, QRS, QT, RR, QTcB, QTcF, and QTcI intervals). A total of 34 healthy male subjects with mean age of 25.7 ± 4.78 years were randomized to receive either active drug or placebo. Digital 12-lead ECG and digital 12-lead Holter ECG recordings were performed to assess ECG variability. There were no significant differences in least square mean between the Holter and the standard method for all ECG parameters. The total variance is consistently higher for the Holter method than the standard method for all ECG parameters except for QRS. The intraclass correlation coefficient (ICC) values for the Holter method are consistently lower than those for the standard method for all ECG parameters except for QRS; in particular, the ICC for QTcF is reduced from 0.86 for the standard method to 0.67 for the Holter method. This study suggests that Holter ECGs recorded in a controlled environment are not significantly different but more variable than those from the standard method. © 2016 Wiley Periodicals, Inc.
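
    An intraclass correlation coefficient of the kind compared above can be obtained from a random-intercept mixed model, as in the brief sketch below; the subject counts, the QTcF-like values, and the statsmodels formula interface are assumptions of this illustration, not the trial's analysis code.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(10)
      subjects = np.repeat(np.arange(34), 10)                        # 34 subjects, 10 recordings each
      subject_effect = rng.normal(0, 15, 34)[subjects]               # between-subject variability
      qtcf = 400 + subject_effect + rng.normal(0, 8, subjects.size)  # within-subject (recording) noise
      df = pd.DataFrame({"subject": subjects, "qtcf": qtcf})

      m = smf.mixedlm("qtcf ~ 1", df, groups=df["subject"]).fit()
      between = float(m.cov_re.iloc[0, 0])   # between-subject variance component
      within = float(m.scale)                # residual (within-subject) variance
      print("ICC:", round(between / (between + within), 2))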

  14. A randomized controlled trial investigating the effects of craniosacral therapy on pain and heart rate variability in fibromyalgia patients.

    PubMed

    Castro-Sánchez, Adelaida María; Matarán-Peñarrocha, Guillermo A; Sánchez-Labraca, Nuria; Quesada-Rubio, José Manuel; Granero-Molina, José; Moreno-Lorenzo, Carmen

    2011-01-01

    Fibromyalgia is a prevalent musculoskeletal disorder associated with widespread mechanical tenderness, fatigue, non-refreshing sleep, depressed mood and pervasive dysfunction of the autonomic nervous system: tachycardia, postural intolerance, Raynaud's phenomenon and diarrhoea. To determine the effects of craniosacral therapy on sensitive tender points and heart rate variability in patients with fibromyalgia. A randomized controlled trial. Ninety-two patients with fibromyalgia were randomly assigned to an intervention group or placebo group. Patients received treatments for 20 weeks. The intervention group underwent a craniosacral therapy protocol and the placebo group received sham treatment with disconnected magnetotherapy equipment. Pain intensity levels were determined by evaluating tender points, and heart rate variability was recorded by 24-hour Holter monitoring. After 20 weeks of treatment, the intervention group showed significant reduction in pain at 13 of the 18 tender points (P < 0.05). Significant differences in temporal standard deviation of RR segments, root mean square deviation of temporal standard deviation of RR segments and clinical global impression of improvement versus baseline values were observed in the intervention group but not in the placebo group. At two months and one year post therapy, the intervention group showed significant differences versus baseline in tender points at left occiput, left-side lower cervical, left epicondyle and left greater trochanter and significant differences in temporal standard deviation of RR segments, root mean square deviation of temporal standard deviation of RR segments and clinical global impression of improvement. Craniosacral therapy improved medium-term pain symptoms in patients with fibromyalgia.

  15. Broken symmetries, zero-energy modes, and quantum transport in disordered graphene: from supermetallic to insulating regimes.

    PubMed

    Cresti, Alessandro; Ortmann, Frank; Louvet, Thibaud; Van Tuan, Dinh; Roche, Stephan

    2013-05-10

    The role of defect-induced zero-energy modes on charge transport in graphene is investigated using Kubo and Landauer transport calculations. By tuning the density of random distributions of monovacancies either equally populating the two sublattices or exclusively located on a single sublattice, all conduction regimes are covered from direct tunneling through evanescent modes to mesoscopic transport in bulk disordered graphene. Depending on the transport measurement geometry, defect density, and broken sublattice symmetry, the Dirac-point conductivity is either exceptionally robust against disorder (supermetallic state) or suppressed through a gap opening or by algebraic localization of zero-energy modes, whereas weak localization and the Anderson insulating regime are obtained for higher energies. These findings clarify the contribution of zero-energy modes to transport at the Dirac point, hitherto controversial.

  16. Dirac directional emission in anisotropic zero refractive index photonic crystals.

    PubMed

    He, Xin-Tao; Zhong, Yao-Nan; Zhou, You; Zhong, Zhi-Chao; Dong, Jian-Wen

    2015-08-14

    A certain class of photonic crystals with conical dispersion is known to behave as isotropic zero-refractive-index medium. However, the discrete building blocks in such photonic crystals are limited to construct multidirectional devices, even for high-symmetric photonic crystals. Here, we show multidirectional emission from low-symmetric photonic crystals with semi-Dirac dispersion at the zone center. We demonstrate that such low-symmetric photonic crystal can be considered as an effective anisotropic zero-refractive-index medium, as long as there is only one propagation mode near Dirac frequency. Four kinds of Dirac multidirectional emitters are achieved with the channel numbers of five, seven, eleven, and thirteen, respectively. Spatial power combination for such kind of Dirac directional emitter is also verified even when multiple sources are randomly placed in the anisotropic zero-refractive-index photonic crystal.

  17. Dirac directional emission in anisotropic zero refractive index photonic crystals

    PubMed Central

    He, Xin-Tao; Zhong, Yao-Nan; Zhou, You; Zhong, Zhi-Chao; Dong, Jian-Wen

    2015-01-01

    A certain class of photonic crystals with conical dispersion is known to behave as isotropic zero-refractive-index medium. However, the discrete building blocks in such photonic crystals are limited to construct multidirectional devices, even for high-symmetric photonic crystals. Here, we show multidirectional emission from low-symmetric photonic crystals with semi-Dirac dispersion at the zone center. We demonstrate that such low-symmetric photonic crystal can be considered as an effective anisotropic zero-refractive-index medium, as long as there is only one propagation mode near Dirac frequency. Four kinds of Dirac multidirectional emitters are achieved with the channel numbers of five, seven, eleven, and thirteen, respectively. Spatial power combination for such kind of Dirac directional emitter is also verified even when multiple sources are randomly placed in the anisotropic zero-refractive-index photonic crystal. PMID:26271208

  18. A systematic examination of a random sampling strategy for source apportionment calculations.

    PubMed

    Andersson, August

    2011-12-15

    Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is by setting up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. This method is also compared to a numerical integration solution for a two-source situation where source variability also is included. A general observation from this examination is that the variability of the source profiles not only affects the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
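
    The random sampling idea can be illustrated for the simplest two-source, one-marker case; the marker values and their assumed normal distributions below are invented for the sketch, which repeatedly solves the mixing equation so that the reported fractions reflect the source variability.

      import numpy as np

      rng = np.random.default_rng(11)
      n_draws = 100_000

      marker_mix = 4.0                              # measured marker value in the mixed sample
      src_a = rng.normal(2.0, 0.3, n_draws)         # marker value of source A (assumed distribution)
      src_b = rng.normal(7.0, 0.8, n_draws)         # marker value of source B (assumed distribution)

      f_a = (src_b - marker_mix) / (src_b - src_a)  # solves marker_mix = f_a*src_a + (1 - f_a)*src_b
      f_a = f_a[(f_a >= 0) & (f_a <= 1)]            # keep physically meaningful fractions

      print("median fraction from source A:", round(float(np.median(f_a)), 3))
      print("95% interval:", np.round(np.percentile(f_a, [2.5, 97.5]), 3))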

  19. Combining remotely sensed and other measurements for hydrologic areal averages

    NASA Technical Reports Server (NTRS)

    Johnson, E. R.; Peck, E. L.; Keefer, T. N.

    1982-01-01

    A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.

  20. Modeling and optimization of reductive degradation of chloramphenicol in aqueous solution by zero-valent bimetallic nanoparticles.

    PubMed

    Singh, Kunwar P; Singh, Arun K; Gupta, Shikha; Rai, Premanjali

    2012-07-01

    The present study aims to investigate the individual and combined effects of temperature, pH, zero-valent bimetallic nanoparticles (ZVBMNPs) dose, and chloramphenicol (CP) concentration on the reductive degradation of CP using ZVBMNPs in aqueous medium. Iron-silver ZVBMNPs were synthesized. Batch experimental data were generated using a four-factor statistical experimental design. CP reduction by ZVBMNPs was optimized using the response surface modeling (RSM) and artificial neural network-genetic algorithm (ANN-GA) approaches. The RSM and ANN methodologies were also compared for their predictive and generalization abilities using the same training and validation data set. Reductive by-products of CP were identified using liquid chromatography-mass spectrometry technique. The optimized process variables (RSM and ANN-GA approaches) yielded CP reduction capacity of 57.37 and 57.10 mg g(-1), respectively, as compared to the experimental value of 54.0 mg g(-1) with un-optimized variables. The ANN-GA and RSM methodologies yielded comparable results and helped to achieve a higher reduction (>6%) of CP by the ZVBMNPs as compared to the experimental value. The root mean squared error, relative standard error of prediction and correlation coefficient between the measured and model-predicted values of response variable were 1.34, 3.79, and 0.964 for RSM and 0.03, 0.07, and 0.999 for ANN models for the training and 1.39, 3.47, and 0.996 for RSM and 1.25, 3.11, and 0.990 for ANN models for the validation set. Predictive and generalization abilities of both the RSM and ANN models were comparable. The synthesized ZVBMNPs may be used for an efficient reductive removal of CP from the water.

  1. Timing at peak force may be the hidden target controlled in continuation and synchronization tapping.

    PubMed

    Du, Yue; Clark, Jane E; Whitall, Jill

    2017-05-01

    Timing control, such as producing movements at a given rate or synchronizing movements to an external event, has been studied through a finger-tapping task where timing is measured at the initial contact between finger and tapping surface or the point when a key is pressed. However, the point of peak force occurs after the time registered at the tapping surface and thus is a less obvious but still important event during finger tapping. Here, we compared the time at initial contact with the time at peak force as participants tapped their finger on a force sensor at a given rate after the metronome was turned off (continuation task) or in synchrony with the metronome (sensorimotor synchronization task). We found that, in the continuation task, timing was comparably accurate between initial contact and peak force. These two timing events also exhibited similar trial-by-trial statistical dependence (i.e., lag-one autocorrelation). However, the central clock variability was lower at the peak force than the initial contact. In the synchronization task, timing control at peak force appeared to be less variable and more accurate than that at initial contact. In addition to lower central clock variability, the mean synchronization error (SE) at peak force (SEP) was around zero, while the SE at initial contact (SEC) was negative. Although SEC and SEP demonstrated the same trial-by-trial statistical dependence, we found that participants adjusted the time of tapping to correct SEP, but not SEC, toward zero. These results suggest that timing at peak force is a meaningful target of timing control, particularly in synchronization tapping. This result may explain the fact that SE at initial contact is typically negative, as widely observed in the existing literature.

  2. Comparison of Prophylactic Naftopidil, Tamsulosin, and Silodosin for 125I Brachytherapy-Induced Lower Urinary Tract Symptoms in Patients With Prostate Cancer: Randomized Controlled Trial

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsumura, Hideyasu, E-mail: sugan@pd5.so-net.ne.jp; Satoh, Takefumi; Ishiyama, Hiromichi

    2011-11-15

    Purpose: To compare the efficacy of three α1A/α1D-adrenoceptor (AR) antagonists (naftopidil, tamsulosin, and silodosin) that have differing affinities for the α1-AR subtypes in treating urinary morbidities in Japanese men with 125I prostate implantation (PI) for prostate cancer. Methods and Materials: This single-institution prospective randomized controlled trial compared naftopidil, tamsulosin, and silodosin in patients undergoing PI. Patients were randomized and received either naftopidil, tamsulosin, or silodosin. Treatment began 1 day after PI and continued for 1 year. The primary efficacy variables were the changes in total International Prostate Symptom Score (IPSS) and postvoid residual urine (PVR). The secondary efficacy variables were changes in IPSS storage score and IPSS voiding score from baseline to set points during the study (1, 3, 6, and 12 months). Results: Two hundred twelve patients were evaluated in this study between June 2006 and February 2009: 71, 70, and 71 patients in the naftopidil, tamsulosin, and silodosin groups, respectively. With respect to the primary efficacy variables, the mean changes in the total IPSS at 1 month after PI in the naftopidil, tamsulosin, and silodosin groups were +10.3, +8.9, and +7.5, respectively. There were significantly greater decreases with silodosin than naftopidil at 1 month in the total IPSS. The mean changes in the PVR at 6 months were +14.6, +23.7, and +5.7 mL in the naftopidil, tamsulosin, and silodosin groups, respectively; silodosin showed a significant improvement in the PVR at 6 months vs. tamsulosin. With respect to the secondary efficacy variables, the mean changes in the IPSS voiding score at 1 month in the naftopidil, tamsulosin, and silodosin groups were +6.5, +5.6, and +4.5, respectively; silodosin showed a significant improvement in the IPSS voiding score at 1 month vs. naftopidil. Conclusions: Silodosin has a greater impact on improving PI-induced lower urinary tract symptoms than the other two agents.

  3. Coupling the Gaussian Free Fields with Free and with Zero Boundary Conditions via Common Level Lines

    NASA Astrophysics Data System (ADS)

    Qian, Wei; Werner, Wendelin

    2018-06-01

    We point out a new simple way to couple the Gaussian Free Field (GFF) with free boundary conditions in a two-dimensional domain with the GFF with zero boundary conditions in the same domain: Starting from the latter, one just has to sample at random all the signs of the height gaps on its boundary-touching zero-level lines (these signs are alternating for the zero-boundary GFF) in order to obtain a free boundary GFF. Constructions and couplings of the free boundary GFF and its level lines via soups of reflected Brownian loops and their clusters are also discussed. Such considerations show for instance that in a domain with an axis of symmetry, if one looks at the overlay of a single usual Conformal Loop Ensemble CLE3 with its own symmetric image, one obtains the CLE4-type collection of level lines of a GFF with mixed zero/free boundary conditions in the half-domain.

  4. Effect of Linagliptin Versus Metformin on Glycemic Variability in Patients with Impaired Glucose Tolerance.

    PubMed

    González-Heredia, Tonatiuh; Hernández-Corona, Diana M; González-Ortiz, Manuel; Martínez-Abundis, Esperanza

    2017-08-01

    Impaired glucose tolerance (IGT) and glycemic variability may be associated with increased risk of micro- and macrovascular complications. The aim of this study was to assess the effect of linagliptin versus metformin on glycemic variability in patients with IGT. A randomized, double-blind clinical trial with parallel groups was carried out in 16 adult patients with IGT who were overweight or obese. All patients signed an informed consent. The therapies were randomly assigned: (a) metformin 500 mg bid (n = 8) or (b) linagliptin 5 mg a.m. and placebo p.m. (n = 8), both for 90 days. At the beginning of the trial and 3 months later, fasting glucose, glycated hemoglobin A1c, oral glucose tolerance test (OGTT), and glycemic variability [area under the curve (AUC) of glucose, mean amplitude of glycemic excursion (MAGE), standard deviation (SD) of glucose, coefficient of variation (CV) of glucose, and mean blood glucose (MBG)] were measured. Mann-Whitney U, Wilcoxon, and Fisher exact tests were used for statistical analyses. Both groups were similar in baseline characteristics. After linagliptin administration, a significant decrease in glucose levels at 120 min of the OGTT (9.0 ± 0.9 vs. 6.9 ± 2.2 mmol/L, P = 0.012) was observed. Glycemic variability showed similar behavior, and there were no significant differences in the AUC, MAGE, SD of glucose, CV of glucose, and MBG between groups. Linagliptin administration resulted in better glycemic control, according to the decrease in glucose levels at 120 min of the OGTT, in patients with IGT. Meanwhile, glycemic variability was not modified in any of the study groups.

  5. Sampling intraspecific variability in leaf functional traits: Practical suggestions to maximize collected information.

    PubMed

    Petruzzellis, Francesco; Palandrani, Chiara; Savi, Tadeja; Alberti, Roberto; Nardini, Andrea; Bacaro, Giovanni

    2017-12-01

    The choice of the best sampling strategy to capture mean values of functional traits for a species/population, while maintaining information about trait variability and minimizing the sampling size and effort, is an open issue in functional trait ecology. Intraspecific variability (ITV) of functional traits strongly influences sampling size and effort. However, while adequate information is available about intraspecific variability between individuals (ITVBI) and among populations (ITVPOP), relatively few studies have analyzed intraspecific variability within individuals (ITVWI). Here, we provide an analysis of ITVWI of two foliar traits, namely specific leaf area (SLA) and osmotic potential (π), in a population of Quercus ilex L. We assessed the baseline ITVWI level of variation of the two traits and provided the minimum and optimal sampling sizes needed to take ITVWI into account, comparing sampling optimization outputs with those previously proposed in the literature. Different factors accounted for different amounts of variance of the two traits. SLA variance was mostly spread within individuals (43.4% of the total variance), while π variance was mainly spread between individuals (43.2%). Strategies that did not account for all the canopy strata produced mean values not representative of the sampled population. The minimum size to adequately capture the studied functional traits corresponded to 5 leaves taken randomly from 5 individuals, while the most accurate and feasible sampling size was 4 leaves taken randomly from 10 individuals. We demonstrate that the spatial structure of the canopy could significantly affect trait variability. Moreover, different strategies for different traits could be implemented during sampling surveys. We partially confirm sampling sizes previously proposed in the recent literature and encourage future analyses involving different traits.
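
    A hedged sketch of the variance-partition idea behind these percentages: with several leaves per individual, a one-way random-effects ANOVA splits trait variance into between-individual and within-individual (within-canopy) components, which is the kind of breakdown the 43.4%/43.2% figures summarize. The tree counts, leaf counts, and variances below are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical SLA-like trait: 10 trees, 20 leaves per tree.
    n_ind, n_leaf = 10, 20
    ind_effect = rng.normal(0, 1.0, n_ind)                               # between-individual spread
    leaves = ind_effect[:, None] + rng.normal(0, 1.2, (n_ind, n_leaf))   # within-individual spread

    # One-way random-effects ANOVA estimators of the variance components.
    grand = leaves.mean()
    ms_between = n_leaf * np.sum((leaves.mean(axis=1) - grand)**2) / (n_ind - 1)
    ms_within = np.sum((leaves - leaves.mean(axis=1, keepdims=True))**2) / (n_ind * (n_leaf - 1))

    var_within = ms_within
    var_between = max((ms_between - ms_within) / n_leaf, 0.0)
    total = var_within + var_between

    print(f"within-individual share : {100 * var_within / total:.1f} %")
    print(f"between-individual share: {100 * var_between / total:.1f} %")
    ```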

  6. Bayesian statistical ionospheric tomography improved by incorporating ionosonde measurements

    NASA Astrophysics Data System (ADS)

    Norberg, Johannes; Virtanen, Ilkka I.; Roininen, Lassi; Vierinen, Juha; Orispää, Mikko; Kauristie, Kirsti; Lehtinen, Markku S.

    2016-04-01

    We validate two-dimensional ionospheric tomography reconstructions against EISCAT incoherent scatter radar measurements. Our tomography method is based on Bayesian statistical inversion with prior distribution given by its mean and covariance. We employ ionosonde measurements for the choice of the prior mean and covariance parameters and use the Gaussian Markov random fields as a sparse matrix approximation for the numerical computations. This results in a computationally efficient tomographic inversion algorithm with clear probabilistic interpretation. We demonstrate how this method works with simultaneous beacon satellite and ionosonde measurements obtained in northern Scandinavia. The performance is compared with results obtained with a zero-mean prior and with the prior mean taken from the International Reference Ionosphere 2007 model. In validating the results, we use EISCAT ultra-high-frequency incoherent scatter radar measurements as the ground truth for the ionization profile shape. We find that in comparison to the alternative prior information sources, ionosonde measurements improve the reconstruction by adding accurate information about the absolute value and the altitude distribution of electron density. With an ionosonde at continuous disposal, the presented method enhances stand-alone near-real-time ionospheric tomography for the given conditions significantly.
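
    A hedged sketch of the Bayesian update underlying such a tomographic inversion: with a Gaussian prior (mean and covariance) on the unknown profile and linear, noisy observations, the posterior mean follows from the standard Gaussian conditioning formula. The geometry, profile, covariance, and noise level below are toy values, and the sparse GMRF machinery of the paper is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy 1-D "profile": n unknowns, m line-integral-like measurements.
    n, m = 50, 20
    A = rng.uniform(0, 1, (m, n))                                   # hypothetical observation geometry
    x_true = np.exp(-0.5 * ((np.arange(n) - 25) / 6.0)**2)          # bell-shaped ionization profile
    y = A @ x_true + rng.normal(0, 0.05, m)                         # noisy measurements

    # Prior: mean from an "ionosonde-like" rough profile, smooth covariance.
    x0 = 0.8 * np.exp(-0.5 * ((np.arange(n) - 23) / 8.0)**2)
    i = np.arange(n)
    C = 0.1 * np.exp(-0.5 * ((i[:, None] - i[None, :]) / 5.0)**2)   # squared-exponential prior covariance
    R = 0.05**2 * np.eye(m)                                         # measurement noise covariance

    # Posterior mean for a linear Gaussian model: x0 + C A^T (A C A^T + R)^-1 (y - A x0).
    S = A @ C @ A.T + R
    x_post = x0 + C @ A.T @ np.linalg.solve(S, y - A @ x0)

    print("posterior rms error :", np.sqrt(np.mean((x_post - x_true)**2)).round(4))
    print("prior-only rms error:", np.sqrt(np.mean((x0 - x_true)**2)).round(4))
    ```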

  7. Mean deviation coupling synchronous control for multiple motors via second-order adaptive sliding mode control.

    PubMed

    Li, Lebao; Sun, Lingling; Zhang, Shengzhou

    2016-05-01

    A new mean deviation coupling synchronization control strategy is developed for multiple motor control systems, which can guarantee the synchronization performance of multiple motor control systems and reduce the complexity of the control structure as the number of motors increases. A mean deviation coupling synchronization control architecture combining a second-order adaptive sliding mode control (SOASMC) approach is proposed, which can improve the synchronization control precision of multiple motor control systems and make the speed tracking errors, mean speed errors of each motor, and speed synchronization errors converge to zero rapidly. The proposed control scheme is robust to parameter variations and random external disturbances and can alleviate chattering phenomena. Moreover, an adaptive law is employed to estimate the unknown bound of uncertainty, which is obtained in the sense of the Lyapunov stability theorem to minimize the control effort. Performance comparisons with master-slave control, relative coupling control, ring coupling control, conventional PI control, and SMC are investigated on a four-motor synchronization control system. Extensive comparative results are given to show the good performance of the proposed control scheme. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
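
    A hedged sketch of how the coupled errors in a mean-deviation coupling scheme can be formed from measured motor speeds; the sliding-mode controller that drives these errors to zero is not reproduced, and the speeds and the exact coupling form below are illustrative assumptions.

    ```python
    import numpy as np

    # Hypothetical instantaneous speeds (rad/s) of four motors and the common reference.
    omega_ref = 100.0
    omega = np.array([99.2, 100.5, 98.7, 100.1])

    # Mean deviation coupling: each motor sees its own tracking error plus the
    # deviation of that error from the mean tracking error of all motors.
    e_track = omega_ref - omega          # speed tracking errors
    e_mean = e_track.mean()              # mean speed error of the group
    e_sync = e_track - e_mean            # synchronization (deviation-from-mean) errors

    # A coupled error of this form is what the controller would drive to zero.
    coupled = e_track + e_sync
    print("tracking errors:", e_track)
    print("sync errors    :", e_sync.round(3))
    print("coupled errors :", coupled.round(3))
    ```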

  8. A Meta-Analysis of Massage Therapy Research

    ERIC Educational Resources Information Center

    Moyer, Christopher A.; Rounds, James; Hannum, James W.

    2004-01-01

    Massage therapy (MT) is an ancient form of treatment that is now gaining popularity as part of the complementary and alternative medical therapy movement. A meta-analysis was conducted of studies that used random assignment to test the effectiveness of MT. Mean effect sizes were calculated from 37 studies for 9 dependent variables. Single…

  9. Convenience Samples and Caregiving Research: How Generalizable Are the Findings?

    ERIC Educational Resources Information Center

    Pruchno, Rachel A.; Brill, Jonathan E.; Shands, Yvonne; Gordon, Judith R.; Genderson, Maureen Wilson; Rose, Miriam; Cartwright, Francine

    2008-01-01

    Purpose: We contrast characteristics of respondents recruited using convenience strategies with those of respondents recruited by random digit dial (RDD) methods. We compare sample variances, means, and interrelationships among variables generated from the convenience and RDD samples. Design and Methods: Women aged 50 to 64 who work full time and…

  10. On the null distribution of Bayes factors in linear regression

    USDA-ARS?s Scientific Manuscript database

    We show that under the null, the 2 log (Bayes factor) is asymptotically distributed as a weighted sum of chi-squared random variables with a shifted mean. This claim holds for Bayesian multi-linear regression with a family of conjugate priors, namely, the normal-inverse-gamma prior, the g-prior, and...
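
    A hedged simulation sketch of the claimed null law: draw a weighted sum of independent chi-squared variables plus a mean shift and inspect its quantiles against a plain chi-squared reference. The weights and shift below are invented; the paper derives the actual values from the prior and design.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical weights and mean shift for 2*log(Bayes factor) under the null.
    weights = np.array([0.9, 0.6, 0.3])
    shift = -1.2
    n_sim = 200_000

    # Weighted sum of independent 1-df chi-squared variables, plus the shift.
    chi2_draws = rng.chisquare(df=1, size=(n_sim, len(weights)))
    two_log_bf = chi2_draws @ weights + shift

    # Compare a few empirical quantiles with a plain chi-squared(3) reference.
    qs = [0.5, 0.9, 0.95, 0.99]
    print("weighted-sum quantiles:", np.quantile(two_log_bf, qs).round(2))
    print("chi2(3) quantiles     :", np.quantile(rng.chisquare(3, n_sim), qs).round(2))
    ```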

  11. [Relationships between well-being and social support: a meta analysis of studies conducted in Turkey].

    PubMed

    Yalçın, İlhan

    2015-01-01

    The purpose of this study was to investigate overall relationships between well-being and social support through meta-analysis. Studies that investigated associations between social support and life satisfaction, subjective well-being, self-esteem, depression, and loneliness were included in the meta-analysis. A literature review was conducted to assess studies for potential inclusion, and studies that met the inclusion criteria were retained. The inclusion criteria were that studies must be conducted in Turkey and must report a correlation coefficient between the study variables. Data were analyzed using a random effects model. There was a positive relationship between overall well-being and social support; the level of social support was negatively correlated with depression and loneliness. For the well-being variables, the mean effect size of perceived support from family, and for depression/loneliness, the mean effect size of perceived support from friends, were significantly stronger than those of other support sources. For both the well-being and the depression/loneliness variables, the mean effect size of studies conducted with older people was significantly stronger than that of studies conducted with other age groups. Also, the mean effect size of theses was significantly stronger than that of articles. The findings are expected to contribute to a better understanding of relationships between social support and well-being.

  12. Effect of Fault Parameter Uncertainties on PSHA explored by Monte Carlo Simulations: A case study for southern Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Akinci, A.; Pace, B.

    2017-12-01

    In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at a 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of several fault parameters on ground motion predictions for 10% exceedance in 50-year hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is represented by a truncated normal distribution described by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of the logic tree, is used in order to capture the uncertainty in seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of every logic tree branch. These logic tree branches, analyzed through the Monte Carlo approach, are maximum magnitude, fault length, fault width, fault dip, and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity of each parameter to the overall variability is determined by varying each of the fault parameters while fixing the others. However, in this study we do not investigate the sensitivity of the mean hazard results to the choice of different GMPEs. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
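
    A hedged sketch of the parameter-sampling step only: draw fault length, width, slip rate, and maximum magnitude from truncated normal distributions and propagate them to a characteristic-earthquake recurrence rate via a standard moment balance. The distributions, shear modulus, and magnitude-moment relation below are generic textbook choices, not the study's inputs, and no ground-motion or hazard-map calculation is attempted.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def trunc_normal(mean, sd, lo, hi, size):
        """Truncated normal by rejection (simple but adequate for a sketch)."""
        out = rng.normal(mean, sd, size)
        bad = (out < lo) | (out > hi)
        while bad.any():
            out[bad] = rng.normal(mean, sd, bad.sum())
            bad = (out < lo) | (out > hi)
        return out

    n = 200  # simulations per parameter set, mirroring the 200 runs per branch

    # Hypothetical fault-parameter distributions (mean, sd, bounds).
    length = trunc_normal(30.0, 3.0, 20.0, 40.0, n)   # km
    width = trunc_normal(12.0, 1.5, 8.0, 16.0, n)     # km
    slip = trunc_normal(1.0, 0.3, 0.2, 2.0, n)        # mm/yr
    m_max = trunc_normal(6.6, 0.2, 6.0, 7.2, n)       # moment magnitude

    # Characteristic-earthquake recurrence: seismic moment rate / moment per event.
    mu = 3.0e10                                                     # shear modulus, Pa
    moment_rate = mu * (length * 1e3) * (width * 1e3) * (slip * 1e-3)   # N*m per year
    m0_char = 10 ** (1.5 * m_max + 9.05)                            # N*m per characteristic event
    rate = moment_rate / m0_char                                    # events per year

    print(f"recurrence interval: mean {np.mean(1 / rate):.0f} yr, "
          f"95% range {np.percentile(1 / rate, 2.5):.0f}-{np.percentile(1 / rate, 97.5):.0f} yr")
    ```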

  13. Continuously-Variable Positive-Mesh Power Transmission

    NASA Technical Reports Server (NTRS)

    Johnson, J. L.

    1982-01-01

    Proposed transmission with continuously variable speed ratio couples two mechanical trigonometric-function generators. The transmission is expected to handle higher loads than conventional variable-pulley drives and, unlike a variable pulley, provides positive traction through the entire drive train with no reliance on friction to transmit power. Speed can be varied continuously through zero and into reverse. Possible applications include instrumentation where drive-train slippage cannot be tolerated.

  14. Zero-temperature directed polymer in random potential in 4+1 dimensions.

    PubMed

    Kim, Jin Min

    2016-12-01

    A zero-temperature directed polymer in a random potential in 4+1 dimensions is described. The fluctuation ΔE(t) of the lowest energy of the polymer varies as t^{β} with β=0.159±0.007 for polymer length t, and ΔE follows ΔE(L)∼L^{α} at saturation with α=0.275±0.009, where L is the system size. The dynamic exponent z≈1.73 is obtained from z=α/β. The estimated values of the exponents satisfy the scaling relation α+z=2 very well. We also monitor the end-to-end distance of the polymer and obtain z independently. Our results show that the upper critical dimension of the Kardar-Parisi-Zhang equation is higher than d=4+1 dimensions.
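
    A quick numeric check of the exponent relations quoted above, using the reported values and a simple error-propagation approximation for the ratio (the propagation formula is a standard assumption, not part of the record).

    ```python
    # Reported exponents for the 4+1 dimensional zero-temperature directed polymer.
    alpha, d_alpha = 0.275, 0.009
    beta, d_beta = 0.159, 0.007

    z = alpha / beta                                                # dynamic exponent from z = alpha/beta
    dz = z * ((d_alpha / alpha)**2 + (d_beta / beta)**2) ** 0.5     # simple error propagation for a ratio

    print(f"z = {z:.2f} +/- {dz:.2f}")        # about 1.73 +/- 0.09
    print(f"alpha + z = {alpha + z:.2f}")     # close to the scaling-relation value 2
    ```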

  15. Zero adjusted models with applications to analysing helminths count data.

    PubMed

    Chipeta, Michael G; Ngwira, Bagrey M; Simoonga, Christopher; Kazembe, Lawrence N

    2014-11-27

    It is common in public health and epidemiology that the outcome of interest is a count of event occurrences. Analysing these data using classical linear models is mostly inappropriate, even after transformation of the outcome variables, due to overdispersion. Zero-adjusted mixture count models such as zero-inflated and hurdle count models are applied to count data when overdispersion and excess zeros exist. The main objective of the current paper is to apply such models to analyse risk factors associated with human helminths (S. haematobium), particularly in a case where there is a high proportion of zero counts. The data were collected during a community-based randomised control trial assessing the impact of mass drug administration (MDA) with praziquantel in Malawi, and a school-based cross-sectional epidemiology survey in Zambia. Count data models, including traditional models (Poisson and negative binomial), zero-modified models (zero-inflated Poisson and zero-inflated negative binomial), and hurdle models (Poisson logit hurdle and negative binomial logit hurdle), were fitted and compared. Using the Akaike information criterion (AIC), the negative binomial logit hurdle (NBLH) and zero-inflated negative binomial (ZINB) models showed the best performance in both datasets. With regard to capturing zero counts, these models performed better than the other models. This paper showed that zero-modified NBLH and ZINB models are more appropriate methods for the analysis of data with excess zeros. The choice between the hurdle and zero-inflated models should be based on the aim and endpoints of the study.
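
    A hedged sketch of the model-comparison workflow on simulated zero-inflated counts, assuming Python's statsmodels (0.10 or later) for the count models; the original analysis very likely used different software, and the covariate, sample size, and inflation level below are invented. A hurdle fit is omitted because its availability varies across statsmodels versions.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import (
        ZeroInflatedPoisson, ZeroInflatedNegativeBinomialP)

    rng = np.random.default_rng(6)

    # Simulated egg-count-like data with excess zeros: a structural-zero process on
    # top of an overdispersed (negative binomial) count process, plus one covariate.
    n = 1000
    x = rng.normal(size=n)
    X = sm.add_constant(x)
    mu = np.exp(0.5 + 0.4 * x)
    nb_counts = rng.negative_binomial(n=2, p=2 / (2 + mu))   # NB with mean mu, size 2
    zeros = rng.random(n) < 0.35                             # 35% structural zeros
    y = np.where(zeros, 0, nb_counts)

    fits = {
        "Poisson": sm.Poisson(y, X).fit(disp=0),
        "NegBin": sm.NegativeBinomial(y, X).fit(disp=0),
        "ZIP": ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=0, maxiter=200),
        "ZINB": ZeroInflatedNegativeBinomialP(y, X, exog_infl=np.ones((n, 1))).fit(disp=0, maxiter=200),
    }
    for name, res in fits.items():
        print(f"{name:7s} AIC = {res.aic:8.1f}")
    ```

    On data generated this way the zero-inflated negative binomial fit would be expected to have the lowest AIC, mirroring the comparison strategy described in the record.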

  16. Phase transitions in Ising models on directed networks

    NASA Astrophysics Data System (ADS)

    Lipowski, Adam; Ferreira, António Luis; Lipowska, Dorota; Gontarek, Krzysztof

    2015-11-01

    We examine Ising models with heat-bath dynamics on directed networks. Our simulations show that Ising models on directed triangular and simple cubic lattices undergo a phase transition that most likely belongs to the Ising universality class. On the directed square lattice the model remains paramagnetic at any positive temperature, as already reported in some previous studies. We also examine random directed graphs and show that, contrary to undirected ones, percolation of directed bonds does not guarantee ferromagnetic ordering. Only above a certain threshold can a random directed graph support finite-temperature ferromagnetic ordering. Such behavior is also found for out-homogeneous random graphs, but in this case the analysis of magnetic and percolative properties can be done exactly. Directed random graphs also differ from undirected ones with respect to zero-temperature freezing. Only at low connectivity do they remain trapped in a disordered configuration. Above a certain threshold, however, the zero-temperature dynamics quickly drives the model toward a broken symmetry (magnetized) state. Only above this threshold, which is almost twice as large as the percolation threshold, do we expect the Ising model to have a positive critical temperature. With very good accuracy, the behavior on directed random graphs is reproduced within a certain approximate scheme.
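
    A hedged sketch of heat-bath dynamics on a directed random graph: each spin is updated using the local field from its incoming links only, so the dynamics need not satisfy detailed balance, which is what makes the directed case interesting. The graph construction (fixed in-degree, self-loops possible), size, temperature, and sweep count below are toy choices, not the paper's setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Directed random graph: each site receives k randomly chosen in-neighbors.
    N, k, T, steps = 1000, 4, 1.5, 100
    neighbors = rng.integers(0, N, size=(N, k))   # in-neighbors of each site (directed links)
    spins = rng.choice([-1, 1], size=N)

    def sweep(spins):
        """One heat-bath sweep: each spin is set to +1 with its conditional probability."""
        for i in rng.permutation(N):
            h = spins[neighbors[i]].sum()                  # local field from in-neighbors only
            p_up = 1.0 / (1.0 + np.exp(-2.0 * h / T))      # heat-bath probability at temperature T
            spins[i] = 1 if rng.random() < p_up else -1
        return spins

    for _ in range(steps):
        spins = sweep(spins)

    m = abs(spins.mean())
    print(f"|magnetization| after {steps} sweeps at T={T}: {m:.3f}")
    ```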

  17. Extension of the Haseman-Elston regression model to longitudinal data.

    PubMed

    Won, Sungho; Elston, Robert C; Park, Taesung

    2006-01-01

    We propose an extension to longitudinal data of the Haseman and Elston regression method for linkage analysis. The proposed model is a mixed model having several random effects. As response variable, we investigate the sibship sample mean corrected cross-product (smHE) and the BLUP-mean corrected cross product (pmHE), comparing them with the original squared difference (oHE), the overall mean corrected cross-product (rHE), and the weighted average of the squared difference and the squared mean-corrected sum (wHE). The proposed model allows for the correlation structure of longitudinal data. Also, the model can test for gene x time interaction to discover genetic variation over time. The model was applied in an analysis of the Genetic Analysis Workshop 13 (GAW13) simulated dataset for a quantitative trait simulating systolic blood pressure. Independence models did not preserve the test sizes, while the mixed models with both family and sibpair random effects tended to preserve size well. Copyright 2006 S. Karger AG, Basel.

  18. The hemodynamic effects of intravenous paracetamol (acetaminophen) vs normal saline in cardiac surgery patients: A single center placebo controlled randomized study

    PubMed Central

    Churilov, Leonid

    2018-01-01

    The hemodynamic effects of intravenous (IV) paracetamol in patients undergoing cardiac surgery are unknown. We performed a prospective single center placebo controlled randomized study with parallel group design in adult patients undergoing elective cardiac surgery. Participants received paracetamol (1 gram) IV or placebo (an equal volume of 0.9% saline) preoperatively followed by two postoperative doses 6 hours apart. The primary endpoint was the absolute change in systolic (SBP) 30 minutes after the preoperative infusion, analysed using an ANCOVA model. Secondary endpoints included absolute changes in mean arterial pressure (MAP) and diastolic blood pressure (DPB), and other key hemodynamic variables after each infusion. All other endpoints were analysed using random-effect generalized least squares regression modelling with individual patients treated as random effects. Fifty participants were randomly assigned to receive paracetamol (n = 25) or placebo (n = 25). Post preoperative infusion, paracetamol decreased SBP by a mean (SD) of 13 (18) mmHg, p = 0.02, compared to a mean (SD) of 1 (11) mmHg with saline. Paracetamol decreased MAP and DBP by a mean (SD) of 9 (12) mmHg and 8 (9) mmHg (p = 0.01 and 0.02), respectively, compared to a mean (SD) of 1 (8) mmHg and 0 (6) mmHg with placebo. Postoperatively, there were no significant differences in pressure or flow based hemodynamic parameters in both groups. This study provides high quality evidence that the administration of IV paracetamol in patients undergoing cardiac surgery causes a transient decrease in preoperative blood pressure when administered before surgery but no adverse hemodynamic effects when administered in the postoperative setting. PMID:29659631

  19. The hemodynamic effects of intravenous paracetamol (acetaminophen) vs normal saline in cardiac surgery patients: A single center placebo controlled randomized study.

    PubMed

    Chiam, Elizabeth; Bellomo, Rinaldo; Churilov, Leonid; Weinberg, Laurence

    2018-01-01

    The hemodynamic effects of intravenous (IV) paracetamol in patients undergoing cardiac surgery are unknown. We performed a prospective single center placebo controlled randomized study with parallel group design in adult patients undergoing elective cardiac surgery. Participants received paracetamol (1 gram) IV or placebo (an equal volume of 0.9% saline) preoperatively followed by two postoperative doses 6 hours apart. The primary endpoint was the absolute change in systolic (SBP) 30 minutes after the preoperative infusion, analysed using an ANCOVA model. Secondary endpoints included absolute changes in mean arterial pressure (MAP) and diastolic blood pressure (DPB), and other key hemodynamic variables after each infusion. All other endpoints were analysed using random-effect generalized least squares regression modelling with individual patients treated as random effects. Fifty participants were randomly assigned to receive paracetamol (n = 25) or placebo (n = 25). Post preoperative infusion, paracetamol decreased SBP by a mean (SD) of 13 (18) mmHg, p = 0.02, compared to a mean (SD) of 1 (11) mmHg with saline. Paracetamol decreased MAP and DBP by a mean (SD) of 9 (12) mmHg and 8 (9) mmHg (p = 0.01 and 0.02), respectively, compared to a mean (SD) of 1 (8) mmHg and 0 (6) mmHg with placebo. Postoperatively, there were no significant differences in pressure or flow based hemodynamic parameters in both groups. This study provides high quality evidence that the administration of IV paracetamol in patients undergoing cardiac surgery causes a transient decrease in preoperative blood pressure when administered before surgery but no adverse hemodynamic effects when administered in the postoperative setting.

  20. A novel quantum-mechanical interpretation of the Dirac equation

    NASA Astrophysics Data System (ADS)

    K-H Kiessling, M.; Tahvildar-Zadeh, A. S.

    2016-04-01

    A novel interpretation is given of Dirac’s ‘wave equation for the relativistic electron’ as a quantum-mechanical one-particle equation. In this interpretation the electron and the positron are merely the two different ‘topological spin’ states of a single more fundamental particle, not distinct particles in their own right. The new interpretation is backed up by the existence of such ‘bi-particle’ structures in general relativity, in particular the ring singularity present in any spacelike section of the spacetime singularity of the maximal-analytically extended, topologically non-trivial, electromagnetic Kerr-Newman (KN) spacetime in the zero-gravity limit (here, ‘zero-gravity’ means the limit G → 0, where G is Newton’s constant of universal gravitation). This novel interpretation resolves the dilemma that Dirac’s wave equation seems to be capable of describing both the electron and the positron in ‘external’ fields in many relevant situations, while the bi-spinorial wave function has only a single position variable in its argument, not two—as it should if it were a quantum-mechanical two-particle wave equation. A Dirac equation is formulated for such a ring-like bi-particle which interacts with a static point charge located elsewhere in the topologically non-trivial physical space associated with the moving ring particle, the motion being governed by a de Broglie-Bohm type law extracted from the Dirac equation. As an application, the pertinent general-relativistic zero-gravity hydrogen problem is studied in the usual Born-Oppenheimer approximation. Its spectral results suggest that the zero-G KN magnetic moment be identified with the so-called ‘anomalous magnetic moment of the physical electron,’ not with the Bohr magneton, so that the ring radius is only a tiny fraction of the electron’s reduced Compton wavelength.

  1. A pragmatic randomized controlled trial to evaluate the effectiveness of a facilitated exercise intervention as a treatment for postnatal depression: the PAM-PeRS trial.

    PubMed

    Daley, A J; Blamey, R V; Jolly, K; Roalfe, A K; Turner, K M; Coleman, S; McGuinness, M; Jones, I; Sharp, D J; MacArthur, C

    2015-08-01

    Postnatal depression affects about 10-15% of women in the year after giving birth. Many women and healthcare professionals would like an effective and accessible non-pharmacological treatment for postnatal depression. Women who fulfilled the International Classification of Diseases (ICD)-10 criteria for major depression in the first 6 months postnatally were randomized to receive usual care plus a facilitated exercise intervention or usual care only. The intervention involved two face-to-face consultations and two telephone support calls with a physical activity facilitator over 6 months to support participants to engage in regular exercise. The primary outcome was symptoms of depression using the Edinburgh Postnatal Depression Scale (EPDS) at 6 months post-randomization. Secondary outcomes included EPDS score as a binary variable (recovered and improved) at 6 and 12 months post-randomization. A total of 146 women were potentially eligible and 94 were randomized. Of these, 34% reported thoughts of self-harming at baseline. After adjusting for baseline EPDS, analyses revealed a -2.04 mean difference in EPDS score, favouring the exercise group [95% confidence interval (CI) -4.11 to 0.03, p = 0.05]. When also adjusting for pre-specified demographic variables the effect was larger and statistically significant (mean difference = -2.26, 95% CI -4.36 to -0.16, p = 0.03). Based on EPDS score a larger proportion of the intervention group was recovered (46.5% v. 23.8%, p = 0.03) compared with usual care at 6 months follow-up. This trial shows that an exercise intervention that involved encouragement to exercise and to seek out social support to exercise may be an effective treatment for women with postnatal depression, including those with thoughts of self-harming.

  2. The supersymmetric method in random matrix theory and applications to QCD

    NASA Astrophysics Data System (ADS)

    Verbaarschot, Jacobus

    2004-12-01

    The supersymmetric method is a powerful method for the nonperturbative evaluation of quenched averages in disordered systems. Among others, this method has been applied to the statistical theory of S-matrix fluctuations, the theory of universal conductance fluctuations and the microscopic spectral density of the QCD Dirac operator. We start this series of lectures with a general review of Random Matrix Theory and the statistical theory of spectra. An elementary introduction of the supersymmetric method in Random Matrix Theory is given in the second and third lecture. We will show that a Random Matrix Theory can be rewritten as an integral over a supermanifold. This integral will be worked out in detail for the Gaussian Unitary Ensemble that describes level correlations in systems with broken time-reversal invariance. We especially emphasize the role of symmetries. As a second example of the application of the supersymmetric method we discuss the calculation of the microscopic spectral density of the QCD Dirac operator. This is the eigenvalue density near zero on the scale of the average level spacing which is known to be given by chiral Random Matrix Theory. Also in this case we use symmetry considerations to rewrite the generating function for the resolvent as an integral over a supermanifold. The main topic of the second last lecture is the recent developments on the relation between the supersymmetric partition function and integrable hierarchies (in our case the Toda lattice hierarchy). We will show that this relation is an efficient way to calculate superintegrals. Several examples that were given in previous lectures will be worked out by means of this new method. Finally, we will discuss the quenched QCD Dirac spectrum at nonzero chemical potential. Because of the nonhermiticity of the Dirac operator the usual supersymmetric method has not been successful in this case. However, we will show that the supersymmetric partition function can be evaluated by means of the replica limit of the Toda lattice equation.

  3. Variation of normal tissue complication probability (NTCP) estimates of radiation-induced hypothyroidism in relation to changes in delineation of the thyroid gland.

    PubMed

    Rønjom, Marianne F; Brink, Carsten; Lorenzen, Ebbe L; Hegedüs, Laszlo; Johansen, Jørgen

    2015-01-01

    To examine the variations of risk-estimates of radiation-induced hypothyroidism (HT) from our previously developed normal tissue complication probability (NTCP) model in patients with head and neck squamous cell carcinoma (HNSCC) in relation to variability of delineation of the thyroid gland. In a previous study for development of an NTCP model for HT, the thyroid gland was delineated in 246 treatment plans of patients with HNSCC. Fifty of these plans were randomly chosen for re-delineation for a study of the intra- and inter-observer variability of thyroid volume, Dmean and estimated risk of HT. Bland-Altman plots were used for assessment of the systematic (mean) and random [standard deviation (SD)] variability of the three parameters, and a method for displaying the spatial variation in delineation differences was developed. Intra-observer variability resulted in a mean difference in thyroid volume and Dmean of 0.4 cm(3) (SD ± 1.6) and -0.5 Gy (SD ± 1.0), respectively, and 0.3 cm(3) (SD ± 1.8) and 0.0 Gy (SD ± 1.3) for inter-observer variability. The corresponding mean differences of NTCP values for radiation-induced HT due to intra- and inter-observer variations were insignificantly small, -0.4% (SD ± 6.0) and -0.7% (SD ± 4.8), respectively, but as the SDs show, for some patients the difference in estimated NTCP was large. For the entire study population, the variation in predicted risk of radiation-induced HT in head and neck cancer was small and our NTCP model was robust against observer variations in delineation of the thyroid gland. However, for the individual patient, there may be large differences in estimated risk which calls for precise delineation of the thyroid gland to obtain correct dose and NTCP estimates for optimized treatment planning in the individual patient.
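
    A hedged sketch of the Bland-Altman summary used here: the mean of the paired differences gives the systematic component and their standard deviation the random component, from which 95% limits of agreement follow. The paired volumes below are simulated stand-ins, not the study's delineations.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Hypothetical paired thyroid volumes (cm^3) from two delineations of the same 50 plans.
    v1 = rng.normal(12.0, 3.0, 50)
    v2 = v1 + rng.normal(0.3, 1.6, 50)    # re-delineation with a small bias and random spread

    diff = v2 - v1
    mean_diff = diff.mean()               # systematic (mean) component
    sd_diff = diff.std(ddof=1)            # random (SD) component
    loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)

    print(f"mean difference = {mean_diff:.2f} cm^3 (SD {sd_diff:.2f})")
    print(f"95% limits of agreement: {loa[0]:.2f} to {loa[1]:.2f} cm^3")
    ```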

  4. Diversity in sound pressure levels and estimated active space of resident killer whale vocalizations.

    PubMed

    Miller, Patrick J O

    2006-05-01

    Signal source intensity and detection range, which integrates source intensity with propagation loss, background noise and receiver hearing abilities, are important characteristics of communication signals. Apparent source levels were calculated for 819 pulsed calls and 24 whistles produced by free-ranging resident killer whales by triangulating the angles-of-arrival of sounds on two beamforming arrays towed in series. Levels in the 1-20 kHz band ranged from 131 to 168 dB re 1 microPa at 1 m, with differences in the means of different sound classes (whistles: 140.2+/-4.1 dB; variable calls: 146.6+/-6.6 dB; stereotyped calls: 152.6+/-5.9 dB), and among stereotyped call types. Repertoire diversity carried through to estimates of active space, with "long-range" stereotyped calls all containing overlapping, independently-modulated high-frequency components (mean estimated active space of 10-16 km in sea state zero) and "short-range" sounds (5-9 km) included all stereotyped calls without a high-frequency component, whistles, and variable calls. Short-range sounds are reported to be more common during social and resting behaviors, while long-range stereotyped calls predominate in dispersed travel and foraging behaviors. These results suggest that variability in sound pressure levels may reflect diverse social and ecological functions of the acoustic repertoire of killer whales.

  5. Meta-analysis of time-series studies of air pollution and mortality: effects of gases and particles and the influence of cause of death, age, and season.

    PubMed

    Stieb, David M; Judek, Stan; Burnett, Richard T

    2002-04-01

    A comprehensive, systematic synthesis was conducted of daily time-series studies of air pollution and mortality from around the world. Estimates of effect sizes were extracted from 109 studies, from single- and multipollutant models, and by cause of death, age, and season. Random effects pooled estimates of excess all-cause mortality (single-pollutant models) associated with a change in pollutant concentration equal to the mean value among a representative group of cities were 2.0% (95% CI 1.5-2.4%) per 31.3 microg/m3 particulate matter (PM) of median diameter < or = 10 microm (PM10); 1.7% (1.2-2.2%) per 1.1 ppm CO; 2.8% (2.1-3.5%) per 24.0 ppb NO2; 1.6% (1.1-2.0%) per 31.2 ppb O3; and 0.9% (0.7-1.2%) per 9.4 ppb SO2 (daily maximum concentration for O3, daily average for others). Effect sizes were generally reduced in multipollutant models, but remained significantly different from zero for PM10 and SO2. Larger effect sizes were observed for respiratory mortality for all pollutants except O3. Heterogeneity among studies was partially accounted for by differences in variability of pollutant concentrations, and results were robust to alternative approaches to selecting estimates from the pool of available candidates. This synthesis leaves little doubt that acute air pollution exposure is a significant contributor to mortality.
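
    A hedged sketch of random-effects pooling of per-study excess-mortality estimates using the DerSimonian-Laird between-study variance; the record does not state which random-effects estimator was used, and the estimates and standard errors below are invented.

    ```python
    import numpy as np

    # Hypothetical per-study excess mortality estimates (%) and their standard errors.
    est = np.array([1.2, 2.5, 1.8, 0.6, 3.0, 2.1])
    se = np.array([0.5, 0.8, 0.4, 0.6, 1.0, 0.7])

    # DerSimonian-Laird between-study variance (tau^2) from the fixed-effect Q statistic.
    w_fixed = 1.0 / se**2
    mu_fixed = np.sum(w_fixed * est) / np.sum(w_fixed)
    Q = np.sum(w_fixed * (est - mu_fixed)**2)
    df = len(est) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
    tau2 = max((Q - df) / c, 0.0)

    # Random-effects pooled estimate and 95% confidence interval.
    w = 1.0 / (se**2 + tau2)
    mu = np.sum(w * est) / np.sum(w)
    se_mu = np.sqrt(1.0 / np.sum(w))
    print(f"pooled excess mortality = {mu:.2f}% "
          f"(95% CI {mu - 1.96 * se_mu:.2f} to {mu + 1.96 * se_mu:.2f})")
    ```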

  6. Acute effects of elastic bands during the free-weight barbell back squat exercise on velocity, power, and force production.

    PubMed

    Stevenson, Mark W; Warpeha, Joseph M; Dietz, Cal C; Giveans, Russell M; Erdman, Arthur G

    2010-11-01

    The use of elastic bands in resistance training has been reported to be effective in increasing performance-related parameters such as power, rate of force development (RFD), and velocity. The purpose of this study was to assess the following measures during the free-weight back squat exercise with and without elastic bands: peak and mean velocity in the eccentric and concentric phases (PV-E, PV-C, MV-E, MV-C), peak force (PF), peak power in the concentric phase, and RFD immediately before and after the zero-velocity point and in the concentric phase (RFDC). Twenty trained male volunteers (age = 26.0 ± 4.4 years) performed 3 sets of 3 repetitions of squats (at 55% one repetition maximum [1RM]) on 2 separate days: 1 day without bands and the other with bands in a randomized order. The added band force equaled 20% of the subjects' 55% 1RM. Two independent force platforms collected ground reaction force data, and a 9-camera motion capture system was used for displacement measurements. The results showed that PV-E and RFDC were significantly (p < 0.05) greater with the use of bands, whereas PV-C and MV-C were greater without bands. There were no differences in any other variables. These results indicate that there may be benefits to performing squats with elastic bands in terms of RFD. Practitioners concerned with improving RFD may want to consider incorporating this easily implemented training variation.

  7. Statistical and hydrodynamic properties of double-ring polymers with a fixed linking number between twin rings.

    PubMed

    Uehara, Erica; Deguchi, Tetsuo

    2014-01-28

    For a double-ring polymer in solution we evaluate the mean-square radius of gyration and the diffusion coefficient through simulation of off-lattice self-avoiding double polygons consisting of cylindrical segments with radius r_ex of unit length. Here, a self-avoiding double polygon consists of twin self-avoiding polygons which are connected by a cylindrical segment. We show numerically that several statistical and dynamical properties of double-ring polymers in solution depend on the linking number of the constituent twin ring polymers. The ratio of the mean-square radius of gyration of self-avoiding double polygons with zero linking number to that of no topological constraint is larger than 1, in particular, when the radius of cylindrical segments r_ex is small. However, the ratio is almost constant with respect to the number of vertices, N, and does not depend on N. The large-N behavior of topological swelling is thus quite different from the case of knotted random polygons.

  8. Nonconventional screening of the Coulomb interaction in FexOy clusters: An ab initio study

    NASA Astrophysics Data System (ADS)

    Peters, L.; Şaşıoǧlu, E.; Rossen, S.; Friedrich, C.; Blügel, S.; Katsnelson, M. I.

    2017-04-01

    From microscopic point-dipole model calculations of the screening of the Coulomb interaction in nonpolar systems by polarizable atoms, it is known that screening strongly depends on dimensionality. For example, in one-dimensional systems, the short-range interaction is screened, while the long-range interaction is antiscreened. This antiscreening is also observed in some zero-dimensional structures, i.e., molecular systems. By means of ab initio calculations in conjunction with the random-phase approximation (RPA) within the FLAPW method, we study screening of the Coulomb interaction in FexOy clusters. For completeness, these results are compared with their bulk counterpart magnetite. It appears that the on-site Coulomb interaction is very well screened both in the clusters and bulk. On the other hand, for the intersite Coulomb interaction, the important observation is made that it is almost constant throughout the clusters, while for the bulk it is almost completely screened. More precisely and interestingly, in the clusters antiscreening is observed by means of ab initio calculations.

  9. Convenience samples and caregiving research: how generalizable are the findings?

    PubMed

    Pruchno, Rachel A; Brill, Jonathan E; Shands, Yvonne; Gordon, Judith R; Genderson, Maureen Wilson; Rose, Miriam; Cartwright, Francine

    2008-12-01

    We contrast characteristics of respondents recruited using convenience strategies with those of respondents recruited by random digit dial (RDD) methods. We compare sample variances, means, and interrelationships among variables generated from the convenience and RDD samples. Women aged 50 to 64 who work full time and provide care to a community-dwelling older person were recruited using either RDD (N = 55) or convenience methods (N = 87). Telephone interviews were conducted using reliable, valid measures of demographics, characteristics of the care recipient, help provided to the care recipient, evaluations of caregiver-care recipient relationship, and outcomes common to caregiving research. Convenience and RDD samples had similar variances on 68.4% of the examined variables. We found significant mean differences for 63% of the variables examined. Bivariate correlations suggest that one would reach different conclusions using the convenience and RDD sample data sets. Researchers should use convenience samples cautiously, as they may have limited generalizability.

  10. 40 CFR 1066.310 - Coastdown procedures for vehicles above 14,000 pounds GVWR.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Calibrate the equipment by running the zero-wind and zero-angle calibrations within 24 hours before... = mean ambient absolute temperature during testing, in K. p_act = average ambient pressure during the...

  11. Zero entropy continuous interval maps and MMLS-MMA property

    NASA Astrophysics Data System (ADS)

    Jiang, Yunping

    2018-06-01

    We prove that the flow generated by any continuous interval map with zero topological entropy is minimally mean-attractable and minimally mean-L-stable. One of the consequences is that any oscillating sequence is linearly disjoint from all flows generated by all continuous interval maps with zero topological entropy. In particular, the Möbius function is linearly disjoint from all flows generated by all continuous interval maps with zero topological entropy (Sarnak’s conjecture for continuous interval maps). Another consequence is a non-trivial example of a flow having discrete spectrum. We also define a log-uniform oscillating sequence and show a result in ergodic theory for comparison. This material is based upon work supported by the National Science Foundation. It is also partially supported by a collaboration grant from the Simons Foundation (grant number 523341) and PSC-CUNY awards and a grant from NSFC (grant number 11571122).

  12. Image-Subtraction Photometry of Variable Stars in the Field of the Globular Cluster NGC 6934

    NASA Astrophysics Data System (ADS)

    Kaluzny, J.; Olech, A.; Stanek, K. Z.

    2001-03-01

    We present CCD BVI photometry of 85 variable stars from the field of the globular cluster NGC 6934. The photometry was obtained with the image subtraction package ISIS. Thirty-five variables are new identifications: 24 RRab stars, five RRc stars, two eclipsing binaries of W UMa-type, one SX Phe star, and three variables of other types. Both detected contact binaries are foreground stars. The SX Phe variable most likely belongs to the group of cluster blue stragglers. The large number of newly found RR Lyr variables in this cluster, as well as in other clusters recently observed by us, indicates that the total RR Lyr population identified to date in nearby Galactic globular clusters is significantly (>30%) incomplete. Fourier decomposition of the light curves of RR Lyr variables was used to estimate the basic properties of these stars. From the analysis of RRc variables we obtain a mean mass of M=0.63 Msolar, luminosity logL/Lsolar=1.72, effective temperature Teff=7300 K, and helium abundance Y=0.27. The mean values of the absolute magnitude, metallicity (on Zinn's scale) and effective temperature for RRab variables are MV=0.81, [Fe/H]=-1.53 and Teff=6450 K, respectively. From the B-V color at minimum light of the RRab variables we obtained the color excess to NGC 6934 equal to E(B-V)=0.09+/-0.01. Different calibrations of the absolute magnitudes of RRab and RRc stars available in the literature were used to estimate the apparent distance modulus of the cluster: (m-M)V=16.09+/-0.06. We note a likely error in the zero point of the HST-based V-band photometry of NGC 6934 recently presented by Piotto et al. Among the analyzed sample of RR Lyr stars we have detected a short-period, low-amplitude variable which possibly belongs to the group of second-overtone pulsators (RRe subtype variables). The BVI photometry of all variables is available electronically via anonymous ftp. The complete set of the CCD frames is available upon request. Based on observations obtained with the 1.2 m Telescope at the F. L. Whipple Observatory of the Harvard-Smithsonian Center for Astrophysics.

  13. Mesoscale model response to random, surface-based perturbations — A sea-breeze experiment

    NASA Astrophysics Data System (ADS)

    Garratt, J. R.; Pielke, R. A.; Miller, W. F.; Lee, T. J.

    1990-09-01

    The introduction into a mesoscale model of random (in space) variations in roughness length, or random (in space and time) surface perturbations of temperature and friction velocity, produces a measurable, but barely significant, response in the simulated flow dynamics of the lower atmosphere. The perturbations are an attempt to include the effects of sub-grid variability into the ensemble-mean parameterization schemes used in many numerical models. Their magnitude is set in our experiments by appeal to real-world observations of the spatial variations in roughness length and daytime surface temperature over the land on horizontal scales of one to several tens of kilometers. With sea-breeze simulations, comparisons of a number of realizations forced by roughness-length and surface-temperature perturbations with the standard simulation reveal no significant change in ensemble mean statistics, and only small changes in the sea-breeze vertical velocity. Changes in the updraft velocity for individual runs, of up to several cm s-1 (compared to a mean of 14 cm s-1), are directly the result of prefrontal temperature changes of 0.1 to 0.2 K, produced by the random surface forcing. The correlation and magnitude of the changes are entirely consistent with a gravity-current interpretation of the sea breeze.

  14. Water resources of the Fort Berthold Indian Reservation, west-central North Dakota

    USGS Publications Warehouse

    Cates, Steven W.; Macek-Rowland, Kathleen M.

    1998-01-01

    Water resources of the Fort Berthold Indian Reservation in west-central North Dakota occur as ground water in bedrock and buried-valley aquifers and as surface water in streams and Lake Sakakawea. The bedrock aquifers-the Fox Hills-Hell Creek, Tongue River, and Sentinel Butte store about 93 million acre-feet of water under the Reservation. The Fox Hills-Hell Creek aquifer is composed mainly of very fine to medium-grained sandstone and stores about 51 million acrefeet of water. Water levels in the aquifer declined from 1976 through 1992. The Tongue River aquifer is composed mainly of claystones and siltstones and has widely distributed pockets of sandstone or lignite layers. The aquifer stores about 24 million acre-feet of water. The Sentinel Butte aquifer is composed mainly of interbedded claystones, siltstones, shale, lignite, and sandstone and stores about 18 million acre-feet of water. Yields from the lignite beds are highly variable. Water in the aquifers was predominantly a sodium bicarbonate type. Mean dissolved solids concentrations were 1,530 milligrams per liter in water from the Fox Hills-Hell Creek aquifer, 2,110 milligrams per liter in water from the Tongue River aquifer, and 1,300 milligrams per liter in water from the Sentinel Butte aquifer. The East Fork Shell Creek, Shell Creek, White Shield, New Town, and Sanish aquifers occur within buried valleys and store about 1,414,000 acre-feet of water. The East Fork Shell Creek and Shell Creek aquifers are composed of sand and gravel lenses that are surrounded by less permeable till. Water in the East Fork Shell Creek aquifer is a sodium sulfate bicarbonate type, and water in the Shell Creek aquifer is a sodium bicarbonate sulfate type. Mean dissolved-solids concentrations were 3,220 milligrams per liter in water from the East Fork Shell Creek aquifer and 1,470 milligrams per liter in water from the Shell Creek aquifer.The White Shield aquifer is composed of very fine to coarse sand and fine to coarse gravel. Water in the aquifer varies from a sodium bicarbonate sulfate type to a mixed calcium magnesium sodium bicarbonate sulfate type. Mean dissolved-solids concentrations were 1,080 milligrams per liter in water from the eastern part of the aquifer and 1,430 milligrams per liter in water from the western part of the aquifer. Water levels in the western part of the aquifer rose during 1970-92. The New Town aquifer is composed of lenticular deposits of sand and gravel. Water in the aquifer is a calcium sodium bicarbonate sulfate type and had a mean dissolved-solids concentration of 1,390 milligrams per liter. Data indicate a close correspondence between ground-water levels and lake stage of Lake Sakakawea, implying a hydraulic connection between the aquifer and the lake.The Sanish aquifer is composed of sand, clayey sand, and thin gravels that are poorly cemented and highly permeable. Water in the aquifer is a mixed calcium magnesium bicarbonate sulfate type and had a mean dissolved-solids concentration of 1,350 milligrams per liter.Major streams on the Reservation are Bear Den Creek, Shell Creek, East Fork Shell Creek, Deepwater Creek, Moccasin Creek, and Squaw Creek. Mean streamflow for Bear Den Creek for June 1966 through September 1992 was 6.72 cubic feet per second. Mean streamflow for Shell Creek for September 1965 through September 1981 was 12.9 cubic feet per second. 
Streamflow measurements for East Fork Shell Creek for April 1990 through June 1991 ranged from zero to 3.65 cubic feet per second, measurements for Deepwater Creek for April 1990 through May 1991 ranged from zero to 4.28 cubic feet per second, measurements for Moccasin Creek for April 1990 through September 1992 ranged from zero to 7.07 cubic feet per second, and measurements for Squaw Creek for April 1990 through September 1992 ranged from zero to 4.22 cubic feet per second. Lake Sakakawea has a maximum surface area of 390,000 acres. The surface area is variable in relation to lake stage, which was unusually low during this study. The mean lake elevation for Lake Sakakawea for 1970-92 was 1,837.08 feet, and the mean lake elevation for 1990-92 was 1,821.14 feet.

  15. Long-term and within-day variability of working memory performance and EEG in individuals.

    PubMed

    Gevins, Alan; McEvoy, Linda K; Smith, Michael E; Chan, Cynthia S; Sam-Vargas, Lita; Baum, Cliff; Ilan, Aaron B

    2012-07-01

    Assess individual-subject long-term and within-day variability of a combined behavioral and EEG test of working memory. EEGs were recorded from 16 adults performing n-back working memory tasks, with 10 tested in morning and afternoon sessions over several years. Participants were also tested after ingesting non-prescription medications or recreational substances. Performance and EEG measures were analyzed to derive an Overall score and three constituent sub-scores characterizing changes in performance, cortical activation, and alertness from each individual's baseline. Long-term and within-day variability were determined for each score; medication effects were assessed by reference to each individual's normal day-to-day variability. Over the several year period, the mean Overall score and sub-scores were approximately zero with standard deviations less than one. Overall scores were lower and their variability higher in afternoon relative to morning sessions. At the group level, alcohol, diphenhydramine and marijuana produced significant effects, but there were large individual differences. Objective working memory measures incorporating performance and EEG are stable over time and sensitive at the level of individual subjects to interventions that affect neurocognitive function. With further research these measures may be suitable for use in individualized medical care by providing a sensitive assessment of incipient illness and response to treatment. Published by Elsevier Ireland Ltd.

  16. On the numbers of images of two stochastic gravitational lensing models

    NASA Astrophysics Data System (ADS)

    Wei, Ang

    2017-02-01

    We study two gravitational lensing models with Gaussian randomness: the continuous mass fluctuation model and the floating black hole model. The lens equations of these models are related to certain random harmonic functions. Using Rice's formula and Gaussian techniques, we obtain the expected numbers of zeros of these functions, which give the expected numbers of images in the corresponding lens systems.
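
    For background, the standard Kac-Rice (Rice's) formula for the expected number of zeros of a smooth random field takes the form sketched below; the paper's specific lens equations and Gaussian models are not reproduced here.

    ```latex
    % Standard Kac-Rice formula, quoted as background only.
    \[
      \mathbb{E}\bigl[\#\{t \in T : F(t) = 0\}\bigr]
      = \int_T \mathbb{E}\bigl[\,\lvert \det DF(t)\rvert \,\big|\, F(t)=0\,\bigr]\,
        p_{F(t)}(0)\, dt ,
    \]
    % where F is a smooth random field on the region T (here, the random map
    % whose zeros are the lensed images), DF is its Jacobian, and p_{F(t)} is
    % the density of F(t).
    ```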

  17. Carbon films produced from ionic liquid carbon precursors

    DOEpatents

    Dai, Sheng; Luo, Huimin; Lee, Je Seung

    2013-11-05

    The invention is directed to a method for producing a film of porous carbon, the method comprising carbonizing a film of an ionic liquid, wherein the ionic liquid has the general formula (X^(+a))_x(Y^(-b))_y, wherein the variables a and b are, independently, non-zero integers, and the subscript variables x and y are, independently, non-zero integers, such that ax = by, and at least one of X^+ and Y^- possesses at least one carbon-nitrogen unsaturated bond. The invention is also directed to a composition comprising a porous carbon film possessing a nitrogen content of at least 10 atom %.
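
    The stoichiometric condition ax = by simply enforces charge neutrality between a cation of charge +a and an anion of charge -b. A minimal sketch of the smallest neutral combination (the helper below is hypothetical and not part of the patent):

    ```python
    from math import gcd

    def smallest_neutral_stoichiometry(a: int, b: int) -> tuple[int, int]:
        """Return the smallest positive (x, y) with a*x == b*y, i.e. a charge-
        neutral pairing of a cation of charge +a and an anion of charge -b."""
        if a <= 0 or b <= 0:
            raise ValueError("charge magnitudes must be positive integers")
        g = gcd(a, b)
        return b // g, a // g   # a*(b/g) == b*(a/g) == lcm(a, b)

    # Hypothetical examples: a dication with a monoanion balances as X(Y)2,
    # and a trication with a dianion balances as (X)2(Y)3.
    print(smallest_neutral_stoichiometry(2, 1))   # (1, 2)
    print(smallest_neutral_stoichiometry(3, 2))   # (2, 3)
    ```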

  18. Empirical variability in the calibration of slope-based eccentric photorefraction

    PubMed Central

    Bharadwaj, Shrikant R.; Sravani, N. Geetha; Little, Julie-Anne; Narasaiah, Asa; Wong, Vivian; Woodburn, Rachel; Candy, T. Rowan

    2014-01-01

    Refraction estimates from eccentric infrared (IR) photorefraction depend critically on the calibration of luminance slopes in the pupil. While the intersubject variability of this calibration has been estimated, there has been no systematic evaluation of its intrasubject variability. This study determined the within-subject inter- and intra-session repeatability of this calibration factor and the optimum range of lenses needed to derive this value. Relative calibrations for the MCS PowerRefractor and a customized photorefractor were estimated twice within one session or across two sessions by placing trial lenses before one eye covered with an IR-transmitting filter. The data were subsequently resampled with various lens combinations to determine the impact of lens power range on the calibration estimates. Mean (±1.96 SD) calibration slopes were 0.99 ± 0.39 for North Americans with the MCS PowerRefractor (relative to its built-in value) and 0.65 ± 0.25 Ls/D and 0.40 ± 0.09 Ls/D for Indians and North Americans, respectively, with the custom photorefractor. The 95% limits of agreement for intrasubject variability ranged from ±0.39 to ±0.56 for the MCS PowerRefractor and from ±0.03 Ls/D to ±0.04 Ls/D for the custom photorefractor. The mean differences within and across sessions were not significantly different from zero (p > 0.38 for all). The combined intersubject and intrasubject variability of calibration is therefore about ±40% of the mean value, implying that significant errors in individual refraction/accommodation estimates may arise if a group-average calibration is applied. Protocols containing both plus and minus lenses had calibration slopes closest to those of the gold-standard protocol, suggesting that they may provide the best estimate of the calibration factor compared with protocols containing only plus or only minus lenses. PMID:23695324
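
    The calibration factor is the slope relating the measured pupil luminance slope to the defocus induced by trial lenses, and session-to-session repeatability is commonly summarized with limits of agreement. A minimal sketch with hypothetical data (not the study's measurements):

    ```python
    import numpy as np

    def calibration_slope(lens_powers_D, luminance_slopes_Ls):
        """Least-squares slope of luminance slope (Ls) against induced defocus (D),
        i.e. the per-subject calibration factor in Ls/D."""
        slope, intercept = np.polyfit(lens_powers_D, luminance_slopes_Ls, 1)
        return slope

    def limits_of_agreement(session1, session2):
        """Bland-Altman 95% limits of agreement between two repeated calibrations."""
        d = np.asarray(session1, float) - np.asarray(session2, float)
        return d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

    # Hypothetical trial-lens protocol spanning both plus and minus powers.
    lenses = np.array([-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0])   # diopters
    measured = 0.4 * lenses + np.random.default_rng(1).normal(0, 0.05, lenses.size)
    print(f"calibration slope ~ {calibration_slope(lenses, measured):.2f} Ls/D")

    # Repeatability of per-subject slopes across two sessions (hypothetical).
    s1 = [0.41, 0.38, 0.44, 0.40]
    s2 = [0.39, 0.40, 0.43, 0.42]
    print("95% limits of agreement:", limits_of_agreement(s1, s2))
    ```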

  19. Simulation of design-unbiased point-to-particle sampling compared to alternatives on plantation rows

    Treesearch

    Thomas B. Lynch; David Hamlin; Mark J. Ducey

    2016-01-01

    Total quantities of tree attributes can be estimated in plantations by sampling on plantation rows using several methods. At random sample points on a row, either fixed row lengths or variable row lengths with a fixed number of sample trees can be assessed. Ratio of means or mean of ratios estimators can be developed for the fixed number of trees option but are not...
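
    For the distinction drawn here between the two estimator families, a brief sketch on hypothetical per-point data (y = attribute total at a sample point, x = auxiliary size of the sampled unit); these are the generic forms, not the paper's design-unbiased estimators:

    ```python
    import numpy as np

    def ratio_of_means(y, x):
        """Ratio-of-means estimator: sum(y) / sum(x)."""
        y, x = np.asarray(y, float), np.asarray(x, float)
        return y.sum() / x.sum()

    def mean_of_ratios(y, x):
        """Mean-of-ratios estimator: average of the per-point ratios y_i / x_i."""
        y, x = np.asarray(y, float), np.asarray(x, float)
        return np.mean(y / x)

    # Hypothetical sample points on plantation rows: y = attribute total at the
    # point (e.g., basal area of the sampled trees), x = number of trees or row
    # length sampled at that point.
    y = [12.0, 8.5, 15.2, 9.8]
    x = [5, 4, 6, 4]
    print("ratio of means :", ratio_of_means(y, x))
    print("mean of ratios :", mean_of_ratios(y, x))
    # The two estimators coincide only when every x_i is equal; otherwise they
    # weight sample points differently.
    ```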

  20. A Demonstration using Low-kt Fatigue Specimens of a Method for Predicting the Fatigue Behaviour of Corroded Aircraft Components

    DTIC Science & Technology

    2013-03-01

    This third random variable, with some optimisation, means that the second model can predict the mean and scatter of the observed fatigue lives. KIDS...Barishpolsky [65] studied this effect using an FE model of ellipsoidal voids and cracked or decohered ellipsoidal inclusions in an elastic body. They...Specifically, the first strike is long and thin, the second is square and the third is short and wide. Five centroid positions (d = 0, 30, 38 and
