Sample records for underlying random variables

  1. Mean convergence theorems and weak laws of large numbers for weighted sums of random variables under a condition of weighted integrability

    NASA Astrophysics Data System (ADS)

    Ordóñez Cabrera, Manuel; Volodin, Andrei I.

    2005-05-01

    From the classical notion of uniform integrability of a sequence of random variables, a new concept of integrability (called h-integrability) is introduced for an array of random variables, concerning an array of constants. We prove that this concept is weaker than other previous related notions of integrability, such as Cesàro uniform integrability [Chandra, Sankhya Ser. A 51 (1989) 309-317], uniform integrability concerning the weights [Ordóñez Cabrera, Collect. Math. 45 (1994) 121-132] and Cesàro [alpha]-integrability [Chandra and Goswami, J. Theoret. Probab. 16 (2003) 655-669]. Under this condition of integrability and appropriate conditions on the array of weights, mean convergence theorems and weak laws of large numbers for weighted sums of an array of random variables are obtained when the random variables are subject to some special kinds of dependence: (a) rowwise pairwise negative dependence, (b) rowwise pairwise non-positive correlation, (c) when the sequence of random variables in every row is [phi]-mixing. Finally, we consider the general weak law of large numbers in the sense of Gut [Statist. Probab. Lett. 14 (1992) 49-52] under this new condition of integrability for a Banach space setting.
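
    For orientation, a commonly cited formulation of h-integrability with respect to an array of weights is sketched below in LaTeX; it is recalled from the related literature rather than quoted from this paper, so the exact conditions should be checked against the original.

```latex
% A commonly used formulation (to be checked against the paper, not quoted from it):
% the array \{X_{nk}\} is h-integrable with respect to the weights \{a_{nk}\},
% where h(n) increases to infinity, if
\[
\sup_{n}\sum_{k} a_{nk}\,\mathbb{E}\lvert X_{nk}\rvert < \infty
\qquad\text{and}\qquad
\lim_{n\to\infty}\sum_{k} a_{nk}\,
\mathbb{E}\bigl[\lvert X_{nk}\rvert\,\mathbf{1}\{\lvert X_{nk}\rvert > h(n)\}\bigr] = 0 .
\]
```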

  2. Properties of behavior under different random ratio and random interval schedules: A parametric study.

    PubMed

    Dembo, M; De Penfold, J B; Ruiz, R; Casalta, H

    1985-03-01

    Four pigeons were trained to peck a key under different values of a temporally defined independent variable (T) and different probabilities of reinforcement (p). Parameter T is a fixed repeating time cycle and p the probability of reinforcement for the first response of each cycle T. Two dependent variables were used: mean response rate and mean postreinforcement pause. For all values of p a critical value of the independent variable T was found (T=1 sec) at which marked changes took place in response rate and postreinforcement pauses. Behavior typical of random ratio schedules was obtained at T < 1 sec and behavior typical of random interval schedules at T > 1 sec. Copyright © 1985. Published by Elsevier B.V.
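
    The cycle-and-probability schedule described above is straightforward to simulate. The sketch below is only an illustration of why small T approximates a random-ratio contingency and large T a random-interval one; the Poisson response process, the response rate, and all parameter values are assumptions, not the authors' procedure.

```python
# Illustrative simulation of the temporally defined schedule (T, p):
# time is divided into cycles of length T, and the first response of each
# cycle is reinforced with probability p.  A Poisson responder is assumed.
import random

def reinforcers_per_response(T, p, response_rate=2.0, duration=50_000.0, seed=0):
    rng = random.Random(seed)
    t, cycle_start = 0.0, 0.0
    armed = rng.random() < p            # is reinforcement set up for this cycle?
    responded_this_cycle = False
    responses, reinforcers = 0, 0
    while t < duration:
        t += rng.expovariate(response_rate)      # next response time
        while t - cycle_start >= T:              # advance elapsed cycles
            cycle_start += T
            armed = rng.random() < p
            responded_this_cycle = False
        responses += 1
        if not responded_this_cycle:
            responded_this_cycle = True
            if armed:
                reinforcers += 1
    return reinforcers / max(responses, 1)

for T in (0.1, 1.0, 10.0):
    print(f"T = {T:>4} s: reinforcers per response = "
          f"{reinforcers_per_response(T, 0.1):.3f}")
# Small T: nearly every response starts a new cycle, so reinforcement per
# response is about p regardless of response rate (ratio-like).  Large T:
# only one response per cycle can be reinforced, so the obtained rate is
# governed by elapsed time (interval-like).
```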

  3. Designing management strategies for carbon dioxide storage and utilization under uncertainty using inexact modelling

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2017-06-01

    Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which allows decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.

  4. Contextuality in canonical systems of random variables

    NASA Astrophysics Data System (ADS)

    Dzhafarov, Ehtibar N.; Cervantes, Víctor H.; Kujala, Janne V.

    2017-10-01

    Random variables representing measurements, broadly understood to include any responses to any inputs, form a system in which each of them is uniquely identified by its content (that which it measures) and its context (the conditions under which it is recorded). Two random variables are jointly distributed if and only if they share a context. In a canonical representation of a system, all random variables are binary, and every content-sharing pair of random variables has a unique maximal coupling (the joint distribution imposed on them so that they coincide with maximal possible probability). The system is contextual if these maximal couplings are incompatible with the joint distributions of the context-sharing random variables. We propose to represent any system of measurements in a canonical form and to consider the system contextual if and only if its canonical representation is contextual. As an illustration, we establish a criterion for contextuality of the canonical system consisting of all dichotomizations of a single pair of content-sharing categorical random variables. This article is part of the themed issue `Second quantum revolution: foundational questions'.
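
    For a single pair of binary (dichotomized) variables sharing a content, the maximal coupling referred to above has a simple closed form; the display below records the standard coupling-theory result in my own notation, not a formula quoted from the article.

```latex
% Maximal coupling of two Bernoulli variables (standard result; notation mine).
% X ~ Bernoulli(p) and Y ~ Bernoulli(q) share a content but not a context.
\[
\Pr(X=1,Y=1)=\min(p,q),\qquad \Pr(X=0,Y=0)=\min(1-p,\,1-q),
\]
\[
\Pr(X=1,Y=0)=(p-q)_{+},\qquad \Pr(X=0,Y=1)=(q-p)_{+},
\]
\[
\Pr(X=Y)=1-\lvert p-q\rvert
 \;=\; 1-d_{\mathrm{TV}}\bigl(\mathrm{Bern}(p),\mathrm{Bern}(q)\bigr),
\]
% which is the largest value of Pr(X = Y) achievable by any joint distribution
% with these margins.
```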

  5. Reliability analysis of structures under periodic proof tests in service

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1976-01-01

    A reliability analysis of structures subjected to random service loads and periodic proof tests treats gust loads and maneuver loads as random processes. Crack initiation, crack propagation, and strength degradation are treated as the fatigue process. The time to fatigue crack initiation and ultimate strength are random variables. Residual strength decreases during crack propagation, so that failure rate increases with time. When a structure fails under periodic proof testing, a new structure is built and proof-tested. The probability of structural failure in service is derived from treatment of all the random variables, strength degradations, service loads, proof tests, and the renewal of failed structures. Some numerical examples are worked out.

  6. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Such understanding is useful in evaluating the performance of data compression schemes.
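
    For the unconstrained continuous case, the maximizing density is the generalized Gaussian, and the straight-line relationship mentioned above can be written out; the following is a standard derivation in my notation (the report's own parametrization may differ).

```latex
% Maximum differential entropy subject to a fixed pth absolute moment
% (standard result; notation mine).  For real-valued X and finite p > 0:
\[
f^{*}(x)=\frac{p}{2\alpha\,\Gamma(1/p)}\,
\exp\!\Bigl[-\bigl(\lvert x\rvert/\alpha\bigr)^{p}\Bigr],
\qquad
\mathbb{E}\lvert X\rvert^{p}=\frac{\alpha^{p}}{p},
\]
\[
h_{\max}
=\frac{1}{p}+\ln\frac{2\alpha\,\Gamma(1/p)}{p}
=\ln\lVert X\rVert_{p}+\frac{1}{p}+\ln\bigl(2\,\Gamma(1/p)\bigr)
+\Bigl(\frac{1}{p}-1\Bigr)\ln p ,
\]
% i.e. the maximum differential entropy is linear, with unit slope,
% in the logarithm of the Lp norm, as stated in the abstract.
```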

  7. Extended q -Gaussian and q -exponential distributions from gamma random variables

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2015-05-01

    The family of q -Gaussian and q -exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q -Gaussian and q -exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q -Gaussian and modified q -exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.

  8. Variable Selection in the Presence of Missing Data: Imputation-based Methods.

    PubMed

    Zhao, Yize; Long, Qi

    2017-01-01

    Variable selection plays an essential role in regression analysis as it identifies important variables that are associated with outcomes and is known to improve the predictive accuracy of resulting models. Variable selection methods have been widely investigated for fully observed data. However, in the presence of missing data, methods for variable selection need to be carefully designed to account for missing data mechanisms and the statistical techniques used for handling missing data. Since imputation is arguably the most popular method for handling missing data due to its ease of use, statistical methods for variable selection that are combined with imputation are of particular interest. These methods, which are valid under the assumptions of missing at random (MAR) and missing completely at random (MCAR), largely fall into three general strategies. The first strategy applies existing variable selection methods to each imputed dataset and then combines the variable selection results across all imputed datasets. The second strategy applies existing variable selection methods to stacked imputed datasets. The third strategy combines resampling techniques such as the bootstrap with imputation. Despite recent advances, this area remains under-developed and offers fertile ground for further research.
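
    As an illustration of the first strategy (select within each imputed dataset, then combine), here is a minimal Python sketch; the choice of scikit-learn's IterativeImputer and LassoCV, the majority-vote threshold, and the synthetic data are all assumptions for the example, not the methods evaluated in the article.

```python
# Sketch of strategy 1: run variable selection on each imputed dataset,
# then keep variables selected in a majority of imputations.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LassoCV

def select_with_multiple_imputation(X, y, n_imputations=5, threshold=0.5):
    counts = np.zeros(X.shape[1])
    for m in range(n_imputations):
        imputer = IterativeImputer(sample_posterior=True, random_state=m)
        X_imp = imputer.fit_transform(X)
        lasso = LassoCV(cv=5, random_state=m).fit(X_imp, y)
        counts += (np.abs(lasso.coef_) > 1e-8)
    # keep variables selected in at least `threshold` of the imputed datasets
    return np.where(counts / n_imputations >= threshold)[0]

# Example with synthetic data containing missing values (MCAR):
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=200)
X[rng.random(X.shape) < 0.1] = np.nan
print(select_with_multiple_imputation(X, y))
```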

  9. Ratio index variables or ANCOVA? Fisher's cats revisited.

    PubMed

    Tu, Yu-Kang; Law, Graham R; Ellison, George T H; Gilthorpe, Mark S

    2010-01-01

    Over 60 years ago Ronald Fisher demonstrated a number of potential pitfalls with statistical analyses using ratio variables. Nonetheless, these pitfalls are largely overlooked in contemporary clinical and epidemiological research, which routinely uses ratio variables in statistical analyses. This article aims to demonstrate how very different findings can be generated as a result of less than perfect correlations among the data used to generate ratio variables. These imperfect correlations result from measurement error and random biological variation. While the former can often be reduced by improvements in measurement, random biological variation is difficult to estimate and eliminate in observational studies. Moreover, wherever the underlying biological relationships among epidemiological variables are unclear, and hence the choice of statistical model is also unclear, the different findings generated by different analytical strategies can lead to contradictory conclusions. Caution is therefore required when interpreting analyses of ratio variables whenever the underlying biological relationships among the variables involved are unspecified or unclear. (c) 2009 John Wiley & Sons, Ltd.
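
    A small simulation (entirely synthetic, not Fisher's cat data) makes the pitfall concrete: two unrelated variables become correlated once both are divided by a common, variable denominator, whereas a covariate-adjusted regression on the raw variables shows no spurious effect.

```python
# Two unrelated variables X and Y become correlated once each is divided
# by a shared, noisy denominator Z (the classic ratio-variable pitfall).
import numpy as np

rng = np.random.default_rng(42)
n = 5_000
Z = rng.normal(10.0, 2.0, n)               # common denominator (e.g. body size)
X = rng.normal(50.0, 5.0, n)               # unrelated to Y by construction
Y = rng.normal(30.0, 3.0, n)

r_raw = np.corrcoef(X, Y)[0, 1]            # ~0: no real association
r_ratio = np.corrcoef(X / Z, Y / Z)[0, 1]  # spuriously positive

# ANCOVA-style alternative: regress Y on X adjusting for Z (least squares).
design = np.column_stack([np.ones(n), X, Z])
beta, *_ = np.linalg.lstsq(design, Y, rcond=None)

print(f"corr(X, Y)     = {r_raw:+.3f}")
print(f"corr(X/Z, Y/Z) = {r_ratio:+.3f}   <- induced by the shared denominator")
print(f"adjusted slope of Y on X = {beta[1]:+.3f} (close to 0, as it should be)")
```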

  10. Robustness-Based Design Optimization Under Data Uncertainty

    NASA Technical Reports Server (NTRS)

    Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence

    2010-01-01

    This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to the solutions of the design problem that are least sensitive to variations in the input random variables.

  11. PROGRAMED LEARNING--A COMPARATIVE EVALUATION OF STUDENT PERFORMANCE VARIABLES UNDER COMBINATIONS OF CONVENTIONAL AND AUTOMATED INSTRUCTION.

    ERIC Educational Resources Information Center

    FLINT, LANNING L.; HATCH, RICHARD S.

    STUDENT PERFORMANCE VARIABLES UNDER AUTOMATED, CONVENTIONAL, AND A COMBINATION OF AUTOMATED AND CONVENTIONAL CONDITIONS OF INSTRUCTION WERE INVESTIGATED. RECOMMENDATIONS FOR THE INTEGRATION OF PROGRAMED MATERIAL INTO THE CLASSROOM WERE SOUGHT. THREE GROUPS OF JUNIOR COLLEGE STUDENTS WERE USED IN THE EXPERIMENT. THE GROUPS WERE CHOSEN AT RANDOM.…

  12. Epidemics in networks: a master equation approach

    NASA Astrophysics Data System (ADS)

    Cotacallapa, M.; Hase, M. O.

    2016-02-01

    A problem closely related to epidemiology, where a subgraph of ‘infected’ links is defined inside a larger network, is investigated. This subgraph is generated from the underlying network by a random variable, which decides whether a link is able to propagate a disease/information. The relaxation timescale of this random variable is examined in both annealed and quenched limits, and the effectiveness of propagation of disease/information is analyzed. The dynamics of the model is governed by a master equation and two types of underlying network are considered: one is scale-free and the other has exponential degree distribution. We have shown that the relaxation timescale of the contagion variable has a major influence on the topology of the subgraph of infected links, which determines the efficiency of spreading of disease/information over the network.

  13. A dynamic programming-based particle swarm optimization algorithm for an inventory management problem under uncertainty

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Zeng, Ziqiang; Han, Bernard; Lei, Xiao

    2013-07-01

    This article presents a dynamic programming-based particle swarm optimization (DP-based PSO) algorithm for solving an inventory management problem for large-scale construction projects under a fuzzy random environment. By taking into account the purchasing behaviour and strategy under rules of international bidding, a multi-objective fuzzy random dynamic programming model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform fuzzy random parameters into fuzzy variables that are subsequently defuzzified by using an expected value operator with optimistic-pessimistic index. The iterative nature of the authors' model motivates them to develop a DP-based PSO algorithm. More specifically, their approach treats the state variables as hidden parameters. This in turn eliminates many redundant feasibility checks during initialization and particle updates at each iteration. Results and sensitivity analysis are presented to highlight the performance of the authors' optimization method, which is very effective as compared to the standard PSO algorithm.

  14. Smooth conditional distribution function and quantiles under random censorship.

    PubMed

    Leconte, Eve; Poiraud-Casanova, Sandrine; Thomas-Agnan, Christine

    2002-09-01

    We consider a nonparametric random design regression model in which the response variable is possibly right censored. The aim of this paper is to estimate the conditional distribution function and the conditional alpha-quantile of the response variable. We restrict attention to the case where the response variable as well as the explanatory variable are unidimensional and continuous. We propose and discuss two classes of estimators which are smooth with respect to the response variable as well as to the covariate. Some simulations demonstrate that the new methods have better mean square error performances than the generalized Kaplan-Meier estimator introduced by Beran (1981) and considered in the literature by Dabrowska (1989, 1992) and Gonzalez-Manteiga and Cadarso-Suarez (1994).

  15. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 1. Theory

    NASA Astrophysics Data System (ADS)

    Graham, Wendy D.; Tankersley, Claude D.

    1994-05-01

    Stochastic methods are used to analyze two-dimensional steady groundwater flow subject to spatially variable recharge and transmissivity. Approximate partial differential equations are developed for the covariances and cross-covariances between the random head, transmissivity and recharge fields. Closed-form solutions of these equations are obtained using Fourier transform techniques. The resulting covariances and cross-covariances can be incorporated into a Bayesian conditioning procedure which provides optimal estimates of the recharge, transmissivity and head fields given available measurements of any or all of these random fields. Results show that head measurements contain valuable information for estimating the random recharge field. However, when recharge is treated as a spatially variable random field, the value of head measurements for estimating the transmissivity field can be reduced considerably. In a companion paper, the method is applied to a case study of the Upper Floridan Aquifer in NE Florida.

  16. Context-invariant quasi hidden variable (qHV) modelling of all joint von Neumann measurements for an arbitrary Hilbert space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loubenets, Elena R.

    We prove the existence for each Hilbert space of the two new quasi hidden variable (qHV) models, statistically noncontextual and context-invariant, reproducing all the von Neumann joint probabilities via non-negative values of real-valued measures and all the quantum product expectations—via the qHV (classical-like) average of the product of the corresponding random variables. In a context-invariant model, a quantum observable X can be represented by a variety of random variables satisfying the functional condition required in quantum foundations but each of these random variables equivalently models X under all joint von Neumann measurements, regardless of their contexts. The proved existence of this model negates the general opinion that, in terms of random variables, the Hilbert space description of all the joint von Neumann measurements for dimH≥3 can be reproduced only contextually. The existence of a statistically noncontextual qHV model, in particular, implies that every N-partite quantum state admits a local quasi hidden variable model introduced in Loubenets [J. Math. Phys. 53, 022201 (2012)]. The new results of the present paper point also to the generality of the quasi-classical probability model proposed in Loubenets [J. Phys. A: Math. Theor. 45, 185306 (2012)].

  17. Quantifying networks complexity from information geometry viewpoint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felice, Domenico, E-mail: domenico.felice@unicam.it; Mancini, Stefano; INFN-Sezione di Perugia, Via A. Pascoli, I-06123 Perugia

    We consider a Gaussian statistical model whose parameter space is given by the variances of random variables. Underlying this model we identify networks by interpreting random variables as sitting on vertices and their correlations as weighted edges among vertices. We then associate to the parameter space a statistical manifold endowed with a Riemannian metric structure (that of Fisher-Rao). Going on, in analogy with the microcanonical definition of entropy in Statistical Mechanics, we introduce an entropic measure of networks complexity. We prove that it is invariant under networks isomorphism. Above all, considering networks as simplicial complexes, we evaluate this entropy on simplexes and find that it monotonically increases with their dimension.

  18. Relevance of anisotropy and spatial variability of gas diffusivity for soil-gas transport

    NASA Astrophysics Data System (ADS)

    Schack-Kirchner, Helmer; Kühne, Anke; Lang, Friederike

    2017-04-01

    Models of soil gas transport generally consider neither the direction dependence of gas diffusivity nor its small-scale variability. However, in a recent study we provided evidence for anisotropy favouring vertical gas diffusion in natural soils. We hypothesize that gas transport models based on gas diffusion data measured with soil rings are strongly influenced by both anisotropy and spatial variability, and that the use of averaged diffusivities could be misleading. To test this, we used a two-dimensional model of soil gas transport under compacted wheel tracks to simulate the soil-air oxygen distribution. The model was parametrized with soil-ring measurements, using their central tendency and variability, and includes vertical parameter variability as well as variation perpendicular to the elongated wheel track. Three parametrization types were tested: (i) averaged values for wheel track and undisturbed soil; (ii) randomly distributed soil cells with normally distributed variability within the strata; and (iii) randomly distributed soil cells with uniformly distributed variability within the strata. Each of the three was combined with (a) isotropic gas diffusivity and (b) horizontally reduced gas diffusivity (constant factor), yielding six models in total. As expected, the different parametrizations had an important influence on the aeration state under wheel tracks, with the strongest oxygen depletion for uniformly distributed variability combined with anisotropy towards higher vertical diffusivity. This simple simulation approach clearly showed the relevance of anisotropy and spatial variability even for identical central-tendency measures of gas diffusivity; it did not yet consider spatial dependency of the variability, which could aggravate the effects further. To account for anisotropy and spatial variability in gas transport models, we recommend (a) measuring soil-gas transport parameters spatially explicitly, including different directions, and (b) using random-field stochastic models to assess the possible effects in gas-exchange models.

  19. The variability of software scoring of the CDMAM phantom associated with a limited number of images

    NASA Astrophysics Data System (ADS)

    Yang, Chang-Ying J.; Van Metter, Richard

    2007-03-01

    Software scoring approaches provide an attractive alternative to human evaluation of CDMAM images from digital mammography systems, particularly for annual quality control testing as recommended by the European Protocol for the Quality Control of the Physical and Technical Aspects of Mammography Screening (EPQCM). Methods for correlating CDCOM-based results with human observer performance have been proposed. A common feature of all methods is the use of a small number (at most eight) of CDMAM images to evaluate the system. This study focuses on the potential variability in the estimated system performance that is associated with these methods. Sets of 36 CDMAM images were acquired under carefully controlled conditions from three different digital mammography systems. The threshold visibility thickness (TVT) for each disk diameter was determined using previously reported post-analysis methods from the CDCOM scorings for a randomly selected group of eight images for one measurement trial. This random selection process was repeated 3000 times to estimate the variability in the resulting TVT values for each disk diameter. The results from using different post-analysis methods, different random selection strategies and different digital systems were compared. Additional variability of the 0.1 mm disk diameter was explored by comparing the results from two different image data sets acquired under the same conditions from the same system. The magnitude and the type of error estimated for experimental data were explained through modeling. The modeled results also suggest a limitation in the current phantom design for the 0.1 mm diameter disks. Through modeling, it was also found that, because of the binomial statistic nature of the CDMAM test, the true variability of the test could be underestimated by the commonly used method of random re-sampling.
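
    The selection scheme described above (8 of 36 images, 3000 random draws) is easy to reproduce in outline. The sketch below uses a made-up per-image detection probability simply to show how binomial sampling drives the spread of the subset estimate; it does not implement CDCOM scoring or the TVT post-analysis.

```python
# Illustration of the variability introduced by scoring only 8 of 36 images.
# The per-image "detected" probability is a made-up stand-in for the CDCOM
# output of one disk diameter/thickness cell; no phantom data are used.
import numpy as np

rng = np.random.default_rng(1)
n_images, subset, trials = 36, 8, 3000
p_detect = 0.625                          # assumed true detection probability

# One simulated CDCOM outcome (detected / not detected) per image:
outcomes = rng.random(n_images) < p_detect

estimates = np.empty(trials)
for t in range(trials):
    pick = rng.choice(n_images, size=subset, replace=False)
    estimates[t] = outcomes[pick].mean()

print(f"full-set fraction : {outcomes.mean():.3f}")
print(f"8-image estimates : mean {estimates.mean():.3f}, sd {estimates.std():.3f}")
# The spread of the 8-image estimates is what propagates into the
# threshold-visibility-thickness variability discussed in the abstract.
```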

  20. On the Wigner law in dilute random matrices

    NASA Astrophysics Data System (ADS)

    Khorunzhy, A.; Rodgers, G. J.

    1998-12-01

    We consider ensembles of N × N symmetric matrices whose entries are weakly dependent random variables. We show that random dilution can change the limiting eigenvalue distribution of such matrices. We prove that under general and natural conditions the normalised eigenvalue counting function coincides with the semicircle (Wigner) distribution in the limit N → ∞. This can be explained by the observation that dilution (or more generally, random modulation) eliminates the weak dependence (or correlations) between random matrix entries. It also supports our earlier conjecture that the Wigner distribution is stable to random dilution and modulation.
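
    A quick numerical check of the semicircle limit for diluted symmetric matrices can be done as follows; this generic sketch uses i.i.d. Gaussian entries, not the weakly dependent ensembles analysed in the paper.

```python
# Eigenvalue density of a randomly diluted N x N symmetric matrix,
# compared with the Wigner semicircle on [-2, 2].
import numpy as np

rng = np.random.default_rng(0)
N, p_keep = 2000, 0.05                  # dilution: keep each entry w.p. p_keep

A = rng.normal(size=(N, N))
mask = rng.random((N, N)) < p_keep
A = np.triu(A * mask, 1)
A = A + A.T                             # symmetric, zero diagonal
A /= np.sqrt(N * p_keep)                # normalisation so the spectrum is O(1)

eigs = np.linalg.eigvalsh(A)
hist, edges = np.histogram(eigs, bins=60, range=(-2.5, 2.5), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
semicircle = np.where(np.abs(centers) <= 2.0,
                      np.sqrt(np.maximum(4.0 - centers**2, 0.0)) / (2.0 * np.pi),
                      0.0)
print("max |empirical - semicircle| over bins:",
      float(np.max(np.abs(hist - semicircle))))
```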

  1. Stochastic reduced order models for inverse problems under uncertainty

    PubMed Central

    Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.

    2014-01-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115

  2. Demonstration of the Application of Composite Load Spectra (CLS) and Probabilistic Structural Analysis (PSAM) Codes to SSME Heat Exchanger Turnaround Vane

    NASA Technical Reports Server (NTRS)

    Rajagopal, Kadambi R.; DebChaudhury, Amitabha; Orient, George

    2000-01-01

    This report describes a probabilistic structural analysis performed to determine the probabilistic structural response under fluctuating random pressure loads for the Space Shuttle Main Engine (SSME) turnaround vane. It uses a newly developed frequency and distance dependent correlation model that has features to model the decay phenomena along the flow and across the flow with the capability to introduce a phase delay. The analytical results are compared using two computer codes, SAFER (Spectral Analysis of Finite Element Responses) and NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), and with experimentally observed strain gage data. The computer code NESSUS with an interface to a subset of the Composite Load Spectra (CLS) code is used for the probabilistic analysis. A fatigue code was used to calculate fatigue damage due to the random pressure excitation. The random variables modeled include engine system primitive variables that influence the operating conditions, convection velocity coefficient, stress concentration factor, structural damping, and thickness of the inner and outer vanes. The need for an appropriate correlation model in addition to the magnitude of the PSD is emphasized. The study demonstrates that correlation characteristics even under random pressure loads are capable of causing resonance-like effects for some modes. The study identifies the important variables that contribute to the structural alternate stress response and drive the fatigue damage for the new design. Since the alternate stress for the new redesign is less than the endurance limit for the material, the damage due to high-cycle fatigue is negligible.

  3. Decisions with Uncertain Consequences—A Total Ordering on Loss-Distributions

    PubMed Central

    König, Sandra; Schauer, Stefan

    2016-01-01

    Decisions are often based on imprecise, uncertain or vague information. Likewise, the consequences of an action are often equally unpredictable, thus putting the decision maker into a twofold jeopardy. Assuming that the effects of an action can be modeled by a random variable, then the decision problem boils down to comparing different effects (random variables) by comparing their distribution functions. Although the full space of probability distributions cannot be ordered, a properly restricted subset of distributions can be totally ordered in a practically meaningful way. We call these loss-distributions, since they provide a substitute for the concept of loss-functions in decision theory. This article introduces the theory behind the necessary restrictions and the hereby constructible total ordering on random loss variables, which enables decisions under uncertainty of consequences. Using data obtained from simulations, we demonstrate the practical applicability of our approach. PMID:28030572

  4. Reducing random measurement error in assessing postural load on the back in epidemiologic surveys.

    PubMed

    Burdorf, A

    1995-02-01

    The goal of this study was to design strategies to assess postural load on the back in occupational epidemiology by taking into account the reliability of measurement methods and the variability of exposure among the workers under study. Intermethod reliability studies were evaluated to estimate the systematic bias (accuracy) and random measurement error (precision) of various methods to assess postural load on the back. Intramethod reliability studies were reviewed to estimate random variability of back load over time. Intermethod surveys have shown that questionnaires have a moderate reliability for gross activities such as sitting, whereas duration of trunk flexion and rotation should be assessed by observation methods or inclinometers. Intramethod surveys indicate that exposure variability can markedly affect the reliability of estimates of back load if the estimates are based upon a single measurement over a certain time period. Equations have been presented to evaluate various study designs according to the reliability of the measurement method, the optimum allocation of the number of repeated measurements per subject, and the number of subjects in the study. Prior to a large epidemiologic study, an exposure-oriented survey should be conducted to evaluate the performance of measurement instruments and to estimate sources of variability for back load. The strategy for assessing back load can be optimized by balancing the number of workers under study and the number of repeated measurements per worker.
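
    The balance between the number of workers and the number of repeated measurements per worker rests on a standard variance-components argument; the display below summarizes it in my notation and is not quoted from the paper.

```latex
% Reliability of a worker's estimated mean exposure based on k repeats
% (standard variance-components result; notation mine):
\[
R(k)=\frac{\sigma^{2}_{\mathrm{between}}}
          {\sigma^{2}_{\mathrm{between}}+\sigma^{2}_{\mathrm{within}}/k},
\]
% where sigma^2_between is the variance of true mean exposure across workers,
% sigma^2_within the within-worker (e.g. day-to-day) variance, and k the number
% of repeated measurements per worker.  R(k) increases with k with diminishing
% returns, which drives the optimal allocation of repeats versus subjects.
```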

  5. Robust portfolio selection based on asymmetric measures of variability of stock returns

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Tan, Shaohua

    2009-10-01

    This paper addresses a new uncertainty set--interval random uncertainty set for robust optimization. The form of interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply our interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.

  6. Eliminating Survivor Bias in Two-stage Instrumental Variable Estimators.

    PubMed

    Vansteelandt, Stijn; Walter, Stefan; Tchetgen Tchetgen, Eric

    2018-07-01

    Mendelian randomization studies commonly focus on elderly populations. This makes the instrumental variables analysis of such studies sensitive to survivor bias, a type of selection bias. A particular concern is that the instrumental variable conditions, even when valid for the source population, may be violated for the selective population of individuals who survive the onset of the study. This is potentially very damaging because Mendelian randomization studies are known to be sensitive to bias due to even minor violations of the instrumental variable conditions. Interestingly, the instrumental variable conditions continue to hold within certain risk sets of individuals who are still alive at a given age when the instrument and unmeasured confounders exert additive effects on the exposure, and moreover, the exposure and unmeasured confounders exert additive effects on the hazard of death. In this article, we will exploit this property to derive a two-stage instrumental variable estimator for the effect of exposure on mortality, which is insulated against the above described selection bias under these additivity assumptions.

  7. Attention Measures of Accuracy, Variability, and Fatigue Detect Early Response to Donepezil in Alzheimer's Disease: A Randomized, Double-blind, Placebo-Controlled Pilot Trial.

    PubMed

    Vila-Castelar, Clara; Ly, Jenny J; Kaplan, Lillian; Van Dyk, Kathleen; Berger, Jeffrey T; Macina, Lucy O; Stewart, Jennifer L; Foldi, Nancy S

    2018-04-09

    Donepezil is widely used to treat Alzheimer's disease (AD), but detecting early response remains challenging for clinicians. Acetylcholine is known to directly modulate attention, particularly under high cognitive conditions, but no studies to date test whether measures of attention under high load can detect early effects of donepezil. We hypothesized that load-dependent attention tasks are sensitive to short-term treatment effects of donepezil, while global and other domain-specific cognitive measures are not. This longitudinal, randomized, double-blind, placebo-controlled pilot trial (ClinicalTrials.gov Identifier: NCT03073876) evaluated 23 participants newly diagnosed with AD initiating de novo donepezil treatment (5 mg). After baseline assessment, participants were randomized into Drug (n = 12) or Placebo (n = 11) groups, and retested after approximately 6 weeks. Cognitive assessment included: (a) attention tasks (Foreperiod Effect, Attentional Blink, and Covert Orienting tasks) measuring processing speed, top-down accuracy, orienting, intra-individual variability, and fatigue; (b) global measures (Alzheimer's Disease Assessment Scale-Cognitive Subscale, Mini-Mental Status Examination, Dementia Rating Scale); and (c) domain-specific measures (memory, language, visuospatial, and executive function). The Drug but not the Placebo group showed benefits of treatment at high-load measures by preserving top-down accuracy, improving intra-individual variability, and averting fatigue. In contrast, other global or cognitive domain-specific measures could not detect treatment effects over the same treatment interval. The pilot-study suggests that attention measures targeting accuracy, variability, and fatigue under high-load conditions could be sensitive to short-term cholinergic treatment. Given the central role of acetylcholine in attentional function, load-dependent attentional measures may be valuable cognitive markers of early treatment response.

  8. Latent class instrumental variables: A clinical and biostatistical perspective

    PubMed Central

    Baker, Stuart G.; Kramer, Barnett S.; Lindeman, Karen S.

    2015-01-01

    In some two-arm randomized trials, some participants receive the treatment assigned to the other arm as a result of technical problems, refusal of a treatment invitation, or a choice of treatment in an encouragement design. In some before-and-after studies, the availability of a new treatment changes from one time period to the next. Under assumptions that are often reasonable, the latent class instrumental variable (IV) method estimates the effect of treatment received in the aforementioned scenarios involving all-or-none compliance and all-or-none availability. Key aspects are four initial latent classes (sometimes called principal strata) based on treatment received if in each randomization group or time period, the exclusion restriction assumption (in which randomization group or time period is an instrumental variable), the monotonicity assumption (which drops an implausible latent class from the analysis), and the estimated effect of receiving treatment in one latent class (sometimes called efficacy, the local average treatment effect, or the complier average causal effect). Since its independent formulations in the biostatistics and econometrics literatures, the latent class IV method (which has no well-established name) has gained increasing popularity. We review the latent class IV method from a clinical and biostatistical perspective, focusing on underlying assumptions, methodological extensions, and applications in our fields of obstetrics and cancer research. PMID:26239275
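
    In the simplest all-or-none-compliance case, the latent-class estimate of the effect in the complier class reduces to the familiar instrumental-variable (Wald) ratio; the display below uses standard notation and is an illustration, not a formula taken from the article.

```latex
% Complier average causal effect under randomization Z, treatment received D,
% outcome Y (standard IV/Wald form; requires the exclusion restriction and
% monotonicity assumptions discussed above):
\[
\widehat{\mathrm{CACE}}
=\frac{\widehat{E}[\,Y\mid Z=1\,]-\widehat{E}[\,Y\mid Z=0\,]}
      {\widehat{E}[\,D\mid Z=1\,]-\widehat{E}[\,D\mid Z=0\,]} .
\]
```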

  9. Stability and dynamical properties of material flow systems on random networks

    NASA Astrophysics Data System (ADS)

    Anand, K.; Galla, T.

    2009-04-01

    The theory of complex networks and of disordered systems is used to study the stability and dynamical properties of a simple model of material flow networks defined on random graphs. In particular we address instabilities that are characteristic of flow networks in economic, ecological and biological systems. Based on results from random matrix theory, we work out the phase diagram of such systems defined on extensively connected random graphs, and study in detail how the choice of control policies and the network structure affects stability. We also present results for more complex topologies of the underlying graph, focussing on finitely connected Erdős-Rényi graphs, Small-World Networks and Barabási-Albert scale-free networks. Results indicate that variability of input-output matrix elements, and random structures of the underlying graph tend to make the system less stable, while fast price dynamics or strong responsiveness to stock accumulation promote stability.
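
    A compact numerical version of the May-style stability check implicit in this abstract is sketched below; the random Jacobian on a directed Erdős-Rényi graph, the self-damping constant, and the parameter values are all assumptions for illustration, not the paper's flow model.

```python
# Stability of a random interaction matrix supported on a directed
# Erdos-Renyi graph: the fixed point is taken to be locally stable when the
# largest real part of the Jacobian eigenvalues is negative.
import numpy as np

def max_real_eigenvalue(n=300, p_edge=0.05, sigma=1.0, damping=2.0, seed=0):
    rng = np.random.default_rng(seed)
    links = rng.random((n, n)) < p_edge          # directed Erdos-Renyi links
    np.fill_diagonal(links, False)
    J = np.where(links, rng.normal(0.0, sigma, (n, n)), 0.0)
    J -= damping * np.eye(n)                     # self-damping on the diagonal
    return float(np.max(np.linalg.eigvals(J).real))

for sigma in (0.5, 1.0, 2.0):
    print(f"element variability sigma={sigma}: max Re(eigenvalue) = "
          f"{max_real_eigenvalue(sigma=sigma):+.2f}")
# Larger element variability pushes the spectrum across zero, i.e. towards
# instability, consistent with the qualitative conclusion of the abstract.
```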

  10. Probabilistic analysis of a materially nonlinear structure

    NASA Technical Reports Server (NTRS)

    Millwater, H. R.; Wu, Y.-T.; Fossum, A. F.

    1990-01-01

    A probabilistic finite element program is used to perform probabilistic analysis of a materially nonlinear structure. The program used in this study is NESSUS (Numerical Evaluation of Stochastic Structure Under Stress), under development at Southwest Research Institute. The cumulative distribution function (CDF) of the radial stress of a thick-walled cylinder under internal pressure is computed and compared with the analytical solution. In addition, sensitivity factors showing the relative importance of the input random variables are calculated. Significant plasticity is present in this problem and has a pronounced effect on the probabilistic results. The random input variables are the material yield stress and internal pressure with Weibull and normal distributions, respectively. The results verify the ability of NESSUS to compute the CDF and sensitivity factors of a materially nonlinear structure. In addition, the ability of the Advanced Mean Value (AMV) procedure to assess the probabilistic behavior of structures which exhibit a highly nonlinear response is shown. Thus, the AMV procedure can be applied with confidence to other structures which exhibit nonlinear behavior.

  11. Quiet Eye Duration Is Responsive to Variability of Practice and to the Axis of Target Changes

    ERIC Educational Resources Information Center

    Horn, Robert R.; Okumura, Michelle S.; Alexander, Melissa G. F.; Gardin, Fredrick A.; Sylvester, Curtis T.

    2012-01-01

    We tested the hypothesis that quiet eye, the final fixation before the initiation of a movement in aiming tasks, is used to scale the movement's parameters. Two groups of 12 participants (N = 24) threw darts to targets in the horizontal and vertical axes under conditions of higher (random) or lower (blocked) target variability. Supporting our…

  12. Rationales of a Shift towards Knowledge Economy in Jordan from the Viewpoint of Educational Experts and Relationship with Some Variables

    ERIC Educational Resources Information Center

    Al Zboon, Mohammad Saleem; Al Ahmad, Suliman Diab Ali; Al Zboon, Saleem Odeh

    2009-01-01

    The purpose of the present study was to identify rationales underlying a shift towards knowledge economy in education as perceived by the educational experts in Jordan and relationship with some variables. The random stratum sample (n = 90) consisted of educational experts representing faculty members in the Jordanian universities and top leaders…

  13. Two approximations of the present value distribution of a disability annuity

    NASA Astrophysics Data System (ADS)

    Spreeuw, Jaap

    2006-02-01

    The distribution function of the present value of a cash flow can be approximated by means of a distribution function of a random variable, which is also the present value of a sequence of payments, but with a simpler structure. The corresponding random variable has the same expectation as the random variable corresponding to the original distribution function and is a stochastic upper bound of convex order. A sharper upper bound can be obtained if more information about the risk is available. In this paper, it will be shown that such an approach can be adopted for disability annuities (also known as income protection policies) in a three state model under Markov assumptions. Benefits are payable during any spell of disability whilst premiums are only due whenever the insured is healthy. The quality of the two approximations is investigated by comparing the distributions obtained with the one derived from the algorithm presented in the paper by Hesselager and Norberg [Insurance Math. Econom. 18 (1996) 35-42].

  14. Spike Pattern Structure Influences Synaptic Efficacy Variability under STDP and Synaptic Homeostasis. I: Spike Generating Models on Converging Motifs

    PubMed Central

    Bi, Zedong; Zhou, Changsong

    2016-01-01

    In neural systems, synaptic plasticity is usually driven by spike trains. Due to the inherent noises of neurons and synapses as well as the randomness of connection details, spike trains typically exhibit variability such as spatial randomness and temporal stochasticity, resulting in variability of synaptic changes under plasticity, which we call efficacy variability. How the variability of spike trains influences the efficacy variability of synapses remains unclear. In this paper, we try to understand this influence under pair-wise additive spike-timing dependent plasticity (STDP) when the mean strength of plastic synapses into a neuron is bounded (synaptic homeostasis). Specifically, we systematically study, analytically and numerically, how four aspects of statistical features, i.e., synchronous firing, burstiness/regularity, heterogeneity of rates and heterogeneity of cross-correlations, as well as their interactions influence the efficacy variability in converging motifs (simple networks in which one neuron receives from many other neurons). Neurons (including the post-synaptic neuron) in a converging motif generate spikes according to statistical models with tunable parameters. In this way, we can explicitly control the statistics of the spike patterns, and investigate their influence onto the efficacy variability, without worrying about the feedback from synaptic changes onto the dynamics of the post-synaptic neuron. We separate efficacy variability into two parts: the drift part (DriftV) induced by the heterogeneity of change rates of different synapses, and the diffusion part (DiffV) induced by weight diffusion caused by stochasticity of spike trains. Our main findings are: (1) synchronous firing and burstiness tend to increase DiffV, (2) heterogeneity of rates induces DriftV when potentiation and depression in STDP are not balanced, and (3) heterogeneity of cross-correlations induces DriftV together with heterogeneity of rates. We anticipate our work important for understanding functional processes of neuronal networks (such as memory) and neural development. PMID:26941634

  15. Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much.

    PubMed

    He, Bryan; De Sa, Christopher; Mitliagkas, Ioannis; Ré, Christopher

    2016-01-01

    Gibbs sampling is a Markov Chain Monte Carlo sampling technique that iteratively samples variables from their conditional distributions. There are two common scan orders for the variables: random scan and systematic scan. Due to the benefits of locality in hardware, systematic scan is commonly used, even though most statistical guarantees are only for random scan. While it has been conjectured that the mixing times of random scan and systematic scan do not differ by more than a logarithmic factor, we show by counterexample that this is not the case, and we prove that the mixing times do not differ by more than a polynomial factor under mild conditions. To prove these relative bounds, we introduce a method of augmenting the state space to study systematic scan using conductance.
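
    The two scan orders compared above are easy to place side by side on a toy target; the sketch below uses a bivariate Gaussian (my choice, not the paper's counterexample) and implements one systematic-scan and one random-scan Gibbs sampler.

```python
# Systematic-scan vs random-scan Gibbs sampling for a bivariate Gaussian
# with correlation rho (a toy target, not the paper's counterexample).
import numpy as np

def gibbs(n_steps, rho=0.9, scan="systematic", seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    sd = np.sqrt(1.0 - rho**2)          # conditional std dev of each coordinate
    samples = np.empty((n_steps, 2))
    for t in range(n_steps):
        if scan == "systematic":
            order = (0, 1)              # always update x0 then x1
        else:
            order = (rng.integers(2),)  # one randomly chosen coordinate
        for i in order:
            x[i] = rng.normal(rho * x[1 - i], sd)   # draw x_i | x_{1-i}
        samples[t] = x
    return samples

for scan in ("systematic", "random"):
    s = gibbs(20_000, scan=scan)
    print(scan, "empirical corr:", round(np.corrcoef(s[:, 0], s[:, 1])[0, 1], 3))
```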

  16. Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much

    PubMed Central

    He, Bryan; De Sa, Christopher; Mitliagkas, Ioannis; Ré, Christopher

    2016-01-01

    Gibbs sampling is a Markov Chain Monte Carlo sampling technique that iteratively samples variables from their conditional distributions. There are two common scan orders for the variables: random scan and systematic scan. Due to the benefits of locality in hardware, systematic scan is commonly used, even though most statistical guarantees are only for random scan. While it has been conjectured that the mixing times of random scan and systematic scan do not differ by more than a logarithmic factor, we show by counterexample that this is not the case, and we prove that the mixing times do not differ by more than a polynomial factor under mild conditions. To prove these relative bounds, we introduce a method of augmenting the state space to study systematic scan using conductance. PMID:28344429

  17. Generating variable and random schedules of reinforcement using Microsoft Excel macros.

    PubMed

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values.
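
    The article's macros are written for Microsoft Excel; purely as a language-neutral illustration of the same logic (a constant probability per response for random-ratio values and a constant probability per unit time for random-interval or random-time values), a short Python sketch follows.

```python
# Schedule-value generation analogous to the Excel macros described above
# (illustrative Python, not the authors' code).
import math
import random

def random_ratio_values(mean_ratio, n, seed=None):
    """Response requirements between reinforcers: geometric, p = 1/mean_ratio."""
    rng = random.Random(seed)
    p = 1.0 / mean_ratio
    return [int(math.log(1.0 - rng.random()) / math.log(1.0 - p)) + 1
            for _ in range(n)]

def random_interval_values(mean_interval, n, seed=None):
    """Inter-reinforcer intervals (seconds): exponential with the given mean."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / mean_interval) for _ in range(n)]

print(random_ratio_values(20, 5, seed=1))                    # e.g. an RR 20 list
print([round(v, 1) for v in random_interval_values(30.0, 5, seed=1)])  # RI 30 s
```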

  18. European Randomized Study of Screening for Prostate Cancer Risk Calculator: External Validation, Variability, and Clinical Significance.

    PubMed

    Gómez-Gómez, Enrique; Carrasco-Valiente, Julia; Blanca-Pedregosa, Ana; Barco-Sánchez, Beatriz; Fernandez-Rueda, Jose Luis; Molina-Abril, Helena; Valero-Rosa, Jose; Font-Ugalde, Pilar; Requena-Tapia, Maria José

    2017-04-01

    To externally validate the European Randomized Study of Screening for Prostate Cancer (ERSPC) risk calculator (RC) and to evaluate its variability between 2 consecutive prostate-specific antigen (PSA) values. We prospectively catalogued 1021 consecutive patients before prostate biopsy for suspicion of prostate cancer (PCa). The risk of PCa and significant PCa (Gleason score ≥7) from 749 patients was calculated according to ERSPC-RC (digital rectal examination-based version 3 of 4) for 2 consecutive PSA tests per patient. The calculators' predictions were analyzed using calibration plots and the area under the receiver operating characteristic curve (area under the curve). Cohen kappa coefficient was used to compare the ability and variability. Of 749 patients, PCa was detected in 251 (33.5%) and significant PCa was detected in 133 (17.8%). Calibration plots showed an acceptable parallelism and similar discrimination ability for both PSA levels with an area under the curve of 0.69 for PCa and 0.74 for significant PCa. The ERSPC showed 226 (30.2%) unnecessary biopsies with the loss of 10 significant PCa. The variability of the RC was 16% for PCa and 20% for significant PCa, and a higher variability was associated with a reduced risk of significant PCa. We can conclude that the performance of the ERSPC-RC in the present cohort shows a high similitude between the 2 PSA levels; however, the RC variability value is associated with a decreased risk of significant PCa. The use of the ERSPC in our cohort detects a high number of unnecessary biopsies. Thus, the incorporation of ERSPC-RC could help the clinical decision to carry out a prostate biopsy. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Scaling Limit of Symmetric Random Walk in High-Contrast Periodic Environment

    NASA Astrophysics Data System (ADS)

    Piatnitski, A.; Zhizhina, E.

    2017-11-01

    The paper deals with the asymptotic properties of a symmetric random walk in a high contrast periodic medium in Z^d, d≥1. From the existing homogenization results it follows that under diffusive scaling the limit behaviour of this random walk need not be Markovian. The goal of this work is to show that if in addition to the coordinate of the random walk in Z^d we introduce an extra variable that characterizes the position of the random walk inside the period then the limit dynamics of this two-component process is Markov. We describe the limit process and observe that the components of the limit process are coupled. We also prove the convergence in the path space for the said random walk.

  20. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    PubMed

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model, based on estimating missing values followed by variable selection, to forecast a reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets were concatenated, in temporal order, into an integrated research dataset. The proposed time-series forecasting model has three foci. First, the study uses five imputation methods to handle the missing values. Second, we identify the key variables via factor analysis and then delete the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed benchmark methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listed models. In addition, the experiment shows that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.
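
    The three-step pipeline described above (handle missing values, select variables, fit a Random Forest) can be sketched with scikit-learn as follows. Column names, the SelectKBest step (standing in for the study's factor-analysis-based selection), the single median imputer (the study compares five methods), and all parameter values are placeholders, not the published configuration.

```python
# Sketch of an impute -> select -> Random Forest pipeline in the spirit of the
# model described above; names, methods, and settings are placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.impute import SimpleImputer
from sklearn.model_selection import TimeSeriesSplit, cross_val_score
from sklearn.pipeline import Pipeline

def build_model(n_selected=5):
    return Pipeline([
        ("impute", SimpleImputer(strategy="median")),          # missing values
        ("select", SelectKBest(f_regression, k=n_selected)),   # variable selection
        ("forest", RandomForestRegressor(n_estimators=300, random_state=0)),
    ])

def rmse_by_time_series_cv(df: pd.DataFrame, target: str = "water_level"):
    X, y = df.drop(columns=[target]), df[target]
    cv = TimeSeriesSplit(n_splits=5)                # respect temporal ordering
    scores = cross_val_score(build_model(), X, y, cv=cv,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()
```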

  1. Latent class instrumental variables: a clinical and biostatistical perspective.

    PubMed

    Baker, Stuart G; Kramer, Barnett S; Lindeman, Karen S

    2016-01-15

    In some two-arm randomized trials, some participants receive the treatment assigned to the other arm as a result of technical problems, refusal of a treatment invitation, or a choice of treatment in an encouragement design. In some before-and-after studies, the availability of a new treatment changes from one time period to the next. Under assumptions that are often reasonable, the latent class instrumental variable (IV) method estimates the effect of treatment received in the aforementioned scenarios involving all-or-none compliance and all-or-none availability. Key aspects are four initial latent classes (sometimes called principal strata) based on treatment received if in each randomization group or time period, the exclusion restriction assumption (in which randomization group or time period is an instrumental variable), the monotonicity assumption (which drops an implausible latent class from the analysis), and the estimated effect of receiving treatment in one latent class (sometimes called efficacy, the local average treatment effect, or the complier average causal effect). Since its independent formulations in the biostatistics and econometrics literatures, the latent class IV method (which has no well-established name) has gained increasing popularity. We review the latent class IV method from a clinical and biostatistical perspective, focusing on underlying assumptions, methodological extensions, and applications in our fields of obstetrics and cancer research. Copyright © 2015 John Wiley & Sons, Ltd.

  2. An Undergraduate Research Experience on Studying Variable Stars

    NASA Astrophysics Data System (ADS)

    Amaral, A.; Percy, J. R.

    2016-06-01

    We describe and evaluate a summer undergraduate research project and experience by one of us (AA), under the supervision of the other (JP). The aim of the project was to sample current approaches to analyzing variable star data, and topics related to the study of Mira variable stars and their astrophysical importance. This project was done through the Summer Undergraduate Research Program (SURP) in astronomy at the University of Toronto. SURP allowed undergraduate students to explore and learn about many topics within astronomy and astrophysics, from instrumentation to cosmology. SURP introduced students to key skills which are essential for students hoping to pursue graduate studies in any scientific field. Variable stars proved to be an excellent topic for a research project. For beginners to independent research, it introduces key concepts in research such as critical thinking and problem solving, while illuminating previously learned topics in stellar physics. The focus of this summer project was to compare observations with structural and evolutionary models, including modelling the random walk behavior exhibited in the (O-C) diagrams of most Mira stars. We found that the random walk could be modelled by using random fluctuations of the period. This explanation agreed well with observations.
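
    The random-walk (O-C) behaviour mentioned above arises when independent cycle-to-cycle period fluctuations accumulate; a minimal simulation with illustrative parameters (not fitted to any star, and not the project's actual analysis) is given below.

```python
# Random-walk (O-C) diagram from random cycle-to-cycle period fluctuations,
# in the spirit of the Eddington-Plakidis picture; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n_cycles = 400
mean_period = 330.0        # days, a typical Mira period (illustrative)
sigma_period = 2.0         # random fluctuation of each individual cycle length

periods = mean_period + sigma_period * rng.normal(size=n_cycles)
observed_times = np.cumsum(periods)                   # times of maxima
calculated_times = mean_period * np.arange(1, n_cycles + 1)
o_minus_c = observed_times - calculated_times         # random walk in cycle number

# The (O-C) scatter grows roughly like sqrt(cycle count) for a pure random walk:
print("std over first 100 cycles:", round(o_minus_c[:100].std(), 1), "days")
print("std over all 400 cycles  :", round(o_minus_c.std(), 1), "days")
```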

  3. Rupture Propagation for Stochastic Fault Models

    NASA Astrophysics Data System (ADS)

    Favreau, P.; Lavallee, D.; Archuleta, R.

    2003-12-01

    The inversion of strong-motion data from large earthquakes gives the spatial distribution of pre-stress on the ruptured faults, and it can be partially reproduced by stochastic models, but a fundamental question remains: how does rupture propagate under the constraint of spatial heterogeneity? For this purpose we investigate how the underlying random variables that control the pre-stress spatial variability condition the propagation of the rupture. Two stochastic models of pre-stress distributions are considered, based on Cauchy and Gaussian random variables, respectively. The parameters of the two stochastic models have values corresponding to the slip distribution of the 1979 Imperial Valley earthquake. We use a finite difference code to simulate the spontaneous propagation of shear rupture on a flat fault in a 3D continuum elastic body. The friction law is slip dependent. The simulations show that the propagation of the rupture front is more complex, incoherent, or snake-like for a pre-stress distribution based on Cauchy random variables. This may be related to the presence of a higher number of asperities in this case. These simulations suggest that directivity is stronger in the Cauchy scenario, compared to the smoother rupture of the Gauss scenario.

  4. Bias, Confounding, and Interaction: Lions and Tigers, and Bears, Oh My!

    PubMed

    Vetter, Thomas R; Mascha, Edward J

    2017-09-01

    Epidemiologists seek to make a valid inference about the causal effect between an exposure and a disease in a specific population, using representative sample data from a specific population. Clinical researchers likewise seek to make a valid inference about the association between an intervention and outcome(s) in a specific population, based upon their randomly collected, representative sample data. Both do so by using the available data about the sample variable to make a valid estimate about its corresponding or underlying, but unknown population parameter. Random error in an experiment can be due to the natural, periodic fluctuation or variation in the accuracy or precision of virtually any data sampling technique or health measurement tool or scale. In a clinical research study, random error can be due to not only innate human variability but also pure chance. Systematic error in an experiment arises from an innate flaw in the data sampling technique or measurement instrument. In the clinical research setting, systematic error is more commonly referred to as systematic bias. The most commonly encountered types of bias in anesthesia, perioperative, critical care, and pain medicine research include recall bias, observational bias (Hawthorne effect), attrition bias, misclassification or informational bias, and selection bias. A confounding variable (confounding factor or confounder) is a factor associated with both the exposure of interest and the outcome of interest; it correlates (positively or negatively) with both the exposure and the outcome. Confounding is typically not an issue in a randomized trial because the randomized groups are sufficiently balanced on all potential confounding variables, both observed and nonobserved. However, confounding can be a major problem with any observational (nonrandomized) study. Ignoring confounding in an observational study will often result in a "distorted" or incorrect estimate of the association or treatment effect. Interaction among variables, also known as effect modification, exists when the effect of 1 explanatory variable on the outcome depends on the particular level or value of another explanatory variable. Bias and confounding are common potential explanations for statistically significant associations between exposure and outcome when the true relationship is noncausal. Understanding interactions is vital to proper interpretation of treatment effects. These complex concepts should be consistently and appropriately considered whenever one is not only designing but also analyzing and interpreting data from a randomized trial or observational study.

  5. Estimating overall exposure effects for the clustered and censored outcome using random effect Tobit regression models.

    PubMed

    Wang, Wei; Griswold, Michael E

    2016-11-30

    The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. Marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for the clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects and then use the calculated difference to assess the overall exposure effect. The maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration of the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Statistical optics

    NASA Astrophysics Data System (ADS)

    Goodman, J. W.

    This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.

  7. Time-variant random interval natural frequency analysis of structures

    NASA Astrophysics Data System (ADS)

    Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin

    2018-02-01

    This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the superiority of both methods in a way that dramatically reduces the computational cost. The presented method is thus capable of accurately and efficiently investigating the day-to-day time-variant natural frequency of structures under the concrete intrinsic creep effect with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three purposely designed numerical examples, progressively varied in both structure type and uncertainty variables, are presented to demonstrate the applicability, accuracy, and efficiency of the proposed method.

  8. Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros

    PubMed Central

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values. PMID:18595286
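
    The original article implements these generators as Microsoft Excel macros; the sketch below reproduces the same ideas in Python, with a geometric draw for random-ratio values (constant probability per response), an exponential draw for random-interval values (constant probability per unit time), and a shuffled progression for variable-ratio values. The distributional choices are common conventions, not necessarily those coded in the macros.

    ```python
    # Illustrative Python analogues of the schedule generators (the article itself
    # uses Excel macros; the distributional choices here are common conventions).
    import numpy as np

    rng = np.random.default_rng(0)

    def random_ratio(mean_ratio: float, n: int) -> np.ndarray:
        # Constant probability 1/mean_ratio per response -> geometric response counts.
        return rng.geometric(1.0 / mean_ratio, size=n)

    def random_interval(mean_interval: float, n: int) -> np.ndarray:
        # Constant probability per unit time -> exponential inter-reinforcement times.
        return rng.exponential(mean_interval, size=n)

    def variable_ratio(mean_ratio: float, n: int) -> np.ndarray:
        # A shuffled progression of ratios whose mean is approximately the nominal value.
        values = np.linspace(1, 2 * mean_ratio - 1, n).round().astype(int)
        rng.shuffle(values)
        return values

    print(random_ratio(20, 10), random_interval(30.0, 5), variable_ratio(10, 9))
    ```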

  9. Simulation of Crack Propagation in Engine Rotating Components under Variable Amplitude Loading

    NASA Technical Reports Server (NTRS)

    Bonacuse, P. J.; Ghosn, L. J.; Telesman, J.; Calomino, A. M.; Kantzos, P.

    1998-01-01

    The crack propagation life of tested specimens has been repeatedly shown to strongly depend on the loading history. Overloads and extended stress holds at temperature can either retard or accelerate the crack growth rate. Therefore, to accurately predict the crack propagation life of an actual component, it is essential to approximate the true loading history. In military rotorcraft engine applications, the loading profile (stress amplitudes, temperature, and number of excursions) can vary significantly depending on the type of mission flown. To accurately assess the durability of a fleet of engines, the crack propagation life distribution of a specific component should account for the variability in the missions performed (proportion of missions flown and sequence). In this report, analytical and experimental studies are described that calibrate/validate the crack propagation prediction capability for a disk alloy under variable amplitude loading. A crack closure based model was adopted to analytically predict the load interaction effects. Furthermore, a methodology has been developed to realistically simulate the actual mission mix loading on a fleet of engines over their lifetime. A sequence of missions is randomly selected and the number of repeats of each mission in the sequence is determined assuming a Poisson distributed random variable with a given mean occurrence rate. Multiple realizations of random mission histories are generated in this manner and are used to produce stress, temperature, and time points for fracture mechanics calculations. The result is a cumulative distribution of crack propagation lives for a given, life limiting, component location. This information can be used to determine a safe retirement life or inspection interval for the given location.
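
    The mission-mix simulation described above can be sketched as follows; the mission names, Poisson means, and per-mission damage increments are placeholder assumptions standing in for the report's mission data and fracture mechanics calculations.

    ```python
    # Sketch of the mission-mix simulation: draw a random mission sequence with
    # Poisson-distributed repeat counts and accumulate a placeholder per-mission
    # damage increment. Mission names, rates, and damage values are illustrative
    # assumptions, not the report's data or fracture mechanics model.
    import numpy as np

    rng = np.random.default_rng(1)
    missions = ["training", "transport", "combat"]
    mean_repeats = {"training": 12, "transport": 8, "combat": 3}       # Poisson means
    damage_per_mission = {"training": 1.0, "transport": 1.4, "combat": 2.5}

    def simulate_history(n_blocks: int = 50) -> float:
        damage = 0.0
        for _ in range(n_blocks):
            m = rng.choice(missions)                   # randomly selected mission
            repeats = rng.poisson(mean_repeats[m])     # Poisson-distributed repeat count
            damage += repeats * damage_per_mission[m]
        return damage

    lives = np.array([simulate_history() for _ in range(1000)])   # multiple realizations
    print(np.percentile(lives, [5, 50, 95]))                      # distribution summary
    ```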

  10. Fatigue crack growth under variable amplitude loading

    NASA Astrophysics Data System (ADS)

    Sidawi, Jihad A.

    1994-09-01

    Fatigue crack growth tests were conducted on an Fe 510 E C-Mn steel and a submerged arc welded joint from the same material under constant, variable, and random loading amplitudes. Paris-Erdogan's crack growth rate law was tested for the evaluation of m and C using the stress intensity factor K, the J-integral, the effective stress intensity factor K(sub eff), and the root mean square stress intensity factor K(sub rms) fracture mechanics concepts. The effect of retardation and residual stresses resulting from welding was also considered. It was found that all concepts gave good life predictions in all cases.
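
    For reference, the Paris-Erdogan law evaluated here relates the crack growth rate da/dN to the stress-intensity-factor range through da/dN = C*(ΔK)^m, so m and C can be estimated by a straight-line fit on log-log axes. The snippet below illustrates that standard fit on made-up data, not the paper's measurements.

    ```python
    # Paris-Erdogan law: da/dN = C * (dK)**m.  Estimating m and C reduces to a
    # straight-line fit on log-log axes: log(da/dN) = log(C) + m * log(dK).
    # The data points below are illustrative placeholders, not the paper's measurements.
    import numpy as np

    dK = np.array([12.0, 15.0, 20.0, 25.0, 30.0])         # stress intensity range, MPa*m**0.5
    dadN = np.array([2e-8, 5e-8, 1.5e-7, 3.5e-7, 7e-7])   # crack growth rate, m/cycle

    m, logC = np.polyfit(np.log(dK), np.log(dadN), 1)     # slope = m, intercept = log(C)
    C = np.exp(logC)
    print(f"m = {m:.2f}, C = {C:.3e}")
    ```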

  11. CMOS-based Stochastically Spiking Neural Network for Optimization under Uncertainties

    DTIC Science & Technology

    2017-03-01

    inverse tangent characteristics at varying input voltage (VIN) [Fig. 3], thereby it is suitable for Kernel function implementation. By varying bias...cost function/constraint variables are generated based on inverse transform on CDF. In Fig. 5, F-1(u) for uniformly distributed random number u [0, 1...extracts random samples of x varying with CDF of F(x). In Fig. 6, we present a successive approximation (SA) circuit to evaluate inverse
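
    The fragment above refers to inverse-transform sampling: draw u uniformly on [0, 1] and return F⁻¹(u) so that the samples follow the CDF F. A minimal software analogue of that step (using an exponential CDF as an illustrative choice, unrelated to the circuit in the report) is:

    ```python
    # Inverse-transform sampling: draw u ~ Uniform(0, 1) and return F^-1(u), here
    # for an exponential CDF F(x) = 1 - exp(-lam * x) as an illustrative choice.
    import numpy as np

    def inverse_transform_exponential(n: int, lam: float = 1.0, seed: int = 0) -> np.ndarray:
        rng = np.random.default_rng(seed)
        u = rng.uniform(0.0, 1.0, size=n)      # uniformly distributed random numbers
        return -np.log(1.0 - u) / lam          # F^-1(u) for the exponential CDF

    samples = inverse_transform_exponential(100_000, lam=2.0)
    print(samples.mean())                      # approximately 1 / lam = 0.5
    ```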

  12. Randomized central limit theorems: A unified theory.

    PubMed

    Eliazar, Iddo; Klafter, Joseph

    2010-08-01

    The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic-scaling all ensemble components by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic-scaling the ensemble components by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs)-in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes-and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.

  13. Randomized central limit theorems: A unified theory

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2010-08-01

    The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles’ aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles’ extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic—scaling all ensemble components by a common deterministic scale. However, there are “random environment” settings in which the underlying scaling schemes are stochastic—scaling the ensemble components by different random scales. Examples of such settings include Holtsmark’s law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs)—in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes—and present “randomized counterparts” to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.

  14. Multilevel Modeling with Correlated Effects

    ERIC Educational Resources Information Center

    Kim, Jee-Seon; Frees, Edward W.

    2007-01-01

    When there exist omitted effects, measurement error, and/or simultaneity in multilevel models, explanatory variables may be correlated with random components, and standard estimation methods do not provide consistent estimates of model parameters. This paper introduces estimators that are consistent under such conditions. By employing generalized…

  15. Impact of Health Research Systems on Under-5 Mortality Rate: A Trend Analysis.

    PubMed

    Yazdizadeh, Bahareh; Parsaeian, Mahboubeh; Majdzadeh, Reza; Nikooee, Sima

    2016-11-26

    Between 1990 and 2015, under-5 mortality rate (U5MR) declined by 53%, from an estimated rate of 91 deaths per 1000 live births to 43, globally. The aim of this study was to determine the share of health research systems in this decrease alongside other influential factors. We used random effect regression models including the 'random intercept' and 'random intercept and random slope' models to analyze the panel data from 1990 to 2010. We selected the countries with U5MRs falling between the first and third quartiles in 1990. We used both the total articles (TA) and the number of child-specific articles (CSA) as a proxy of the health research system. In order to account for the impact of other factors, measles vaccination coverage (MVC) (as a proxy of health system performance), gross domestic product (GDP), human development index (HDI), and corruption perception index (CPI) (as proxies of development), were embedded in the model. Among all the models, 'the random intercept and random slope models' had lower residuals. The variables CSA, HDI, and time were significant, and the coefficient of CSA was estimated at -0.17, meaning that for every additional 100 CSA, U5MR decreased by 17 per 1000 live births. Although the number of CSA has contributed to the reduction of U5MR, the amount of its contribution is negligible compared to the countries' development. We recommend entering different types of research into the model separately in future research and including the variable of 'exchange between knowledge generator and user.' © 2017 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

  16. Use of allele scores as instrumental variables for Mendelian randomization

    PubMed Central

    Burgess, Stephen; Thompson, Simon G

    2013-01-01

    Background An allele score is a single variable summarizing multiple genetic variants associated with a risk factor. It is calculated as the total number of risk factor-increasing alleles for an individual (unweighted score), or the sum of weights for each allele corresponding to estimated genetic effect sizes (weighted score). An allele score can be used in a Mendelian randomization analysis to estimate the causal effect of the risk factor on an outcome. Methods Data were simulated to investigate the use of allele scores in Mendelian randomization where conventional instrumental variable techniques using multiple genetic variants demonstrate ‘weak instrument’ bias. The robustness of estimates using the allele score to misspecification (for example non-linearity, effect modification) and to violations of the instrumental variable assumptions was assessed. Results Causal estimates using a correctly specified allele score were unbiased with appropriate coverage levels. The estimates were generally robust to misspecification of the allele score, but not to instrumental variable violations, even if the majority of variants in the allele score were valid instruments. Using a weighted rather than an unweighted allele score increased power, but the increase was small when genetic variants had similar effect sizes. Naive use of the data under analysis to choose which variants to include in an allele score, or for deriving weights, resulted in substantial biases. Conclusions Allele scores enable valid causal estimates with large numbers of genetic variants. The stringency of criteria for genetic variants in Mendelian randomization should be maintained for all variants in an allele score. PMID:24062299
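
    A minimal sketch of the unweighted and weighted allele scores defined above, assuming a genotype matrix G coded as 0/1/2 risk-allele counts and externally derived per-allele weights w; the subsequent instrumental variable stage (e.g., two-stage least squares) is omitted.

    ```python
    # Unweighted and weighted allele scores. Assumed inputs: a genotype matrix G
    # (one row per individual, one column per variant, coded as 0/1/2 risk-allele
    # counts) and externally derived per-allele weights w. Illustrative only.
    import numpy as np

    def allele_scores(G: np.ndarray, w: np.ndarray):
        unweighted = G.sum(axis=1)   # total number of risk-increasing alleles
        weighted = G @ w             # sum of allele count times effect-size weight
        return unweighted, weighted

    G = np.random.default_rng(0).integers(0, 3, size=(5, 4))
    w = np.array([0.12, 0.05, 0.30, 0.08])
    print(allele_scores(G, w))
    ```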

  17. Effect of music-based multitask training on gait, balance, and fall risk in elderly people: a randomized controlled trial.

    PubMed

    Trombetti, Andrea; Hars, Mélany; Herrmann, François R; Kressig, Reto W; Ferrari, Serge; Rizzoli, René

    2011-03-28

    Falls occur mainly while walking or performing concurrent tasks. We determined whether a music-based multitask exercise program improves gait and balance and reduces fall risk in elderly individuals. We conducted a 12-month randomized controlled trial involving 134 community-dwelling individuals older than 65 years, who are at increased risk of falling. They were randomly assigned to an intervention group (n = 66) or a delayed intervention control group scheduled to start the program 6 months later (n = 68). The intervention was a 6-month multitask exercise program performed to the rhythm of piano music. Change in gait variability under dual-task condition from baseline to 6 months was the primary end point. Secondary outcomes included changes in balance, functional performances, and fall risk. At 6 months, there was a reduction in stride length variability (adjusted mean difference, -1.4%; P < .002) under dual-task condition in the intervention group, compared with the delayed intervention control group. Balance and functional tests improved compared with the control group. There were fewer falls in the intervention group (incidence rate ratio, 0.46; 95% confidence interval, 0.27-0.79) and a lower risk of falling (relative risk, 0.61; 95% confidence interval, 0.39-0.96). Similar changes occurred in the delayed intervention control group during the second 6-month period with intervention. The benefit of the intervention on gait variability persisted 6 months later. In community-dwelling older people at increased risk of falling, a 6-month music-based multitask exercise program improved gait under dual-task condition, improved balance, and reduced both the rate of falls and the risk of falling. Trial Registration clinicaltrials.gov Identifier: NCT01107288.

  18. A unifying framework for marginalized random intercept models of correlated binary outcomes

    PubMed Central

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian M.

    2013-01-01

    We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized random intercept models for binary data. A diverse collection of didactic mathematical and numerical examples are given to illustrate concepts. PMID:25342871

  19. A Demonstration Sample for Poetry Education: Poem under the Light of "Poetics of the Open Work"

    ERIC Educational Resources Information Center

    Afacan, Aydin

    2016-01-01

    The aim of this study is to provide a demonstration sample for the high school stage in the light of "Poetics of the Open Work," which is considered a step towards comprehending the qualified poem. The study was built on a single-group pretest-posttest design. Independent variables are applied to a randomly selected group to…

  20. Non-stationarities in the relationships of heavy precipitation events in the Mediterranean area and the large-scale circulation in the second half of the 20th century

    NASA Astrophysics Data System (ADS)

    Merkenschlager, Christian; Hertig, Elke; Jacobeit, Jucundus

    2017-04-01

    In the context of analyzing temporally varying relationships between heavy precipitation events in the Mediterranean area and associated anomalies of the large-scale circulation, quantile regression models were established. The models were calibrated using different circulation and thermodynamic variables at the 700 hPa and 850 hPa levels as predictors as well as daily precipitation time series at different stations in the Mediterranean area as the predictand. Analyses were done for the second half of the 20th century. To assess non-stationarities in the predictor-predictand relationships, the time series were divided into calibration and validation periods. One hundred randomized subsamples were used to calibrate/validate the models under stationary conditions. The highest and lowest skill scores of the 100 random samples were used to determine the range of random variability. The model performance under non-stationary conditions was derived from the skill scores of cross-validated running subintervals. If the skill scores of several consecutive years lie outside the range of random variability, a non-stationarity is declared. Particularly the Iberian Peninsula and the Levant region were affected by non-stationarities, the former with significant positive deviations of the skill scores, the latter with significant negative deviations. By means of a case study for the Levant region, we determined three possible reasons for non-stationary behavior in the predictor-predictand relationships. The Mediterranean Oscillation as a superordinate system affects the cyclone activity in the Mediterranean basin and the location and intensity of the Cyprus low. Overall, it is demonstrated that non-stationarities have to be taken into account in statistical downscaling model development.
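
    The range-of-random-variability step can be sketched as follows: refit the model on many random calibration/validation splits and record the extreme validation scores. A generic linear model and R² score stand in for the study's quantile regression and skill score.

    ```python
    # Range of random variability: the best and worst validation skill over many
    # random calibration/validation splits. A generic linear model and R^2 score
    # stand in for the study's quantile regression and skill score; X and y are
    # assumed to be NumPy arrays.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    def random_variability_range(X, y, n_splits: int = 100, seed: int = 0):
        rng = np.random.default_rng(seed)
        scores = []
        for _ in range(n_splits):
            idx = rng.permutation(len(y))
            cal, val = idx[: int(0.7 * len(y))], idx[int(0.7 * len(y)):]
            model = LinearRegression().fit(X[cal], y[cal])
            scores.append(r2_score(y[val], model.predict(X[val])))
        return min(scores), max(scores)   # scores outside this range suggest non-stationarity
    ```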

  1. Optimal partitioning of random programs across two processors

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.

    1986-01-01

    The optimal partitioning of random distributed programs is discussed. It is concluded that the optimal partitioning of a homogeneous random program over a homogeneous distributed system either assigns all modules to a single processor, or distributes the modules as evenly as possible among all processors. The analysis rests heavily on the approximation which equates the expected maximum of a set of independent random variables with the set's maximum expectation. The results are strengthened by providing an approximation-free proof of this result for two processors under general conditions on the module execution time distribution. It is also shown that use of this approximation causes two of the previous central results to be false.

  2. Anomalous diffusion analysis of the lifting events in the event-chain Monte Carlo for the classical XY models

    NASA Astrophysics Data System (ADS)

    Kimura, Kenji; Higuchi, Saburo

    2017-11-01

    We introduce a novel random walk model that emerges in the event-chain Monte Carlo (ECMC) of spin systems. In the ECMC, the lifting variable specifying the spin to be updated changes its value to one of its interacting neighbor spins. This movement can be regarded as a random walk in a random environment with a feedback. We investigate this random walk numerically in the case of the classical XY model in 1, 2, and 3 dimensions to find that it is superdiffusive near the critical point of the underlying spin system. It is suggested that the performance improvement of the ECMC is related to this anomalous behavior.

  3. Instrumental variables estimation of exposure effects on a time-to-event endpoint using structural cumulative survival models.

    PubMed

    Martinussen, Torben; Vansteelandt, Stijn; Tchetgen Tchetgen, Eric J; Zucker, David M

    2017-12-01

    The use of instrumental variables for estimating the effect of an exposure on an outcome is popular in econometrics, and increasingly so in epidemiology. This increasing popularity may be attributed to the natural occurrence of instrumental variables in observational studies that incorporate elements of randomization, either by design or by nature (e.g., random inheritance of genes). Instrumental variables estimation of exposure effects is well established for continuous outcomes and to some extent for binary outcomes. It is, however, largely lacking for time-to-event outcomes because of complications due to censoring and survivorship bias. In this article, we make a novel proposal under a class of structural cumulative survival models which parameterize time-varying effects of a point exposure directly on the scale of the survival function; these models are essentially equivalent with a semi-parametric variant of the instrumental variables additive hazards model. We propose a class of recursive instrumental variable estimators for these exposure effects, and derive their large sample properties along with inferential tools. We examine the performance of the proposed method in simulation studies and illustrate it in a Mendelian randomization study to evaluate the effect of diabetes on mortality using data from the Health and Retirement Study. We further use the proposed method to investigate potential benefit from breast cancer screening on subsequent breast cancer mortality based on the HIP-study. © 2017, The International Biometric Society.

  4. [The reentrant binomial model of nuclear anomalies growth in rhabdomyosarcoma RA-23 cell populations under increasing doze of rare ionizing radiation].

    PubMed

    Alekseeva, N P; Alekseev, A O; Vakhtin, Iu B; Kravtsov, V Iu; Kuzovatov, S N; Skorikova, T I

    2008-01-01

    Distributions of nuclear morphology anomalies in transplantable rhabdomyosarcoma RA-23 cell populations were investigated under the effect of ionizing radiation at doses from 0 to 45 Gy. Internuclear bridges, nuclear protrusions, and dumbbell-shaped nuclei were accepted as morphological anomalies. Empirical distributions of the number of anomalies per 100 nuclei were used. An adequate model was found to be the reentrant binomial distribution: the distribution of a sum of binomial random variables whose number of summands is itself binomially distributed. The means of these random variables were named, accordingly, the internal and external average reentrant components. Their maximum likelihood estimates were obtained, and the statistical properties of these estimates were investigated by means of statistical modelling. It was found that, although the radiation dose correlates equally significantly with the average number of nuclear anomalies, in cell populations two to three cell cycles after irradiation in vivo the dose correlates significantly with the internal average reentrant component, whereas in remote descendants of cell transplants irradiated in vitro it correlates with the external one.
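
    A reentrant binomial variable as described above can be simulated directly: draw a binomial number of summands, then sum that many binomial variables. The parameter values below are arbitrary illustrations, not estimates from the cell data.

    ```python
    # Simulating a reentrant binomial variable: a sum of Binomial(n_in, p_in)
    # variables whose number of summands is itself Binomial(n_out, p_out).
    # Parameter values are arbitrary illustrations, not estimates from the cell data.
    import numpy as np

    rng = np.random.default_rng(0)

    def reentrant_binomial(n_out: int, p_out: float, n_in: int, p_in: float, size: int = 1):
        counts = rng.binomial(n_out, p_out, size=size)   # number of summands per draw
        return np.array([rng.binomial(n_in, p_in, size=k).sum() for k in counts])

    sample = reentrant_binomial(n_out=10, p_out=0.4, n_in=5, p_in=0.2, size=10_000)
    print(sample.mean(), sample.var())
    ```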

  5. Estimating the encounter rate variance in distance sampling

    USGS Publications Warehouse

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. ?? 2008, The International Biometric Society.

  6. Retrocausation Or Extant Indefinite Reality?

    NASA Astrophysics Data System (ADS)

    Houtkooper, Joop M.

    2006-10-01

    The possibility of retrocausation has been considered to explain the occurrence of anomalous phenomena in which the ostensible effects are preceded by their causes. A scrutiny of both experimental methodology and the experimental data is called for. A review of experimental data reveals the existence of such effects to be a serious possibility. The experimental methodology entails some conceptual difficulties, these depending on the underlying assumptions about the effects. A major point is an ambiguity between anomalous acquisition of information and retrocausation in exerted influences. A unifying theory has been proposed, based upon the fundamental randomness of quantum mechanics. Quantum mechanical randomness may be regarded as a tenacious phenomenon, that apparently is only resolved by the human observer of the random variable in question. This has led to the "observational theory" of anomalous phenomena, which is based upon the assumption that the preference of a motivated observer is able to interact with the extant indefinite random variable that is being observed. This observational theory has led to a novel prediction, which has been corroborated in experiments. Moreover, different classes of anomalous phenomena can be explained by the same basic mechanism. This foregoes retroactive causation, but, instead, requires that macroscopic physical variables remain in a state of indefinite reality and thus remain influenceable by mental efforts until these are observed. More work is needed to discover the relevant psychological and neurophysiological variables involved in effective motivated observation. Besides these practicalities, the fundamentals still have some interesting loose ends.

  7. The beta distribution: A statistical model for world cloud cover

    NASA Technical Reports Server (NTRS)

    Falls, L. W.

    1973-01-01

    Much work has been performed in developing empirical global cloud cover models. This investigation was made to determine an underlying theoretical statistical distribution to represent worldwide cloud cover. The beta distribution, with its probability density function, is proposed to represent the variability of this random variable. It is shown that the beta distribution possesses the versatile statistical characteristics necessary to assume the wide variety of shapes exhibited by cloud cover. A total of 160 representative empirical cloud cover distributions were investigated, and the conclusion was reached that this study provides sufficient statistical evidence to accept the beta probability distribution as the underlying model for world cloud cover.
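
    As a concrete illustration of the modelling step, cloud-cover fractions on (0, 1) can be fitted with a two-parameter beta distribution; the data below are synthetic and the fixed location and scale are a common convention, not the original study's procedure.

    ```python
    # Fitting a beta distribution to cloud-cover fractions on (0, 1). The data are
    # synthetic and the fixed location/scale (floc=0, fscale=1) is a common
    # convention, not necessarily the original study's procedure.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    cloud_cover = rng.beta(0.8, 1.6, size=500)   # stand-in for observed cloud-cover fractions

    a, b, loc, scale = stats.beta.fit(cloud_cover, floc=0, fscale=1)
    print(f"fitted shape parameters: alpha = {a:.2f}, beta = {b:.2f}")
    ```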

  8. A random walk model for evaluating clinical trials involving serial observations.

    PubMed

    Hopper, J L; Young, G P

    1988-05-01

    For clinical trials where the variable of interest is ordered and categorical (for example, disease severity, symptom scale), and where measurements are taken at intervals, it might be possible to achieve a greater discrimination between the efficacy of treatments by modelling each patient's progress as a stochastic process. The random walk is a simple, easily interpreted model that can be fitted by maximum likelihood using a maximization routine with inference based on standard likelihood theory. In general the model can allow for randomly censored data, incorporates measured prognostic factors, and inference is conditional on the (possibly non-random) allocation of patients. Tests of fit and of model assumptions are proposed, and application to two therapeutic trials of gastroenterological disorders are presented. The model gave measures of the rate of, and variability in, improvement for patients under different treatments. A small simulation study suggested that the model is more powerful than considering the difference between initial and final scores, even when applied to data generated by a mechanism other than the random walk model assumed in the analysis. It thus provides a useful additional statistical method for evaluating clinical trials.

  9. Distributed Synchronization in Networks of Agent Systems With Nonlinearities and Random Switchings.

    PubMed

    Tang, Yang; Gao, Huijun; Zou, Wei; Kurths, Jürgen

    2013-02-01

    In this paper, the distributed synchronization problem of networks of agent systems with controllers and nonlinearities subject to Bernoulli switchings is investigated. Controllers and adaptive updating laws injected in each vertex of networks depend on the state information of its neighborhood. Three sets of Bernoulli stochastic variables are introduced to describe the occurrence probabilities of distributed adaptive controllers, updating laws and nonlinearities, respectively. By the Lyapunov functions method, we show that the distributed synchronization of networks composed of agent systems with multiple randomly occurring nonlinearities, multiple randomly occurring controllers, and multiple randomly occurring updating laws can be achieved in mean square under certain criteria. The conditions derived in this paper can be solved by semi-definite programming. Moreover, by mathematical analysis, we find that the coupling strength, the probabilities of the Bernoulli stochastic variables, and the form of nonlinearities have great impacts on the convergence speed and the terminal control strength. The synchronization criteria and the observed phenomena are demonstrated by several numerical simulation examples. In addition, the advantage of distributed adaptive controllers over conventional adaptive controllers is illustrated.

  10. Uncertainty in Random Forests: What does it mean in a spatial context?

    NASA Astrophysics Data System (ADS)

    Klump, Jens; Fouedjio, Francky

    2017-04-01

    Geochemical surveys are an important part of exploration for mineral resources and in environmental studies. The samples and chemical analyses are often laborious and difficult to obtain and therefore come at a high cost. As a consequence, these surveys are characterised by datasets with large numbers of variables but relatively few data points when compared to conventional big data problems. With more remote sensing platforms and sensor networks being deployed, large volumes of auxiliary data of the surveyed areas are becoming available. The use of these auxiliary data has the potential to improve the prediction of chemical element concentrations over the whole study area. Kriging is a well established geostatistical method for the prediction of spatial data but requires significant pre-processing and makes some basic assumptions about the underlying distribution of the data. Some machine learning algorithms, on the other hand, may require less data pre-processing and are non-parametric. In this study we used a dataset provided by Kirkwood et al. [1] to explore the potential use of Random Forest in geochemical mapping. We chose Random Forest because it is a well understood machine learning method and has the advantage that it provides us with a measure of uncertainty. By comparing Random Forest to Kriging we found that both methods produced comparable maps of estimated values for our variables of interest. Kriging outperformed Random Forest for variables of interest with relatively strong spatial correlation. The measure of uncertainty provided by Random Forest seems to be quite different to the measure of uncertainty provided by Kriging. In particular, the lack of spatial context can give misleading results in areas without ground truth data. In conclusion, our preliminary results show that the model driven approach in geostatistics gives us more reliable estimates for our target variables than Random Forest for variables with relatively strong spatial correlation. However, in cases of weak spatial correlation Random Forest, as a nonparametric method, may give the better results once we have a better understanding of the meaning of its uncertainty measures in a spatial context. References [1] Kirkwood, C., M. Cave, D. Beamish, S. Grebby, and A. Ferreira (2016), A machine learning approach to geochemical mapping, Journal of Geochemical Exploration, 163, 28-40, doi:10.1016/j.gexplo.2016.05.003.
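
    One simple way to obtain the kind of Random Forest uncertainty measure discussed above is the spread of per-tree predictions. The sketch below uses scikit-learn on synthetic data and illustrates that idea only; it is not the workflow of the cited study.

    ```python
    # Per-tree spread as a simple Random Forest uncertainty proxy (scikit-learn,
    # synthetic data); an illustration of the idea, not the cited study's workflow.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(300, 2))                       # e.g. easting/northing
    y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, 300)

    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
    X_new = rng.uniform(0, 1, size=(10, 2))
    per_tree = np.stack([tree.predict(X_new) for tree in rf.estimators_])
    print(per_tree.mean(axis=0))   # point prediction
    print(per_tree.std(axis=0))    # spread across trees, used as an uncertainty measure
    ```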

  11. Entropy of spatial network ensembles

    NASA Astrophysics Data System (ADS)

    Coon, Justin P.; Dettmann, Carl P.; Georgiou, Orestis

    2018-04-01

    We analyze complexity in spatial network ensembles through the lens of graph entropy. Mathematically, we model a spatial network as a soft random geometric graph, i.e., a graph with two sources of randomness, namely nodes located randomly in space and links formed independently between pairs of nodes with probability given by a specified function (the "pair connection function") of their mutual distance. We consider the general case where randomness arises in node positions as well as pairwise connections (i.e., for a given pair distance, the corresponding edge state is a random variable). Classical random geometric graph and exponential graph models can be recovered in certain limits. We derive a simple bound for the entropy of a spatial network ensemble and calculate the conditional entropy of an ensemble given the node location distribution for hard and soft (probabilistic) pair connection functions. Under this formalism, we derive the connection function that yields maximum entropy under general constraints. Finally, we apply our analytical framework to study two practical examples: ad hoc wireless networks and the US flight network. Through the study of these examples, we illustrate that both exhibit properties that are indicative of nearly maximally entropic ensembles.

  12. Low-contrast lesion detection in tomosynthetic breast imaging using a realistic breast phantom

    NASA Astrophysics Data System (ADS)

    Zhou, Lili; Oldan, Jorge; Fisher, Paul; Gindi, Gene

    2006-03-01

    Tomosynthesis mammography is a potentially valuable technique for detection of breast cancer. In this simulation study, we investigate the efficacy of three different tomographic reconstruction methods, EM, SART and Backprojection, in the context of an especially difficult mammographic detection task. The task is the detection of a very low-contrast mass embedded in very dense fibro-glandular tissue - a clinically useful task for which tomosynthesis may be well suited. The project uses an anatomically realistic 3D digital breast phantom whose normal anatomic variability limits lesion conspicuity. In order to capture anatomical object variability, we generate an ensemble of phantoms, each of which comprises random instances of various breast structures. We construct medium-sized 3D breast phantoms which model random instances of ductal structures, fibrous connective tissue, Cooper's ligaments and power law structural noise for small scale object variability. Random instances of 7-8 mm irregular masses are generated by a 3D random walk algorithm and placed in very dense fibro-glandular tissue. Several other components of the breast phantom are held fixed, i.e. not randomly generated. These include the fixed breast shape and size, nipple structure, fixed lesion location, and a pectoralis muscle. We collect low-dose data using an isocentric tomosynthetic geometry at 11 angles over 50 degrees and add Poisson noise. The data is reconstructed using the three algorithms. Reconstructed slices through the center of the lesion are presented to human observers in a 2AFC (two-alternative-forced-choice) test that measures detectability by computing AUC (area under the ROC curve). The data collected in each simulation includes two sources of variability, that due to the anatomical variability of the phantom and that due to the Poisson data noise. We found that, for this difficult task, the AUC value for EM (0.89) was greater than that for SART (0.83) and Backprojection (0.66).

  13. Transcription, intercellular variability and correlated random walk.

    PubMed

    Müller, Johannes; Kuttler, Christina; Hense, Burkhard A; Zeiser, Stefan; Liebscher, Volkmar

    2008-11-01

    We develop a simple model for the random distribution of a gene product. It is assumed that the only source of variance is due to switching transcription on and off by a random process. Under the condition that the transition rates between on and off are constant we find that the amount of mRNA follows a scaled Beta distribution. Additionally, a simple positive feedback loop is considered. The simplicity of the model allows for an explicit solution also in this setting. These findings in turn allow, e.g., for easy parameter scans. We find that bistable behavior translates into bimodal distributions. These theoretical findings are in line with experimental results.

  14. Missing data and multiple imputation in clinical epidemiological research.

    PubMed

    Pedersen, Alma B; Mikkelsen, Ellen M; Cronin-Fenton, Deirdre; Kristensen, Nickolaj R; Pham, Tra My; Pedersen, Lars; Petersen, Irene

    2017-01-01

    Missing data are ubiquitous in clinical epidemiological research. Individuals with missing data may differ from those with no missing data in terms of the outcome of interest and prognosis in general. Missing data are often categorized into the following three types: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). In clinical epidemiological research, missing data are seldom MCAR. Missing data can constitute considerable challenges in the analyses and interpretation of results and can potentially weaken the validity of results and conclusions. A number of methods have been developed for dealing with missing data. These include complete-case analyses, missing indicator method, single value imputation, and sensitivity analyses incorporating worst-case and best-case scenarios. If applied under the MCAR assumption, some of these methods can provide unbiased but often less precise estimates. Multiple imputation is an alternative method to deal with missing data, which accounts for the uncertainty associated with missing data. Multiple imputation is implemented in most statistical software under the MAR assumption and provides unbiased and valid estimates of associations based on information from the available data. The method affects not only the coefficient estimates for variables with missing data but also the estimates for other variables with no missing data.
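
    A minimal multiple-imputation workflow under the MAR assumption can be sketched as follows: generate several completed datasets with different random seeds, fit the analysis model on each, and pool the estimates. IterativeImputer is one of several available implementations (the abstract does not prescribe a package), and only the point estimates are pooled here; full Rubin's-rules variance pooling is omitted for brevity.

    ```python
    # Minimal multiple-imputation sketch under MAR: several completed datasets,
    # one analysis per dataset, point estimates pooled by averaging (full
    # Rubin's-rules variance pooling omitted for brevity).
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.0, -0.5, 0.2]) + rng.normal(0, 0.3, 200)
    X[rng.random(X.shape) < 0.15] = np.nan                      # introduce missing values

    coefs = []
    for m in range(5):                                          # five imputed datasets
        X_imp = IterativeImputer(random_state=m, sample_posterior=True).fit_transform(X)
        coefs.append(LinearRegression().fit(X_imp, y).coef_)
    print(np.mean(coefs, axis=0))                               # pooled point estimates
    ```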

  15. Missing data and multiple imputation in clinical epidemiological research

    PubMed Central

    Pedersen, Alma B; Mikkelsen, Ellen M; Cronin-Fenton, Deirdre; Kristensen, Nickolaj R; Pham, Tra My; Pedersen, Lars; Petersen, Irene

    2017-01-01

    Missing data are ubiquitous in clinical epidemiological research. Individuals with missing data may differ from those with no missing data in terms of the outcome of interest and prognosis in general. Missing data are often categorized into the following three types: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). In clinical epidemiological research, missing data are seldom MCAR. Missing data can constitute considerable challenges in the analyses and interpretation of results and can potentially weaken the validity of results and conclusions. A number of methods have been developed for dealing with missing data. These include complete-case analyses, missing indicator method, single value imputation, and sensitivity analyses incorporating worst-case and best-case scenarios. If applied under the MCAR assumption, some of these methods can provide unbiased but often less precise estimates. Multiple imputation is an alternative method to deal with missing data, which accounts for the uncertainty associated with missing data. Multiple imputation is implemented in most statistical software under the MAR assumption and provides unbiased and valid estimates of associations based on information from the available data. The method affects not only the coefficient estimates for variables with missing data but also the estimates for other variables with no missing data. PMID:28352203

  16. Fatigue Tests with Random Flight Simulation Loading

    NASA Technical Reports Server (NTRS)

    Schijve, J.

    1972-01-01

    Crack propagation was studied in a full-scale wing structure under different simulated flight conditions. Omission of low-amplitude gust cycles had a small effect on the crack rate. Truncation of the infrequently occurring high-amplitude gust cycles to a lower level had a noticeably accelerating effect on crack growth. The application of fail-safe load (100 percent limit load) effectively stopped subsequent crack growth under resumed flight-simulation loading. In another flight-simulation test series on sheet specimens, the variables studied are the design stress level and the cyclic frequency of the random gust loading. Inflight mean stresses vary from 5.5 to 10.0 kg/sq mm. The effect of the stress level is larger for the 2024 alloy than for the 7075 alloy. Three frequencies were employed: namely, 10 cps, 1 cps, and 0.1 cps. The frequency effect was small. The advantages and limitations of flight-simulation tests are compared with those of alternative test procedures such as constant-amplitude tests, program tests, and random-load tests. Various testing purposes are considered. The variables of flight-simulation tests are listed and their effects are discussed. A proposal is made for performing systematic flight-simulation tests in such a way that the compiled data may be used as a source of reference.

  17. The competing risks Cox model with auxiliary case covariates under weaker missing-at-random cause of failure.

    PubMed

    Nevo, Daniel; Nishihara, Reiko; Ogino, Shuji; Wang, Molin

    2017-08-04

    In the analysis of time-to-event data with multiple causes using a competing risks Cox model, often the cause of failure is unknown for some of the cases. The probability of a missing cause is typically assumed to be independent of the cause given the time of the event and covariates measured before the event occurred. In practice, however, the underlying missing-at-random assumption does not necessarily hold. Motivated by colorectal cancer molecular pathological epidemiology analysis, we develop a method to conduct valid analysis when additional auxiliary variables are available for cases only. We consider a weaker missing-at-random assumption, with missing pattern depending on the observed quantities, which include the auxiliary covariates. We use an informative likelihood approach that will yield consistent estimates even when the underlying model for missing cause of failure is misspecified. The superiority of our method over naive methods in finite samples is demonstrated by simulation study results. We illustrate the use of our method in an analysis of colorectal cancer data from the Nurses' Health Study cohort, where, apparently, the traditional missing-at-random assumption fails to hold.

  18. An Entropy-Based Measure of Dependence between Two Groups of Random Variables. Research Report. ETS RR-07-20

    ERIC Educational Resources Information Center

    Kong, Nan

    2007-01-01

    In multivariate statistics, the linear relationship among random variables has been fully explored in the past. This paper looks into the dependence of one group of random variables on another group of random variables using (conditional) entropy. A new measure, called the K-dependence coefficient or dependence coefficient, is defined using…

  19. Test-retest reliability of jump execution variables using mechanography: a comparison of jump protocols.

    PubMed

    Fitzgerald, John S; Johnson, LuAnn; Tomkinson, Grant; Stein, Jesse; Roemmich, James N

    2018-05-01

    Mechanography during the vertical jump may enhance screening and help determine the mechanistic causes underlying changes in physical performance. Utility of jump mechanography for evaluation is limited by scant test-retest reliability data on force-time variables. This study examined the test-retest reliability of eight jump execution variables assessed from mechanography. Thirty-two women (mean±SD: age 20.8 ± 1.3 yr) and 16 men (age 22.1 ± 1.9 yr) attended a familiarization session and two testing sessions, all one week apart. Participants performed two variations of the squat jump with squat depth self-selected and controlled using a goniometer to 80° knee flexion. Test-retest reliability was quantified as the systematic error (using effect size between jumps), random error (using coefficients of variation), and test-retest correlations (using intra-class correlation coefficients). Overall, jump execution variables demonstrated acceptable reliability, evidenced by small systematic errors (mean±95%CI: 0.2 ± 0.07), moderate random errors (mean±95%CI: 17.8 ± 3.7%), and very strong test-retest correlations (range: 0.73-0.97). Differences in random errors between controlled and self-selected protocols were negligible (mean±95%CI: 1.3 ± 2.3%). Jump execution variables demonstrated acceptable reliability, with no meaningful differences between the controlled and self-selected jump protocols. To simplify testing, a self-selected jump protocol can be used to assess force-time variables with negligible impact on measurement error.
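
    The reliability quantities used above can be computed as in the following sketch: a per-subject coefficient of variation for random error and a one-way intraclass correlation for test-retest consistency. The specific ICC form used in the study may differ from the one shown.

    ```python
    # Test-retest reliability sketch: per-subject coefficient of variation (random
    # error) and a one-way ICC (test-retest consistency). The specific ICC form
    # used in the study may differ from this one.
    import numpy as np

    def reliability(session1: np.ndarray, session2: np.ndarray):
        data = np.column_stack([session1, session2])            # shape (n_subjects, 2)
        n, k = data.shape
        subj_means = data.mean(axis=1)
        grand_mean = data.mean()
        ms_between = k * np.sum((subj_means - grand_mean) ** 2) / (n - 1)
        ms_within = np.sum((data - subj_means[:, None]) ** 2) / (n * (k - 1))
        icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
        cv = np.mean(data.std(axis=1, ddof=1) / subj_means) * 100   # percent
        return icc, cv
    ```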

  20. Experimental Investigations of Non-Stationary Properties In Radiometer Receivers Using Measurements of Multiple Calibration References

    NASA Technical Reports Server (NTRS)

    Racette, Paul; Lang, Roger; Zhang, Zhao-Nan; Zacharias, David; Krebs, Carolyn A. (Technical Monitor)

    2002-01-01

    Radiometers must be periodically calibrated because the receiver response fluctuates. Many techniques exist to correct for the time-varying response of a radiometer receiver. An analytical technique has been developed that uses generalized least squares regression (LSR) to predict the performance of a wide variety of calibration algorithms. The total measurement uncertainty, including the uncertainty of the calibration, can be computed using LSR. The uncertainties of the calibration samples used in the regression are based upon treating the receiver fluctuations as non-stationary processes. Signals originating from the different sources of emission are treated as simultaneously existing random processes. Thus, the radiometer output is a series of samples obtained from these random processes. The samples are treated as random variables, but because the underlying processes are non-stationary the statistics of the samples are treated as non-stationary. The statistics of the calibration samples depend upon the time for which the samples are to be applied. The statistics of the random variables are equated to the mean statistics of the non-stationary processes over the interval defined by the time of the calibration sample and when it is applied. This analysis opens the opportunity for experimental investigation into the underlying properties of receiver non-stationarity through the use of multiple calibration references. In this presentation we will discuss the application of LSR to the analysis of various calibration algorithms, requirements for experimental verification of the theory, and preliminary results from analyzing experimental measurements.

  1. Reinforcement Learning Trees

    PubMed Central

    Zhu, Ruoqing; Zeng, Donglin; Kosorok, Michael R.

    2015-01-01

    In this paper, we introduce a new type of tree-based method, reinforcement learning trees (RLT), which exhibits significantly improved performance over traditional methods such as random forests (Breiman, 2001) under high-dimensional settings. The innovations are three-fold. First, the new method implements reinforcement learning at each selection of a splitting variable during the tree construction processes. By splitting on the variable that brings the greatest future improvement in later splits, rather than choosing the one with largest marginal effect from the immediate split, the constructed tree utilizes the available samples in a more efficient way. Moreover, such an approach enables linear combination cuts at little extra computational cost. Second, we propose a variable muting procedure that progressively eliminates noise variables during the construction of each individual tree. The muting procedure also takes advantage of reinforcement learning and prevents noise variables from being considered in the search for splitting rules, so that towards terminal nodes, where the sample size is small, the splitting rules are still constructed from only strong variables. Last, we investigate asymptotic properties of the proposed method under basic assumptions and discuss rationale in general settings. PMID:26903687

  2. The behaviour of random forest permutation-based variable importance measures under predictor correlation.

    PubMed

    Nicodemus, Kristin K; Malley, James D; Strobl, Carolin; Ziegler, Andreas

    2010-02-27

    Random forests (RF) have been increasingly used in applications such as genome-wide association and microarray studies where predictor correlation is frequently observed. Recent work on permutation-based variable importance measures (VIMs) used in RF has come to apparently contradictory conclusions. We present an extended simulation study to synthesize these results. In the case when both predictor correlation was present and predictors were associated with the outcome (HA), the unconditional RF VIM attributed a higher share of importance to correlated predictors, while under the null hypothesis that no predictors are associated with the outcome (H0) the unconditional RF VIM was unbiased. Conditional VIMs showed a decrease in VIM values for correlated predictors versus the unconditional VIMs under HA and were unbiased under H0. Scaled VIMs were clearly biased under HA and H0. Unconditional unscaled VIMs are a computationally tractable choice for large datasets and are unbiased under the null hypothesis. Whether the observed increased VIMs for correlated predictors may be considered a "bias" - because they do not directly reflect the coefficients in the generating model - or if it is a beneficial attribute of these VIMs is dependent on the application. For example, in genetic association studies, where correlation between markers may help to localize the functionally relevant variant, the increased importance of correlated predictors may be an advantage. On the other hand, we show examples where this increased importance may result in spurious signals.
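
    The qualitative behaviour of an unconditional permutation VIM under predictor correlation can be reproduced with a short simulation. The sketch below uses scikit-learn's permutation_importance rather than the conditional VIM discussed in the study, and the correlation structure and coefficients are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500

# Two correlated predictors (x1, x2) and two independent ones (x3, x4);
# only x1 and x3 appear in the generating model, with equal coefficients.
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + np.sqrt(1 - 0.9**2) * rng.normal(size=n)
x3 = rng.normal(size=n)
x4 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3, x4])
y = x1 + x3 + rng.normal(size=n)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
vim = permutation_importance(rf, X, y, n_repeats=20, random_state=0)

# x2 carries no direct effect, yet it typically receives a non-trivial share of
# importance because it is a near-proxy for x1.
for name, imp in zip(["x1", "x2 (corr. with x1)", "x3", "x4 (noise)"],
                     vim.importances_mean):
    print(f"{name:20s} {imp:6.3f}")
```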

  3. White Matter Microstructural Correlates of Superior Long-term Skill Gained Implicitly under Randomized Practice

    PubMed Central

    Song, Sunbin; Sharma, Nikhil; Buch, Ethan R.

    2012-01-01

    We value skills we have learned intentionally, but equally important are skills acquired incidentally without the ability to describe how or what is learned, referred to as implicit. Randomized practice schedules are superior to grouped schedules for long-term skill gained intentionally, but their relevance for implicit learning is not known. In a parallel design, we studied healthy subjects who learned a motor sequence implicitly under a randomized or grouped practice schedule and obtained diffusion-weighted images to identify white matter microstructural correlates of long-term skill. Randomized practice led to superior long-term skill compared with grouped practice. Whole-brain analyses relating interindividual variability in fractional anisotropy (FA) to long-term skill demonstrated that 1) skill in randomized learners correlated with FA within the corticostriatal tract connecting left sensorimotor cortex to posterior putamen, while 2) skill in grouped learners correlated with FA within the right forceps minor connecting homologous regions of the prefrontal cortex (PFC) and the corticostriatal tract connecting lateral PFC to anterior putamen. These results demonstrate first that randomized practice schedules improve long-term implicit skill more than grouped practice schedules and, second, that the superior skill acquired through randomized practice can be related to white matter microstructure in the sensorimotor corticostriatal network. PMID:21914632

  4. A method for minimum risk portfolio optimization under hybrid uncertainty

    NASA Astrophysics Data System (ADS)

    Egorova, Yu E.; Yazenin, A. V.

    2018-03-01

    In this paper, we investigate a minimum risk portfolio model under hybrid uncertainty when the profitability of financial assets is described by fuzzy random variables. According to Feng, the variance of a portfolio is defined as a crisp value. To aggregate fuzzy information the weakest (drastic) t-norm is used. We construct an equivalent stochastic problem of the minimum risk portfolio model and specify the stochastic penalty method for solving it.

  5. Night-to-Night Sleep Variability in Older Adults With Chronic Insomnia: Mediators and Moderators in a Randomized Controlled Trial of Brief Behavioral Therapy (BBT-I)

    PubMed Central

    Chan, Wai Sze; Williams, Jacob; Dautovich, Natalie D.; McNamara, Joseph P.H.; Stripling, Ashley; Dzierzewski, Joseph M.; Berry, Richard B.; McCoy, Karin J.M.; McCrae, Christina S.

    2017-01-01

    Study Objectives: Sleep variability is a clinically significant variable in understanding and treating insomnia in older adults. The current study examined changes in sleep variability in the course of brief behavioral therapy for insomnia (BBT-I) in older adults who had chronic insomnia. Additionally, the current study examined the mediating mechanisms underlying reductions of sleep variability and the moderating effects of baseline sleep variability on treatment responsiveness. Methods: Sixty-two elderly participants were randomly assigned to either BBT-I or self-monitoring and attention control (SMAC). Sleep was assessed by sleep diaries and actigraphy from baseline to posttreatment and at 3-month follow-up. Mixed models were used to examine changes in sleep variability (within-person standard deviations of weekly sleep parameters) and the hypothesized mediation and moderation effects. Results: Variabilities in sleep diary-assessed sleep onset latency (SOL) and actigraphy-assessed total sleep time (TST) significantly decreased in BBT-I compared to SMAC (Pseudo R2 = .12, .27; P = .018, .008). These effects were mediated by reductions in bedtime and wake time variability and time in bed. Significant time × group × baseline sleep variability interactions on sleep outcomes indicated that participants who had higher baseline sleep variability were more responsive to BBT-I; their actigraphy-assessed TST, SOL, and sleep efficiency improved to a greater degree (Pseudo R2 = .15 to .66; P < .001 to .044). Conclusions: BBT-I is effective in reducing sleep variability in older adults who have chronic insomnia. Increased consistency in bedtime and wake time and decreased time in bed mediate reductions of sleep variability. Baseline sleep variability may serve as a marker of high treatment responsiveness to BBT-I. Clinical Trial Registration: ClinicalTrials.gov, Identifier: NCT02967185 Citation: Chan WS, Williams J, Dautovich ND, McNamara JP, Stripling A, Dzierzewski JM, Berry RB, McCoy KJ, McCrae CS. Night-to-night sleep variability in older adults with chronic insomnia: mediators and moderators in a randomized controlled trial of brief behavioral therapy (BBT-I). J Clin Sleep Med. 2017;13(11):1243–1254. PMID:28992829

  6. A random utility model of delay discounting and its application to people with externalizing psychopathology.

    PubMed

    Dai, Junyi; Gunn, Rachel L; Gerst, Kyle R; Busemeyer, Jerome R; Finn, Peter R

    2016-10-01

    Previous studies have demonstrated that working memory capacity plays a central role in delay discounting in people with externalizing psychopathology. These studies used a hyperbolic discounting model, and its single parameter, a measure of delay discounting, was estimated using the standard method of searching for indifference points between intertemporal options. However, there are several problems with this approach. First, the deterministic perspective on delay discounting underlying the indifference point method might be inappropriate. Second, the estimation procedure using the R2 measure often leads to poor model fit. Third, when parameters are estimated using indifference points only, much of the information collected in a delay discounting decision task is wasted. To overcome these problems, this article proposes a random utility model of delay discounting. The proposed model has 2 parameters, 1 for delay discounting and 1 for choice variability. It was fit to choice data obtained from a recently published data set using both maximum-likelihood and Bayesian parameter estimation. As in previous studies, the delay discounting parameter was significantly associated with both externalizing problems and working memory capacity. Furthermore, choice variability was also found to be significantly associated with both variables. This finding suggests that randomness in decisions may be a mechanism by which externalizing problems and low working memory capacity are associated with poor decision making. The random utility model thus has the advantage of disclosing the role of choice variability, which had been masked by the traditional deterministic model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
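
    A minimal sketch of a random utility discounting model is shown below: a hyperbolic value function combined with a logistic choice rule, fit by maximum likelihood. The choice data, the specific likelihood, and the parameterization are illustrative assumptions and may differ from the model used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical intertemporal choices: immediate amount, delayed amount,
# delay (days), and whether the delayed option was chosen (1) or not (0).
data = np.array([
    [50, 100,   7, 1],
    [80, 100,   7, 0],
    [50, 100,  30, 1],
    [80, 100,  30, 0],
    [30, 100,  90, 1],
    [70, 100,  90, 0],
    [20, 100, 180, 1],
    [60, 100, 180, 0],
], dtype=float)

def neg_log_lik(params, data):
    log_k, log_s = params                    # log-transform keeps k and s positive
    k, s = np.exp(log_k), np.exp(log_s)      # k: discount rate, s: choice sensitivity
    a_now, a_del, delay, y = data.T
    v_del = a_del / (1.0 + k * delay)        # hyperbolic subjective value of the delayed reward
    p_del = 1.0 / (1.0 + np.exp(-s * (v_del - a_now)))   # logistic choice rule
    p_del = np.clip(p_del, 1e-9, 1 - 1e-9)
    return -np.sum(y * np.log(p_del) + (1 - y) * np.log(1 - p_del))

fit = minimize(neg_log_lik, x0=[np.log(0.01), np.log(0.1)], args=(data,))
k_hat, s_hat = np.exp(fit.x)
print(f"discount rate k = {k_hat:.4f} per day, choice sensitivity s = {s_hat:.3f}")
```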

  7. Spike Pattern Structure Influences Synaptic Efficacy Variability under STDP and Synaptic Homeostasis. II: Spike Shuffling Methods on LIF Networks

    PubMed Central

    Bi, Zedong; Zhou, Changsong

    2016-01-01

    Synapses may undergo variable changes during plasticity because of the variability of spike patterns such as temporal stochasticity and spatial randomness. Here, we refer to the variability of synaptic weight changes during plasticity as efficacy variability. In this paper, we investigate how four aspects of spike pattern statistics (i.e., synchronous firing, burstiness/regularity, heterogeneity of rates and heterogeneity of cross-correlations) influence the efficacy variability under pair-wise additive spike-timing dependent plasticity (STDP) and synaptic homeostasis (the mean strength of plastic synapses into a neuron is bounded), by applying spike shuffling methods to spike patterns self-organized by a network of excitatory and inhibitory leaky integrate-and-fire (LIF) neurons. As the decay time scale of the inhibitory synaptic currents increases, the LIF network undergoes a transition from an asynchronous state to a weakly synchronous state and then to a synchronous bursting state. We first shuffle these spike patterns using a variety of methods, each designed to change a specific pattern statistic; we then investigate the change of efficacy variability of the synapses under STDP and synaptic homeostasis, when the neurons in the network fire according to the spike patterns before and after being treated by a shuffling method. In this way, we can understand how the change of pattern statistics may cause the change of efficacy variability. Our results are consistent with those of our previous study which implements spike-generating models on converging motifs. We also find that burstiness/regularity is important in determining the efficacy variability under asynchronous states, while heterogeneity of cross-correlations is the main factor causing efficacy variability when the network moves into synchronous bursting states (the states observed in epilepsy). PMID:27555816

  8. A Geostatistical Scaling Approach for the Generation of Non Gaussian Random Variables and Increments

    NASA Astrophysics Data System (ADS)

    Guadagnini, Alberto; Neuman, Shlomo P.; Riva, Monica; Panzeri, Marco

    2016-04-01

    We address manifestations of non-Gaussian statistical scaling displayed by many variables, Y, and their (spatial or temporal) increments. Evidence of such behavior includes symmetry of increment distributions at all separation distances (or lags) with sharp peaks and heavy tails which tend to decay asymptotically as lag increases. Variables reported to exhibit such distributions include quantities of direct relevance to hydrogeological sciences, e.g. porosity, log permeability, electrical resistivity, soil and sediment texture, sediment transport rate, rainfall, measured and simulated turbulent fluid velocity, and others. No model known to us captures all of the documented statistical scaling behaviors in a unique and consistent manner. We recently proposed a generalized sub-Gaussian model (GSG) which reconciles within a unique theoretical framework the probability distributions of a target variable and its increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. In this context, we demonstrated the feasibility of estimating all key parameters of a GSG model underlying a single realization of Y by analyzing jointly spatial moments of Y data and corresponding increments. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of the corresponding isotropic or anisotropic random field, and explore them on one- and two-dimensional synthetic test cases.

  9. Reliability of performance velocity for jump squats under feedback and nonfeedback conditions.

    PubMed

    Randell, Aaron D; Cronin, John B; Keogh, Justin Wl; Gill, Nicholas D; Pedersen, Murray C

    2011-12-01

    Randell, AD, Cronin, JB, Keogh, JWL, Gill, ND, and Pedersen, MC. Reliability of performance velocity for jump squats under feedback and nonfeedback conditions. J Strength Cond Res 25(12): 3514-3518, 2011. Advancements in the monitoring of kinematic and kinetic variables during resistance training have resulted in the ability to continuously monitor performance and provide feedback during training. If equipment and software can provide reliable instantaneous feedback related to the variable of interest during training, it is thought that this may result in goal-oriented movement tasks that increase the likelihood of transference to on-field performance or at the very least improve the mechanical variable of interest. The purpose of this study was to determine the reliability of performance velocity for jump squats under feedback and nonfeedback conditions over 3 consecutive training sessions. Twenty subjects were randomly allocated to a feedback or nonfeedback group, and each group performed a total of 3 "jump squat" training sessions with the velocity of each repetition measured using a linear position transducer. There was less change in mean velocities between sessions 1-2 and sessions 2-3 (0.07 and 0.02 vs. 0.13 and -0.04 m·s-1), less random variation (TE = 0.06 and 0.06 vs. 0.10 and 0.07 m·s-1) and greater consistency (intraclass correlation coefficient = 0.83 and 0.87 vs. 0.53 and 0.74) between sessions for the feedback condition as compared to the nonfeedback condition. It was concluded that there is approximately a 50-50 probability that the provision of feedback was beneficial to the performance in the squat jump over multiple sessions. It is suggested that this has the potential for increasing transference to on-field performance or at the very least improving the mechanical variable of interest.
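
    The reliability statistics reported here (change in mean, typical error, and intraclass correlation) can be computed as in the sketch below. The velocity data are simulated, and the one-way ICC shown is only one of several possible ICC forms, not necessarily the one used by the authors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mean jump-squat velocities (m/s) for 10 athletes over two sessions.
session1 = rng.normal(2.4, 0.2, size=10)
session2 = session1 + rng.normal(0.05, 0.08, size=10)   # small shift plus noise

diff = session2 - session1
change_in_mean = diff.mean()
typical_error = diff.std(ddof=1) / np.sqrt(2)           # within-subject SD of a single trial

# One-way random-effects ICC for two trials per subject.
scores = np.column_stack([session1, session2])
n, k = scores.shape
grand = scores.mean()
ms_between = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
ms_within = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

print(f"change in mean = {change_in_mean:.3f} m/s, "
      f"TE = {typical_error:.3f} m/s, ICC = {icc:.2f}")
```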

  10. Evidence of Deterministic Components in the Apparent Randomness of GRBs: Clues of a Chaotic Dynamic

    PubMed Central

    Greco, G.; Rosa, R.; Beskin, G.; Karpov, S.; Romano, L.; Guarnieri, A.; Bartolini, C.; Bedogni, R.

    2011-01-01

    Prompt γ-ray emissions from gamma-ray bursts (GRBs) exhibit a vast range of extremely complex temporal structures with typical variability time-scales that are strikingly short – as fast as milliseconds. This work aims to investigate the apparent randomness of the GRB time profiles making extensive use of nonlinear techniques combining the advanced spectral method of the Singular Spectrum Analysis (SSA) with the classical tools provided by the Chaos Theory. Despite their morphological complexity, we detect evidence of a non-stochastic short-term variability during the overall burst duration – seemingly consistent with a chaotic behavior. The phase space portrait of such variability shows the existence of a well-defined strange attractor underlying the erratic prompt emission structures. This scenario can shed new light on the ultra-relativistic processes believed to take place in GRB explosions and usually associated with the birth of a fast-spinning magnetar or accretion of matter onto a newly formed black hole. PMID:22355609

  12. Under What Assumptions Do Site-by-Treatment Instruments Identify Average Causal Effects?

    ERIC Educational Resources Information Center

    Reardon, Sean F.; Raudenbush, Stephen W.

    2013-01-01

    The increasing availability of data from multi-site randomized trials provides a potential opportunity to use instrumental variables methods to study the effects of multiple hypothesized mediators of the effect of a treatment. We derive nine assumptions needed to identify the effects of multiple mediators when using site-by-treatment interactions…

  13. On the null distribution of Bayes factors in linear regression

    USDA-ARS?s Scientific Manuscript database

    We show that under the null, the 2 log (Bayes factor) is asymptotically distributed as a weighted sum of chi-squared random variables with a shifted mean. This claim holds for Bayesian multi-linear regression with a family of conjugate priors, namely, the normal-inverse-gamma prior, the g-prior, and...

  14. Evaluation of a Class of Simple and Effective Uncertainty Methods for Sparse Samples of Random Variables and Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Vicente; Bonney, Matthew; Schroeder, Benjamin

    When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: central 95% of response; and 10^-4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depends on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large data base and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.

  15. Performance of friction dampers in geometric mistuned bladed disk assembly subjected to random excitations

    NASA Astrophysics Data System (ADS)

    Cha, Douksoon

    2018-07-01

    In this study, the performance of friction dampers of a geometric mistuned bladed disk assembly is examined under random excitations. The results are represented by non-dimensional variables. It is shown that the performance of the blade-to-blade damper can deteriorate when the correlated narrow band excitations have a dominant frequency near the 1st natural frequency of the bladed disk assembly. Based on a simple model of a geometric mistuned bladed disk assembly, the analytical technique shows an efficient way to design friction dampers.

  16. Purposeful Variable Selection and Stratification to Impute Missing FAST Data in Trauma Research

    PubMed Central

    Fuchs, Paul A.; del Junco, Deborah J.; Fox, Erin E.; Holcomb, John B.; Rahbar, Mohammad H.; Wade, Charles A.; Alarcon, Louis H.; Brasel, Karen J.; Bulger, Eileen M.; Cohen, Mitchell J.; Myers, John G.; Muskat, Peter; Phelan, Herb A.; Schreiber, Martin A.; Cotton, Bryan A.

    2013-01-01

    Background The Focused Assessment with Sonography for Trauma (FAST) exam is an important variable in many retrospective trauma studies. The purpose of this study was to devise an imputation method to overcome missing data for the FAST exam. Due to variability in patients’ injuries and trauma care, these data are unlikely to be missing completely at random (MCAR), raising concern for validity when analyses exclude patients with missing values. Methods Imputation was conducted under a less restrictive, more plausible missing at random (MAR) assumption. Patients with missing FAST exams had available data on alternate, clinically relevant elements that were strongly associated with FAST results in complete cases, especially when considered jointly. Subjects with missing data (32.7%) were divided into eight mutually exclusive groups based on selected variables that both described the injury and were associated with missing FAST values. Additional variables were selected within each group to classify missing FAST values as positive or negative, and correct FAST exam classification based on these variables was determined for patients with non-missing FAST values. Results Severe head/neck injury (odds ratio, OR=2.04), severe extremity injury (OR=4.03), severe abdominal injury (OR=1.94), no injury (OR=1.94), other abdominal injury (OR=0.47), other head/neck injury (OR=0.57) and other extremity injury (OR=0.45) groups had significant ORs for missing data; the other group odds ratio was not significant (OR=0.84). All 407 missing FAST values were imputed, with 109 classified as positive. Correct classification of non-missing FAST results using the alternate variables was 87.2%. Conclusions Purposeful imputation for missing FAST exams based on interactions among selected variables assessed by simple stratification may be a useful adjunct to sensitivity analysis in the evaluation of imputation strategies under different missing data mechanisms. This approach has the potential for widespread application in clinical and translational research and validation is warranted. Level of Evidence Level II Prognostic or Epidemiological PMID:23778515

  17. Exact Markov chains versus diffusion theory for haploid random mating.

    PubMed

    Tyvand, Peder A; Thorvaldsen, Steinar

    2010-05-01

    Exact discrete Markov chains are applied to the Wright-Fisher model and the Moran model of haploid random mating. Selection and mutations are neglected. At each discrete value of time t there is a given number n of diploid monoecious organisms. The evolution of the population distribution is given in diffusion variables, to compare the two models of random mating with their common diffusion limit. Only the Moran model converges uniformly to the diffusion limit near the boundary. The Wright-Fisher model allows the population size to change with the generations. Diffusion theory tends to under-predict the loss of genetic information when a population enters a bottleneck. 2010 Elsevier Inc. All rights reserved.
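
    A minimal Wright-Fisher simulation illustrates the kind of comparison made here: the decay of expected heterozygosity under pure drift versus the discrete-generation prediction. The population size, initial frequency, and neglect of selection and mutation follow the simplified setting of the abstract; this is not the authors' exact Markov-chain analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 50          # diploid population size (2N gene copies)
p0 = 0.5        # initial allele frequency
T = 100         # generations
reps = 2000     # replicate populations

p = np.full(reps, p0)
het = np.empty(T)
for t in range(T):
    # Wright-Fisher sampling: next generation's allele count is Binomial(2N, p).
    p = rng.binomial(2 * N, p) / (2 * N)
    het[t] = np.mean(2 * p * (1 - p))       # mean heterozygosity across replicates

# Discrete-generation prediction for neutral drift: H_t = H_0 * (1 - 1/(2N))^t.
predicted = 2 * p0 * (1 - p0) * (1 - 1 / (2 * N)) ** np.arange(1, T + 1)
print(f"simulated H_100 = {het[-1]:.4f}, predicted H_100 = {predicted[-1]:.4f}")
```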

  18. The influence of water versus carbohydrate-electrolyte hydration on blood components during a 16-km military march.

    PubMed

    de Carvalho, Moisés Vieira; Marins, João Carlos Bouzas; Silami-Garcia, Emerson

    2007-01-01

    The purpose of this study was to identify and to compare the effects of ingesting liquids during a 16-km military march under moderate environmental conditions. Twenty-six volunteer male subjects were randomly divided into two groups. Group GW received water (n=12), and group GP received an electrolytic carbohydrate solution (n=14). Blood and urine samples were obtained immediately before and after the march. No significant differences between the drinks were found for any of the measured variables. However, significant differences (p < 0.05) were observed when comparing variables before and after exercise. The variables included sodium, hematocrit, red blood cell, hemoglobin, and lactic acid levels and body weight (group GW) and sodium, potassium, hematocrit, red blood cell, hemoglobin, and lactic acid levels (group GP). Under the environmental conditions and hydration procedures applied, the results of this study showed similarities in the behavior of the variables, regardless of the kind of beverage consumed.

  19. Resolving the Conflict Between Associative Overdominance and Background Selection

    PubMed Central

    Zhao, Lei; Charlesworth, Brian

    2016-01-01

    In small populations, genetic linkage between a polymorphic neutral locus and loci subject to selection, either against partially recessive mutations or in favor of heterozygotes, may result in an apparent selective advantage to heterozygotes at the neutral locus (associative overdominance) and a retardation of the rate of loss of variability by genetic drift at this locus. In large populations, selection against deleterious mutations has previously been shown to reduce variability at linked neutral loci (background selection). We describe analytical, numerical, and simulation studies that shed light on the conditions under which retardation vs. acceleration of loss of variability occurs at a neutral locus linked to a locus under selection. We consider a finite, randomly mating population initiated from an infinite population in equilibrium at a locus under selection. With mutation and selection, retardation occurs only when S, the product of twice the effective population size and the selection coefficient, is of order 1. With S >> 1, background selection always causes an acceleration of loss of variability. Apparent heterozygote advantage at the neutral locus is, however, always observed when mutations are partially recessive, even if there is an accelerated rate of loss of variability. With heterozygote advantage at the selected locus, loss of variability is nearly always retarded. The results shed light on experiments on the loss of variability at marker loci in laboratory populations and on the results of computer simulations of the effects of multiple selected loci on neutral variability. PMID:27182952

  20. Under-sampling trajectory design for compressed sensing based DCE-MRI.

    PubMed

    Liu, Duan-duan; Liang, Dong; Zhang, Na; Liu, Xin; Zhang, Yuan-ting

    2013-01-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) needs high temporal and spatial resolution to accurately estimate quantitative parameters and characterize tumor vasculature. Compressed Sensing (CS) has the potential to provide both simultaneously. However, the randomness in a CS under-sampling trajectory designed using the traditional variable density (VD) scheme may translate to uncertainty in kinetic parameter estimation when high reduction factors are used. Therefore, accurate parameter estimation using the VD scheme usually needs multiple adjustments of the parameters of the Probability Density Function (PDF), and multiple reconstructions even with a fixed PDF, which is inapplicable for DCE-MRI. In this paper, an under-sampling trajectory design which is robust to changes in the PDF parameters and to the randomness with a fixed PDF is studied. The strategy is to adaptively segment k-space into low- and high-frequency domains, and only apply the VD scheme in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared to the VD design.
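
    A one-dimensional sketch of the general idea, with a fully sampled low-frequency block and a variable-density random pattern outside it, is given below. The polynomial PDF, the fixed (rather than adaptive) split, and all numeric parameters are illustrative assumptions, not the scheme proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 256                 # phase-encode lines
reduction = 4           # target acceleration factor
center_frac = 0.08      # fraction of the k-space centre that is fully sampled

k = np.arange(n) - n // 2
center = np.abs(k) <= int(center_frac * n / 2)

# Polynomial variable-density PDF outside the centre, scaled so that the expected
# number of sampled lines matches n / reduction.
pdf = (1.0 - np.abs(k) / (n / 2)) ** 3
pdf[center] = 0.0
n_outer = n // reduction - center.sum()
pdf = np.minimum(pdf * n_outer / pdf.sum(), 1.0)

mask = center | (rng.random(n) < pdf)
print(f"sampled {mask.sum()} of {n} lines (effective R = {n / mask.sum():.2f})")
```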

  1. Scenario generation for stochastic optimization problems via the sparse grid method

    DOE PAGES

    Chen, Michael; Mehrotra, Sanjay; Papp, David

    2015-04-19

    We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, it is indicated that the method scales well with the dimension of the distribution--especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.

  2. Performance of time-varying predictors in multilevel models under an assumption of fixed or random effects.

    PubMed

    Baird, Rachel; Maxwell, Scott E

    2016-06-01

    Time-varying predictors in multilevel models are a useful tool for longitudinal research, whether they are the research variable of interest or they are controlling for variance to allow greater power for other variables. However, standard recommendations to fix the effect of time-varying predictors may make an assumption that is unlikely to hold in reality and may influence results. A simulation study illustrates that treating the time-varying predictor as fixed may allow analyses to converge, but the analyses have poor coverage of the true fixed effect when the time-varying predictor has a random effect in reality. A second simulation study shows that treating the time-varying predictor as random may have poor convergence, except when allowing negative variance estimates. Although negative variance estimates are uninterpretable, results of the simulation show that estimates of the fixed effect of the time-varying predictor are as accurate for these cases as for cases with positive variance estimates, and that treating the time-varying predictor as random and allowing negative variance estimates performs well whether the time-varying predictor is fixed or random in reality. Because of the difficulty of interpreting negative variance estimates, 2 procedures are suggested for selection between fixed-effect and random-effect models: comparing between fixed-effect and constrained random-effect models with a likelihood ratio test or fitting a fixed-effect model when an unconstrained random-effect model produces negative variance estimates. The performance of these 2 procedures is compared. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. Quantum probabilistic logic programming

    NASA Astrophysics Data System (ADS)

    Balu, Radhakrishnan

    2015-05-01

    We describe a quantum mechanics based logic programming language that supports Horn clauses, random variables, and covariance matrices to express and solve problems in probabilistic logic. The Horn clauses of the language wrap random variables, including infinite-valued ones, to express probability distributions and statistical correlations, a powerful feature for capturing relationships between distributions that are not independent. The expressive power of the language is based on a mechanism to implement statistical ensembles and to solve the underlying SAT instances using quantum mechanical machinery. We exploit the fact that classical random variables have quantum decompositions to build the Horn clauses. We establish the semantics of the language in a rigorous fashion by considering an existing probabilistic logic language called PRISM with classical probability measures defined on the Herbrand base and extending it to the quantum context. In the classical case H-interpretations form the sample space and probability measures defined on them lead to a consistent definition of probabilities for well-formed formulae. In the quantum counterpart, we define probability amplitudes on H-interpretations facilitating the model generations and verifications via quantum mechanical superpositions and entanglements. We cast the well-formed formulae of the language as quantum mechanical observables thus providing an elegant interpretation for their probabilities. We discuss several examples to combine statistical ensembles and predicates of first order logic to reason with situations involving uncertainty.

  4. On the minimum of independent geometrically distributed random variables

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David

    1994-01-01

    The expectations E(X(1)), E(Z(1)), and E(Y(1)) of the minimum of n independent geometric, modified geometric, or exponential random variables with matching expectations differ. We show how this is accounted for by stochastic variability and how E(X(1))/E(Y(1)) equals the expected number of ties at the minimum for the geometric random variables. We then introduce the 'shifted geometric distribution' and show that there is a unique value of the shift for which the individual shifted geometric and exponential random variables match expectations both individually and in their minimums.
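
    The equality between the ratio of expected minima and the expected number of ties can be checked numerically; the sketch below uses an arbitrary choice of n and p and compares the simulated quantities with the closed-form values.

```python
import numpy as np

rng = np.random.default_rng(4)

n, p = 5, 0.2                      # five i.i.d. geometric(p) variables on {1, 2, ...}
lam = p                            # exponential rate giving the same mean, 1/p
reps = 200_000

X = rng.geometric(p, size=(reps, n))
Y = rng.exponential(1 / lam, size=(reps, n))

E_min_geo = X.min(axis=1).mean()
E_min_exp = Y.min(axis=1).mean()
ties = (X == X.min(axis=1, keepdims=True)).sum(axis=1).mean()

print(f"E[min geometric]   ~ {E_min_geo:.3f} (exact {1 / (1 - (1 - p) ** n):.3f})")
print(f"E[min exponential] ~ {E_min_exp:.3f} (exact {1 / (n * lam):.3f})")
print(f"ratio ~ {E_min_geo / E_min_exp:.3f}, "
      f"expected ties at the minimum ~ {ties:.3f}")
```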

  5. Students' Misconceptions about Random Variables

    ERIC Educational Resources Information Center

    Kachapova, Farida; Kachapov, Ilias

    2012-01-01

    This article describes some misconceptions about random variables and related counter-examples, and makes suggestions about teaching initial topics on random variables in general form instead of doing it separately for discrete and continuous cases. The focus is on post-calculus probability courses. (Contains 2 figures.)

  6. Leveraging prognostic baseline variables to gain precision in randomized trials

    PubMed Central

    Colantuoni, Elizabeth; Rosenblum, Michael

    2015-01-01

    We focus on estimating the average treatment effect in a randomized trial. If baseline variables are correlated with the outcome, then appropriately adjusting for these variables can improve precision. An example is the analysis of covariance (ANCOVA) estimator, which applies when the outcome is continuous, the quantity of interest is the difference in mean outcomes comparing treatment versus control, and a linear model with only main effects is used. ANCOVA is guaranteed to be at least as precise as the standard unadjusted estimator, asymptotically, under no parametric model assumptions and also is locally semiparametric efficient. Recently, several estimators have been developed that extend these desirable properties to more general settings that allow any real-valued outcome (e.g., binary or count), contrasts other than the difference in mean outcomes (such as the relative risk), and estimators based on a large class of generalized linear models (including logistic regression). To the best of our knowledge, we give the first simulation study in the context of randomized trials that compares these estimators. Furthermore, our simulations are not based on parametric models; instead, our simulations are based on resampling data from completed randomized trials in stroke and HIV in order to assess estimator performance in realistic scenarios. We provide practical guidance on when these estimators are likely to provide substantial precision gains and describe a quick assessment method that allows clinical investigators to determine whether these estimators could be useful in their specific trial contexts. PMID:25872751
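
    A toy simulation of the ANCOVA estimator illustrates the precision argument. The baseline covariate, treatment effect, and sample size below are invented, and the sketch reports only point estimates; repeating it over many replications would show the variance reduction relative to the unadjusted estimator.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400

# Simulated trial: baseline covariate W is prognostic for the outcome Y,
# treatment A is randomized independently of W, and the true effect is 1.0.
W = rng.normal(size=n)
A = rng.integers(0, 2, size=n)
Y = 1.0 * A + 2.0 * W + rng.normal(size=n)

# Unadjusted estimator: difference in mean outcomes.
unadjusted = Y[A == 1].mean() - Y[A == 0].mean()

# ANCOVA: regress Y on treatment and the centred baseline covariate; the
# coefficient on A targets the same effect but with smaller sampling variance.
X = np.column_stack([np.ones(n), A, W - W.mean()])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)

print(f"unadjusted = {unadjusted:.3f}, ANCOVA-adjusted = {beta[1]:.3f} (truth = 1.0)")
```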

  7. Evaluation of variable selection methods for random forests and omics data sets.

    PubMed

    Degenhardt, Frauke; Seifert, Stephan; Szymczak, Silke

    2017-10-16

    Machine learning methods and in particular random forests are promising approaches for prediction based on high dimensional omics data sets. They provide variable importance measures to rank predictors according to their predictive power. If building a prediction model is the main goal of a study, often a minimal set of variables with good prediction performance is selected. However, if the objective is the identification of involved variables to find active networks and pathways, approaches that aim to select all relevant variables should be preferred. We evaluated several variable selection procedures based on simulated data as well as publicly available experimental methylation and gene expression data. Our comparison included the Boruta algorithm, the Vita method, recurrent relative variable importance, a permutation approach and its parametric variant (Altmann) as well as recursive feature elimination (RFE). In our simulation studies, Boruta was the most powerful approach, followed closely by the Vita method. Both approaches demonstrated similar stability in variable selection, while Vita was the most robust approach under a pure null model without any predictor variables related to the outcome. In the analysis of the different experimental data sets, Vita demonstrated slightly better stability in variable selection and was less computationally intensive than Boruta. In conclusion, we recommend the Boruta and Vita approaches for the analysis of high-dimensional data sets. Vita is considerably faster than Boruta and thus more suitable for large data sets, but only Boruta can also be applied in low-dimensional settings. © The Author 2017. Published by Oxford University Press.
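
    The shadow-feature idea behind Boruta can be sketched without the dedicated package: permuted copies of the predictors serve as an importance baseline, and a real predictor is kept if it repeatedly beats the best shadow. The simulated data, thresholds, and iteration counts below are illustrative and simpler than the actual Boruta algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
n, p_rel, p_noise = 300, 5, 45

# Simulated omics-like data: 5 relevant and 45 noise predictors.
X = rng.normal(size=(n, p_rel + p_noise))
y = (X[:, :p_rel].sum(axis=1) + rng.normal(size=n) > 0).astype(int)

n_iter = 10
hits = np.zeros(X.shape[1])
for _ in range(n_iter):
    # Shadow features: column-wise permuted copies, unrelated to y by construction.
    shadow = rng.permuted(X, axis=0)
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(np.hstack([X, shadow]), y)
    imp = rf.feature_importances_
    real, sham = imp[:X.shape[1]], imp[X.shape[1]:]
    hits += real > sham.max()        # a hit: a real feature beats the best shadow

selected = np.where(hits / n_iter > 0.5)[0]
print("selected predictors:", selected)   # ideally indices 0..4
```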

  8. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

    ERIC Educational Resources Information Center

    Li, Tiandong

    2012-01-01

    In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…

  9. The Impact of Eliminating Extraneous Sound and Light on Students' Achievement: An Empirical Study

    ERIC Educational Resources Information Center

    Mangipudy, Rajarajeswari

    2010-01-01

    The impact of eliminating extraneous sound and light on students' achievement was investigated under four conditions: Light and Sound controlled, Sound Only controlled, Light Only controlled and neither Light nor Sound controlled. Group, age and gender were the control variables. Four randomly selected groups of high school freshmen students with…

  10. Hot-spot model for accretion disc variability as random process. II. Mathematics of the power-spectrum break frequency

    NASA Astrophysics Data System (ADS)

    Pecháček, T.; Goosmann, R. W.; Karas, V.; Czerny, B.; Dovčiak, M.

    2013-08-01

    Context. We study some general properties of accretion disc variability in the context of stationary random processes. In particular, we are interested in mathematical constraints that can be imposed on the functional form of the Fourier power-spectrum density (PSD) that exhibits a multiply broken shape and several local maxima. Aims: We develop a methodology for determining the regions of the model parameter space that can in principle reproduce a PSD shape with a given number and position of local peaks and breaks of the PSD slope. Given the vast space of possible parameters, it is an important requirement that the method is fast in estimating the PSD shape for a given parameter set of the model. Methods: We generated and discuss the theoretical PSD profiles of a shot-noise-type random process with exponentially decaying flares. Then we determined conditions under which one, two, or more breaks or local maxima occur in the PSD. We calculated positions of these features and determined the changing slope of the model PSD. Furthermore, we considered the influence of the modulation by the orbital motion for a variability pattern assumed to result from an orbiting-spot model. Results: We suggest that our general methodology can be useful for describing non-monotonic PSD profiles (such as the trend seen, on different scales, in exemplary cases of the high-mass X-ray binary Cygnus X-1 and the narrow-line Seyfert galaxy Ark 564). We adopt a model where these power spectra are reproduced as a superposition of several Lorentzians with varying amplitudes in the X-ray-band light curve. Our general approach can help in constraining the model parameters and in determining which parts of the parameter space are accessible under various circumstances.
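
    The building block of such PSD models, a shot-noise process of exponentially decaying flares, has a Lorentzian power spectrum; superposing components with different time scales produces the multiply broken shapes discussed here. The rates and decay times in the sketch are arbitrary illustrative values.

```python
import numpy as np

# PSD of a shot-noise process built from exponentially decaying flares with decay
# time tau and mean flare rate lam: a Lorentzian, flat below and ~f^-2 above the
# break at f_b = 1 / (2 * pi * tau).
def lorentzian_psd(f, lam, tau):
    return lam * tau**2 / (1.0 + (2.0 * np.pi * f * tau) ** 2)

f = np.logspace(-4, 1, 2000)                       # Hz
# Superposing two flare populations with different time scales yields a doubly
# broken PSD with a local flattening between the breaks.
psd = lorentzian_psd(f, lam=0.1, tau=200.0) + lorentzian_psd(f, lam=5.0, tau=2.0)

for tau in (200.0, 2.0):
    print(f"tau = {tau:6.1f} s -> break frequency ~ {1 / (2 * np.pi * tau):.4f} Hz")

# Local log-log slope, which runs from ~0 through ~-2 and back as f crosses the breaks.
slope = np.gradient(np.log10(psd), np.log10(f))
print("slope at f = 1e-3, 1e-2, 1e-1 Hz:",
      np.round(np.interp([1e-3, 1e-2, 1e-1], f, slope), 2))
```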

  11. The effect of newly induced mutations on the fitness of genotypes and populations of yeast (Saccharomyces cerevisiae).

    PubMed

    Orthen, E; Lange, P; Wöhrmann, K

    1984-12-01

    This paper analyses the fate of artificially induced mutations and their importance to the fitness of populations of the yeast, Saccharomyces cerevisiae, an increasingly important model organism in population genetics. Diploid strains, treated with UV and EMS, were cultured asexually for approximately 540 generations and under conditions where the asexual growth was interrupted by a sexual phase. Growth rates of 100 randomly sampled diploid clones were estimated at the beginning and at the end of the experiment. After the induction of sporulation the growth rates of 100 randomly sampled spores were measured. UV and EMS treatment decreases the average growth rate of the clones significantly but increases the variability in comparison to the untreated control. After selection over approximately 540 generations, variability in growth rates was reduced to that of the untreated control. No increase in mean population fitness was observed. However, the results show that after selection there still exists a large amount of hidden genetic variability in the populations which is revealed when the clones are cultivated in environments other than those in which selection took place. A sexual phase increased the reduction of the induced variability.

  12. A Multivariate Randomization Test of Association Applied to Cognitive Test Results

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert; Beard, Bettina

    2009-01-01

    Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
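
    A sketch of this kind of randomization test is given below: the largest eigenvalue of the correlation matrix is the criterion, and k-1 of the variables are independently re-ordered to build the null distribution. The simulated test scores and the number of permutations are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated scores of n subjects on k correlated cognitive tests (illustrative).
n, k = 60, 4
latent = rng.normal(size=(n, 1))
scores = 0.6 * latent + rng.normal(size=(n, k))

def largest_eig(data):
    return np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)).max()

observed = largest_eig(scores)

# Randomization: independently re-order k-1 of the variables, destroying any
# association while preserving each variable's marginal distribution.
n_perm = 2000
null = np.empty(n_perm)
perm = scores.copy()
for b in range(n_perm):
    for j in range(1, k):
        perm[:, j] = rng.permutation(scores[:, j])
    null[b] = largest_eig(perm)

p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
print(f"largest eigenvalue = {observed:.3f}, randomization p = {p_value:.4f}")
```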

  13. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    PubMed

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling , or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

  14. Generation of dense plume fingers in saturated-unsaturated homogeneous porous media

    NASA Astrophysics Data System (ADS)

    Cremer, Clemens J. M.; Graf, Thomas

    2015-02-01

    Flow under variable-density conditions is widespread, occurring in geothermal reservoirs, at waste disposal sites or due to saltwater intrusion. The migration of dense plumes typically results in the formation of vertical plume fingers which are known to be triggered by material heterogeneity or by variations in source concentration that causes the density variation. Using a numerical groundwater model, six perturbation methods are tested under saturated and unsaturated flow conditions to mimic heterogeneity and concentration variations on the pore scale in order to realistically generate dense fingers. A laboratory-scale sand tank experiment is numerically simulated, and the perturbation methods are evaluated by comparing plume fingers obtained from the laboratory experiment with numerically simulated fingers. Dense plume fingering for saturated flow can best be reproduced with a spatially random, time-constant perturbation of the solute source. For unsaturated flow, a spatially and temporally random noise of solute concentration or a random conductivity field adequately simulate plume fingering.

  16. Parameter identification using a creeping-random-search algorithm

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.

    1971-01-01

    A creeping-random-search algorithm is applied to different types of problems in the field of parameter identification. The studies are intended to demonstrate that a random-search algorithm can be applied successfully to these various problems, which often cannot be handled by conventional deterministic methods, and, also, to introduce methods that speed convergence to an extremal of the problem under investigation. Six two-parameter identification problems with analytic solutions are solved, and two application problems are discussed in some detail. Results of the study show that a modified version of the basic creeping-random-search algorithm chosen does speed convergence in comparison with the unmodified version. The results also show that the algorithm can successfully solve problems that contain limits on state or control variables, inequality constraints (both independent and dependent, and linear and nonlinear), or stochastic models.
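
    A generic creeping-random-search sketch is shown below; it accepts only improving Gaussian steps and shrinks the search radius after repeated failures. The step-control rules and the Rosenbrock test function are illustrative choices, not the specific modified algorithm of the report.

```python
import numpy as np

rng = np.random.default_rng(8)

def rosenbrock(x):
    return (1 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

def creeping_random_search(f, x0, sigma0=1.0, shrink=0.5, grow=1.2,
                           max_iter=5000, sigma_min=1e-8):
    """Creep from the current best point with Gaussian steps; shrink the step
    size after repeated failures and enlarge it slightly after a success."""
    x = np.asarray(x0, dtype=float)
    fx, sigma, fails = f(x), sigma0, 0
    for _ in range(max_iter):
        candidate = x + sigma * rng.normal(size=x.shape)
        fc = f(candidate)
        if fc < fx:                          # accept only improvements
            x, fx, sigma, fails = candidate, fc, sigma * grow, 0
        else:
            fails += 1
            if fails >= 20:                  # creep: reduce the search radius
                sigma, fails = sigma * shrink, 0
                if sigma < sigma_min:
                    break
    return x, fx

x_best, f_best = creeping_random_search(rosenbrock, x0=[-1.5, 2.0])
print("best x =", np.round(x_best, 4), " f(x) =", f_best)
```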

  17. Nonlinear Estimation of Discrete-Time Signals Under Random Observation Delay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caballero-Aguila, R.; Jimenez-Lopez, J. D.; Hermoso-Carazo, A.

    2008-11-06

    This paper presents an approximation to the nonlinear least-squares estimation problem of discrete-time stochastic signals using nonlinear observations with additive white noise which can be randomly delayed by one sampling time. The observation delay is modelled by a sequence of independent Bernoulli random variables whose values, zero or one, indicate that the real observation arrives on time or it is delayed and, hence, the available measurement to estimate the signal is not up-to-date. Assuming that the state-space model generating the signal is unknown and only the covariance functions of the processes involved in the observation equation are ready for use, a filtering algorithm based on linear approximations of the real observations is proposed.

  18. Effect of Arrangement of Stick Figures on Estimates of Proportion in Risk Graphics

    PubMed Central

    Ancker, Jessica S.; Weber, Elke U.; Kukafka, Rita

    2017-01-01

    Background Health risks are sometimes illustrated with stick figures, with a certain proportion colored to indicate they are affected by the disease. Perception of these graphics may be affected by whether the affected stick figures are scattered randomly throughout the group or arranged in a block. Objective To assess the effects of stick-figure arrangement on first impressions of estimates of proportion, under a 10-s deadline. Design Questionnaire. Participants and Setting Respondents recruited online (n = 100) or in waiting rooms at an urban hospital (n = 65). Intervention Participants were asked to estimate the proportion represented in 6 unlabeled graphics, half randomly arranged and half sequentially arranged. Measurements Estimated proportions. Results Although average estimates were fairly good, the variability of estimates was high. Overestimates of random graphics were larger than overestimates of sequential ones, except when the proportion was near 50%; variability was also higher with random graphics. Although the average inaccuracy was modest, it was large enough that more than one quarter of respondents confused 2 graphics depicting proportions that differed by 11 percentage points. Low numeracy and educational level were associated with inaccuracy. Limitations Participants estimated proportions but did not report perceived risk. Conclusions Randomly arranged arrays of stick figures should be used with care because viewers’ ability to estimate the proportion in these graphics is so poor that moderate differences between risks may not be visible. In addition, random arrangements may create an initial impression that proportions, especially large ones, are larger than they are. PMID:20671209

  19. Solute Concentration at a Pumping Well in Non-Gaussian Random Aquifers under Time-Varying Operational Schedules

    NASA Astrophysics Data System (ADS)

    Libera, A.; de Barros, F.; Riva, M.; Guadagnini, A.

    2016-12-01

    Managing contaminated groundwater systems is an arduous task for multiple reasons. First, subsurface hydraulic properties are heterogeneous and the high costs associated with site characterization lead to data scarcity (therefore, model predictions are uncertain). Second, it is common for water agencies to schedule groundwater extraction through a temporal sequence of pumping rates to maximize the benefits to anthropogenic activities and minimize the environmental footprint of the withdrawal operations. The temporal variability in pumping rates and aquifer heterogeneity affect dilution rates of contaminant plumes and chemical concentration breakthrough curves (BTCs) at the well. While contaminant transport under steady-state pumping is widely studied, the manner in which a given time-varying pumping schedule affects contaminant plume behavior is tackled only marginally. At the same time, most studies focus on the impact of Gaussian random hydraulic conductivity (K) fields on transport. Here, we systematically analyze how the random space function (RSF) model characterizing K, in the presence of distinct pumping operations, affects the uncertainty of the concentration BTC at the operating well. We juxtapose Monte Carlo based numerical results associated with two models: (a) a recently proposed Generalized Sub-Gaussian model which allows capturing non-Gaussian statistical scaling features of RSFs such as hydraulic conductivity, and (b) the commonly used Gaussian field approximation. Our novel results include an appraisal of the coupled effect of (a) the model employed to depict the random spatial variability of K and (b) transient flow regime, as induced by a temporally varying pumping schedule, on the concentration BTC at the operating well. We systematically quantify the sensitivity of the uncertainty in the contaminant BTC to the RSF model adopted for K (non-Gaussian or Gaussian) in the presence of diverse well pumping schedules. Results contribute to determining the conditions under which either of these two key factors prevails over the other.

  20. Interpreting findings from Mendelian randomization using the MR-Egger method.

    PubMed

    Burgess, Stephen; Thompson, Simon G

    2017-05-01

    Mendelian randomization-Egger (MR-Egger) is an analysis method for Mendelian randomization using summarized genetic data. MR-Egger consists of three parts: (1) a test for directional pleiotropy, (2) a test for a causal effect, and (3) an estimate of the causal effect. While conventional analysis methods for Mendelian randomization assume that all genetic variants satisfy the instrumental variable assumptions, the MR-Egger method is able to assess whether genetic variants have pleiotropic effects on the outcome that differ on average from zero (directional pleiotropy), as well as to provide a consistent estimate of the causal effect, under a weaker assumption: the InSIDE (INstrument Strength Independent of Direct Effect) assumption. In this paper, we provide a critical assessment of the MR-Egger method with regard to its implementation and interpretation. While the MR-Egger method is a worthwhile sensitivity analysis for detecting violations of the instrumental variable assumptions, there are several reasons why causal estimates from the MR-Egger method may be biased and have inflated Type 1 error rates in practice, including violations of the InSIDE assumption and the influence of outlying variants. The issues raised in this paper have potentially serious consequences for causal inferences from the MR-Egger approach. We give examples of scenarios in which the estimates from conventional Mendelian randomization methods and MR-Egger differ, and discuss how to interpret findings in such cases.
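
    The estimation part of MR-Egger reduces to a weighted regression of the variant-outcome associations on the variant-exposure associations with an unconstrained intercept. The sketch below uses simulated summary statistics and plain weighted least squares standard errors, ignoring the over-dispersion correction often applied in practice.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical summarized data for J variants: associations with the exposure (bx),
# with the outcome (by), and standard errors of the outcome associations (se_by).
J = 25
bx = rng.uniform(0.05, 0.25, size=J)
alpha = rng.normal(0.02, 0.01, size=J)      # directional pleiotropic effects
theta = 0.4                                 # true causal effect
se_by = np.full(J, 0.02)
by = alpha + theta * bx + rng.normal(0.0, se_by)

# MR-Egger: weighted regression of by on bx *with* an intercept, weights 1/se_by^2.
# Slope = causal estimate; intercept = average directional pleiotropy.
W = np.diag(1.0 / se_by**2)
X = np.column_stack([np.ones(J), bx])
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ by)
se = np.sqrt(np.diag(np.linalg.inv(X.T @ W @ X)))   # plain WLS standard errors

print(f"intercept (pleiotropy) = {beta[0]:.3f} +/- {se[0]:.3f}")
print(f"slope (causal effect)  = {beta[1]:.3f} +/- {se[1]:.3f} (truth 0.4)")
```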

  1. Analysis and Reduction of Complex Networks Under Uncertainty.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghanem, Roger G

    2014-07-31

    This effort was a collaboration with Youssef Marzouk of MIT, Omar Knio of Duke University (at the time at Johns Hopkins University) and Habib Najm of Sandia National Laboratories. The objective of this effort was to develop the mathematical and algorithmic capacity to analyze complex networks under uncertainty. Of interest were chemical reaction networks and smart grid networks. The statements of work for USC focused on the development of stochastic reduced models for uncertain networks. The USC team was led by Professor Roger Ghanem and consisted of one graduate student and a postdoc. The contributions completed by the USC team consisted of 1) methodology and algorithms to address the eigenvalue problem, a problem of significance in the stability of networks under stochastic perturbations, 2) methodology and algorithms to characterize probability measures on graph structures with random flows. This is an important problem in characterizing random demand (encountered in smart grid) and random degradation (encountered in infrastructure systems), as well as modeling errors in Markov Chains (with ubiquitous relevance!). 3) methodology and algorithms for treating inequalities in uncertain systems. This is an important problem in the context of models for material failure and network flows under uncertainty where conditions of failure or flow are described in the form of inequalities between the state variables.

  2. Quality of Work Life and Organizational Climate of Schools Located along the Thai-Cambodian Borders

    ERIC Educational Resources Information Center

    Kitratporn, Poonsook; Puncreobutr, Vichian

    2016-01-01

    The purpose of the study is to measure the quality of work life and organizational climate of schools located along the Thai-Cambodian borders, and to measure the relationship between these two underlying variables. A simple random sample of 384 respondents comprised administrators and teachers…

  3. Methods of Reverberation Mapping. I. Time-lag Determination by Measures of Randomness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chelouche, Doron; Pozo-Nuñez, Francisco; Zucker, Shay, E-mail: doron@sci.haifa.ac.il, E-mail: francisco.pozon@gmail.com, E-mail: shayz@post.tau.ac.il

    A class of methods for measuring time delays between astronomical time series is introduced in the context of quasar reverberation mapping, which is based on measures of randomness or complexity of the data. Several distinct statistical estimators are considered that do not rely on polynomial interpolations of the light curves nor on their stochastic modeling, and do not require binning in correlation space. Methods based on von Neumann’s mean-square successive-difference estimator are found to be superior to those using other estimators. An optimized von Neumann scheme is formulated, which better handles sparsely sampled data and outperforms current implementations of discrete correlation function methods. This scheme is applied to existing reverberation data of varying quality, and consistency with previously reported time delays is found. In particular, the size–luminosity relation of the broad-line region in quasars is recovered with a scatter comparable to that obtained by other works, yet with fewer assumptions made concerning the process underlying the variability. The proposed method for time-lag determination is particularly relevant for irregularly sampled time series, and in cases where the process underlying the variability cannot be adequately modeled.
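
    A minimal sketch of a von Neumann-type lag estimator in the spirit of the abstract (not the authors' optimized scheme): for each trial lag the two light curves are merged on a common time axis and the mean-square successive difference of the combined fluxes is computed; the lag minimizing this randomness measure is taken as the delay. Data and tuning values below are synthetic.

      import numpy as np

      def von_neumann_lag(t1, f1, t2, f2, lags):
          """Estimate the delay of series 2 relative to series 1 by minimizing the
          mean-square successive difference of the merged, time-sorted fluxes."""
          z1 = (f1 - f1.mean()) / f1.std()          # standardize both series
          z2 = (f2 - f2.mean()) / f2.std()
          stats = []
          for lag in lags:
              t = np.concatenate([t1, t2 - lag])     # shift series 2 by the trial lag
              z = np.concatenate([z1, z2])
              order = np.argsort(t)
              stats.append(np.mean(np.diff(z[order]) ** 2))
          stats = np.array(stats)
          return lags[np.argmin(stats)], stats

      # synthetic example: the second light curve lags the first by 25 time units
      rng = np.random.default_rng(1)
      t1 = np.sort(rng.uniform(0, 500, 120))
      t2 = np.sort(rng.uniform(0, 500, 100))
      signal = lambda t: np.sin(t / 40.0) + 0.5 * np.sin(t / 13.0)
      f1 = signal(t1) + rng.normal(0, 0.05, t1.size)
      f2 = signal(t2 - 25.0) + rng.normal(0, 0.05, t2.size)
      best, _ = von_neumann_lag(t1, f1, t2, f2, np.arange(-60, 61, 1.0))
      print("estimated lag:", best)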

  4. Risk management for sulfur dioxide abatement under multiple uncertainties

    NASA Astrophysics Data System (ADS)

    Dai, C.; Sun, W.; Tan, Q.; Liu, Y.; Lu, W. T.; Guo, H. C.

    2016-03-01

    In this study, interval-parameter programming, two-stage stochastic programming (TSP), and conditional value-at-risk (CVaR) were incorporated into a general optimization framework, leading to an interval-parameter CVaR-based two-stage programming (ICTP) method. The ICTP method had several advantages: (i) its objective function simultaneously took expected cost and risk cost into consideration, and also used discrete random variables and discrete intervals to reflect uncertain properties; (ii) it quantitatively evaluated the right tail of the distributions of the random variables, which allowed a better estimate of the risk of violating environmental standards; (iii) it was useful for helping decision makers to analyze the trade-offs between cost and risk; and (iv) it was effective in penalizing the second-stage costs, as well as in capturing the notion of risk in stochastic programming. The developed model was applied to sulfur dioxide abatement in an air quality management system. The results indicated that the ICTP method could be used for generating a series of air quality management schemes under different risk-aversion levels, for identifying desired air quality management strategies for decision makers, and for considering a proper balance between system economy and environmental quality.
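
    As a small numerical aside, the sketch below shows how conditional value-at-risk summarizes the right tail of a discrete cost distribution and how it can be combined with the expected cost in a risk-weighted objective; the scenario costs, probabilities and risk weight are made up and do not come from the study.

      import numpy as np

      def cvar(costs, probs, alpha=0.95):
          """Conditional value-at-risk of a discrete cost distribution:
          the expected cost within the worst (1 - alpha) probability tail."""
          costs = np.asarray(costs, float)
          probs = np.asarray(probs, float)
          order = np.argsort(costs)[::-1]                       # worst scenarios first
          c, p = costs[order], probs[order]
          tail = 1.0 - alpha                                    # probability mass of the tail
          cum = np.cumsum(p)
          taken = np.minimum(cum, tail) - np.concatenate([[0.0], cum[:-1]])
          taken = np.clip(taken, 0.0, None)                     # mass of each scenario inside the tail
          return float((c * taken).sum() / tail)

      costs = [100, 120, 150, 300, 500]                         # second-stage costs per scenario
      probs = [0.40, 0.30, 0.20, 0.07, 0.03]
      expected = float(np.dot(costs, probs))
      risk = cvar(costs, probs, alpha=0.95)                     # = 420 for these numbers
      lam = 0.5                                                 # risk-aversion weight
      print("expected cost:", expected, "CVaR:", risk, "objective:", expected + lam * risk)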

  5. Margins of stability in young adults with traumatic transtibial amputation walking in destabilizing environments

    PubMed Central

    Beltran, Eduardo J.; Dingwell, Jonathan B.; Wilken, Jason M.

    2014-01-01

    Understanding how lower-limb amputation affects walking stability, specifically in destabilizing environments, is essential for developing effective interventions to prevent falls. This study quantified mediolateral margins of stability (MOS) and MOS sub-components in young individuals with traumatic unilateral transtibial amputation (TTA) and young able-bodied individuals (AB). Thirteen AB and nine TTA completed five 3-minute walking trials in a Computer Assisted Rehabilitation ENvironment (CAREN) system under each of three test conditions: no perturbations, pseudo-random mediolateral translations of the platform, and pseudo-random mediolateral translations of the visual field. Compared to the unperturbed trials, TTA exhibited increased mean MOS and MOS variability during platform and visual field perturbations (p < 0.010). Also, AB exhibited increased mean MOS during visual field perturbations and increased MOS variability during both platform and visual field perturbations (p < 0.050). During platform perturbations, TTA exhibited significantly greater values than AB for mean MOS (p < 0.050) and MOS variability (p < 0.050); variability of the lateral distance between the center of mass (COM) and base of support at initial contact (p < 0.005); mean and variability of the range of COM motion (p < 0.010); and variability of COM peak velocity (p < 0.050). As determined by mean MOS and MOS variability, young and otherwise healthy individuals with transtibial amputation achieved stability similar to that of their able-bodied counterparts during unperturbed and visually-perturbed walking. However, based on mean and variability of MOS, unilateral transtibial amputation was shown to have affected walking stability during platform perturbations. PMID:24444777
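
    For readers unfamiliar with the MOS quantities, the sketch below computes a mediolateral margin of stability using Hof's extrapolated centre-of-mass formulation, which is the usual basis for such analyses; the study's exact sub-component definitions may differ, and the per-step numbers are invented.

      import numpy as np

      def lateral_mos(com_pos, com_vel, bos_edge, leg_length, g=9.81):
          """Mediolateral margin of stability:
          MOS = lateral base-of-support boundary - (CoM position + CoM velocity / omega0),
          with omega0 = sqrt(g / leg length) (inverted-pendulum eigenfrequency)."""
          omega0 = np.sqrt(g / leg_length)
          xcom = com_pos + com_vel / omega0          # extrapolated centre of mass
          return bos_edge - xcom

      # toy per-step values (metres, m/s) at initial contact of the lateral limb
      com_pos = np.array([0.02, 0.03, 0.01, 0.04])
      com_vel = np.array([0.10, 0.12, 0.08, 0.15])
      bos_edge = np.array([0.12, 0.13, 0.11, 0.14])
      mos = lateral_mos(com_pos, com_vel, bos_edge, leg_length=0.95)
      print("mean MOS:", mos.mean(), "MOS variability (SD):", mos.std(ddof=1))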

  6. Confounding, causality, and confusion: the role of intermediate variables in interpreting observational studies in obstetrics.

    PubMed

    Ananth, Cande V; Schisterman, Enrique F

    2017-08-01

    Prospective and retrospective cohorts and case-control studies are some of the most important study designs in epidemiology because, under certain assumptions, they can mimic a randomized trial when done well. These assumptions include, but are not limited to, properly accounting for two important sources of bias: confounding and selection bias. While not adjusting the causal association for an intermediate variable will yield an unbiased estimate of the total causal effect of the exposure on the outcome, obstetricians often want to adjust for an intermediate variable to assess whether the intermediate is the underlying driver of the association. Such a practice must be weighed against the underlying research question, and whether such an adjustment is necessary should be carefully considered. Gestational age is, by far, the most commonly encountered variable in obstetrics that is often mislabeled as a confounder when, in fact, it may be an intermediate. If gestational age is indeed an intermediate but is mistakenly labeled as a confounding variable and consequently adjusted for in an analysis, the conclusions can be unexpected. The implications of this overadjustment of an intermediate as though it were a confounder can render an otherwise persuasive study downright meaningless. This commentary provides an exposition of confounding bias, collider stratification, and selection biases, with applications in obstetrics and perinatal epidemiology. Copyright © 2017 Elsevier Inc. All rights reserved.
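
    A hypothetical simulation (coefficients chosen arbitrarily) of the mechanism described above: when the intermediate shares an unmeasured cause with the outcome, adjusting for it as though it were a confounder distorts, and here even reverses, the estimated exposure effect.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 50_000
      x = rng.normal(size=n)                                  # exposure
      u = rng.normal(size=n)                                  # unmeasured cause shared by intermediate and outcome
      m = 0.8 * x + u + rng.normal(size=n)                    # intermediate (e.g. gestational age)
      y = 0.2 * x + 0.5 * m + u + rng.normal(size=n)          # outcome; total effect of x = 0.2 + 0.8 * 0.5 = 0.6

      def coef_on_first(y, *cols):
          """OLS coefficient on the first listed predictor."""
          X = np.column_stack([np.ones_like(y)] + list(cols))
          return np.linalg.lstsq(X, y, rcond=None)[0][1]

      print("unadjusted (total) effect :", round(coef_on_first(y, x), 3))     # about 0.6
      print("adjusted for intermediate :", round(coef_on_first(y, x, m), 3))  # about -0.2: collider-stratification bias reverses the sign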

  7. On the fluctuations of sums of independent random variables.

    PubMed

    Feller, W

    1969-07-01

    If X_1, X_2, ... are independent random variables with zero expectation and finite variances, the cumulative sums S_n are, on the average, of the order of magnitude s_n, where s_n^2 = E(S_n^2). The occasional maxima of the ratios S_n/s_n are surprisingly large and the problem is to estimate the extent of their probable fluctuations. Specifically, let S_n^* = (S_n - b_n)/a_n, where {a_n} and {b_n} are two numerical sequences. For any interval I, denote by p(I) the probability that the event S_n^* ∈ I occurs for infinitely many n. Under mild conditions on {a_n} and {b_n}, it is shown that p(I) equals 0 or 1 according as a certain series converges or diverges. To obtain the upper limit of S_n/a_n, one has to set b_n = ±ε a_n, but finer results are obtained with smaller b_n. No assumptions concerning the underlying distributions are made; the criteria explain structurally which features of {X_n} affect the fluctuations, but for concrete results something about P{S_n > a_n} must be known. For example, a complete solution is possible when the X_n are normal, replacing the classical law of the iterated logarithm. Further concrete estimates may be obtained by combining the new criteria with some recently developed limit theorems.

  8. Application of random survival forests in understanding the determinants of under-five child mortality in Uganda in the presence of covariates that satisfy the proportional and non-proportional hazards assumption.

    PubMed

    Nasejje, Justine B; Mwambi, Henry

    2017-09-07

    Uganda, just like any other Sub-Saharan African country, has a high under-five child mortality rate. To inform policy on intervention strategies, sound statistical methods are required to critically identify factors strongly associated with under-five child mortality rates. The Cox proportional hazards model has been a common choice in analysing data to understand factors strongly associated with high child mortality rates taking age as the time-to-event variable. However, due to its restrictive proportional hazards (PH) assumption, some covariates of interest which do not satisfy the assumption are often excluded in the analysis to avoid mis-specifying the model. Otherwise using covariates that clearly violate the assumption would mean invalid results. Survival trees and random survival forests are increasingly becoming popular in analysing survival data particularly in the case of large survey data and could be attractive alternatives to models with the restrictive PH assumption. In this article, we adopt random survival forests, which have never been used in understanding factors affecting under-five child mortality rates in Uganda, using Demographic and Health Survey data. Thus the first part of the analysis is based on the use of the classical Cox PH model and the second part of the analysis is based on the use of random survival forests in the presence of covariates that do not necessarily satisfy the PH assumption. Random survival forests and the Cox proportional hazards model agree that the sex of the household head, sex of the child, and number of births in the past year are strongly associated with under-five child mortality in Uganda, given that all three covariates satisfy the PH assumption. Random survival forests further demonstrated that covariates that were originally excluded from the earlier analysis due to violation of the PH assumption were important in explaining under-five child mortality rates. These covariates include the number of children under the age of five in a household, number of births in the past 5 years, wealth index, total number of children ever born and the child's birth order. The results further indicated that the predictive performance for random survival forests built using covariates including those that violate the PH assumption was higher than that for random survival forests built using only covariates that satisfy the PH assumption. Random survival forests are appealing methods in analysing public health data to understand factors strongly associated with under-five child mortality rates, especially in the presence of covariates that violate the proportional hazards assumption.
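
    A minimal sketch of fitting a random survival forest, assuming the scikit-survival package (sksurv); the covariates and survival times below are synthetic stand-ins for the DHS variables and not the study's data.

      import numpy as np
      from sksurv.ensemble import RandomSurvivalForest
      from sksurv.util import Surv

      rng = np.random.default_rng(3)
      n = 500
      X = np.column_stack([
          rng.integers(0, 2, n),              # sex of household head (toy coding)
          rng.integers(0, 2, n),              # sex of child
          rng.integers(1, 6, n),              # children under five in the household
          rng.integers(1, 4, n),              # wealth index tercile
      ])
      time = rng.exponential(scale=30 + 10 * X[:, 3], size=n)   # months survived (toy)
      event = rng.random(n) < 0.3                                # True = death observed before censoring

      y = Surv.from_arrays(event=event, time=time)               # structured survival outcome
      rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=15, random_state=0)
      rsf.fit(X, y)
      print("concordance on training data:", rsf.score(X, y))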

  9. Biologically-variable rhythmic auditory cues are superior to isochronous cues in fostering natural gait variability in Parkinson's disease.

    PubMed

    Dotov, D G; Bayard, S; Cochen de Cock, V; Geny, C; Driss, V; Garrigue, G; Bardy, B; Dalla Bella, S

    2017-01-01

    Rhythmic auditory cueing improves certain gait symptoms of Parkinson's disease (PD). Cues are typically stimuli or beats with a fixed inter-beat interval. We show that isochronous cueing has an unwanted side-effect in that it exacerbates one of the motor symptoms characteristic of advanced PD. Whereas the parameters of the stride cycle of healthy walkers and early patients possess a persistent correlation in time, or long-range correlation (LRC), isochronous cueing renders stride-to-stride variability random. Random stride cycle variability is also associated with reduced gait stability and lack of flexibility. To investigate how to prevent patients from acquiring a random stride cycle pattern, we tested rhythmic cueing which mimics the properties of variability found in healthy gait (biological variability). PD patients (n=19) and age-matched healthy participants (n=19) walked with three rhythmic cueing stimuli: isochronous, with random variability, and with biological variability (LRC). Synchronization was not instructed. The persistent correlation in gait was preserved only with stimuli with biological variability, equally for patients and controls (p's<0.05). In contrast, cueing with isochronous or randomly varying inter-stimulus/beat intervals removed the LRC in the stride cycle. Notably, the individual's tendency to synchronize steps with beats determined the amount of negative effects of isochronous and random cues (p's<0.05) but not the positive effect of biological variability. Stimulus variability and patients' propensity to synchronize play a critical role in fostering healthier gait dynamics during cueing. The beneficial effects of biological variability provide useful guidelines for improving existing cueing treatments. Copyright © 2016 Elsevier B.V. All rights reserved.
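
    Long-range correlation in stride-interval series is commonly quantified with detrended fluctuation analysis (DFA); the study may have used a different estimator, so the sketch below is only a generic illustration on synthetic series.

      import numpy as np

      def dfa(x, scales):
          """First-order detrended fluctuation analysis. An exponent near 0.5 indicates
          uncorrelated (random) stride-to-stride variability; clearly larger values
          indicate persistent long-range correlation (LRC)."""
          x = np.asarray(x, float)
          y = np.cumsum(x - x.mean())                      # integrated series
          F = []
          for s in scales:
              n_seg = len(y) // s
              segs = y[: n_seg * s].reshape(n_seg, s)
              t = np.arange(s)
              rms = []
              for seg in segs:
                  coef = np.polyfit(t, seg, 1)              # local linear trend
                  rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
              F.append(np.sqrt(np.mean(rms)))
          return np.polyfit(np.log(scales), np.log(F), 1)[0]

      rng = np.random.default_rng(4)
      white = rng.normal(size=2000)                         # like stride intervals rendered random by isochronous cueing
      persistent = np.cumsum(rng.normal(size=2000))         # strongly persistent series, for contrast
      scales = np.array([8, 16, 32, 64, 128])
      print("alpha (white, ~0.5)      :", round(dfa(white, scales), 2))
      print("alpha (persistent, >1)   :", round(dfa(persistent, scales), 2))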

  10. Poisson-Like Spiking in Circuits with Probabilistic Synapses

    PubMed Central

    Moreno-Bote, Rubén

    2014-01-01

    Neuronal activity in cortex is variable both spontaneously and during stimulation, and it has the remarkable property that it is Poisson-like over broad ranges of firing rates covering from virtually zero to hundreds of spikes per second. The mechanisms underlying cortical-like spiking variability over such a broad continuum of rates are currently unknown. We show that neuronal networks endowed with probabilistic synaptic transmission, a well-documented source of variability in cortex, robustly generate Poisson-like variability over several orders of magnitude in their firing rate without fine-tuning of the network parameters. Other sources of variability, such as random synaptic delays or spike generation jittering, do not lead to Poisson-like variability at high rates because they cannot be sufficiently amplified by recurrent neuronal networks. We also show that probabilistic synapses predict Fano factor constancy of synaptic conductances. Our results suggest that synaptic noise is a robust and sufficient mechanism for the type of variability found in cortex. PMID:25032705
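
    A toy check of the Fano-factor argument, under the assumption of Poisson presynaptic input thinned by probabilistic release: the variance-to-mean ratio of successful transmissions stays near one across a wide range of rates. Parameters are arbitrary.

      import numpy as np

      rng = np.random.default_rng(5)
      p_release = 0.5                        # probability that a presynaptic spike is transmitted
      trials = 10_000

      for presyn_rate in [5, 50, 500]:       # mean presynaptic spikes per trial window
          n_spikes = rng.poisson(presyn_rate, trials)
          successes = rng.binomial(n_spikes, p_release)   # stochastic synaptic transmissions
          fano = successes.var() / successes.mean()
          print(f"rate {presyn_rate:4d}: Fano factor of transmissions = {fano:.2f}")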

  11. An instrumental variable random-coefficients model for binary outcomes

    PubMed Central

    Chesher, Andrew; Rosen, Adam M

    2014-01-01

    In this paper, we study a random-coefficients model for a binary outcome. We allow for the possibility that some or even all of the explanatory variables are arbitrarily correlated with the random coefficients, thus permitting endogeneity. We assume the existence of observed instrumental variables Z that are jointly independent with the random coefficients, although we place no structure on the joint determination of the endogenous variable X and instruments Z, as would be required for a control function approach. The model fits within the spectrum of generalized instrumental variable models, and we thus apply identification results from our previous studies of such models to the present context, demonstrating their use. Specifically, we characterize the identified set for the distribution of random coefficients in the binary response model with endogeneity via a collection of conditional moment inequalities, and we investigate the structure of these sets by way of numerical illustration. PMID:25798048

  12. Polynomial chaos expansion with random and fuzzy variables

    NASA Astrophysics Data System (ADS)

    Jacquelin, E.; Friswell, M. I.; Adhikari, S.; Dessombz, O.; Sinou, J.-J.

    2016-06-01

    A dynamical uncertain system is studied in this paper. Two kinds of uncertainties are addressed, where the uncertain parameters are described through random variables and/or fuzzy variables. A general framework is proposed to deal with both kinds of uncertainty using a polynomial chaos expansion (PCE). It is shown that fuzzy variables may be expanded in terms of polynomial chaos when Legendre polynomials are used. The components of the PCE are a solution of an equation that does not depend on the nature of uncertainty. Once this equation is solved, the post-processing of the data gives the moments of the random response when the uncertainties are random or gives the response interval when the variables are fuzzy. With the PCE approach, it is also possible to deal with mixed uncertainty, when some parameters are random and others are fuzzy. The results provide a fuzzy description of the response statistical moments.
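
    A small sketch of a one-dimensional Legendre PCE of the kind described above: the coefficients are obtained by Gauss-Legendre projection and then post-processed either into moments (random variable on [-1, 1]) or into a response interval over an alpha-cut (fuzzy variable). The response function and the cut are invented for illustration.

      import numpy as np
      from numpy.polynomial import legendre as L

      def pce_legendre(g, order, n_quad=32):
          """Project g(xi), xi uniform on [-1, 1], onto Legendre polynomials."""
          nodes, weights = L.leggauss(n_quad)                # Gauss-Legendre quadrature on [-1, 1]
          coeffs = []
          for k in range(order + 1):
              Pk = L.legval(nodes, [0] * k + [1])            # k-th Legendre polynomial at the nodes
              norm = 2.0 / (2 * k + 1)                       # integral of Pk^2 over [-1, 1]
              coeffs.append(np.sum(weights * g(nodes) * Pk) / norm)
          return np.array(coeffs)

      # response of a toy system whose stiffness depends on the uncertain variable xi
      g = lambda xi: 1.0 / (1.0 + 0.3 * xi)
      c = pce_legendre(g, order=6)

      # random-variable post-processing: moments under the uniform density
      mean = c[0]
      var = sum(c[k] ** 2 / (2 * k + 1) for k in range(1, len(c)))
      print("PCE mean:", mean, "PCE variance:", var)

      # fuzzy-variable post-processing: evaluate the same expansion over an alpha-cut interval
      xi_grid = np.linspace(-0.5, 0.5, 201)                  # e.g. the alpha = 0.5 cut of a triangular fuzzy number
      vals = L.legval(xi_grid, c)
      print("response interval on this cut:", vals.min(), vals.max())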

  13. Computer simulation of random variables and vectors with arbitrary probability distribution laws

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.

    1981-01-01

    Assume that an arbitrary n-dimensional probability distribution F is given. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate an arbitrary n-dimensional random vector if its joint probability distribution is known.
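
    A two-dimensional instance of this recursive (Rosenblatt-type) construction, with an invented target distribution: X_1 is drawn by inverting its marginal CDF and X_2 by inverting its conditional CDF given the already generated X_1.

      import numpy as np

      rng = np.random.default_rng(6)
      u1, u2 = rng.random(100_000), rng.random(100_000)

      # Target: X1 ~ Exponential(1); X2 | X1 = x ~ Exponential(rate = 1 + x).
      # Step 1: invert the marginal CDF of X1.
      x1 = -np.log(1.0 - u1)                      # f1(u1) = F1^{-1}(u1)
      # Step 2: invert the conditional CDF of X2 given the generated X1.
      x2 = -np.log(1.0 - u2) / (1.0 + x1)         # f2(u1, u2) = F_{2|1}^{-1}(u2; x1)

      # quick sanity check via a conditional moment (E[X2 | X1 = x] = 1 / (1 + x))
      print("E[X2 | X1 < 0.1] simulated :", x2[x1 < 0.1].mean())
      print("rough target               :", 1.0 / (1.0 + x1[x1 < 0.1].mean()))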

  14. Measures of Residual Risk with Connections to Regression, Risk Tracking, Surrogate Models, and Ambiguity

    DTIC Science & Technology

    2015-01-07

    The random variable of interest is viewed in concert with an auxiliary random vector that helps to manage, predict, and mitigate the risk in the original variable. Residual risk can be exemplified as a quantification of the improved...

  15. Raw and Central Moments of Binomial Random Variables via Stirling Numbers

    ERIC Educational Resources Information Center

    Griffiths, Martin

    2013-01-01

    We consider here the problem of calculating the moments of binomial random variables. It is shown how formulae for both the raw and the central moments of such random variables may be obtained in a recursive manner utilizing Stirling numbers of the first kind. Suggestions are also provided as to how students might be encouraged to explore this…
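
    The article derives the moments recursively with Stirling numbers of the first kind; as a quick numerical cross-check, the sketch below uses the closely related closed form with Stirling numbers of the second kind, E[X^k] = sum over j of S(k, j) times the falling factorial n_(j) times p^j, and compares it with a brute-force expectation.

      from math import comb
      from functools import lru_cache

      @lru_cache(maxsize=None)
      def stirling2(k, j):
          """Stirling numbers of the second kind."""
          if k == j:
              return 1
          if j == 0 or j > k:
              return 0
          return j * stirling2(k - 1, j) + stirling2(k - 1, j - 1)

      def falling(n, j):
          out = 1
          for i in range(j):
              out *= (n - i)
          return out

      def binomial_raw_moment(n, p, k):
          """E[X^k] for X ~ Binomial(n, p) via Stirling numbers of the second kind."""
          return sum(stirling2(k, j) * falling(n, j) * p ** j for j in range(k + 1))

      def brute_force(n, p, k):
          return sum(comb(n, x) * p ** x * (1 - p) ** (n - x) * x ** k for x in range(n + 1))

      n, p = 12, 0.3
      for k in range(1, 6):
          print(k, binomial_raw_moment(n, p, k), round(brute_force(n, p, k), 10))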

  16. Predictive modeling of cardiovascular complications in incident hemodialysis patients.

    PubMed

    Ion Titapiccolo, J; Ferrario, M; Barbieri, C; Marcelli, D; Mari, F; Gatti, E; Cerutti, S; Smyth, P; Signorini, M G

    2012-01-01

    The administration of hemodialysis (HD) treatment leads to the continuous collection of a vast quantity of medical data. Many variables related to the patient health status, to the treatment, and to dialyzer settings can be recorded and stored at each treatment session. In this study a dataset of 42 variables and 1526 patients extracted from the Fresenius Medical Care database EuCliD was used to develop and apply a random forest predictive model for the prediction of cardiovascular events in the first year of HD treatment. A ridge-lasso logistic regression algorithm was then applied to the subset of variables mostly involved in the prediction model to get insights in the mechanisms underlying the incidence of cardiovascular complications in this high risk population of patients.
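
    A rough sketch of the two-step modelling strategy described above, on synthetic data standing in for the EuCliD variables: a random forest for prediction, followed by a penalized logistic regression (here an elastic net, one common reading of "ridge-lasso") on the most important variables. Assumes scikit-learn; all numbers are illustrative.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(7)
      n, p = 1500, 42                                      # roughly the study's 1526 patients x 42 variables
      X = rng.normal(size=(n, p))
      logit = 0.8 * X[:, 0] - 0.6 * X[:, 5] + 0.4 * X[:, 10]
      y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # cardiovascular event yes/no

      rf = RandomForestClassifier(n_estimators=500, random_state=0)
      print("random forest AUC:", cross_val_score(rf, X, y, cv=5, scoring="roc_auc").mean())

      rf.fit(X, y)
      top = np.argsort(rf.feature_importances_)[::-1][:10]          # variables driving the prediction

      # interpretable follow-up model on the top variables (elastic net mixes ridge and lasso)
      enet = LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5, C=1.0, max_iter=5000)
      print("elastic-net AUC on top variables:",
            cross_val_score(enet, X[:, top], y, cv=5, scoring="roc_auc").mean())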

  17. Fast Algorithms for Estimating Mixture Parameters

    DTIC Science & Technology

    1989-08-30

    The investigation is a two-year project, with the first year sponsored by the Army Research Office and the second year by the National Science Foundation (Grant...). Numerical testing of the accelerated fixed-point method was completed. The work on relaxation methods will be done under the sponsorship of the National Science Foundation during the coming year. Keywords: fast algorithms; mixture distributions; random variables. (KR)

  18. Lessons learned while integrating habitat, dispersal, disturbance, and life-history traits into species habitat models under climate change

    Treesearch

    Louis R. Iverson; Anantha M. Prasad; Stephen N. Matthews; Matthew P. Peters

    2011-01-01

    We present an approach to modeling potential climate-driven changes in habitat for tree and bird species in the eastern United States. First, we took an empirical-statistical modeling approach, using randomForest, with species abundance data from national inventories combined with soil, climate, and landscape variables, to build abundance-based habitat models for 134...

  19. A Random Variable Related to the Inversion Vector of a Partial Random Permutation

    ERIC Educational Resources Information Center

    Laghate, Kavita; Deshpande, M. N.

    2005-01-01

    In this article, we define the inversion vector of a permutation of the integers 1, 2,..., n. We set up a particular kind of permutation, called a partial random permutation. The sum of the elements of the inversion vector of such a permutation is a random variable of interest.
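
    For concreteness, one common definition of the inversion vector and the random variable formed by summing its elements is sketched below for an ordinary random permutation; the article's partial random permutation is a restricted variant of this construction.

      import random

      def inversion_vector(perm):
          """v[i] = number of elements to the left of position i that exceed perm[i]."""
          return [sum(1 for j in range(i) if perm[j] > perm[i]) for i in range(len(perm))]

      n = 8
      perm = list(range(1, n + 1))
      random.shuffle(perm)
      v = inversion_vector(perm)
      print(perm, v, "sum of inversion vector =", sum(v))   # the sum equals the total inversion count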

  20. ADAPTIVE MATCHING IN RANDOMIZED TRIALS AND OBSERVATIONAL STUDIES

    PubMed Central

    van der Laan, Mark J.; Balzer, Laura B.; Petersen, Maya L.

    2014-01-01

    In many randomized and observational studies the allocation of treatment among a sample of n independent and identically distributed units is a function of the covariates of all sampled units. As a result, the treatment labels among the units are possibly dependent, complicating estimation and posing challenges for statistical inference. For example, cluster randomized trials frequently sample communities from some target population, construct matched pairs of communities from those included in the sample based on some metric of similarity in baseline community characteristics, and then randomly allocate a treatment and a control intervention within each matched pair. In this case, the observed data can neither be represented as the realization of n independent random variables, nor, contrary to current practice, as the realization of n/2 independent random variables (treating the matched pair as the independent sampling unit). In this paper we study estimation of the average causal effect of a treatment under experimental designs in which treatment allocation potentially depends on the pre-intervention covariates of all units included in the sample. We define efficient targeted minimum loss based estimators for this general design, present a theorem that establishes the desired asymptotic normality of these estimators and allows for asymptotically valid statistical inference, and discuss implementation of these estimators. We further investigate the relative asymptotic efficiency of this design compared with a design in which unit-specific treatment assignment depends only on the units’ covariates. Our findings have practical implications for the optimal design and analysis of pair matched cluster randomized trials, as well as for observational studies in which treatment decisions may depend on characteristics of the entire sample. PMID:25097298

  1. A Geometrical Framework for Covariance Matrices of Continuous and Categorical Variables

    ERIC Educational Resources Information Center

    Vernizzi, Graziano; Nakai, Miki

    2015-01-01

    It is well known that a categorical random variable can be represented geometrically by a simplex. Accordingly, several measures of association between categorical variables have been proposed and discussed in the literature. Moreover, the standard definitions of covariance and correlation coefficient for continuous random variables have been…

  2. Improving Search Algorithms by Using Intelligent Coordinates

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Tumer, Kagan; Bandari, Esfandiar

    2004-01-01

    We consider algorithms that maximize a global function G in a distributed manner, using a different adaptive computational agent to set each variable of the underlying space. Each agent η is self-interested; it sets its variable to maximize its own function g_η. Three factors govern such a distributed algorithm's performance, related to exploration/exploitation, game theory, and machine learning. We demonstrate how to exploit all three factors by modifying a search algorithm's exploration stage: rather than random exploration, each coordinate of the search space is now controlled by a separate machine-learning-based player engaged in a noncooperative game. Experiments demonstrate that this modification improves simulated annealing (SA) by up to an order of magnitude for bin packing and for a model of an economic process run over an underlying network. These experiments also reveal interesting small-world phenomena.

  3. Improving search algorithms by using intelligent coordinates

    NASA Astrophysics Data System (ADS)

    Wolpert, David; Tumer, Kagan; Bandari, Esfandiar

    2004-01-01

    We consider algorithms that maximize a global function G in a distributed manner, using a different adaptive computational agent to set each variable of the underlying space. Each agent η is self-interested; it sets its variable to maximize its own function gη. Three factors govern such a distributed algorithm’s performance, related to exploration/exploitation, game theory, and machine learning. We demonstrate how to exploit all three factors by modifying a search algorithm’s exploration stage: rather than random exploration, each coordinate of the search space is now controlled by a separate machine-learning-based “player” engaged in a noncooperative game. Experiments demonstrate that this modification improves simulated annealing (SA) by up to an order of magnitude for bin packing and for a model of an economic process run over an underlying network. These experiments also reveal interesting small-world phenomena.

  4. Multilevel Model Prediction

    ERIC Educational Resources Information Center

    Frees, Edward W.; Kim, Jee-Seon

    2006-01-01

    Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…

  5. Pharmacokinetic profile of extended-release versus immediate-release oral naproxen sodium after single and multiple dosing under fed and fasting conditions: two randomized, open-label trials.

    PubMed

    Laurora, Irene; Wang, Yuan

    2016-10-01

    Extended-release (ER) naproxen sodium provides pain relief for up to 24 hours with a single dose (660 mg/day). Its pharmacokinetic profile after single and multiple dosing was compared to immediate-release (IR) naproxen sodium in two randomized, open-label, crossover studies, under fasting and fed conditions. Eligible healthy subjects were randomized to an ER naproxen sodium 660-mg tablet once daily or an IR naproxen sodium 220-mg tablet twice daily (440 mg initially, followed by 220 mg 12 hours later). Primary variables were the pharmacokinetic parameters after single-day administration (day 1) and at steady state after multiple-day administration (day 6). Total exposure was comparable for both treatments under fasting and fed conditions. Under fasting conditions, peak naproxen concentrations were slightly lower with ER naproxen sodium than with IR naproxen sodium but were reached at a similar time. Under fed conditions, mean peak concentrations were comparable but were reached after a longer time with ER vs. IR naproxen sodium. ER naproxen sodium was well tolerated, with a similar safety profile to IR naproxen sodium. The total exposure of ER naproxen sodium (660 mg) is comparable to that of IR naproxen sodium (220 mg) when both are administered at the maximum over-the-counter (OTC) daily dose of 660 mg on a single day and over multiple days. The rate of absorption is delayed under fed conditions.

  6. [Study on correction of data bias caused by different missing mechanisms in survey of medical expenditure among students enrolling in Urban Resident Basic Medical Insurance].

    PubMed

    Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong

    2015-05-01

    The study of medical expenditure and its influencing factors among students enrolled in the Urban Resident Basic Medical Insurance (URBMI) in Taiyuan indicated that non-response bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies that focused on only one missing mechanism, this study suggests a two-stage method, combining multiple imputation with a sample selection model, to deal with both missing mechanisms simultaneously. A total of 1 190 questionnaires were returned by the students (or their parents) selected in child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. In the returned questionnaires, 2.52% of the dependent variable was not missing at random (NMAR) and 7.14% was missing at random (MAR). First, multiple imputation was conducted for the MAR values using the completed data; then a sample selection model was used to correct for NMAR within the multiple imputation, and a multivariable influencing-factor analysis model was established. Based on 1 000 resampling runs, the best scheme for filling the randomly missing values was the predictive mean matching (PMM) method at this missing proportion. With this optimal scheme, the two-stage analysis was conducted. Finally, it was found that the influencing factors on annual medical expenditure among the students enrolled in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in hospital, seeking medical care in a community health center or private clinic, hospitalization, hospitalization canceled for some reason, self-medication and acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can deal effectively with non-response bias and selection bias in the dependent variable of survey data.

  7. Short-term action potential memory and electrical restitution: A cellular computational study on the stability of cardiac repolarization under dynamic pacing

    PubMed Central

    2018-01-01

    Electrical restitution (ER) is a major determinant of repolarization stability and, under fast pacing rate, it reveals memory properties of the cardiac action potential (AP), whose dynamics have never been fully elucidated, nor their ionic mechanisms. Previous studies have looked at ER mainly in terms of changes in AP duration (APD) when the preceding diastolic interval (DI) changes and described dynamic conditions where this relationship shows hysteresis which, in turn, has been proposed as a marker of short-term AP memory and repolarization stability. By means of numerical simulations of a non-propagated human ventricular AP, we show here that measuring ER as APD versus the preceding cycle length (CL) provides additional information on repolarization dynamics which is not contained in the companion formulation. We focus particularly on fast pacing rate conditions with a beat-to-beat variable CL, where memory properties emerge from APD vs CL and not from APD vs DI and should thus be stored in APD and not in DI. We provide an ion-currents characterization of such conditions under periodic and random CL variability, and show that the memory stored in APD plays a stabilizing role on AP repolarization under pacing rate perturbations. The gating kinetics of L-type calcium current seems to be the main determinant of this safety mechanism. We also show that, at fast pacing rate and under otherwise identical pacing conditions, a periodically beat-to-beat changing CL is more effective than a random one in stabilizing repolarization. In summary, we propose a novel view of short-term AP memory, differentially stored between systole and diastole, which opens a number of methodological and theoretical implications for the understanding of arrhythmia development. PMID:29494628

  8. Rethinking "normal": The role of stochasticity in the phenology of a synchronously breeding seabird.

    PubMed

    Youngflesh, Casey; Jenouvrier, Stephanie; Hinke, Jefferson T; DuBois, Lauren; St Leger, Judy; Trivelpiece, Wayne Z; Trivelpiece, Susan G; Lynch, Heather J

    2018-05-01

    Phenological changes have been observed in a variety of systems over the past century. There is concern that, as a consequence, ecological interactions are becoming increasingly mismatched in time, with negative consequences for ecological function. Significant spatial heterogeneity (inter-site) and temporal variability (inter-annual) can make it difficult to separate intrinsic, extrinsic and stochastic drivers of phenological variability. The goal of this study was to understand the timing and variability in breeding phenology of Adélie penguins under fixed environmental conditions and to use those data to identify a "null model" appropriate for disentangling the sources of variation in wild populations. Data on clutch initiation were collected from both wild and captive populations of Adélie penguins. Clutch initiation in the captive population was modelled as a function of year, individual and age to better understand phenological patterns observed in the wild population. Captive populations displayed as much inter-annual variability in breeding phenology as wild populations, suggesting that variability in breeding phenology is the norm and thus may be an unreliable indicator of environmental forcing. The distribution of clutch initiation dates was found to be moderately asymmetric (right skewed) both in the wild and in captivity, consistent with the pattern expected under social facilitation. The role of stochasticity in phenological processes has heretofore been largely ignored. However, these results suggest that inter-annual variability in breeding phenology can arise independent of any environmental or demographic drivers and that synchronous breeding can enhance inherent stochasticity. This complicates efforts to relate phenological variation to environmental variability in the wild. Accordingly, we must be careful to consider random forcing in phenological processes, lest we fit models to data dominated by random noise. This is particularly true for colonial species where breeding synchrony may outweigh each individual's effort to time breeding with optimal environmental conditions. Our study highlights the importance of identifying appropriate null models for studying phenology. © 2017 The Authors. Journal of Animal Ecology © 2017 British Ecological Society.

  9. Psychotherapy integration under scrutiny: investigating the impact of integrating emotion-focused components into a CBT-based approach: a study protocol of a randomized controlled trial.

    PubMed

    Babl, Anna; Grosse Holtforth, Martin; Heer, Sara; Lin, Mu; Stähli, Annabarbara; Holstein, Dominique; Belz, Martina; Egenolf, Yvonne; Frischknecht, Eveline; Ramseyer, Fabian; Regli, Daniel; Schmied, Emma; Flückiger, Christoph; Brodbeck, Jeannette; Berger, Thomas; Caspar, Franz

    2016-11-24

    This currently recruiting randomized controlled trial investigates the effects of integrating components of Emotion-Focused Therapy (EFT) into Psychological Therapy (PT), an integrative form of cognitive-behavioral therapy in a manner that is directly mirroring common integrative practice in the sense of assimilative integration. Aims of the study are to understand how both, an existing therapy approach as well as the elements to be integrated, are affected by the integration and to clarify the role of emotional processing as a mediator of therapy outcome. A total of 130 adults with a diagnosed unipolar depressive, anxiety or adjustment disorder (seeking treatment at a psychotherapy outpatient clinic) are randomized to either treatment as usual (PT) with integrated emotion-focused components (TAU + EFT) or PT (TAU). Primary outcome variables are psychopathology and symptom severity at the end of therapy and at follow up; secondary outcome variables are interpersonal problems, psychological wellbeing, quality of life, attainment of individual therapy goals, and emotional competency. Furthermore, process variables such as the quality of the therapeutic relationship are studied as well as aptitude-treatment interactions. Variables are assessed at baseline, after 8 and 16 sessions, at the end of therapy, after 25 ± 3 sessions, and at 6, 12 and 36 month follow-up. Underlying mechanisms of change are investigated. Statistical analyses will be conducted using the appropriate multilevel approaches, mainly two-level regression and growth analysis. The results of this study will indicate whether the integration of emotion-focused elements into treatment as usual increases the effectiveness of Psychological Therapy. If advantages are found, which may be limited to particular variables or subgroups of patients, recommendations for a systematic integration, and caveats if also disadvantages are detected, can be formulated. On a more abstract level, a cognitive behavioral (represented by PT) and humanistic/experiential (represented by EFT) approach will be integrated. It must be emphasized that mimicking common practice in the development and continued education of psychotherapists, EFT is not integrated as a whole, but only elements of EFT that are considered particularly important, and can be trained in an 8-day training plus supervision of therapies. ClinicalTrials.gov, NCT02822443 , 22 June 2016, retrospectively registered.

  10. Stationary responses of a Rayleigh viscoelastic system with zero barrier impacts under external random excitation.

    PubMed

    Wang, Deli; Xu, Wei; Zhao, Xiangrong

    2016-03-01

    This paper aims to deal with the stationary responses of a Rayleigh viscoelastic system with zero barrier impacts under external random excitation. First, the original stochastic viscoelastic system is converted to an equivalent stochastic system without viscoelastic terms by approximately adding the equivalent stiffness and damping. Relying on the means of non-smooth transformation of state variables, the above system is replaced by a new system without an impact term. Then, the stationary probability density functions of the system are observed analytically through stochastic averaging method. By considering the effects of the biquadratic nonlinear damping coefficient and the noise intensity on the system responses, the effectiveness of the theoretical method is tested by comparing the analytical results with those generated from Monte Carlo simulations. Additionally, it does deserve attention that some system parameters can induce the occurrence of stochastic P-bifurcation.

  11. Theory and generation of conditional, scalable sub-Gaussian random fields

    NASA Astrophysics Data System (ADS)

    Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.

    2016-03-01

    Many earth and environmental (as well as a host of other) variables, Y, and their spatial (or temporal) increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture key aspects of such non-Gaussian scaling by treating Y and/or ΔY as sub-Gaussian random fields (or processes). This however left unaddressed the empirical finding that whereas sample frequency distributions of Y tend to display relatively mild non-Gaussian peaks and tails, those of ΔY often reveal peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we proposed a generalized sub-Gaussian model (GSG) which resolves this apparent inconsistency between the statistical scaling behaviors of observed variables and their increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. Most importantly, we demonstrated the feasibility of estimating all parameters of a GSG model underlying a single realization of Y by analyzing jointly spatial moments of Y data and corresponding increments, ΔY. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of corresponding isotropic or anisotropic random fields, introduce two approximate versions of this algorithm to reduce CPU time, and explore them on one and two-dimensional synthetic test cases.
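
    A simplified sketch of the sub-Gaussian idea (not the authors' GSG algorithm, which uses a spatially varying subordinator and conditioning on data): multiplying a Gaussian random field by the square root of an independent positive random variable produces realizations with heavier-than-Gaussian tails.

      import numpy as np

      rng = np.random.default_rng(8)

      # 1-D grid and an isotropic exponential covariance for the underlying Gaussian field G
      x = np.linspace(0.0, 10.0, 200)
      cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 1.5)
      Lchol = np.linalg.cholesky(cov + 1e-10 * np.eye(x.size))

      def gaussian_field():
          return Lchol @ rng.standard_normal(x.size)

      def sub_gaussian_field(sigma_w=0.8):
          # multiply G by the square root of a positive subordinator (lognormal, unit mean);
          # the GSG model instead lets the subordinator vary in space, so this is a simplified global version
          w = rng.lognormal(mean=-0.5 * sigma_w**2, sigma=sigma_w)
          return np.sqrt(w) * gaussian_field()

      # compare excess kurtosis of many realizations at a single grid point
      samples_g = np.array([gaussian_field()[100] for _ in range(4000)])
      samples_s = np.array([sub_gaussian_field()[100] for _ in range(4000)])
      kurt = lambda s: np.mean((s - s.mean()) ** 4) / s.var() ** 2 - 3.0
      print("excess kurtosis, Gaussian     :", round(kurt(samples_g), 2))   # near 0
      print("excess kurtosis, sub-Gaussian :", round(kurt(samples_s), 2))   # positive (heavier tails)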

  12. Harnessing the Bethe free energy†

    PubMed Central

    Bapst, Victor

    2016-01-01

    ABSTRACT A wide class of problems in combinatorics, computer science and physics can be described along the following lines. There are a large number of variables ranging over a finite domain that interact through constraints that each bind a few variables and either encourage or discourage certain value combinations. Examples include the k‐SAT problem or the Ising model. Such models naturally induce a Gibbs measure on the set of assignments, which is characterised by its partition function. The present paper deals with the partition function of problems where the interactions between variables and constraints are induced by a sparse random (hyper)graph. According to physics predictions, a generic recipe called the “replica symmetric cavity method” yields the correct value of the partition function if the underlying model enjoys certain properties [Krzkala et al., PNAS (2007) 10318–10323]. Guided by this conjecture, we prove general sufficient conditions for the success of the cavity method. The proofs are based on a “regularity lemma” for probability measures on sets of the form Ωn for a finite Ω and a large n that may be of independent interest. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 49, 694–741, 2016 PMID:28035178

  13. Low Probability of Intercept Waveforms via Intersymbol Dither Performance Under Multiple Conditions

    DTIC Science & Technology

    2009-03-01

    Only fragments of the record are available: D, a random variable governing the distribution of dither values; p_D(t), the probability density function of the... potential performance loss of a non-cooperative receiver compared to a cooperative receiver designed to account for ISI and multipath.

  14. Low Probability of Intercept Waveforms via Intersymbol Dither Performance Under Multipath Conditions

    DTIC Science & Technology

    2009-03-01

    Only fragments of the record are available: D, a random variable governing the distribution of dither values; p_D(t), the probability density function of the... potential performance loss of a non-cooperative receiver compared to a cooperative receiver designed to account for ISI and multipath.

  15. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    NASA Astrophysics Data System (ADS)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.

  16. High performance frame synchronization for continuous variable quantum key distribution systems.

    PubMed

    Lin, Dakai; Huang, Peng; Huang, Duan; Wang, Chao; Peng, Jinye; Zeng, Guihua

    2015-08-24

    Considering a practical continuous-variable quantum key distribution (CVQKD) system, synchronization is of significant importance, as it is hardly possible to extract secret keys from unsynchronized strings. In this paper, we propose a high-performance frame synchronization method for CVQKD systems which is capable of operating under low signal-to-noise ratio (SNR) conditions and is compatible with the random phase shift induced by the quantum channel. A practical implementation of this method with low complexity is presented and its performance is analysed. By adjusting the length of the synchronization frame, this method can work well over a large range of SNR values, which paves the way for longer-distance CVQKD.

  17. Analysis of force profile during a maximum voluntary isometric contraction task.

    PubMed

    Househam, Elizabeth; McAuley, John; Charles, Thompson; Lightfoot, Timothy; Swash, Michael

    2004-03-01

    This study analyses maximum voluntary isometric contraction (MVIC) and its measurement by recording the force profile during maximal-effort, 7-s hand-grip contractions. Six healthy subjects each performed three trials repeated at short intervals to study variation from fatigue. These three trials were performed during three separate sessions at daily intervals to look at random variation. A pattern of force development during a trial was identified. An initiation phase, with or without an initiation peak, was followed by a maintenance phase, sometimes with secondary pulses and an underlying decline in force. Of these three MVIC parameters, maximum force during the maintenance phase showed less random variability compared to intertrial fatigue variability than did maximum force during the initiation phase or absolute maximum force. Analysis of MVIC as a task, rather than a single, maximal value reveals deeper levels of motor control in its generation. Thus, force parameters other than the absolute maximum force may be better suited to quantification of muscle performance in health and disease.

  18. A Paradox-based data collection and management system for multi-center randomized clinical trials.

    PubMed

    Abdellatif, Mazen; Reda, Domenic J

    2004-02-01

    We have developed a Paradox-based data collection and management system for large-scale multi-site randomized clinical trials. The system runs under Windows operating system and integrates Symantec pcAnywhere32 telecommunications software for data transmission and remote control sessions, PKZIP utility for the compression/decompression of transmitted data, and Stat/Transfer for exporting the centralized Paradox database for analyses. We initially developed this system for VA Cooperative Study #399 'The Effect of Antiarrhythmic Therapy in Maintaining Stability of Sinus Rhythm in Atrial Fibrillation', which collects over 1000 variables on 706 patients at 20 sites. Patient intake for this 5-year study began in March of 1998. We have also developed an enhanced version of this system, which is being used in the NIH-funded 'Glucosamine/Chondroitin Arthritis Intervention Trial (GAIT)' that collects over 1200 variables on 1588 patients at 13 sites. Patient intake for this 4-year study began in October of 2000.

  19. A stochastic-geometric model of soil variation in Pleistocene patterned ground

    NASA Astrophysics Data System (ADS)

    Lark, Murray; Meerschman, Eef; Van Meirvenne, Marc

    2013-04-01

    In this paper we examine the spatial variability of soil in parent material with complex spatial structure which arises from complex non-linear geomorphic processes. We show that this variability can be better-modelled by a stochastic-geometric model than by a standard Gaussian random field. The benefits of the new model are seen in the reproduction of features of the target variable which influence processes like water movement and pollutant dispersal. Complex non-linear processes in the soil give rise to properties with non-Gaussian distributions. Even under a transformation to approximate marginal normality, such variables may have a more complex spatial structure than the Gaussian random field model of geostatistics can accommodate. In particular the extent to which extreme values of the variable are connected in spatially coherent regions may be misrepresented. As a result, for example, geostatistical simulation generally fails to reproduce the pathways for preferential flow in an environment where coarse infill of former fluvial channels or coarse alluvium of braided streams creates pathways for rapid movement of water. Multiple point geostatistics has been developed to deal with this problem. Multiple point methods proceed by sampling from a set of training images which can be assumed to reproduce the non-Gaussian behaviour of the target variable. The challenge is to identify appropriate sources of such images. In this paper we consider a mode of soil variation in which the soil varies continuously, exhibiting short-range lateral trends induced by local effects of the factors of soil formation which vary across the region of interest in an unpredictable way. The trends in soil variation are therefore only apparent locally, and the soil variation at regional scale appears random. We propose a stochastic-geometric model for this mode of soil variation called the Continuous Local Trend (CLT) model. We consider a case study of soil formed in relict patterned ground with pronounced lateral textural variations arising from the presence of infilled ice-wedges of Pleistocene origin. We show how knowledge of the pedogenetic processes in this environment, along with some simple descriptive statistics, can be used to select and fit a CLT model for the apparent electrical conductivity (ECa) of the soil. We use the model to simulate realizations of the CLT process, and compare these with realizations of a fitted Gaussian random field. We show how statistics that summarize the spatial coherence of regions with small values of ECa, which are expected to have coarse texture and so larger saturated hydraulic conductivity, are better reproduced by the CLT model than by the Gaussian random field. This suggests that the CLT model could be used to generate an unlimited supply of training images to allow multiple point geostatistical simulation or prediction of this or similar variables.

  20. Regression Discontinuity for Causal Effect Estimation in Epidemiology.

    PubMed

    Oldenburg, Catherine E; Moscoe, Ellen; Bärnighausen, Till

    Regression discontinuity analyses can generate estimates of the causal effects of an exposure when a continuously measured variable is used to assign the exposure to individuals based on a threshold rule. Individuals just above the threshold are expected to be similar in their distribution of measured and unmeasured baseline covariates to individuals just below the threshold, resulting in exchangeability. At the threshold exchangeability is guaranteed if there is random variation in the continuous assignment variable, e.g., due to random measurement error. Under exchangeability, causal effects can be identified at the threshold. The regression discontinuity intention-to-treat (RD-ITT) effect on an outcome can be estimated as the difference in the outcome between individuals just above (or below) versus just below (or above) the threshold. This effect is analogous to the ITT effect in a randomized controlled trial. Instrumental variable methods can be used to estimate the effect of exposure itself utilizing the threshold as the instrument. We review the recent epidemiologic literature reporting regression discontinuity studies and find that while regression discontinuity designs are beginning to be utilized in a variety of applications in epidemiology, they are still relatively rare, and analytic and reporting practices vary. Regression discontinuity has the potential to greatly contribute to the evidence base in epidemiology, in particular on the real-life and long-term effects and side-effects of medical treatments that are provided based on threshold rules - such as treatments for low birth weight, hypertension or diabetes.
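
    A toy sketch of the sharp-design RD-ITT estimate described above: fit local linear regressions just below and just above the threshold and take the difference of the fitted values at the threshold. The data-generating process and bandwidth are invented; a fuzzy design would additionally use the threshold as an instrument.

      import numpy as np

      rng = np.random.default_rng(9)
      n = 20_000
      z = rng.normal(size=n)                                  # continuous assignment variable
      treated = (z >= 0).astype(float)                        # threshold rule at 0
      y = 1.0 + 0.3 * z + 0.5 * treated + rng.normal(0, 1, n) # true effect at the threshold = 0.5

      def rd_itt(z, y, bandwidth=0.5):
          """Local linear RD-ITT: difference of the two fitted intercepts at the threshold."""
          est = {}
          for side, mask in [("below", (z < 0) & (z > -bandwidth)),
                             ("above", (z >= 0) & (z < bandwidth))]:
              X = np.column_stack([np.ones(mask.sum()), z[mask]])
              est[side] = np.linalg.lstsq(X, y[mask], rcond=None)[0][0]   # intercept at z = 0
          return est["above"] - est["below"]

      print("RD-ITT estimate near the threshold:", round(rd_itt(z, y), 3))   # close to 0.5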

  1. Examining solutions to missing data in longitudinal nursing research.

    PubMed

    Roberts, Mary B; Sullivan, Mary C; Winchester, Suzy B

    2017-04-01

    Longitudinal studies are highly valuable in pediatrics because they provide useful data about developmental patterns of child health and behavior over time. When data are missing, the value of the research is impacted. The study's purpose was to (1) introduce a three-step approach to assess and address missing data and (2) illustrate this approach using categorical and continuous-level variables from a longitudinal study of premature infants. A three-step approach with simulations was followed to assess the amount and pattern of missing data and to determine the most appropriate imputation method for the missing data. Patterns of missingness were Missing Completely at Random, Missing at Random, and Not Missing at Random. Missing continuous-level data were imputed using mean replacement, stochastic regression, multiple imputation, and fully conditional specification (FCS). Missing categorical-level data were imputed using last value carried forward, hot-decking, stochastic regression, and FCS. Simulations were used to evaluate these imputation methods under different patterns of missingness at different levels of missing data. The rate of missingness was 16-23% for continuous variables and 1-28% for categorical variables. FCS imputation provided the least difference in mean and standard deviation estimates for continuous measures. FCS imputation was acceptable for categorical measures. Results obtained through simulation reinforced and confirmed these findings. Significant investments are made in the collection of longitudinal data. The prudent handling of missing data can protect these investments and potentially improve the scientific information contained in pediatric longitudinal studies. © 2017 Wiley Periodicals, Inc.
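
    A minimal illustration of chained-equation (FCS-style) imputation, assuming scikit-learn's IterativeImputer as a stand-in for the study's software; a full multiple-imputation analysis would repeat the imputation several times and pool the estimates. Data and missingness are simulated.

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (activates the estimator)
      from sklearn.impute import IterativeImputer

      rng = np.random.default_rng(10)
      n = 400
      weight = rng.normal(3.0, 0.6, n)                       # e.g. birth weight (kg)
      score = 20 + 5 * weight + rng.normal(0, 2, n)          # e.g. a developmental score

      data = np.column_stack([weight, score])
      mcar = rng.random(n) < 0.20                            # 20% of scores missing completely at random
      data_missing = data.copy()
      data_missing[mcar, 1] = np.nan

      # chained-equations style imputation (each variable regressed on the others, iterated),
      # which is the idea behind fully conditional specification (FCS)
      imputer = IterativeImputer(sample_posterior=True, random_state=0, max_iter=20)
      completed = imputer.fit_transform(data_missing)

      print("true mean of score  :", round(data[:, 1].mean(), 2))
      print("complete-case mean  :", round(data[~mcar, 1].mean(), 2))
      print("FCS-imputed mean    :", round(completed[:, 1].mean(), 2))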

  2. The effects of demand uncertainty on strategic gaming in the merit-order electricity pool market

    NASA Astrophysics Data System (ADS)

    Frem, Bassam

    In a merit-order electricity pool market, generating companies (Gencos) game with their offered incremental cost to meet the electricity demand and earn bigger market shares and higher profits. However when the demand is treated as a random variable instead of as a known constant, these Genco gaming strategies become more complex. After a brief introduction of electricity markets and gaming, the effects of demand uncertainty on strategic gaming are studied in two parts: (1) Demand modelled as a discrete random variable (2) Demand modelled as a continuous random variable. In the first part, we proposed an algorithm, the discrete stochastic strategy (DSS) algorithm that generates a strategic set of offers from the perspective of the Gencos' profits. The DSS offers were tested and compared to the deterministic Nash equilibrium (NE) offers based on the predicted demand. This comparison, based on the expected Genco profits, showed the DSS to be a better strategy in a probabilistic sense than the deterministic NE. In the second part, we presented three gaming strategies: (1) Deterministic NE (2) No-Risk (3) Risk-Taking. The strategies were then tested and their profit performances were compared using two assessment tools: (a) Expected value and standard deviation (b) Inverse cumulative distribution. We concluded that despite yielding higher profit performance under the right conjectures, Risk-Taking strategies are very sensitive to incorrect conjectures on the competitors' gaming decisions. As such, despite its lower profit performance, the No-Risk strategy was deemed preferable.

  3. Cognitive mechanisms of sleep outcomes in a randomized clinical trial of internet-based cognitive behavioral therapy for insomnia.

    PubMed

    Chow, Philip I; Ingersoll, Karen S; Thorndike, Frances P; Lord, Holly R; Gonder-Frederick, Linda; Morin, Charles M; Ritterband, Lee M

    2018-07-01

The aim of this study was to investigate in a randomized clinical trial the role of sleep-related cognitive variables in the long-term efficacy of an online, fully automated cognitive behavioral therapy intervention for insomnia (CBT-I) (Sleep Healthy Using the Internet [SHUTi]). Three hundred and three participants (M age = 43.3 years; SD = 11.6) were randomly assigned to SHUTi or an online patient education condition and assessed at baseline, postintervention (nine weeks after baseline), and six and 12 months after the intervention period. Cognitive variables were self-reported internal and chance sleep locus of control, dysfunctional beliefs and attitudes about sleep (DBAS), sleep specific self-efficacy, and insomnia knowledge. Primary outcomes were self-reported online ratings of insomnia severity (Insomnia Severity Index), and sleep onset latency and wake after sleep onset from online sleep diaries, collected 12 months after the intervention period. Those who received SHUTi had, at postassessment, higher levels of insomnia knowledge (95% confidence interval [CI] = 0.10-0.16) and internal sleep locus of control (95% CI = 0.04-0.55) as well as lower DBAS (95% CI = 1.52-2.39) and sleep locus of control attributed to chance (95% CI = 0.15-0.71). Insomnia knowledge, chance sleep locus of control, and DBAS mediated the relationship between condition and at least one 12-month postassessment sleep outcome. Within the SHUTi condition, changes in each cognitive variable (with the exception of internal sleep locus of control) predicted improvement in at least one sleep outcome one year later. Online CBT-I may reduce the enormous public health burden of insomnia by changing underlying cognitive variables that lead to long-term changes in sleep outcomes. Published by Elsevier B.V.

  4. Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes

    NASA Astrophysics Data System (ADS)

    Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.

    2016-12-01

    The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While a lot of efforts have been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has been made so far to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolutional characteristics of the small-scale variability. The estimation procedure is computationally feasible due to the utilization of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model results in characterizing significantly more energy associated with the small-scale ionospheric electric field variability in comparison to Gaussian models. By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling approach will provide a viable remedy to the current numerical models' systematic biases resulting from the underestimation of high-latitude energy and momentum sources.

  5. Qualitatively Assessing Randomness in SVD Results

    NASA Astrophysics Data System (ADS)

    Lamb, K. W.; Miller, W. P.; Kalra, A.; Anderson, S.; Rodriguez, A.

    2012-12-01

    Singular Value Decomposition (SVD) is a powerful tool for identifying regions of significant co-variability between two spatially distributed datasets. SVD has been widely used in atmospheric research to define relationships between sea surface temperatures, geopotential height, wind, precipitation and streamflow data for myriad regions across the globe. A typical application for SVD is to identify leading climate drivers (as observed in the wind or pressure data) for a particular hydrologic response variable such as precipitation, streamflow, or soil moisture. One can also investigate the lagged relationship between a climate variable and the hydrologic response variable using SVD. When performing these studies it is important to limit the spatial bounds of the climate variable to reduce the chance of random co-variance relationships being identified. On the other hand, a climate region that is too small may ignore climate signals which have more than a statistical relationship to a hydrologic response variable. The proposed research seeks to identify a qualitative method of identifying random co-variability relationships between two data sets. The research identifies the heterogeneous correlation maps from several past results and compares these results with correlation maps produced using purely random and quasi-random climate data. The comparison identifies a methodology to determine if a particular region on a correlation map may be explained by a physical mechanism or is simply statistical chance.
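
    The sketch below illustrates the basic comparison described above: compute the squared covariance fraction of the leading SVD mode of the cross-covariance between two space-time fields, then repeat the calculation with purely random surrogate fields to see how large that fraction can be by chance alone. The field sizes and the white-noise surrogates are assumptions for illustration, not the authors' datasets or exact procedure.

    ```python
    # Minimal sketch: SVD of the cross-covariance between two space-time fields,
    # repeated with random surrogates to gauge chance co-variability.
    # Field dimensions are hypothetical (n_time x n_grid).
    import numpy as np

    rng = np.random.default_rng(42)
    n_time, n_x, n_y = 60, 100, 40

    def scf_leading(left, right):
        """Squared covariance fraction of the leading SVD mode."""
        left = left - left.mean(axis=0)
        right = right - right.mean(axis=0)
        cov = left.T @ right / (left.shape[0] - 1)   # cross-covariance matrix
        s = np.linalg.svd(cov, compute_uv=False)     # singular values
        return s[0] ** 2 / np.sum(s ** 2)

    climate = rng.standard_normal((n_time, n_x))   # stand-in for SST/pressure anomalies
    hydro = rng.standard_normal((n_time, n_y))     # stand-in for streamflow anomalies
    print("observed-fields SCF1:", scf_leading(climate, hydro))

    # Null distribution from purely random fields: leading-mode SCF values of
    # comparable size suggest the observed pattern could be statistical chance.
    null = [scf_leading(rng.standard_normal((n_time, n_x)),
                        rng.standard_normal((n_time, n_y))) for _ in range(200)]
    print("95th percentile of random SCF1:", np.quantile(null, 0.95))
    ```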

  6. Effect of information, uncertainty and parameter variability on profits in a queue with various pricing strategies

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Li, Shiyong

    2014-08-01

This paper presents an unobservable single-server queueing system with three types of uncertainty, where the service rate, waiting cost or service quality is a random variable that may take n (n > 2) values. The information about the realised values of the parameters is known only to the server. We are concerned with the server's behaviour: revealing or concealing the information to customers. The n-value assumption and the server's behaviour enable us to consider various pricing strategies. In this paper, we analyse the effect of information and uncertainty on profits and compare the profits under different pricing strategies. Moreover, regarding parameter variability, reflected by the number n of each parameter's possible values, we observe the effect of n on all types of profits and find that revealing the parameter information benefits the server increasingly as n grows.

  7. Variabilities detected by acoustic emission from filament-wound Aramid fiber/epoxy composite pressure vessels

    NASA Technical Reports Server (NTRS)

    Hamstad, M. A.

    1978-01-01

Two hundred and fifty Aramid fiber/epoxy pressure vessels were filament-wound over spherical aluminum mandrels under controlled conditions typical for advanced filament-winding. A random set of 30 vessels was proof-tested to 74% of the expected burst pressure; acoustic emission data were obtained during the proof test. A specially designed fixture was used to permit in situ calibration of the acoustic emission system for each vessel by the fracture of a 4-mm length of pencil lead (0.3 mm in diameter) which was in contact with the vessel. Acoustic emission signatures obtained during testing showed larger than expected variabilities in the mechanical damage done during the proof tests. To date, the cause of these variabilities has not been identified.

  8. Controlling dust from concrete saw cutting.

    PubMed

    Shepherd, Susan; Woskie, Susan

    2013-01-01

    Cutting concrete with gas-powered saws is ubiquitous in the construction industry and a source of exposure to respirable crystalline silica. Volunteers from the New England Laborers Training Center were recruited to participate in a field experiment examining dust reductions through the use of water, from a hose and from a sprayer, as a dust control. In four series of tests, reinforced concrete pipe was cut under both "dry" and "wet" control conditions. Overall, the geometric mean respirable dust concentration for "dry" cutting (14.396 mg/m³) exceeded both types of water-based controls by more than tenfold. Wet cutting reduced the respirable dust concentration by 85% compared with dry cutting when comparing tests paired by person and saw blade (n = 79 pairs). Using a respirable cyclone, a total of 178 samples were taken. Due to the high variability in dust exposure found in this and other studies of saw cutting, the data were examined for potential exposure determinants that contribute to that variability. Using mixed models, three fixed effects were statistically significant: control condition, worker experience, and location. A random effect for subject was included in the model to account for repeated measures. When each of the significant fixed effects was included with the random effect, it was apparent that inclusion of worker experience or location reduced the between-worker component of exposure variability, while inclusion of control condition (wet vs. dry) explained a large portion of the within-subject variability. Overall, the fixed effect variable for control condition explained the largest fraction of the total exposure variability.
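
    A minimal sketch of the kind of mixed model described above: fixed effects for control condition, worker experience and location, plus a random intercept per worker for the repeated measures, fitted to simulated log-transformed dust concentrations. The data-generating values and column names are invented, and statsmodels' MixedLM stands in for whatever software the authors used.

    ```python
    # Sketch of a mixed-effects model with a random intercept per worker,
    # mirroring the fixed effects named above. Data and column names are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_workers, n_reps = 12, 8
    worker = np.repeat(np.arange(n_workers), n_reps)
    wet = rng.integers(0, 2, worker.size)                 # control condition (wet=1)
    experience = rng.integers(1, 20, n_workers)[worker]   # years of experience
    location = rng.integers(0, 2, worker.size)            # test location
    worker_effect = rng.normal(0, 0.3, n_workers)[worker] # between-worker variability
    log_dust = 2.0 - 2.3 * wet + 0.02 * experience + 0.1 * location \
               + worker_effect + rng.normal(0, 0.4, worker.size)

    df = pd.DataFrame({"log_dust": log_dust, "wet": wet, "experience": experience,
                       "location": location, "worker": worker})

    # Fixed effects: control condition, experience, location.
    # Random effect: subject (worker) intercept for the repeated measures.
    model = smf.mixedlm("log_dust ~ wet + experience + location", df,
                        groups=df["worker"])
    print(model.fit().summary())
    ```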

  9. Cross-country transferability of multi-variable damage models

    NASA Astrophysics Data System (ADS)

    Wagenaar, Dennis; Lüdtke, Stefan; Kreibich, Heidi; Bouwer, Laurens

    2017-04-01

    Flood damage assessment is often done with simple damage curves based only on flood water depth. Additionally, damage models are often transferred in space and time, e.g. from region to region or from one flood event to another. Validation has shown that depth-damage curve estimates are associated with high uncertainties, particularly when applied in regions outside the area where the data for curve development was collected. Recently, progress has been made with multi-variable damage models created with data-mining techniques, i.e. Bayesian Networks and random forest. However, it is still unknown to what extent and under which conditions model transfers are possible and reliable. Model validations in different countries will provide valuable insights into the transferability of multi-variable damage models. In this study we compare multi-variable models developed on basis of flood damage datasets from Germany as well as from The Netherlands. Data from several German floods was collected using computer aided telephone interviews. Data from the 1993 Meuse flood in the Netherlands is available, based on compensations paid by the government. The Bayesian network and random forest based models are applied and validated in both countries on basis of the individual datasets. A major challenge was the harmonization of the variables between both datasets due to factors like differences in variable definitions, and regional and temporal differences in flood hazard and exposure characteristics. Results of model validations and comparisons in both countries are discussed, particularly in respect to encountered challenges and possible solutions for an improvement of model transferability.
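
    As a rough illustration of a multi-variable damage model and its transfer, the sketch below trains a random forest on one synthetic "country" dataset and validates it on another with a shifted water-depth distribution. The predictor set (depth, duration, building area, precaution) and the synthetic damage rule are assumptions; the real study used harmonized survey and compensation data and also compared Bayesian networks.

    ```python
    # Sketch of a multi-variable damage model (random forest) trained on one
    # dataset and validated on another, mimicking a cross-country transfer test.
    # Features, sample sizes and the damage-generating rule are hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(7)

    def synthetic_flood_data(n, depth_scale):
        depth = rng.gamma(2.0, depth_scale, n)          # water depth [m]
        duration = rng.gamma(2.0, 24.0, n)              # inundation duration [h]
        area = rng.normal(120, 30, n)                   # building area [m2]
        precaution = rng.integers(0, 2, n)              # precautionary measures taken
        damage = 5000 * depth + 20 * duration + 10 * area \
                 - 3000 * precaution + rng.normal(0, 2000, n)
        X = np.column_stack([depth, duration, area, precaution])
        return X, np.clip(damage, 0, None)

    X_de, y_de = synthetic_flood_data(800, depth_scale=0.5)   # "Germany"
    X_nl, y_nl = synthetic_flood_data(300, depth_scale=0.8)   # "Netherlands"

    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_de, y_de)
    print("within-country MAE:", mean_absolute_error(y_de, rf.predict(X_de)))
    print("transferred MAE:   ", mean_absolute_error(y_nl, rf.predict(X_nl)))
    ```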

  10. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    PubMed

    Chu, Hui-May; Ette, Ene I

    2005-09-02

This study was performed to develop a new nonparametric approach for the estimation of robust tissue-to-plasma ratio from extremely sparsely sampled paired data (ie, one sample each from plasma and tissue per subject). Tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (eg, the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to 2 concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with measures of uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.

  11. Design approaches to experimental mediation☆

    PubMed Central

    Pirlott, Angela G.; MacKinnon, David P.

    2016-01-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., “measurement-of-mediation” designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable. PMID:27570259

  12. Design approaches to experimental mediation.

    PubMed

    Pirlott, Angela G; MacKinnon, David P

    2016-09-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., "measurement-of-mediation" designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable.

  13. Controls on the spatial variability of key soil properties: comparing field data with a mechanistic soilscape evolution model

    NASA Astrophysics Data System (ADS)

    Vanwalleghem, T.; Román, A.; Giraldez, J. V.

    2016-12-01

There is a need to better understand the processes influencing soil formation and the resulting distribution of soil properties. Soil properties can exhibit strong spatial variation, even at the small catchment scale. Soil carbon pools in semi-arid, mountainous areas in particular are highly uncertain because bulk density and stoniness are very heterogeneous and rarely measured explicitly. In this study, we explore the spatial variability in key soil properties (soil carbon stocks, stoniness, bulk density and soil depth) as a function of processes shaping the critical zone (weathering, erosion, soil water fluxes and vegetation patterns). We also compare the potential of a geostatistical versus a mechanistic soil formation model (MILESD) for predicting these key soil properties. Soil core samples were collected from 67 locations at 6 depths. Total soil organic carbon stocks were 4.38 kg m-2. Solar radiation proved to be the key variable controlling soil carbon distribution. Stone content was mostly controlled by slope, indicating the importance of erosion. The spatial distribution of bulk density was found to be highly random. Finally, total carbon stocks were predicted using a random forest model whose main covariates were solar radiation and NDVI. The model predicts carbon stocks that are twice as high on north-facing versus south-facing slopes. However, validation showed that these covariates only explained 25% of the variation in the dataset. Apparently, present-day landscape and vegetation properties are not sufficient to fully explain variability in the soil carbon stocks in this complex terrain under natural vegetation. This is attributed to a high spatial variability in bulk density and stoniness, key variables controlling carbon stocks. Similar results were obtained with the mechanistic soil formation model MILESD, suggesting that more complex models might be needed to further explore this high spatial variability.

  14. Optimizing Constrained Single Period Problem under Random Fuzzy Demand

    NASA Astrophysics Data System (ADS)

    Taleizadeh, Ata Allah; Shavandi, Hassan; Riazi, Afshin

    2008-09-01

In this paper, we consider the multi-product, multi-constraint newsboy problem with random fuzzy demands and total discount. Product demand is often stochastic in the real world, but the parameters of its distribution function may be estimated in a fuzzy manner. Thus, an appropriate option for modeling product demand is the random fuzzy variable. The objective of the proposed model is to maximize the newsboy's expected profit. We consider constraints such as warehouse space, restrictions on product order quantities, and a budget restriction. We also consider batch sizes for product orders. We then formulate a random fuzzy multi-product multi-constraint newsboy problem (RFM-PM-CNP), which is transformed into a multi-objective mixed-integer nonlinear programming model. Furthermore, a hybrid intelligent algorithm based on a genetic algorithm, Pareto and TOPSIS is presented for the developed model. Finally, an illustrative example is presented to show the performance of the developed model and algorithm.

  15. Compliance-Effect Correlation Bias in Instrumental Variables Estimators

    ERIC Educational Resources Information Center

    Reardon, Sean F.

    2010-01-01

    Instrumental variable estimators hold the promise of enabling researchers to estimate the effects of educational treatments that are not (or cannot be) randomly assigned but that may be affected by randomly assigned interventions. Examples of the use of instrumental variables in such cases are increasingly common in educational and social science…

  16. Investigation of hit-and-run crash occurrence and severity using real-time loop detector data and hierarchical Bayesian binary logit model with random effects.

    PubMed

    Xie, Meiquan; Cheng, Wen; Gill, Gurdiljot Singh; Zhou, Jiao; Jia, Xudong; Choi, Simon

    2018-02-17

    Most of the extensive research dedicated to identifying the influential factors of hit-and-run (HR) crashes has utilized typical maximum likelihood estimation binary logit models, and none have employed real-time traffic data. To fill this gap, this study focused on investigating factors contributing to HR crashes, as well as the severity levels of HR. This study analyzed 4-year crash and real-time loop detector data by employing hierarchical Bayesian models with random effects within a sequential logit structure. In addition to evaluation of the impact of random effects on model fitness and complexity, the prediction capability of the models was examined. Stepwise incremental sensitivity and specificity were calculated and receiver operating characteristic (ROC) curves were utilized to graphically illustrate the predictive performance of the model. Among the real-time flow variables, the average occupancy and speed from the upstream detector were observed to be positively correlated with HR crash possibility. The average upstream speed and speed difference between upstream and downstream speeds were correlated with the occurrence of severe HR crashes. In addition to real-time factors, other variables found influential for HR and severe HR crashes were length of segment, adverse weather conditions, dark lighting conditions with malfunctioning street lights, driving under the influence of alcohol, width of inner shoulder, and nighttime. This study suggests the potential traffic conditions of HR and severe HR occurrence, which refer to relatively congested upstream traffic conditions with high upstream speed and significant speed deviations on long segments. The above findings suggest that traffic enforcement should be directed toward mitigating risky driving under the aforementioned traffic conditions. Moreover, enforcement agencies may employ alcohol checkpoints to counter driving under the influence (DUI) at night. With regard to engineering improvements, wider inner shoulders may be constructed to potentially reduce HR cases and street lights should be installed and maintained in working condition to make roads less prone to such crashes.
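
    The sketch below shows the general flavour of a hierarchical Bayesian binary logit with random effects: here a random intercept per roadway segment and two standardized real-time flow covariates, fitted with PyMC on simulated data. It is not the authors' sequential-logit specification, and the variable names, priors and data-generating values are all assumptions.

    ```python
    # Minimal sketch (assuming PyMC >= 4) of a Bayesian binary logit with a
    # random intercept per roadway segment; data are simulated.
    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(3)
    n_seg, n_obs = 20, 400
    seg = rng.integers(0, n_seg, n_obs)
    occupancy = rng.normal(0, 1, n_obs)        # standardized upstream occupancy
    speed = rng.normal(0, 1, n_obs)            # standardized upstream speed
    true_a = rng.normal(0, 0.5, n_seg)
    p = 1 / (1 + np.exp(-(-1.0 + 0.6 * occupancy + 0.4 * speed + true_a[seg])))
    hit_and_run = rng.binomial(1, p)

    with pm.Model() as model:
        mu_a = pm.Normal("mu_a", 0.0, 1.0)
        sigma_a = pm.HalfNormal("sigma_a", 1.0)
        a = pm.Normal("a", mu_a, sigma_a, shape=n_seg)   # segment random effects
        b_occ = pm.Normal("b_occ", 0.0, 1.0)
        b_spd = pm.Normal("b_spd", 0.0, 1.0)
        logit_p = a[seg] + b_occ * occupancy + b_spd * speed
        pm.Bernoulli("y", logit_p=logit_p, observed=hit_and_run)
        idata = pm.sample(1000, tune=1000, chains=2, target_accept=0.9)
    ```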

  17. Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls

    NASA Astrophysics Data System (ADS)

    Guha Ray, A.; Baidya, D. K.

    2012-09-01

Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall highlights the fact that high sensitivity of a particular variable for a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (R_f) for each random variable based on the combined effects of the failure probability (P_f) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables for these failure modes. P_f is calculated by Monte Carlo simulation, and the sensitivity analysis of each random variable is carried out by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that R_f for the friction angle of the backfill soil (φ_1) increases and that for the cohesion of the foundation soil (c_2) decreases with an increase in the variation of φ_1, while R_f for the unit weights (γ_1 and γ_2) of both soils and for the friction angle of the foundation soil (φ_2) remains almost constant for variations of the soil properties. The results compared well with some of the existing deterministic and probabilistic methods and were found to be cost-effective. It is seen that if the variation of φ_1 remains within 5%, a significant reduction in cross-sectional area can be achieved, but if the variation is more than 7-8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
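
    To make the Monte Carlo step concrete, the sketch below estimates P_f for a single failure mode (sliding) of a hypothetical wall. The random variables, their distributions and the simplified limit-state function are illustrative assumptions, not the paper's formulation, and the F-test sensitivity step is omitted.

    ```python
    # Generic Monte Carlo sketch of a failure probability P_f for one failure mode
    # (sliding) of a gravity retaining wall. The limit-state function and the
    # distributions of phi_1, gamma_1 and c_2 are hypothetical.
    import numpy as np

    rng = np.random.default_rng(11)
    n = 200_000

    phi1 = np.radians(rng.normal(32.0, 1.6, n))   # backfill friction angle [rad]
    gamma1 = rng.normal(18.0, 0.5, n)             # backfill unit weight [kN/m3]
    c2 = rng.normal(25.0, 4.0, n)                 # foundation cohesion [kPa]
    W = 140.0                                     # wall weight per metre [kN/m]
    H, B = 6.0, 1.5                               # wall height and base width [m]

    Ka = np.tan(np.pi / 4 - phi1 / 2) ** 2        # Rankine active coefficient
    Pa = 0.5 * Ka * gamma1 * H ** 2               # active thrust [kN/m]
    resisting = W * np.tan(phi1) + c2 * B         # sliding resistance [kN/m]
    g = resisting - Pa                            # limit state: g < 0 means failure

    Pf = np.mean(g < 0)
    print(f"estimated P_f (sliding) = {Pf:.4f}")
    ```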

  18. Anderson localization for radial tree-like random quantum graphs

    NASA Astrophysics Data System (ADS)

    Hislop, Peter D.; Post, Olaf

    We prove that certain random models associated with radial, tree-like, rooted quantum graphs exhibit Anderson localization at all energies. The two main examples are the random length model (RLM) and the random Kirchhoff model (RKM). In the RLM, the lengths of each generation of edges form a family of independent, identically distributed random variables (iid). For the RKM, the iid random variables are associated with each generation of vertices and moderate the current flow through the vertex. We consider extensions to various families of decorated graphs and prove stability of localization with respect to decoration. In particular, we prove Anderson localization for the random necklace model.

  19. Does the central limit theorem always apply to phase noise? Some implications for radar problems

    NASA Astrophysics Data System (ADS)

    Gray, John E.; Addison, Stephen R.

    2017-05-01

The phase noise problem, or Rayleigh problem, occurs in all aspects of radar. It is an effect that a radar engineer or physicist always has to take into account as part of a design or in an attempt to characterize the physics of a problem such as reverberation. Normally, the mathematical difficulties of phase noise characterization are avoided by assuming the phase noise probability distribution function (PDF) is uniform, and the Central Limit Theorem (CLT) is invoked to argue that the superposition of relatively few random components obeys the CLT and hence can be treated as normally distributed. By formalizing the characterization of phase noise (see Gray and Alouani) for an individual random variable, the characteristic function (CF) of a sum of identically distributed random variables is the product of the individual CFs. This product of phase-noise CFs can be analyzed to understand the limitations of the CLT when applied to phase noise. We mirror Kolmogorov's original proof, as discussed in Papoulis, to show that the CLT can break down for receivers that gather limited amounts of data, as well as the circumstances under which it can fail for certain phase noise distributions. We then discuss the consequences of this for matched filter design as well as the implications for some physics problems.
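
    A small numerical illustration of the point: sum a handful of random phasors and check how far the in-phase component is from Gaussian (via excess kurtosis) for a uniform phase PDF versus a concentrated (von Mises) one. This is only a simulation sketch, not the characteristic-function analysis of the paper, and the chosen distributions and sample sizes are arbitrary.

    ```python
    # In-phase sum of N random phasors under two phase distributions, compared
    # against a normal distribution via excess kurtosis (zero for a Gaussian).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2017)
    n_trials = 100_000

    def inphase_sum(n_scatterers, phase_sampler):
        phases = phase_sampler((n_trials, n_scatterers))
        return np.cos(phases).sum(axis=1)

    for N in (3, 10, 50):
        uniform = inphase_sum(N, lambda size: rng.uniform(0, 2 * np.pi, size))
        narrow = inphase_sum(N, lambda size: rng.vonmises(0.0, 4.0, size))
        print(f"N={N:3d}  excess kurtosis: uniform={stats.kurtosis(uniform):+.3f}  "
              f"von Mises(kappa=4)={stats.kurtosis(narrow):+.3f}")
    ```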

  20. Using Atmospheric Circulation Patterns to Detect and Attribute Changes in the Risk of Extreme Climate Events

    NASA Astrophysics Data System (ADS)

    Diffenbaugh, N. S.; Horton, D. E.; Singh, D.; Swain, D. L.; Touma, D. E.; Mankin, J. S.

    2015-12-01

    Because of the high cost of extreme events and the growing evidence that global warming is likely to alter the statistical distribution of climate variables, detection and attribution of changes in the probability of extreme climate events has become a pressing topic for the scientific community, elected officials, and the public. While most of the emphasis has thus far focused on analyzing the climate variable of interest (most often temperature or precipitation, but also flooding and drought), there is an emerging emphasis on applying detection and attribution analysis techniques to the underlying physical causes of individual extreme events. This approach is promising in part because the underlying physical causes (such as atmospheric circulation patterns) can in some cases be more accurately represented in climate models than the more proximal climate variable (such as precipitation). In addition, and more scientifically critical, is the fact that the most extreme events result from a rare combination of interacting causes, often referred to as "ingredients". Rare events will therefore always have a strong influence of "natural" variability. Analyzing the underlying physical mechanisms can therefore help to test whether there have been changes in the probability of the constituent conditions of an individual event, or whether the co-occurrence of causal conditions cannot be distinguished from random chance. This presentation will review approaches to applying detection/attribution analysis to the underlying physical causes of extreme events (including both "thermodynamic" and "dynamic" causes), and provide a number of case studies, including the role of frequency of atmospheric circulation patterns in the probability of hot, cold, wet and dry events.

  1. Unbiased split variable selection for random survival forests using maximally selected rank statistics.

    PubMed

    Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas

    2017-04-15

The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives, if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.

  2. Simulation of multivariate stationary stochastic processes using dimension-reduction representation methods

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo

    2018-03-01

    In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing the multivariate stationary stochastic process with a few elementary random variables, bypassing the challenges of high-dimensional random variables inherent in the conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of horizontal wind velocity field along the deck of a large-span bridge is proceeded using the proposed methods containing 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
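
    For a flavour of the spectral-representation idea and its FFT acceleration, the sketch below synthesizes a single-variate stationary Gaussian process from an illustrative target one-sided power spectral density using random phases and one inverse FFT. It is far simpler than the multivariate dimension-reduction schemes of the paper (no POD and no random-function constraints), and the PSD shape is an arbitrary assumption.

    ```python
    # One-variate sketch of the spectral representation method accelerated with FFT:
    # synthesize a stationary Gaussian process from a target one-sided PSD.
    import numpy as np

    rng = np.random.default_rng(5)
    N, dt = 4096, 0.01                              # samples and time step [s]
    omega = 2 * np.pi * np.fft.rfftfreq(N, dt)      # one-sided angular frequencies [rad/s]
    domega = omega[1]                               # frequency increment [rad/s]
    S = 1.0 / (1.0 + (omega / (2 * np.pi * 5.0)) ** 4)   # illustrative one-sided PSD

    amplitudes = np.sqrt(2.0 * S * domega)          # classical SRM harmonic amplitudes
    phases = rng.uniform(0, 2 * np.pi, omega.size)  # independent uniform random phases

    # Assemble random-phase harmonics in the frequency domain so that a single
    # inverse FFT replaces the explicit cosine summation of the classical SRM.
    spectrum = (N / 2) * amplitudes * np.exp(1j * phases)
    spectrum[0] = 0.0                               # zero-mean process
    spectrum[-1] = 0.0                              # drop the Nyquist term
    x = np.fft.irfft(spectrum, n=N)

    print("sample variance:", x.var(),
          " target variance:", np.sum(S[1:-1]) * domega)
    ```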

  3. Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros

    ERIC Educational Resources Information Center

    Bancroft, Stacie L.; Bourret, Jason C.

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time.…
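
    For readers who want a quick way to generate comparable schedule values without spreadsheets, the sketch below (in Python rather than the Excel macros the article describes) draws random-ratio response requirements as geometric variates with a constant per-response reinforcement probability, and random-interval durations as exponential variates; the mean values used are arbitrary.

    ```python
    # Generate schedule values: random-ratio requirements and random-interval waits.
    import numpy as np

    rng = np.random.default_rng(8)

    def random_ratio(mean_ratio, n):
        # Each response is reinforced with probability 1/mean_ratio.
        return rng.geometric(1.0 / mean_ratio, n)

    def random_interval(mean_interval_s, n):
        # Reinforcement becomes available after exponentially distributed waits.
        return rng.exponential(mean_interval_s, n)

    print("RR 20 response requirements:", random_ratio(20, 10))
    print("RI 30 s intervals:", np.round(random_interval(30.0, 10), 1))
    ```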

  4. Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment

    NASA Astrophysics Data System (ADS)

    Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit

    2010-10-01

The purpose of the paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness subject to a predefined maximum risk tolerance and a minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables, and the vagueness of these fuzzy random variables is then transformed into fuzzy variables similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and the probability density function are obtained through fuzzy random simulation of past data.

  5. Random Variables: Simulations and Surprising Connections.

    ERIC Educational Resources Information Center

    Quinn, Robert J.; Tomlinson, Stephen

    1999-01-01

    Features activities for advanced second-year algebra students in grades 11 and 12. Introduces three random variables and considers an empirical and theoretical probability for each. Uses coins, regular dice, decahedral dice, and calculators. (ASK)

  6. Binomial leap methods for simulating stochastic chemical kinetics.

    PubMed

    Tian, Tianhai; Burrage, Kevin

    2004-12-01

    This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when larger stepsize is used. In this approach a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the numbers of molecules that undergo this reaction channel. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels. Samples for the total reaction number are not greater than the molecular number of this species. In addition, probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and significant improvement on efficiency over existing approaches. (c) 2004 American Institute of Physics.
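
    The sketch below captures the core idea for a single reaction channel: in each leap of size tau, the number of firings of A -> B is drawn from a binomial distribution over the A molecules present, so it can never drive the population negative. The rate constant, leap size and end time are arbitrary, and the full method also handles species shared between several channels and the stepsize conditions discussed above.

    ```python
    # Single-channel binomial leap for the isomerization A -> B with propensity c*A:
    # the reaction count per leap is Binomial(A, min(1, c*tau)), bounded by A.
    import numpy as np

    rng = np.random.default_rng(4)
    c, tau = 0.1, 0.5          # rate constant [1/s] and leap size [s]
    A, B, t = 1000, 0, 0.0

    while t < 20.0 and A > 0:
        p = min(1.0, c * tau)              # per-molecule firing probability in this leap
        k = rng.binomial(A, p)             # bounded reaction count for this leap
        A, B = A - k, B + k
        t += tau

    print(f"t={t:.1f}s  A={A}  B={B}  (deterministic A ~ {1000*np.exp(-0.1*t):.0f})")
    ```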

  7. Estimating under-five mortality in space and time in a developing world context.

    PubMed

    Wakefield, Jon; Fuglstad, Geir-Arne; Riebler, Andrea; Godwin, Jessica; Wilson, Katie; Clark, Samuel J

    2018-01-01

Accurate estimates of the under-five mortality rate in a developing world context are a key barometer of the health of a nation. This paper describes a new model to analyze survey data on mortality in this context. We are interested in both spatial and temporal description; that is, we wish to estimate the under-five mortality rate across regions and years and to investigate the association between the under-five mortality rate and spatially varying covariate surfaces. We illustrate the methodology by producing yearly estimates for subnational areas in Kenya over the period 1980-2014 using data from the Demographic and Health Surveys, which use stratified cluster sampling. We use a binomial likelihood with fixed effects for the urban/rural strata and random effects for the clustering to account for the complex survey design. Smoothing is carried out using Bayesian hierarchical models with continuous spatial and temporally discrete components. A key component of the model is an offset to adjust for bias due to the effects of HIV epidemics. Substantively, there has been a sharp decline in Kenya in the under-five mortality rate in the period 1980-2014, but large variability in estimated subnational rates remains. A priority for future research is understanding this variability. In exploratory work, we examine whether a variety of spatial covariate surfaces can explain the variability in under-five mortality rate. Temperature, precipitation, a measure of malaria infection prevalence, and a measure of nearness to cities were candidates for inclusion in the covariate model, but the interplay between space, time, and covariates is complex.

  8. Do bioclimate variables improve performance of climate envelope models?

    USGS Publications Warehouse

    Watling, James I.; Romañach, Stephanie S.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Pearlstine, Leonard G.; Mazzotti, Frank J.

    2012-01-01

    Climate envelope models are widely used to forecast potential effects of climate change on species distributions. A key issue in climate envelope modeling is the selection of predictor variables that most directly influence species. To determine whether model performance and spatial predictions were related to the selection of predictor variables, we compared models using bioclimate variables with models constructed from monthly climate data for twelve terrestrial vertebrate species in the southeastern USA using two different algorithms (random forests or generalized linear models), and two model selection techniques (using uncorrelated predictors or a subset of user-defined biologically relevant predictor variables). There were no differences in performance between models created with bioclimate or monthly variables, but one metric of model performance was significantly greater using the random forest algorithm compared with generalized linear models. Spatial predictions between maps using bioclimate and monthly variables were very consistent using the random forest algorithm with uncorrelated predictors, whereas we observed greater variability in predictions using generalized linear models.

  9. Investigating Factorial Invariance of Latent Variables Across Populations When Manifest Variables Are Missing Completely

    PubMed Central

    Widaman, Keith F.; Grimm, Kevin J.; Early, Dawnté R.; Robins, Richard W.; Conger, Rand D.

    2013-01-01

    Difficulties arise in multiple-group evaluations of factorial invariance if particular manifest variables are missing completely in certain groups. Ad hoc analytic alternatives can be used in such situations (e.g., deleting manifest variables), but some common approaches, such as multiple imputation, are not viable. At least 3 solutions to this problem are viable: analyzing differing sets of variables across groups, using pattern mixture approaches, and a new method using random number generation. The latter solution, proposed in this article, is to generate pseudo-random normal deviates for all observations for manifest variables that are missing completely in a given sample and then to specify multiple-group models in a way that respects the random nature of these values. An empirical example is presented in detail comparing the 3 approaches. The proposed solution can enable quantitative comparisons at the latent variable level between groups using programs that require the same number of manifest variables in each group. PMID:24019738
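
    The random-number-generation solution proposed in the article boils down to a step like the sketch below: a manifest variable never measured in one group is filled with standard-normal pseudo-values so that every group presents the same set of indicators, and the invariance model is then specified in the SEM software so that these pseudo-values carry no information. Data shapes and variable names are invented.

    ```python
    # Generate pseudo-random normal deviates for a manifest variable that was
    # never collected in group 2, so both groups expose the same indicators.
    # The subsequent model constraints are handled in the SEM program itself.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(6)
    group1 = pd.DataFrame(rng.normal(size=(150, 4)), columns=["x1", "x2", "x3", "x4"])
    group2 = pd.DataFrame(rng.normal(size=(120, 3)), columns=["x1", "x2", "x3"])

    # x4 was not measured in group 2: fill it with standard-normal pseudo-values.
    group2["x4"] = rng.normal(0.0, 1.0, len(group2))
    print(group2.describe().loc[["mean", "std"]])
    ```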

  10. A unified approach for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties

    NASA Astrophysics Data System (ADS)

    Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie

    2017-09-01

    Automotive brake systems are always subjected to various types of uncertainties and two types of random-fuzzy uncertainties may exist in the brakes. In this paper, a unified approach is proposed for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties. In the proposed approach, two uncertainty analysis models with mixed variables are introduced to model the random-fuzzy uncertainties. The first one is the random and fuzzy model, in which random variables and fuzzy variables exist simultaneously and independently. The second one is the fuzzy random model, in which uncertain parameters are all treated as random variables while their distribution parameters are expressed as fuzzy numbers. Firstly, the fuzziness is discretized by using α-cut technique and the two uncertainty analysis models are simplified into random-interval models. Afterwards, by temporarily neglecting interval uncertainties, the random-interval models are degraded into random models, in which the expectations, variances, reliability indexes and reliability probabilities of system stability functions are calculated. And then, by reconsidering the interval uncertainties, the bounds of the expectations, variances, reliability indexes and reliability probabilities are computed based on Taylor series expansion. Finally, by recomposing the analysis results at each α-cut level, the fuzzy reliability indexes and probabilities can be obtained, by which the brake squeal instability can be evaluated. The proposed approach gives a general framework to deal with both types of random-fuzzy uncertainties that may exist in the brakes and its effectiveness is demonstrated by numerical examples. It will be a valuable supplement to the systematic study of brake squeal considering uncertainty.

  11. Examining Solutions to Missing Data in Longitudinal Nursing Research

    PubMed Central

    Roberts, Mary B.; Sullivan, Mary C.; Winchester, Suzy B.

    2017-01-01

    Purpose Longitudinal studies are highly valuable in pediatrics because they provide useful data about developmental patterns of child health and behavior over time. When data are missing, the value of the research is impacted. The study’s purpose was to: (1) introduce a 3-step approach to assess and address missing data; (2) illustrate this approach using categorical and continuous level variables from a longitudinal study of premature infants. Methods A three-step approach with simulations was followed to assess the amount and pattern of missing data and to determine the most appropriate imputation method for the missing data. Patterns of missingness were Missing Completely at Random, Missing at Random, and Not Missing at Random. Missing continuous-level data were imputed using mean replacement, stochastic regression, multiple imputation, and fully conditional specification. Missing categorical-level data were imputed using last value carried forward, hot-decking, stochastic regression, and fully conditional specification. Simulations were used to evaluate these imputation methods under different patterns of missingness at different levels of missing data. Results The rate of missingness was 16–23% for continuous variables and 1–28% for categorical variables. Fully conditional specification imputation provided the least difference in mean and standard deviation estimates for continuous measures. Fully conditional specification imputation was acceptable for categorical measures. Results obtained through simulation reinforced and confirmed these findings. Practice Implications Significant investments are made in the collection of longitudinal data. The prudent handling of missing data can protect these investments and potentially improve the scientific information contained in pediatric longitudinal studies. PMID:28425202

  12. Fault Diagnosis Strategies for SOFC-Based Power Generation Plants

    PubMed Central

    Costamagna, Paola; De Giorgi, Andrea; Gotelli, Alberto; Magistri, Loredana; Moser, Gabriele; Sciaccaluga, Emanuele; Trucco, Andrea

    2016-01-01

    The success of distributed power generation by plants based on solid oxide fuel cells (SOFCs) is hindered by reliability problems that can be mitigated through an effective fault detection and isolation (FDI) system. However, the numerous operating conditions under which such plants can operate and the random size of the possible faults make identifying damaged plant components starting from the physical variables measured in the plant very difficult. In this context, we assess two classical FDI strategies (model-based with fault signature matrix and data-driven with statistical classification) and the combination of them. For this assessment, a quantitative model of the SOFC-based plant, which is able to simulate regular and faulty conditions, is used. Moreover, a hybrid approach based on the random forest (RF) classification method is introduced to address the discrimination of regular and faulty situations due to its practical advantages. Working with a common dataset, the FDI performances obtained using the aforementioned strategies, with different sets of monitored variables, are observed and compared. We conclude that the hybrid FDI strategy, realized by combining a model-based scheme with a statistical classifier, outperforms the other strategies. In addition, the inclusion of two physical variables that should be measured inside the SOFCs can significantly improve the FDI performance, despite the actual difficulty in performing such measurements. PMID:27556472

  13. Effect of Linagliptin Versus Metformin on Glycemic Variability in Patients with Impaired Glucose Tolerance.

    PubMed

    González-Heredia, Tonatiuh; Hernández-Corona, Diana M; González-Ortiz, Manuel; Martínez-Abundis, Esperanza

    2017-08-01

    Impaired glucose tolerance (IGT) and glycemic variability may be associated with increased risk of micro- and macrovascular complications. The aim of this study was to assess the effect of linagliptin versus metformin on glycemic variability in patients with IGT. A randomized, double-blind clinical trial with parallel groups was carried out in 16 adult patients with IGT, overweight or obesity. All patients signed an informed consent. The therapies were randomly assigned: (a) metformin 500 mg bid (n = 8) or (b) linagliptin 5 mg a.m. and placebo p.m. (n = 8), both for 90 days. At the beginning of the trial and 3 months later, fasting glucose, glycated hemoglobin A1c, oral glucose tolerance test (OGTT), and glycemic variability [area under the curve (AUC) of glucose, mean amplitude of glycemic excursion (MAGE), standard deviation (SD) of glucose, coefficient of variation (CV) of glucose, and mean blood glucose (MBG)] were measured. Mann-Whitney U, Wilcoxon, and Fisher exact tests were used for statistical analyses. Both groups were similar in basal characteristics. After linagliptin administration, a significant decrease in glucose levels at 120 min of OGTT (9.0 ± 0.9 vs. 6.9 ± 2.2 mmol/L, P = 0.012) was observed. Glycemic variability showed a similar behavior and there were no significant differences in the AUC, MAGE, SD of glucose, CV of glucose, and MBG between groups. Linagliptin administration resulted in better glycemic control according to the decrease of glucose levels by the OGTT at 120 min in patients with IGT. Meanwhile, glycemic variability was not modified in any of the study groups.

  14. A comparison of rule-based and machine learning approaches for classifying patient portal messages.

    PubMed

    Cronin, Robert M; Fabbri, Daniel; Denny, Joshua C; Rosenbloom, S Trent; Jackson, Gretchen Purcell

    2017-09-01

    Secure messaging through patient portals is an increasingly popular way that consumers interact with healthcare providers. The increasing burden of secure messaging can affect clinic staffing and workflows. Manual management of portal messages is costly and time consuming. Automated classification of portal messages could potentially expedite message triage and delivery of care. We developed automated patient portal message classifiers with rule-based and machine learning techniques using bag of words and natural language processing (NLP) approaches. To evaluate classifier performance, we used a gold standard of 3253 portal messages manually categorized using a taxonomy of communication types (i.e., main categories of informational, medical, logistical, social, and other communications, and subcategories including prescriptions, appointments, problems, tests, follow-up, contact information, and acknowledgement). We evaluated our classifiers' accuracies in identifying individual communication types within portal messages with area under the receiver-operator curve (AUC). Portal messages often contain more than one type of communication. To predict all communication types within single messages, we used the Jaccard Index. We extracted the variables of importance for the random forest classifiers. The best performing approaches to classification for the major communication types were: logistic regression for medical communications (AUC: 0.899); basic (rule-based) for informational communications (AUC: 0.842); and random forests for social communications and logistical communications (AUCs: 0.875 and 0.925, respectively). The best performing classification approach of classifiers for individual communication subtypes was random forests for Logistical-Contact Information (AUC: 0.963). The Jaccard Indices by approach were: basic classifier, Jaccard Index: 0.674; Naïve Bayes, Jaccard Index: 0.799; random forests, Jaccard Index: 0.859; and logistic regression, Jaccard Index: 0.861. For medical communications, the most predictive variables were NLP concepts (e.g., Temporal_Concept, which maps to 'morning', 'evening' and Idea_or_Concept which maps to 'appointment' and 'refill'). For logistical communications, the most predictive variables contained similar numbers of NLP variables and words (e.g., Telephone mapping to 'phone', 'insurance'). For social and informational communications, the most predictive variables were words (e.g., social: 'thanks', 'much', informational: 'question', 'mean'). This study applies automated classification methods to the content of patient portal messages and evaluates the application of NLP techniques on consumer communications in patient portal messages. We demonstrated that random forest and logistic regression approaches accurately classified the content of portal messages, although the best approach to classification varied by communication type. Words were the most predictive variables for classification of most communication types, although NLP variables were most predictive for medical communication types. As adoption of patient portals increases, automated techniques could assist in understanding and managing growing volumes of messages. Further work is needed to improve classification performance to potentially support message triage and answering. Copyright © 2017 Elsevier B.V. All rights reserved.
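
    A bare-bones version of the bag-of-words plus random forest route described above is sketched here for a single communication type; the example messages and labels are invented, and the real classifiers also used NLP concept features, far larger training sets, and per-type evaluation with AUC and the Jaccard index.

    ```python
    # Minimal bag-of-words sketch for flagging one communication type ("medical")
    # in portal messages with a random forest; messages and labels are invented.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline

    messages = [
        "Can I get a refill of my blood pressure prescription?",
        "What time is my appointment on Tuesday morning?",
        "Thanks so much for seeing me yesterday!",
        "My test results came back, what do they mean?",
    ]
    is_medical = [1, 0, 0, 1]

    clf = make_pipeline(CountVectorizer(), RandomForestClassifier(random_state=0))
    clf.fit(messages, is_medical)
    print(clf.predict(["Could you refill my insulin prescription?"]))
    ```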

  15. Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…

  16. A comparative injury severity analysis of motorcycle at-fault crashes on rural and urban roadways in Alabama.

    PubMed

    Islam, Samantha; Brown, Joshua

    2017-11-01

    The research described in this paper explored the factors contributing to the injury severity resulting from the motorcycle at-fault accidents in rural and urban areas in Alabama. Given the occurrence of a motorcycle at-fault crash, random parameter logit models of injury severity (with possible outcomes of fatal, major, minor, and possible or no injury) were estimated. The estimated models identified a variety of statistically significant factors influencing the injury severities resulting from motorcycle at-fault crashes. According to these models, some variables were found to be significant only in one model (rural or urban) but not in the other one. For example, variables such as clear weather, young motorcyclists, and roadway without light were found significant only in the rural model. On the other hand, variables such as older female motorcyclists, horizontal curve and at intersection were found significant only in the urban model. In addition, some variables (such as, motorcyclists under influence of alcohol, non-usage of helmet, high speed roadways, etc.) were found significant in both models. Also, estimation findings showed that two parameters (clear weather and roadway without light) in the rural model and one parameter (on weekend) in the urban model could be modeled as random parameters indicating their varying influences on the injury severity due to unobserved effects. Based on the results obtained, this paper discusses the effects of different variables on injury severities resulting from rural and urban motorcycle at-fault crashes and their possible explanations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. How do measurement duration and timing interact to influence estimation of basal physiological variables of a nocturnal rodent?

    PubMed

    Connolly, M K; Cooper, C E

    2014-12-01

Metabolic rate and evaporative water loss are two commonly measured physiological variables. It is therefore important, especially for comparative studies, that these variables (and others) are measured under standardised conditions, for which a resting state during the inactive phase is part of the accepted criteria. Here we show how measurement duration and timing affect these criteria and impact on the estimation of basal metabolic rate (oxygen consumption and carbon dioxide production) and standard evaporative water loss of a small nocturnal rodent. Oxygen consumption, carbon dioxide production and evaporative water loss all decreased over the duration of an experiment. Random assortment of hourly values indicated that this was an animal rather than a random effect for up to 11h. Experimental start time also had a significant effect on measurement of physiological variables. A longer time period was required to achieve minimal carbon dioxide production and evaporative water loss when experiments commenced earlier in the day; however, experiments with earlier start times had lower overall estimates of minimal oxygen consumption and carbon dioxide production. For this species, a measurement duration of at least 8h, ideally commencing before the inactive phase, between 03:00h and 05:00h, is required to obtain minimal standard values for physiological variables. Up to 80% of recently published studies measuring basal metabolic rate and/or evaporative water loss of small nocturnal mammals may overestimate basal values due to insufficiently long measurement duration. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Condition interference in rats performing a choice task with switched variable- and fixed-reward conditions.

    PubMed

    Funamizu, Akihiro; Ito, Makoto; Doya, Kenji; Kanzaki, Ryohei; Takahashi, Hirokazu

    2015-01-01

    Because humans and animals encounter various situations, the ability to adaptively decide upon responses to any situation is essential. To date, however, decision processes and the underlying neural substrates have been investigated under specific conditions; thus, little is known about how various conditions influence one another in these processes. In this study, we designed a binary choice task with variable- and fixed-reward conditions and investigated neural activities of the prelimbic cortex and dorsomedial striatum in rats. Variable- and fixed-reward conditions induced flexible and inflexible behaviors, respectively; one of the two conditions was randomly assigned in each trial for testing the possibility of condition interference. Rats were successfully conditioned such that they could find the better reward holes of variable-reward-condition and fixed-reward-condition trials. A learning interference model, which updated expected rewards (i.e., values) used in variable-reward-condition trials on the basis of combined experiences of both conditions, better fit choice behaviors than conventional models which updated values in each condition independently. Thus, although rats distinguished the trial condition, they updated values in a condition-interference manner. Our electrophysiological study suggests that this interfering value-updating is mediated by the prelimbic cortex and dorsomedial striatum. First, some prelimbic cortical and striatal neurons represented the action-reward associations irrespective of trial conditions. Second, the striatal neurons kept tracking the values of variable-reward condition even in fixed-reward-condition trials, such that values were possibly interferingly updated even in the fixed-reward condition.
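
    The following is a minimal sketch of the kind of interfering value update described above (not the authors' code; the learning rate, softmax temperature, and reward schedules are illustrative assumptions). The value of the chosen hole is updated from the reward prediction error on every trial, regardless of whether the trial was a variable- or fixed-reward trial, so experience in one condition leaks into the values used in the other.

        import numpy as np

        rng = np.random.default_rng(0)
        n_holes, alpha, beta = 2, 0.2, 3.0   # assumed learning rate and softmax inverse temperature
        values = np.zeros(n_holes)           # shared action values, updated across both conditions

        def choose(v):
            p = np.exp(beta * v) / np.exp(beta * v).sum()   # softmax choice rule
            return rng.choice(len(v), p=p)

        for trial in range(1000):
            condition = rng.choice(["variable", "fixed"])    # condition randomly assigned each trial
            a = choose(values)
            # hypothetical reward schedules, for illustration only
            p_reward = (0.7 if a == 0 else 0.3) if condition == "variable" else (1.0 if a == 0 else 0.0)
            r = float(rng.random() < p_reward)
            values[a] += alpha * (r - values[a])             # interfering update: no per-condition value table

    A conventional (non-interfering) alternative would keep a separate value table per condition and update only the table matching the current trial's condition.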

  19. Extraversion and cardiovascular responses to recurrent social stress: Effect of stress intensity.

    PubMed

    Lü, Wei; Xing, Wanying; Hughes, Brian M; Wang, Zhenhong

    2017-10-28

    The present study sought to establish whether the effects of extraversion on cardiovascular responses to recurrent social stress are contingent on stress intensity. A 2×5×1 mixed-factorial experiment was conducted, with social stress intensity as a between-subject variable, study phase as a within-subject variable, extraversion as a continuous independent variable, and a cardiovascular parameter (HR, SBP, DBP, or RSA) as the dependent variable. Extraversion (NEO-FFI), subjective stress, and physiological stress were measured in 166 undergraduate students randomly assigned to undergo moderate-intensity (n=82) or high-intensity (n=84) social stress (a public speaking task with different levels of social evaluation). All participants underwent continuous physiological monitoring while facing two consecutive stress exposures distributed across five laboratory phases: baseline, stress exposure 1, post-stress 1, stress exposure 2, and post-stress 2. Results indicated that under moderate-intensity social stress, participants higher on extraversion exhibited lower HR reactivity to stress than participants lower on extraversion, whereas under high-intensity social stress they exhibited greater HR, SBP, DBP and RSA reactivity. Under both moderate- and high-intensity social stress, participants higher on extraversion exhibited pronounced SBP and DBP response adaptation to repeated stress and showed either a greater degree of HR recovery or a greater amount of SBP and DBP recovery after stress. These findings suggest that individuals higher on extraversion exhibit the physiological flexibility to cope with social challenges and benefit from adaptive cardiovascular responses. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Enhancement of the efficiency of the automatic control system to control the thermal load of steam boilers fired with fuels of several types

    NASA Astrophysics Data System (ADS)

    Ismatkhodzhaev, S. K.; Kuzishchin, V. F.

    2017-05-01

    An automatic control system (ACS) for the thermal load of a drum-type boiler under random fluctuations in the blast-furnace and coke-oven gas consumption rates, with the control action applied to the natural gas consumption, is considered. The system provides for the use of a compensator for the basic disturbance, the blast-furnace gas consumption rate. To enhance the performance of the system, it is proposed to use more accurate second-order-with-delay mathematical models of the channels of the object under control, in combination with frequency-domain calculation of the controller parameters and with determination of the structure and parameters of the compensator taking into account the statistical characteristics of the disturbances and using simulation. The statistical characteristics of the random blast-furnace gas consumption signal, based on experimental data, are provided. The random signal is represented as the sum of low-frequency (LF) and high-frequency (HF) components, and models of the correlation functions and spectral densities are developed. The article presents the results of calculating the optimal settings of the control loop with the controlled variable in the form of the "heat" signal with a restricted frequency-variation index, using three variants of the control performance criteria, viz., the linear and quadratic integral indices under a step disturbance and the control error variance under a random disturbance in the blast-furnace gas consumption rate. It is recommended to design the compensator as a series connection of two parts, one of which corresponds to the operator inverse to the transfer function of the PI controller, i.e., a real differentiating element. This facilitates the realization of the second part of the compensator by the invariance condition, similar to transmitting the compensating signal to the object input. The results of simulation under a random disturbance in the blast-furnace gas consumption are reported, and recommendations are made on the structure and parameters of the shaping filters for modeling the LF and HF components of the random signal. The results of the research may find application in systems controlling thermal processes with compensation of basic disturbances, in particular in boilers burning accompanying gases.

  1. Sampling-Based Stochastic Sensitivity Analysis Using Score Functions for RBDO Problems with Correlated Random Variables

    DTIC Science & Technology

    2010-08-01

    This study presents a methodology for computing stochastic sensitivities with respect to the design variables, which are the … (the remainder of this DTIC record consists of report-documentation-form boilerplate and a truncated duplicate of the abstract).

  2. Reward and uncertainty in exploration programs

    NASA Technical Reports Server (NTRS)

    Kaufman, G. M.; Bradley, P. G.

    1971-01-01

    A set of variables which are crucial to the economic outcome of petroleum exploration are discussed. These are treated as random variables; the values they assume indicate the number of successes that occur in a drilling program and determine, for a particular discovery, the unit production cost and net economic return if that reservoir is developed. In specifying the joint probability law for those variables, extreme and probably unrealistic assumptions are made. In particular, the different random variables are assumed to be independently distributed. Using postulated probability functions and specified parameters, values are generated for selected random variables, such as reservoir size. From this set of values the economic magnitudes of interest, net return and unit production cost are computed. This constitutes a single trial, and the procedure is repeated many times. The resulting histograms approximate the probability density functions of the variables which describe the economic outcomes of an exploratory drilling program.
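
    A compact sketch of the simulation procedure described above, under assumed (purely illustrative) distributions and economic parameters, not the paper's values: each trial draws the number of discoveries and the reservoir sizes independently, computes net return and unit production cost, and repeated trials build up histograms of the economic outcomes.

        import numpy as np

        rng = np.random.default_rng(1)
        n_trials, n_wells = 10_000, 20
        p_success = 0.1                      # assumed per-well success probability
        price, cost_per_well = 3.0, 1.0      # illustrative economics

        net_returns, unit_costs = [], []
        for _ in range(n_trials):
            successes = rng.binomial(n_wells, p_success)                    # number of discoveries
            sizes = rng.lognormal(mean=2.0, sigma=1.0, size=successes)      # reservoir sizes (assumed lognormal)
            production = sizes.sum()
            revenue = price * production
            cost = cost_per_well * n_wells
            net_returns.append(revenue - cost)
            unit_costs.append(cost / production if production > 0 else np.nan)

        # histograms of net_returns and unit_costs approximate the outcome distributions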

  3. A single-loop optimization method for reliability analysis with second order uncertainty

    NASA Astrophysics Data System (ADS)

    Xie, Shaojun; Pan, Baisong; Du, Xiaoping

    2015-08-01

    Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.

  4. Bell-Boole Inequality: Nonlocality or Probabilistic Incompatibility of Random Variables?

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrei

    2008-06-01

    The main aim of this report is to inform the quantum information community about investigations on the problem of probabilistic compatibility of a family of random variables: the possibility of realizing such a family on the basis of a single probability measure (i.e., of constructing a single Kolmogorov probability space). These investigations were started more than a hundred years ago by G. Boole (who invented Boolean algebras). The complete solution of the problem was obtained by the Soviet mathematician Vorobjev in the 1960s. Surprisingly, probabilists and statisticians obtained inequalities for probabilities and correlations among which one can find the famous Bell's inequality and its generalizations. Such inequalities appeared simply as constraints for probabilistic compatibility. In this framework one cannot see a priori any link to such problems as nonlocality and the “death of reality” which are typically linked to Bell-type inequalities in the physics literature. We analyze the difference between the positions of mathematicians and quantum physicists. In particular, we find that one of the most reasonable explanations of probabilistic incompatibility is the mixing, in Bell-type inequalities, of statistical data from a number of experiments performed under different experimental contexts.

  5. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skill. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into the random errors, and a lack of predictive capability. Therefore, the multiplicative error model is the better choice.
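
    As a point of reference, the two error models are commonly written as follows (generic symbols, not taken from the letter: T is the true daily precipitation, Y the satellite estimate, and ε a zero-mean random error):

        % additive error model: a systematic offset plus noise whose spread does not depend on T
        Y = T + b + \varepsilon

        % multiplicative error model: errors scale with precipitation amount, so it is fit in log space
        Y = a\,T^{\,b}\,e^{\varepsilon}
        \quad\Longleftrightarrow\quad
        \log Y = \log a + b\,\log T + \varepsilon

    In the multiplicative form the systematic component is carried by a and b while ε carries the random part, which is consistent with the finding above that this model separates the two error types more cleanly for a variable as skewed as daily precipitation.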

  6. Mediation Analysis of the Efficacy of the Eban HIV/STD Risk-Reduction Intervention for African American HIV Serodiscordant Couples.

    PubMed

    El-Bassel, Nabila; Jemmott, John B; Bellamy, Scarlett L; Pequegnat, Willo; Wingood, Gina M; Wyatt, Gail E; Landis, J Richard; Remien, Robert H

    2016-06-01

    Targeting couples is a promising behavioral HIV risk-reduction strategy, but the mechanisms underlying the effects of such interventions are unknown. We report secondary analyses testing whether Social-Cognitive-Theory variables mediated the Eban HIV-risk-reduction intervention's effects on condom-use outcomes. In a multisite randomized controlled trial conducted in four US cities, 535 African American HIV-serodiscordant couples were randomized to the Eban HIV risk-reduction intervention or attention-matched control intervention. Outcomes were proportion condom-protected sex, consistent condom use, and frequency of unprotected sex measured pre-, immediately post-, and 6 and 12 months post-intervention. Potential mediators included Social-Cognitive-Theory variables: outcome expectancies and self-efficacy. Mediation analyses using the product-of-coefficients approach in a generalized-estimating-equations framework revealed that condom-use outcome expectancy, partner-reaction outcome expectancy, intention, self-efficacy, and safer-sex communication improved post-intervention and mediated intervention-induced improvements in condom-use outcomes. These findings underscore the importance of targeting outcome expectancies, self-efficacy, and safer-sex communication in couples-level HIV risk-reduction interventions.
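
    A minimal illustration of the product-of-coefficients logic used in the mediation analyses, fitted here with ordinary least squares on simulated data rather than the trial's generalized-estimating-equations models (all variable names are hypothetical): the indirect effect is the product of the intervention-to-mediator path a and the mediator-to-outcome path b.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 500
        x = rng.integers(0, 2, n)                    # intervention vs control (randomized)
        m = 0.5 * x + rng.normal(size=n)             # mediator, e.g. condom-use self-efficacy
        y = 0.4 * m + 0.1 * x + rng.normal(size=n)   # outcome, e.g. proportion condom-protected sex

        a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # path a: X -> M
        b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]  # path b: M -> Y given X
        indirect = a * b                                                         # mediated (indirect) effect
        print(f"a = {a:.3f}, b = {b:.3f}, indirect effect = {indirect:.3f}")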

  7. Uncertainty Quantification in Scale-Dependent Models of Flow in Porous Media: SCALE-DEPENDENT UQ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, A. M.; Panzeri, M.; Tartakovsky, G. D.

    Equations governing flow and transport in heterogeneous porous media are scale-dependent. We demonstrate that it is possible to identify a support scale $\eta^*$ such that the typically employed approximate formulations of Moment Equations (ME) yield accurate (statistical) moments of a target environmental state variable. Under these circumstances, the ME approach can be used as an alternative to the Monte Carlo (MC) method for Uncertainty Quantification in diverse fields of Earth and environmental sciences. MEs are directly satisfied by the leading moments of the quantities of interest and are defined on the same support scale as the governing stochastic partial differential equations (PDEs). Computable approximations of the otherwise exact MEs can be obtained through perturbation expansion of moments of the state variables in orders of the standard deviation of the random model parameters. As such, their convergence is guaranteed only for standard deviations smaller than one. We demonstrate our approach in the context of steady-state groundwater flow in a porous medium with a spatially random hydraulic conductivity.

  8. Electrical characteristic fluctuation of 16-nm-gate high-κ/metal gate bulk FinFET devices in the presence of random interface traps

    NASA Astrophysics Data System (ADS)

    Hsu, Sheng-Chia; Li, Yiming

    2014-11-01

    In this work, we study the impact of random interface traps (RITs) at the SiO_x/Si interface on the electrical characteristics of 16-nm-gate high-κ/metal gate (HKMG) bulk fin-type field effect transistor (FinFET) devices. Under the same threshold voltage, the effects of RIT position and number on the degradation of electrical characteristics are clarified with respect to different levels of RIT density of states (D_it). The variability of the off-state current (I_off) and drain-induced barrier lowering (DIBL) will be severely affected by RITs with high D_it varying from 5 × 10^12 to 5 × 10^13 eV^-1 cm^-2, owing to significant threshold voltage (V_th) fluctuation. The results of this study indicate that if the level of D_it is lower than 1 × 10^12 eV^-1 cm^-2, the normalized variability of the on-state current, I_off, V_th, DIBL, and subthreshold swing is within 5%.

  9. Multiplicative processes in visual cognition

    NASA Astrophysics Data System (ADS)

    Credidio, H. F.; Teixeira, E. N.; Reis, S. D. S.; Moreira, A. A.; Andrade, J. S.

    2014-03-01

    The Central Limit Theorem (CLT) is certainly one of the most important results in the field of statistics. The simple fact that the addition of many random variables can generate the same probability curve elucidated the underlying process for a broad spectrum of natural systems, ranging from the statistical distribution of human heights to the distribution of measurement errors, to mention a few. An extension of the CLT can be applied to multiplicative processes, where a given measure is the result of the product of many random variables. The statistical signature of these processes is rather ubiquitous, appearing in a diverse range of natural phenomena, including the distributions of incomes, body weights, rainfall, and fragment sizes in a rock-crushing process. Here we corroborate results from previous studies which indicate the presence of multiplicative processes in a particular type of visual cognition task, namely, the visual search for hidden objects. Precisely, our results from eye-tracking experiments show that the distribution of fixation times during visual search obeys a log-normal pattern, while the fixational radii of gyration follow a power-law behavior.
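
    A short numerical illustration of the contrast drawn above (a generic simulation, not the eye-tracking data): sums of many positive random variables approach a normal distribution, while their products approach a log-normal one, since the log of a product is a sum of logs.

        import numpy as np

        rng = np.random.default_rng(3)
        x = rng.uniform(0.5, 1.5, size=(100_000, 50))   # 50 positive random factors per sample

        sums = x.sum(axis=1)          # CLT: approximately normal
        products = x.prod(axis=1)     # multiplicative process: approximately log-normal

        # the log of the product is again a sum, hence approximately normal by the CLT
        log_products = np.log(products)
        print(np.mean(sums), np.std(sums))
        print(np.mean(log_products), np.std(log_products))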

  10. Study the Cyclic Plasticity Behavior of 508 LAS under Constant, Variable and Grid-Load-Following Loading Cycles for Fatigue Evaluation of PWR Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanty, Subhasish; Barua, Bipul; Soppet, William K.

    This report provides an update of an earlier assessment of environmentally assisted fatigue for components in light water reactors. This report is a deliverable in September 2016 under the work package for environmentally assisted fatigue under DOE's Light Water Reactor Sustainability program. In an April 2016 report, we presented a detailed thermal-mechanical stress analysis model for simulating the stress-strain state of a reactor pressure vessel and its nozzles under grid-load-following conditions. In this report, we provide stress-controlled fatigue test data for 508 LAS base metal alloy under different loading amplitudes (constant, variable, and random grid-load-following) and environmental conditions (in air or pressurized water reactor coolant water at 300°C). Also presented is a cyclic plasticity-based analytical model that can simultaneously capture the amplitude and time dependency of the component behavior under fatigue loading. Results related to both amplitude-dependent and amplitude-independent parameters are presented. The validation results for the analytical/mechanistic model are discussed. This report provides guidance for estimating time-dependent, amplitude-independent parameters related to material behavior under different service conditions. The developed mechanistic models and the reported material parameters can be used to conduct more accurate fatigue and ratcheting evaluation of reactor components.

  11. Compensation for Lithography Induced Process Variations during Physical Design

    NASA Astrophysics Data System (ADS)

    Chin, Eric Yiow-Bing

    This dissertation addresses the challenge of designing robust integrated circuits in the deep sub micron regime in the presence of lithography process variability. By extending and combining existing process and circuit analysis techniques, flexible software frameworks are developed to provide detailed studies of circuit performance in the presence of lithography variations such as focus and exposure. Applications of these software frameworks to select circuits demonstrate the electrical impact of these variations and provide insight into variability aware compact models that capture the process dependent circuit behavior. These variability aware timing models abstract lithography variability from the process level to the circuit level and are used to estimate path level circuit performance with high accuracy with very little overhead in runtime. The Interconnect Variability Characterization (IVC) framework maps lithography induced geometrical variations at the interconnect level to electrical delay variations. This framework is applied to one dimensional repeater circuits patterned with both 90nm single patterning and 32nm double patterning technologies, under the presence of focus, exposure, and overlay variability. Studies indicate that single and double patterning layouts generally exhibit small variations in delay (between 1--3%) due to self compensating RC effects associated with dense layouts and overlay errors for layouts without self-compensating RC effects. The delay response of each double patterned interconnect structure is fit with a second order polynomial model with focus, exposure, and misalignment parameters with 12 coefficients and residuals of less than 0.1ps. The IVC framework is also applied to a repeater circuit with cascaded interconnect structures to emulate more complex layout scenarios, and it is observed that the variations on each segment average out to reduce the overall delay variation. The Standard Cell Variability Characterization (SCVC) framework advances existing layout-level lithography aware circuit analysis by extending it to cell-level applications utilizing a physically accurate approach that integrates process simulation, compact transistor models, and circuit simulation to characterize electrical cell behavior. This framework is applied to combinational and sequential cells in the Nangate 45nm Open Cell Library, and the timing response of these cells to lithography focus and exposure variations demonstrate Bossung like behavior. This behavior permits the process parameter dependent response to be captured in a nine term variability aware compact model based on Bossung fitting equations. For a two input NAND gate, the variability aware compact model captures the simulated response to an accuracy of 0.3%. The SCVC framework is also applied to investigate advanced process effects including misalignment and layout proximity. The abstraction of process variability from the layout level to the cell level opens up an entire new realm of circuit analysis and optimization and provides a foundation for path level variability analysis without the computationally expensive costs associated with joint process and circuit simulation. The SCVC framework is used with slight modification to illustrate the speedup and accuracy tradeoffs of using compact models. With variability aware compact models, the process dependent performance of a three stage logic circuit can be estimated to an accuracy of 0.7% with a speedup of over 50,000. 
Path level variability analysis also provides an accurate estimate (within 1%) of ring oscillator period in well under a second. Another significant advantage of variability aware compact models is that they can be easily incorporated into existing design methodologies for design optimization. This is demonstrated by applying cell swapping on a logic circuit to reduce the overall delay variability along a circuit path. By including these variability aware compact models in cell characterization libraries, design metrics such as circuit timing, power, area, and delay variability can be quickly assessed to optimize for the correct balance of all design metrics, including delay variability. Deterministic lithography variations can be easily captured using the variability aware compact models described in this dissertation. However, another prominent source of variability is random dopant fluctuations, which affect transistor threshold voltage and in turn circuit performance. The SCVC framework is utilized to investigate the interactions between deterministic lithography variations and random dopant fluctuations. Monte Carlo studies show that the output delay distribution in the presence of random dopant fluctuations is dependent on lithography focus and exposure conditions, with a 3.6 ps change in standard deviation across the focus exposure process window. This indicates that the electrical impact of random variations is dependent on systematic lithography variations, and this dependency should be included for precise analysis.

  12. Invited Commentary: Using Financial Credits as Instrumental Variables for Estimating the Causal Relationship Between Income and Health.

    PubMed

    Pega, Frank

    2016-05-01

    Social epidemiologists are interested in determining the causal relationship between income and health. Natural experiments in which individuals or groups receive income randomly or quasi-randomly from financial credits (e.g., tax credits or cash transfers) are increasingly being analyzed using instrumental variable analysis. For example, in this issue of the Journal, Hamad and Rehkopf (Am J Epidemiol. 2016;183(9):775-784) used an in-work tax credit called the Earned Income Tax Credit as an instrument to estimate the association between income and child development. However, under certain conditions, the use of financial credits as instruments could violate 2 key instrumental variable analytic assumptions. First, some financial credits may directly influence health, for example, through increasing a psychological sense of welfare security. Second, financial credits and health may have several unmeasured common causes, such as politics, other social policies, and the motivation to maximize the credit. If epidemiologists pursue such instrumental variable analyses, using the amount of an unconditional, universal credit that an individual or group has received as the instrument may produce the most conceptually convincing and generalizable evidence. However, other natural income experiments (e.g., lottery winnings) and other methods that allow better adjustment for confounding might be more promising approaches for estimating the causal relationship between income and health. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  13. The Expected Sample Variance of Uncorrelated Random Variables with a Common Mean and Some Applications in Unbalanced Random Effects Models

    ERIC Educational Resources Information Center

    Vardeman, Stephen B.; Wendelberger, Joanne R.

    2005-01-01

    There is a little-known but very simple generalization of the standard result that for uncorrelated random variables with common mean μ and variance σ², the expected value of the sample variance is σ². The generalization justifies the use of the usual standard error of the sample mean in possibly…
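
    For reference, a sketch of the result being generalized, stated here from the standard definitions (the abstract is truncated, so the exact statement in the article may differ): for uncorrelated X_1, …, X_n with common mean μ and possibly unequal variances σ_1², …, σ_n², the usual sample variance satisfies

        S^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(X_i - \bar{X}\right)^2,
        \qquad
        E\!\left[S^2\right] = \frac{1}{n}\sum_{i=1}^{n}\sigma_i^2,
        \qquad
        \operatorname{Var}\!\left(\bar{X}\right) = \frac{1}{n^2}\sum_{i=1}^{n}\sigma_i^2 = \frac{E\!\left[S^2\right]}{n},

    which reduces to the familiar E[S²] = σ² when all variances are equal and shows why s/√n remains a sensible standard error for the sample mean even when the variances differ.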

  14. Uncertain dynamic analysis for rigid-flexible mechanisms with random geometry and material properties

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing; Walker, Paul D.

    2017-02-01

    This paper proposes an uncertain modelling and computational method to analyze the dynamic responses of rigid-flexible multibody systems (or mechanisms) with random geometry and material properties. Firstly, the deterministic model for the rigid-flexible multibody system is built with the absolute nodal coordinate formulation (ANCF), in which the flexible parts are modeled by using ANCF elements, while the rigid parts are described by ANCF reference nodes (ANCF-RNs). Secondly, uncertainty in the geometry of the rigid parts is expressed as uniform random variables, while the uncertainty in the material properties of the flexible parts is modeled as a continuous random field, which is further discretized into Gaussian random variables using a series expansion method. Finally, a non-intrusive numerical method is developed to solve the dynamic equations of systems involving both types of random variables, which systematically integrates the deterministic generalized-α solver with Latin Hypercube sampling (LHS) and Polynomial Chaos (PC) expansion. The benchmark slider-crank mechanism is used as a numerical example to demonstrate the characteristics of the proposed method.
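
    A brief sketch of the non-intrusive sampling step described above, covering only the Latin Hypercube sampling of the mixed uniform/Gaussian inputs (the ANCF dynamics solver and the PC surrogate are outside the scope of this illustration, and all parameter names and bounds are hypothetical):

        import numpy as np
        from scipy.stats import qmc, norm

        sampler = qmc.LatinHypercube(d=3, seed=4)          # 3 uncertain inputs
        u = sampler.random(n=200)                          # 200 LHS samples in the unit cube

        # map unit-cube samples to the physical random variables
        length = qmc.scale(u[:, [0]], l_bounds=[0.29], u_bounds=[0.31]).ravel()  # uniform geometric tolerance
        E1 = norm(loc=2.0e11, scale=1.0e10).ppf(u[:, 1])   # Gaussian variables from the discretized random field
        E2 = norm(loc=2.0e11, scale=1.0e10).ppf(u[:, 2])

        # each row (length[i], E1[i], E2[i]) would be fed to the deterministic
        # generalized-alpha multibody solver; the outputs would then train the PC surrogate
        samples = np.column_stack([length, E1, E2])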

  15. Do little interactions get lost in dark random forests?

    PubMed

    Wright, Marvin N; Ziegler, Andreas; König, Inke R

    2016-03-31

    Random forests have often been claimed to uncover interaction effects. However, if and how interaction effects can be differentiated from marginal effects remains unclear. In extensive simulation studies, we investigate whether random forest variable importance measures capture or detect gene-gene interactions. By capturing interactions, we mean the ability to identify a variable that acts through an interaction with another one, while detection is the ability to identify an interaction effect as such. Of the single importance measures, the Gini importance captured interaction effects in most of the simulated scenarios; however, these effects were masked by marginal effects in other variables. With the permutation importance, the proportion of captured interactions was lower in all cases. Pairwise importance measures performed about equally, with a slight advantage for the joint variable importance method. However, the overall fraction of detected interactions was low. In almost all scenarios, the detection fraction in a model with only marginal effects was larger than in a model with an interaction effect only. Random forests are generally capable of capturing gene-gene interactions, but current variable importance measures are unable to detect them as interactions. In most cases, interactions are masked by marginal effects and cannot be differentiated from marginal effects. Consequently, caution is warranted when claiming that random forests uncover interactions.
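
    A small sketch of the two single importance measures discussed above, computed with scikit-learn on a simulated pure-interaction dataset (this is not the study's simulation code; sample sizes and effect structure are assumptions):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance

        rng = np.random.default_rng(5)
        n = 2000
        X = rng.integers(0, 2, size=(n, 5)).astype(float)   # five binary "SNP-like" predictors
        y = np.logical_xor(X[:, 0], X[:, 1]).astype(int)     # pure interaction of variables 0 and 1, no marginal effect

        rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
        gini = rf.feature_importances_                                            # Gini (impurity) importance
        perm = permutation_importance(rf, X, y, n_repeats=10, random_state=0).importances_mean

        print("Gini importance:       ", np.round(gini, 3))
        print("Permutation importance:", np.round(perm, 3))
        # variables 0 and 1 typically receive the highest scores, i.e. the interaction is "captured",
        # but neither measure identifies it as an interaction rather than a marginal effect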

  16. The influence of an uncertain force environment on reshaping trial-to-trial motor variability.

    PubMed

    Izawa, Jun; Yoshioka, Toshinori; Osu, Rieko

    2014-09-10

    Motor memory is updated to generate ideal movements in a novel environment. When the environment changes randomly from trial to trial, how does the brain incorporate this uncertainty into motor memory? To investigate how the brain adapts to an uncertain environment, we considered a reach adaptation protocol in which individuals practiced moving in a force field into which noise was injected. After they had adapted, we measured the trial-to-trial variability in the temporal profiles of the produced hand force. We found that the motor variability was significantly magnified by adaptation to the random force field. The temporal profiles of the motor variance were significantly dissociable between the two different types of random force fields experienced. A model-based analysis suggests that the variability is generated by noise in the gains of the internal model. It further suggests that the trial-to-trial motor variability magnified by adaptation in a random force field is generated by the uncertainty of the internal model formed in the brain as a result of the adaptation.

  17. A Tutorial in Bayesian Potential Outcomes Mediation Analysis.

    PubMed

    Miočević, Milica; Gonzalez, Oscar; Valente, Matthew J; MacKinnon, David P

    2018-01-01

    Statistical mediation analysis is used to investigate intermediate variables in the relation between independent and dependent variables. Causal interpretation of mediation analyses is challenging because randomization of subjects to levels of the independent variable does not rule out the possibility of unmeasured confounders of the mediator to outcome relation. Furthermore, commonly used frequentist methods for mediation analysis compute the probability of the data given the null hypothesis, which is not the probability of a hypothesis given the data as in Bayesian analysis. Under certain assumptions, applying the potential outcomes framework to mediation analysis allows for the computation of causal effects, and statistical mediation in the Bayesian framework gives indirect effects probabilistic interpretations. This tutorial combines causal inference and Bayesian methods for mediation analysis so the indirect and direct effects have both causal and probabilistic interpretations. Steps in Bayesian causal mediation analysis are shown in the application to an empirical example.

  18. Listening to the noise: random fluctuations reveal gene network parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munsky, Brian; Khammash, Mustafa

    2009-01-01

    The cellular environment is abuzz with noise. The origin of this noise is attributed to the inherent random motion of reacting molecules that take part in gene expression and post-expression interactions. In this noisy environment, clonal populations of cells exhibit cell-to-cell variability that frequently manifests as significant phenotypic differences within the cellular population. The stochastic fluctuations in cellular constituents induced by noise can be measured and their statistics quantified. We show that these random fluctuations carry within them valuable information about the underlying genetic network. Far from being a nuisance, the ever-present cellular noise acts as a rich source of excitation that, when processed through a gene network, carries its distinctive fingerprint that encodes a wealth of information about that network. We demonstrate that in some cases the analysis of these random fluctuations enables the full identification of network parameters, including those that may otherwise be difficult to measure. This establishes a potentially powerful approach for the identification of gene networks and offers a new window into the workings of these networks.

  19. Listening to the Noise: Random Fluctuations Reveal Gene Network Parameters

    NASA Astrophysics Data System (ADS)

    Munsky, Brian; Trinh, Brooke; Khammash, Mustafa

    2010-03-01

    The cellular environment is abuzz with noise originating from the inherent random motion of reacting molecules in the living cell. In this noisy environment, clonal cell populations exhibit cell-to-cell variability that can manifest as significant phenotypic differences. Noise-induced stochastic fluctuations in cellular constituents can be measured and their statistics quantified using flow cytometry, single-molecule fluorescence in situ hybridization, time-lapse fluorescence microscopy and other single-cell and single-molecule measurement techniques. We show that these random fluctuations carry within them valuable information about the underlying genetic network. Far from being a nuisance, the ever-present cellular noise acts as a rich source of excitation that, when processed through a gene network, carries its distinctive fingerprint that encodes a wealth of information about that network. We demonstrate that in some cases the analysis of these random fluctuations enables the full identification of network parameters, including those that may otherwise be difficult to measure. We use theoretical investigations to establish experimental guidelines for the identification of gene regulatory networks, and we apply these guidelines to experimentally identify predictive models for different regulatory mechanisms in bacteria and yeast.

  20. Implications of climate variability for monitoring the effectiveness of global mercury policy

    NASA Astrophysics Data System (ADS)

    Giang, A.; Monier, E.; Couzo, E. A.; Pike-thackray, C.; Selin, N. E.

    2016-12-01

    We investigate how climate variability affects ability to detect policy-related anthropogenic changes in mercury emissions in wet deposition monitoring data using earth system and atmospheric chemistry modeling. The Minamata Convention, a multilateral environmental agreement that aims to protect human health and the environment from anthropogenic emissions and releases of mercury, includes provisions for monitoring treaty effectiveness. Because meteorology can affect mercury chemistry and transport, internal variability is an important contributor to uncertainty in how effective policy may be in reducing the amount of mercury entering ecosystems through wet deposition. We simulate mercury chemistry using the GEOS-Chem global transport model to assess the influence of meteorology in the context of other uncertainties in mercury cycling and policy. In these simulations, we find that interannual variability in meteorology may be a dominant contributor to the spatial pattern and magnitude of historical regional wet deposition trends. To further assess the influence of climate variability in the GEOS-Chem mercury simulation, we use a 5-member ensemble of meteorological fields from the MIT Integrated Global System Model under present and future climate. Each member involves randomly initialized 20 year simulations centered around 2000 and 2050 (under a no-policy and a climate stabilization scenario). Building on previous efforts to understand climate-air quality interactions for ground-level O3 and particulate matter, we estimate from the ensemble the range of trends in mercury wet deposition given natural variability, and, to extend our previous results on regions that are sensitive to near-source vs. remote anthropogenic signals, we identify geographic regions where mercury wet deposition is most sensitive to this variability. We discuss how an improved understanding of natural variability can inform the Conference of Parties on monitoring strategy and policy ambition.

  1. Spread-Spectrum Random-Access Communications for HF Channels.

    DTIC Science & Technology

    1981-08-21

    The random variables Z_sm, defined by (8) with cos[·] replaced by sin[·], have the same properties as Z_cm. ... [III. Performance of the FH/SSMA system] ... An approximation valid under the same conditions is given in (35); this approximation is very accurate ... The bounds and approximation given in (32)-(35) can be applied to any particular nonselective fading channel by substituting the appropriate expression for P_0. (Equation (35) itself is not recoverable from this excerpted record.)

  2. A Bayesian Approach to Identifying Structural Nonlinearity using Free-Decay Response: Application to Damage Detection in Composites

    DTIC Science & Technology

    2010-03-03

    ... obtainable, while for the free-decay problem we simply have to include the initial conditions as random variables to be predicted. A different approach that ... important and useful properties of MLEs is that, under regularity conditions, they are asymptotically unbiased and possess the minimum possible ... the likelihood becomes p_L(z | θ, σ²_G, M_i) (i.e., the likelihood is conditional on the specified model). However, in this work we will only consider a single model and drop the ...

  3. Variability of particle number emissions from diesel and hybrid diesel-electric buses in real driving conditions.

    PubMed

    Sonntag, Darrell B; Gao, H Oliver; Holmén, Britt A

    2008-08-01

    A linear mixed model was developed to quantify the variability of particle number emissions from transit buses tested in real-world driving conditions. Two conventional diesel buses and two hybrid diesel-electric buses were tested throughout 2004 under different aftertreatments, fuels, drivers, and bus routes. The mixed model controlled for the confounding influence of factors inherent to on-board testing. Statistical tests showed that particle number emissions varied significantly according to the aftertreatment, bus route, driver, bus type, and daily temperature, with only minor variability attributable to differences between fuel types. The daily setup and operation of the sampling equipment (an electrical low pressure impactor) and mini-dilution system contributed 30-84% of the total random variability of particle measurements among tests with diesel oxidation catalysts. By controlling for the sampling-day variability, the model better defined the differences in particle emissions among bus routes. In contrast, the low particle number emissions measured with diesel particle filters (decreased by over 99%) did not vary according to operating conditions or bus type but did vary substantially with ambient temperature.
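
    A sketch of the kind of linear mixed model described, using the statsmodels formula interface with hypothetical column and file names (the study's actual model specification may differ): fixed effects for the operating factors and a random intercept for the sampling day capture the day-to-day setup variability.

        import pandas as pd
        import statsmodels.formula.api as smf

        # hypothetical on-board test data: log particle number with operating factors and test day
        df = pd.read_csv("bus_tests.csv")   # assumed columns: log_pn, aftertreatment, route, driver, bus_type, temp, day

        model = smf.mixedlm(
            "log_pn ~ aftertreatment + route + driver + bus_type + temp",  # fixed effects
            data=df,
            groups=df["day"],               # random intercept for sampling day (setup-to-setup variability)
        )
        result = model.fit()
        print(result.summary())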

  4. Self-organization of head-centered visual responses under ecological training conditions.

    PubMed

    Mender, Bedeho M W; Stringer, Simon M

    2014-01-01

    We have studied the development of head-centered visual responses in an unsupervised self-organizing neural network model which was trained under ecological training conditions. Four independent spatio-temporal characteristics of the training stimuli were explored to investigate the feasibility of the self-organization under more ecological conditions. First, the number of head-centered visual training locations was varied over a broad range. Model performance improved as the number of training locations approached the continuous sampling of head-centered space. Second, the model depended on periods of time during which visual targets remained stationary in head-centered space while it performed saccades around the scene, and the severity of this constraint was explored by introducing increasing levels of random eye movement and stimulus dynamics. Model performance was robust over a range of randomization. Third, the model was trained on visual scenes where multiple simultaneous targets were always visible. Model self-organization was successful, despite the model never being exposed to a visual target in isolation. Fourth, the durations of fixations during training were made stochastic. With suitable changes to the learning rule, the model self-organized successfully. These findings suggest that the fundamental learning mechanism upon which the model rests is robust to the many forms of stimulus variability under ecological training conditions.

  5. Durability reliability analysis for corroding concrete structures under uncertainty

    NASA Astrophysics Data System (ADS)

    Zhang, Hao

    2018-02-01

    This paper presents a durability reliability analysis of reinforced concrete structures subject to the action of marine chloride. The focus is to provide insight into the role of epistemic uncertainties on durability reliability. The corrosion model involves a number of variables whose probabilistic characteristics cannot be fully determined due to the limited availability of supporting data. All sources of uncertainty, both aleatory and epistemic, should be included in the reliability analysis. Two methods are available to formulate the epistemic uncertainty: the imprecise probability-based method and the purely probabilistic method in which the epistemic uncertainties are modeled as random variables. The paper illustrates how the epistemic uncertainties are modeled and propagated in the two methods, and shows how epistemic uncertainties govern the durability reliability.

  6. Nuclear Autonomy in Multinucleate Fungi

    PubMed Central

    Roberts, Samantha E.; Gladfelter, Amy S.

    2015-01-01

    Within many fungal syncytia, nuclei behave independently despite sharing a common cytoplasm. Creation of independent nuclear zones of control in one cell is paradoxical considering random protein synthesis sites, predicted rapid diffusion rates, and well-mixed cytosol. In studying the surprising fungal nuclear autonomy, new principles of cellular organization are emerging. We discuss the current understanding of nuclear autonomy, focusing on asynchronous cell cycle progression where most work has been directed. Mechanisms underlying nuclear autonomy are diverse including mRNA localization, ploidy variability, and nuclear spacing control. With the challenges fungal syncytia face due to cytoplasmic size and shape, they serve as powerful models for uncovering new subcellular organization modes, variability sources among isogenic uninucleate cells, and the evolution of multicellularity. PMID:26379197

  7. Genetic variability of garlic accessions as revealed by agro-morphological traits evaluated under different environments.

    PubMed

    Hoogerheide, E S S; Azevedo Filho, J A; Vencovsky, R; Zucchi, M I; Zago, B W; Pinheiro, J B

    2017-05-31

    The cultivated garlic (Allium sativum L.) displays a wide phenotypic diversity, which is derived from natural mutations and phenotypic plasticity due to dependence on soil type, moisture, latitude, altitude and cultural practices, leading to a large number of cultivars. This study aimed to evaluate the genetic variability among 63 garlic accessions belonging to the Instituto Agronômico de Campinas and the Escola Superior de Agricultura "Luiz de Queiroz" germplasm collections. We evaluated ten quantitative characters in experimental trials conducted in two localities of the State of São Paulo, Monte Alegre do Sul (MAS) and Piracicaba, during the agricultural year of 2007, in a randomized block design with five replications. The Mahalanobis distance was used to measure genetic dissimilarities, and the UPGMA and Tocher's methods were used as clustering procedures. Results indicated significant variation among accessions (P < 0.01) for all evaluated characters, except for the percentage of secondary bulb growth in MAS, indicating the existence of genetic variation for bulb production. Germplasm evaluation considering different environments is more reliable for characterizing the genotypic variability among garlic accessions, since it diminishes the environmental effects in the clustering of genotypes.
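
    A compact sketch of the dissimilarity-and-clustering step described above, using SciPy with a placeholder matrix of accession means (Tocher's method is omitted, and the data are simulated rather than the study's measurements):

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.cluster.hierarchy import linkage, fcluster

        rng = np.random.default_rng(6)
        means = rng.normal(size=(63, 10))        # placeholder: 63 accessions x 10 quantitative characters

        # Mahalanobis distances between accessions (pdist estimates the inverse covariance internally)
        d = pdist(means, metric="mahalanobis")

        # UPGMA = average-linkage hierarchical clustering on the distance matrix
        tree = linkage(d, method="average")
        groups = fcluster(tree, t=5, criterion="maxclust")   # e.g. cut the dendrogram into 5 groups
        print(groups)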

  8. Reducing seed dependent variability of non-uniformly sampled multidimensional NMR data

    NASA Astrophysics Data System (ADS)

    Mobli, Mehdi

    2015-07-01

    The application of NMR spectroscopy to study the structure, dynamics and function of macromolecules requires the acquisition of several multidimensional spectra. The one-dimensional NMR time-response from the spectrometer is extended to additional dimensions by introducing incremented delays in the experiment that cause oscillation of the signal along "indirect" dimensions. For a given dimension the delay is incremented at twice the rate of the maximum frequency (Nyquist rate). To achieve high-resolution requires acquisition of long data records sampled at the Nyquist rate. This is typically a prohibitive step due to time constraints, resulting in sub-optimal data records to the detriment of subsequent analyses. The multidimensional NMR spectrum itself is typically sparse, and it has been shown that in such cases it is possible to use non-Fourier methods to reconstruct a high-resolution multidimensional spectrum from a random subset of non-uniformly sampled (NUS) data. For a given acquisition time, NUS has the potential to improve the sensitivity and resolution of a multidimensional spectrum, compared to traditional uniform sampling. The improvements in sensitivity and/or resolution achieved by NUS are heavily dependent on the distribution of points in the random subset acquired. Typically, random points are selected from a probability density function (PDF) weighted according to the NMR signal envelope. In extreme cases as little as 1% of the data is subsampled. The heavy under-sampling can result in poor reproducibility, i.e. when two experiments are carried out where the same number of random samples is selected from the same PDF but using different random seeds. Here, a jittered sampling approach is introduced that is shown to improve random seed dependent reproducibility of multidimensional spectra generated from NUS data, compared to commonly applied NUS methods. It is shown that this is achieved due to the low variability of the inherent sensitivity of the random subset chosen from a given PDF. Finally, it is demonstrated that metrics used to find optimal NUS distributions are heavily dependent on the inherent sensitivity of the random subset, and such optimisation is therefore less critical when using the proposed sampling scheme.
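
    An illustrative comparison of plain weighted random NUS with a jittered (stratified) variant in one indirect dimension. This is a generic sketch of the idea rather than the paper's exact scheme, and the grid size, sampling fraction, and decay constant are assumptions:

        import numpy as np

        rng = np.random.default_rng(7)
        n_max, n_keep = 256, 64                    # full Nyquist grid and number of sampled increments
        t = np.arange(n_max)
        pdf = np.exp(-t / 80.0)                    # sampling density matched to the decaying signal envelope
        pdf /= pdf.sum()

        # conventional NUS: n_keep points drawn from the weighted PDF (strongly seed-dependent)
        random_schedule = np.sort(rng.choice(n_max, size=n_keep, replace=False, p=pdf))

        # jittered NUS: split the cumulative density into n_keep equal strata and pick one point per stratum,
        # so every seed covers the signal envelope in roughly the same way
        cdf = np.cumsum(pdf)
        edges = np.searchsorted(cdf, np.linspace(0, 1, n_keep + 1)[1:-1])
        bins = np.split(t, edges)
        jittered_schedule = np.array([rng.choice(b) for b in bins if b.size > 0])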

  9. Multicollinearity in canonical correlation analysis in maize.

    PubMed

    Alves, B M; Cargnelutti Filho, A; Burin, C

    2017-03-30

    The objective of this study was to evaluate the effects of multicollinearity under two methods of canonical correlation analysis (with and without elimination of variables) in maize (Zea mays L.) crop. Seventy-six maize genotypes were evaluated in three experiments, conducted in a randomized block design with three replications, during the 2009/2010 crop season. Eleven agronomic variables (number of days from sowing until female flowering, number of days from sowing until male flowering, plant height, ear insertion height, ear placement, number of plants, number of ears, ear index, ear weight, grain yield, and one thousand grain weight), 12 protein-nutritional variables (crude protein, lysine, methionine, cysteine, threonine, tryptophan, valine, isoleucine, leucine, phenylalanine, histidine, and arginine), and 6 energetic-nutritional variables (apparent metabolizable energy, apparent metabolizable energy corrected for nitrogen, ether extract, crude fiber, starch, and amylose) were measured. A phenotypic correlation matrix was first generated among the 29 variables for each of the experiments. A multicollinearity diagnosis was later performed within each group of variables using methodologies such as variance inflation factor and condition number. Canonical correlation analysis was then performed, with and without the elimination of variables, among groups of agronomic and protein-nutritional, and agronomic and energetic-nutritional variables. The canonical correlation analysis in the presence of multicollinearity (without elimination of variables) overestimates the variability of canonical coefficients. The elimination of variables is an efficient method to circumvent multicollinearity in canonical correlation analysis.
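
    A short sketch of the variance-inflation-factor diagnosis mentioned above, computed with statsmodels on a hypothetical trait matrix (the usual rule of thumb flags VIF values above about 10):

        import numpy as np
        import pandas as pd
        from statsmodels.stats.outliers_influence import variance_inflation_factor
        from statsmodels.tools.tools import add_constant

        # hypothetical matrix: rows = genotypes, columns = agronomic traits
        traits = pd.DataFrame(np.random.default_rng(8).normal(size=(76, 5)),
                              columns=["plant_height", "ear_height", "ear_weight", "grain_yield", "ears"])
        traits["ear_height"] += 0.95 * traits["plant_height"]       # induce collinearity for illustration

        X = add_constant(traits)
        vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
                        index=traits.columns)
        print(vif)   # traits with large VIF are candidates for elimination before canonical correlation analysis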

  10. Missing Data and Multiple Imputation: An Unbiased Approach

    NASA Technical Reports Server (NTRS)

    Foy, M.; VanBaalen, M.; Wear, M.; Mendez, C.; Mason, S.; Meyers, V.; Alexander, D.; Law, J.

    2014-01-01

    The default method of dealing with missing data in statistical analyses is to only use the complete observations (complete case analysis), which can lead to unexpected bias when data do not meet the assumption of missing completely at random (MCAR). For the assumption of MCAR to be met, missingness cannot be related to either the observed or unobserved variables. A less stringent assumption, missing at random (MAR), requires that missingness not be associated with the value of the missing variable itself, but can be associated with the other observed variables. When data are truly MAR as opposed to MCAR, the default complete case analysis method can lead to biased results. There are statistical options available to adjust for data that are MAR, including multiple imputation (MI) which is consistent and efficient at estimating effects. Multiple imputation uses informing variables to determine statistical distributions for each piece of missing data. Then multiple datasets are created by randomly drawing on the distributions for each piece of missing data. Since MI is efficient, only a limited number, usually less than 20, of imputed datasets are required to get stable estimates. Each imputed dataset is analyzed using standard statistical techniques, and then results are combined to get overall estimates of effect. A simulation study will be demonstrated to show the results of using the default complete case analysis, and MI in a linear regression of MCAR and MAR simulated data. Further, MI was successfully applied to the association study of CO2 levels and headaches when initial analysis showed there may be an underlying association between missing CO2 levels and reported headaches. Through MI, we were able to show that there is a strong association between average CO2 levels and the risk of headaches. Each unit increase in CO2 (mmHg) resulted in a doubling in the odds of reported headaches.
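
    A minimal sketch of the MI workflow described above, using scikit-learn's IterativeImputer drawn several times and pooled by averaging (file and column names are hypothetical, the outcome is assumed fully observed, and Rubin's full variance combination is omitted for brevity):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (activates the estimator)
        from sklearn.impute import IterativeImputer

        df = pd.read_csv("co2_headaches.csv")     # hypothetical columns: co2, headache, plus informing variables
        m = 10                                    # number of imputed datasets (usually fewer than 20)
        estimates = []

        for seed in range(m):
            imputer = IterativeImputer(sample_posterior=True, random_state=seed)  # draws from predictive distributions
            imputed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
            # assumes headache is fully observed and binary; only CO2 and informing variables contain missing values
            model = sm.Logit(imputed["headache"], sm.add_constant(imputed["co2"])).fit(disp=0)
            estimates.append(model.params["co2"])

        pooled = np.mean(estimates)               # pooled log-odds per unit CO2 (Rubin's rules add a variance term)
        print(np.exp(pooled))                     # odds ratio per mmHg increase in CO2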

  11. Phylogenetic comparative methods complement discriminant function analysis in ecomorphology.

    PubMed

    Barr, W Andrew; Scott, Robert S

    2014-04-01

    In ecomorphology, Discriminant Function Analysis (DFA) has been used as evidence for the presence of functional links between morphometric variables and ecological categories. Here we conduct simulations of characters containing phylogenetic signal to explore the performance of DFA under a variety of conditions. Characters were simulated using a phylogeny of extant antelope species from known habitats. Characters were modeled with no biomechanical relationship to the habitat category; the only sources of variation were body mass, phylogenetic signal, or random "noise." DFA on the discriminability of habitat categories was performed using subsets of the simulated characters, and Phylogenetic Generalized Least Squares (PGLS) was performed for each character. Analyses were repeated with randomized habitat assignments. When simulated characters lacked phylogenetic signal and/or habitat assignments were random, <5.6% of DFAs and <8.26% of PGLS analyses were significant. When characters contained phylogenetic signal and actual habitats were used, 33.27 to 45.07% of DFAs and <13.09% of PGLS analyses were significant. False Discovery Rate (FDR) corrections for multiple PGLS analyses reduced the rate of significance to <4.64%. In all cases using actual habitats and characters with phylogenetic signal, correct classification rates of DFAs exceeded random chance. In simulations involving phylogenetic signal in both predictor variables and predicted categories, PGLS with FDR was rarely significant, while DFA often was. In short, DFA offered no indication that differences between categories might be explained by phylogenetic signal, while PGLS did. As such, PGLS provides a valuable tool for testing the functional hypotheses at the heart of ecomorphology. Copyright © 2013 Wiley Periodicals, Inc.

  12. Comonotonic bounds on the survival probabilities in the Lee-Carter model for mortality projection

    NASA Astrophysics Data System (ADS)

    Denuit, Michel; Dhaene, Jan

    2007-06-01

    In the Lee-Carter framework, future survival probabilities are random variables with an intricate distribution function. In large homogeneous portfolios of life annuities, value-at-risk or conditional tail expectation of the total yearly payout of the company are approximately equal to the corresponding quantities involving random survival probabilities. This paper aims to derive some bounds in the increasing convex (or stop-loss) sense on these random survival probabilities. These bounds are obtained with the help of comonotonic upper and lower bounds on sums of correlated random variables.

  13. Variance approach for multi-objective linear programming with fuzzy random of objective function coefficients

    NASA Astrophysics Data System (ADS)

    Indarsih, Indrati, Ch. Rini

    2016-02-01

    In this paper, we define the variance of fuzzy random variables through alpha levels and present a theorem showing that the variance of a fuzzy random variable is a fuzzy number. We consider a multi-objective linear programming (MOLP) problem with fuzzy random objective function coefficients and solve it by a variance approach. The approach transforms the MOLP problem with fuzzy random objective function coefficients into an MOLP problem with fuzzy objective function coefficients. By the weighted method, we obtain a linear programming problem with fuzzy coefficients, which we solve by the simplex method for fuzzy linear programming.

  14. Probabilistic SSME blades structural response under random pulse loading

    NASA Technical Reports Server (NTRS)

    Shiao, Michael; Rubinstein, Robert; Nagpal, Vinod K.

    1987-01-01

    The purpose is to develop models of random impacts on a Space Shuttle Main Engine (SSME) turbopump blade and to predict the probabilistic structural response of the blade to these impacts. The random loading is caused by the impact of debris. The probabilistic structural response is characterized by distribution functions for stress and displacements as functions of the loading parameters which determine the random pulse model. These parameters include pulse arrival, amplitude, and location. The analysis can be extended to predict level crossing rates. This requires knowledge of the joint distribution of the response and its derivative. The model of random impacts chosen allows the pulse arrivals, pulse amplitudes, and pulse locations to be random. Specifically, the pulse arrivals are assumed to be governed by a Poisson process, which is characterized by a mean arrival rate. The pulse intensity is modelled as a normally distributed random variable with a zero mean chosen independently at each arrival. The standard deviation of the distribution is a measure of pulse intensity. Several different models were used for the pulse locations. For example, three points near the blade tip were chosen at which pulses were allowed to arrive with equal probability. Again, the locations were chosen independently at each arrival. The structural response was analyzed both by direct Monte Carlo simulation and by a semi-analytical method.
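
    A condensed sketch of the random impact model described above (Poisson arrivals, zero-mean normal amplitudes, equally probable tip locations). The arrival rate, amplitude scale, and locations are placeholder values, and the structural response calculation itself is not represented:

        import numpy as np

        rng = np.random.default_rng(9)
        rate = 2.0              # mean pulse arrival rate (pulses per unit time), assumed value
        sigma_amp = 1.0         # standard deviation of pulse amplitude (measure of pulse intensity)
        locations = [0, 1, 2]   # three candidate impact points near the blade tip
        t_end = 100.0

        def simulate_one_history():
            """One realization of the random pulse loading on the blade."""
            pulses = []
            t = rng.exponential(1.0 / rate)
            while t < t_end:                               # Poisson process: exponential inter-arrival times
                amplitude = rng.normal(0.0, sigma_amp)     # zero-mean normal amplitude, independent per pulse
                location = rng.choice(locations)           # equally probable impact locations
                pulses.append((t, amplitude, location))
                t += rng.exponential(1.0 / rate)
            return pulses

        # Monte Carlo: each load history would be run through the structural model to build
        # the distributions of stress and displacement
        histories = [simulate_one_history() for _ in range(1000)]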

  15. A reliability-based cost effective fail-safe design procedure

    NASA Technical Reports Server (NTRS)

    Hanagud, S.; Uppaluri, B.

    1976-01-01

    The authors have developed a methodology for cost-effective fatigue design of structures subject to random fatigue loading. A stochastic model for fatigue crack propagation under random loading is discussed. Fracture mechanics is then used to estimate the parameters of the model and the residual strength of structures with cracks. The stochastic model and residual strength variations have been used to develop procedures for estimating the probability of failure and its changes with inspection frequency. This information on reliability is then used to construct an objective function in terms of either a total weight function or a cost function. A procedure for selecting the design variables, subject to constraints, by optimizing the objective function is illustrated by examples. In particular, the optimum design of a stiffened panel is discussed.

  16. Existence and Optimality Conditions for Risk-Averse PDE-Constrained Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kouri, Drew Philip; Surowiec, Thomas M.

    Uncertainty is ubiquitous in virtually all engineering applications, and, for such problems, it is inadequate to simulate the underlying physics without quantifying the uncertainty in unknown or random inputs, boundary and initial conditions, and modeling assumptions. Here in this paper, we introduce a general framework for analyzing risk-averse optimization problems constrained by partial differential equations (PDEs). In particular, we postulate conditions on the random variable objective function as well as the PDE solution that guarantee existence of minimizers. Furthermore, we derive optimality conditions and apply our results to the control of an environmental contaminant. Lastly, we introduce a new risk measure, called the conditional entropic risk, that fuses desirable properties from both the conditional value-at-risk and the entropic risk measures.

  17. Existence and Optimality Conditions for Risk-Averse PDE-Constrained Optimization

    DOE PAGES

    Kouri, Drew Philip; Surowiec, Thomas M.

    2018-06-05

    Uncertainty is ubiquitous in virtually all engineering applications, and, for such problems, it is inadequate to simulate the underlying physics without quantifying the uncertainty in unknown or random inputs, boundary and initial conditions, and modeling assumptions. Here in this paper, we introduce a general framework for analyzing risk-averse optimization problems constrained by partial differential equations (PDEs). In particular, we postulate conditions on the random variable objective function as well as the PDE solution that guarantee existence of minimizers. Furthermore, we derive optimality conditions and apply our results to the control of an environmental contaminant. Lastly, we introduce a new risk measure, called the conditional entropic risk, that fuses desirable properties from both the conditional value-at-risk and the entropic risk measures.
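
    For reference, the two building blocks named in the abstract have the standard definitions below (Rockafellar-Uryasev form for CVaR); the paper's combined conditional entropic risk measure is not reproduced here.

        % Conditional value-at-risk at level \beta \in (0,1):
        \[
        \mathrm{CVaR}_{\beta}[X] \;=\; \inf_{t \in \mathbb{R}}
        \Bigl\{ t + \tfrac{1}{1-\beta}\,\mathbb{E}\bigl[(X - t)_+\bigr] \Bigr\},
        \]
        % Entropic risk with risk-aversion parameter \sigma > 0:
        \[
        \rho_{\sigma}[X] \;=\; \sigma \,\log \mathbb{E}\bigl[\exp(X/\sigma)\bigr].
        \]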

  18. Exploiting Data Missingness in Bayesian Network Modeling

    NASA Astrophysics Data System (ADS)

    Rodrigues de Morais, Sérgio; Aussem, Alex

    This paper proposes a framework built on the use of Bayesian networks (BN) for representing statistical dependencies between the existing random variables and additional dummy boolean variables, which represent the presence/absence of the respective random variable value. We show how augmenting the BN with these additional variables helps pinpoint the mechanism through which missing data contributes to the classification task. The missing data mechanism is thus explicitly taken into account to predict the class variable using the data at hand. Extensive experiments on synthetic and real-world incomplete data sets reveal that the missingness information improves classification accuracy.
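
    The general idea of augmenting the data with presence/absence indicators can be sketched as follows; the toy data set and column names are hypothetical, and the sketch does not reproduce the Bayesian-network structure learning used in the paper.

        import numpy as np
        import pandas as pd

        # Hypothetical incomplete data set: two features with missing entries, one class label.
        df = pd.DataFrame({
            "x1": [1.2, np.nan, 0.7, np.nan, 2.1],
            "x2": [np.nan, 3.4, 1.1, 0.9, np.nan],
            "y":  [0, 1, 0, 1, 1],
        })

        # Dummy boolean variables encoding presence/absence of each feature value.
        indicators = df[["x1", "x2"]].isna().astype(int).add_suffix("_missing")

        # Augmented design matrix: observed values (simply mean-imputed here) plus
        # missingness flags, so a downstream classifier can exploit an informative
        # missing-data mechanism.
        X = pd.concat([df[["x1", "x2"]].fillna(df[["x1", "x2"]].mean()), indicators], axis=1)
        print(X)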

  19. Inverse Ising problem in continuous time: A latent variable approach

    NASA Astrophysics Data System (ADS)

    Donner, Christian; Opper, Manfred

    2017-12-01

    We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
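
    The Pólya-Gamma device mentioned above rests on the standard integral identity of Polson, Scott and Windle, written here in generic notation; the specific augmented Glauber-dynamics likelihood of the paper is not reproduced.

        % Pólya-Gamma augmentation identity: for b > 0 and \kappa = a - b/2,
        \[
        \frac{(e^{\psi})^{a}}{(1 + e^{\psi})^{b}}
        \;=\;
        2^{-b}\, e^{\kappa \psi}
        \int_{0}^{\infty} e^{-\omega \psi^{2}/2}\; p_{\mathrm{PG}}(\omega \mid b, 0)\, \mathrm{d}\omega ,
        \]
        % so, conditional on \omega, the exponent is quadratic in \psi (and hence in the couplings).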

  20. A generalized correlation of experimental flat-plate collector performance. [solar collectors, performance tests, energy policy]

    NASA Technical Reports Server (NTRS)

    Simon, F. F.; Miller, D. R.

    1975-01-01

    A generalized collector performance correlation was derived and shown by experimental verification to be of the proper form to account for the majority of the variable conditions encountered both in outdoor and in indoor collector tests. This correlation permits a determination of collector parameters which are essentially nonvarying under conditions which do vary randomly (outdoors) or conditions which vary in a controlled manner (indoors - simulator). It was shown that correlation of the experimental performance of collectors allows the following: (1) comparisons of different collector designs; (2) collector performance prediction under conditions that differ from the conditions of the test program; and (3) monitoring performance degradation effects.
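
    For context, flat-plate collector test data are classically correlated in the Hottel-Whillier-Bliss form shown below; this is included only for orientation and may differ in detail from the generalized correlation derived in the report.

        % Classical flat-plate collector efficiency correlation (Hottel-Whillier-Bliss form):
        \[
        \eta \;=\; F_R(\tau\alpha)_e \;-\; F_R U_L\,\frac{T_{\mathrm{in}} - T_{\mathrm{amb}}}{G},
        \]
        % where F_R(\tau\alpha)_e and F_R U_L are the near-constant collector parameters
        % recovered from test data, and G is the incident solar flux.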

  1. Mendelian randomization analysis of a time-varying exposure for binary disease outcomes using functional data analysis methods.

    PubMed

    Cao, Ying; Rajan, Suja S; Wei, Peng

    2016-12-01

    A Mendelian randomization (MR) analysis is performed to analyze the causal effect of an exposure variable on a disease outcome in observational studies, by using genetic variants that affect the disease outcome only through the exposure variable. This method has recently gained popularity among epidemiologists given the success of genetic association studies. Many exposure variables of interest in epidemiological studies are time varying, for example, body mass index (BMI). Although longitudinal data have been collected in many cohort studies, current MR studies only use one measurement of a time-varying exposure variable, which cannot adequately capture the long-term time-varying information. We propose using the functional principal component analysis method to recover the underlying individual trajectory of the time-varying exposure from the sparsely and irregularly observed longitudinal data, and then conduct MR analysis using the recovered curves. We further propose two MR analysis methods. The first assumes a cumulative effect of the time-varying exposure variable on the disease risk, while the second assumes a time-varying genetic effect and employs functional regression models. We focus on statistical testing for a causal effect. Our simulation studies mimicking the real data show that the proposed functional data analysis based methods incorporating longitudinal data have substantial power gains compared to standard MR analysis using only one measurement. We used the Framingham Heart Study data to demonstrate the promising performance of the new methods as well as inconsistent results produced by the standard MR analysis that relies on a single measurement of the exposure at some arbitrary time point. © 2016 WILEY PERIODICALS, INC.
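
    To fix ideas, the sketch below shows the standard single-measurement Mendelian randomization (Wald ratio) estimate on simulated data, with a continuous outcome for simplicity; it corresponds to the baseline the paper compares against, not the proposed functional-data methods.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 5000

        g = rng.binomial(2, 0.3, n)                 # genetic instrument (0/1/2 allele count)
        u = rng.normal(size=n)                      # unmeasured confounder
        x = 0.5 * g + u + rng.normal(size=n)        # exposure (e.g. a single BMI measurement)
        y = 0.3 * x + u + rng.normal(size=n)        # outcome, true causal effect 0.3

        # Wald ratio estimate: (G -> Y association) / (G -> X association)
        beta_gx = np.cov(g, x)[0, 1] / np.var(g, ddof=1)
        beta_gy = np.cov(g, y)[0, 1] / np.var(g, ddof=1)
        print("MR ratio estimate of causal effect:", beta_gy / beta_gx)

        # Naive regression of y on x is confounded and overestimates the effect
        print("Confounded OLS slope:", np.cov(x, y)[0, 1] / np.var(x, ddof=1))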

  2. Stress and reliability analyses of multilayered composite cylinder under thermal and mechanical loads

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohua

    The coupling resulting from the mutual influence of material thermal and mechanical parameters is examined in the thermal stress analysis of a multilayered isotropic composite cylinder subjected to sudden axisymmetric external and internal temperatures. The method of complex frequency response functions together with the Fourier transform technique is utilized. Because the coupling parameters for some composite materials, such as carbon-carbon, are very small, the effect of coupling is neglected in the orthotropic thermal stress analysis. The stress distributions in multilayered orthotropic cylinders subjected to sudden axisymmetric temperature loading combined with dynamic pressure, as well as asymmetric temperature loading, are also obtained. The method of Fourier series together with the Laplace transform is utilized in solving the heat conduction equation and the thermal stress analysis. For brittle materials, like carbon-carbon composites, the strength variability is represented by two- or three-parameter Weibull distributions, and the 'weakest link' principle is applied to the carbon-carbon composite cylinders. The complex frequency response analysis is performed on a multilayered orthotropic cylinder under asymmetrical thermal load. Both deterministic and random thermal stress and reliability analyses can be based on the results of this frequency response analysis. The stress and displacement distributions and reliability of rocket motors under static or dynamic line loads are analyzed by an elasticity approach. Rocket motors are modeled as long hollow multilayered cylinders with an air core, a thick isotropic propellant inner layer and a thin orthotropic Kevlar-epoxy case. The case is treated as a single orthotropic layer or a ten-layered orthotropic structure. Five material properties and the load are treated as random variables with normal distributions when the reliability of the rocket motor is analyzed by the first-order, second-moment method (FOSM).

  3. Nonrecurrence and Bell-like inequalities

    NASA Astrophysics Data System (ADS)

    Danforth, Douglas G.

    2017-12-01

    The general class, Λ, of Bell hidden variables is composed of two subclasses ΛR and ΛN such that ΛR ∪ ΛN = Λ and ΛR ∩ ΛN = ∅. The class ΛN is very large and contains random variables whose domain is the continuum, the reals. The reals are uncountably infinite. Every instance of a real random variable is unique. The probability of two instances being equal is zero, exactly zero. ΛN induces sample independence. All correlations are context dependent but not in the usual sense. There is no "spooky action at a distance". Random variables belonging to ΛN are independent from one experiment to the next. The existence of the class ΛN makes it impossible to derive any of the standard Bell inequalities used to define quantum entanglement.

  4. Perturbed effects at radiation physics

    NASA Astrophysics Data System (ADS)

    Külahcı, Fatih; Şen, Zekâi

    2013-09-01

    Perturbation methodology is applied in order to assess the behavior of the linear attenuation coefficient, mass attenuation coefficient and cross-section when the basic variables, such as the radiation amounts frequently used in radiation physics and chemistry, contain random components. Additionally, a layer attenuation coefficient (LAC) and a perturbed LAC (PLAC) are proposed for different contact materials. Perturbation methodology provides the opportunity to obtain results with random deviations from the average behavior of each variable that enters the mathematical expression. The basic photon intensity variation expression, the inverse exponential power law (the Beer-Lambert law), is adopted for the exposition of the perturbation method. Perturbed results are presented not only in terms of the mean but also the standard deviation and the correlation coefficients. Such perturbation expressions allow one to assess small random variability in the basic variables.
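
    The attenuation law referred to above, together with a generic first-order perturbation of the attenuation coefficient, can be written as follows; the expansion is an illustration only, not the authors' exact expressions.

        % Beer-Lambert attenuation through thickness x with linear attenuation coefficient \mu:
        \[
        I(x) = I_0 \, e^{-\mu x}.
        \]
        % Writing \mu = \bar{\mu} + \mu' with a small zero-mean random component \mu',
        % a first-order perturbation of the transmitted intensity gives
        \[
        I(x) \approx I_0 e^{-\bar{\mu} x}\bigl(1 - \mu' x\bigr),
        \qquad
        \mathrm{Var}[I(x)] \approx \bigl(I_0\, x\, e^{-\bar{\mu} x}\bigr)^{2}\,\mathrm{Var}[\mu'].
        \]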

  5. Multimode resource-constrained multiple project scheduling problem under fuzzy random environment and its application to a large scale hydropower construction project.

    PubMed

    Xu, Jiuping; Feng, Cuiying

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.

  6. Multimode Resource-Constrained Multiple Project Scheduling Problem under Fuzzy Random Environment and Its Application to a Large Scale Hydropower Construction Project

    PubMed Central

    Xu, Jiuping

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method. PMID:24550708

  7. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    EPA Science Inventory

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  8. Evaluation of Gas Phase Dispersion in Flotation under Predetermined Hydrodynamic Conditions

    NASA Astrophysics Data System (ADS)

    Młynarczykowska, Anna; Oleksik, Konrad; Tupek-Murowany, Klaudia

    2018-03-01

    Results of various investigations show the relationship between the flotation parameters and the gas distribution in a flotation cell. The size of gas bubbles is a random variable with a specific distribution, and the analysis of this distribution is useful for a mathematical description of the flotation process. The flotation process depends on many variable factors, mainly events such as the collision of a single particle with a gas bubble, the adhesion of a particle to the bubble surface and the detachment process. These factors are characterized by randomness. Because of that, it is only possible to speak of the probability of occurrence of each of these events, which directly affects the speed of the process and thus the flotation rate constant. The probability of bubble-particle collision in a flotation chamber with mechanical pulp agitation depends on the surface tension of the solution, air consumption, degree of pulp aeration, energy dissipation and average feed particle size. Appropriate identification and description of the parameters of gas bubble dispersion helps to complete the analysis of the flotation process under specific physicochemical and hydrodynamic conditions for any raw material. The article presents the results of measurements and analysis of gas phase dispersion, via the size distribution of air bubbles, in a flotation chamber under fixed hydrodynamic conditions. The tests were carried out in the Laboratory of Instrumental Methods, Department of Environmental Engineering and Mineral Processing, Faculty of Mining and Geoengineering, AGH University of Science and Technology in Krakow.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oikari, A.O.J.

    The relevance of the choice of a test organism intended to be representative of a given environment seems to be under continual debate in aquatic ecotoxicology. For instance, it is commonly argued that acute toxicity tests with rainbow trout, the species most often recommended as a standard cold-water teleost, are not representative for Nordic countries because the species is an alien in local faunas. A comparative study with several freshwater species was therefore initiated to clarify the validity of this assumption. As a first approximation, standard LC50 assays were conducted. The species used were chosen only on the basis of their local availability, i.e., they randomly represented the fish fauna of Nordic inland waters. Furthermore, inter-species variation of toxicity response was compared with certain other, quantitatively more important, intra-species sources of variability affecting the toxicity of chemicals. Use of reference toxicants has been recommended as a means of standardizing bioassays. Compounds characteristic of effluents from the pulp and paper industry were selected for the present study. The toxicity of organic acids such as phenols and resin acids, as well as that of pulp mill effluents, strongly depends on water pH. Because species differences could exist in this respect, the effect of water acidity on the toxicity of these types of substances to a randomly selected local species was investigated. Finally, as an example of a biological source of assay variability, the effect of yolk absorption, with a subsequent crisis period due to moderate starvation under laboratory conditions, was studied.

  10. Benford's law and continuous dependent random variables

    NASA Astrophysics Data System (ADS)

    Becker, Thealexa; Burt, David; Corcoran, Taylor C.; Greaves-Tunnell, Alec; Iafrate, Joseph R.; Jing, Joy; Miller, Steven J.; Porfilio, Jaclyn D.; Ronan, Ryan; Samranvedhya, Jirapat; Strauch, Frederick W.; Talbut, Blaine

    2018-01-01

    Many mathematical, man-made and natural systems exhibit a leading-digit bias, where a first digit (base 10) of 1 occurs not 11% of the time, as one would expect if all digits were equally likely, but rather 30% of the time. This phenomenon is known as Benford's Law. Analyzing which datasets adhere to Benford's Law and how quickly Benford behavior sets in are the two most important problems in the field. Most previous work studied systems of independent random variables and relied on that independence in the analysis. Inspired by natural processes such as particle decay, we study the dependent random variables that emerge from models of decomposition of conserved quantities. We prove that in many instances the distribution of lengths of the resulting pieces converges to Benford behavior as the number of divisions grows, and give several conjectures for other fragmentation processes. The main difficulty is that the resulting random variables are dependent. We handle this by using tools from Fourier analysis and irrationality exponents to obtain quantified convergence rates, as well as by introducing and developing techniques to measure and control the dependencies. The construction of these tools is one of the major motivations of this work, as our approach can be applied to many other dependent systems. As an example, we show that the n! terms in the determinant expansions of n × n matrices, with entries independently drawn from nice random variables, converge to Benford's Law.
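
    For reference, Benford's law for the leading digit, which gives the roughly 30% figure quoted above, is:

        % Benford's law for the leading (base-10) digit d \in \{1,\dots,9\}:
        \[
        P(D_1 = d) = \log_{10}\!\Bigl(1 + \frac{1}{d}\Bigr),
        \qquad
        P(D_1 = 1) = \log_{10} 2 \approx 0.301 .
        \]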

  11. Random glucose is useful for individual prediction of type 2 diabetes: results of the Study of Health in Pomerania (SHIP).

    PubMed

    Kowall, Bernd; Rathmann, Wolfgang; Giani, Guido; Schipf, Sabine; Baumeister, Sebastian; Wallaschofski, Henri; Nauck, Matthias; Völzke, Henry

    2013-04-01

    Random glucose is widely used in routine clinical practice. We investigated whether this non-standardized glycemic measure is useful for individual diabetes prediction. The Study of Health in Pomerania (SHIP), a population-based cohort study in north-east Germany, included 3107 diabetes-free persons aged 31-81 years at baseline in 1997-2001. 2475 persons participated at 5-year follow-up and gave self-reports of incident diabetes. For the total sample and for subjects aged ≥50 years, statistical properties of prediction models with and without random glucose were compared. A basic model (including age, sex, diabetes of parents, hypertension and waist circumference) and a comprehensive model (additionally including various lifestyle variables and blood parameters, but not HbA1c) performed statistically significantly better after adding random glucose (e.g., the area under the receiver-operating curve (AROC) increased from 0.824 to 0.856 after adding random glucose to the comprehensive model in the total sample). Likewise, adding random glucose to prediction models which included HbA1c led to significant improvements of predictive ability (e.g., for subjects ≥50 years, AROC increased from 0.824 to 0.849 after adding random glucose to the comprehensive model+HbA1c). Random glucose is useful for individual diabetes prediction, and improves prediction models including HbA1c. Copyright © 2012 Primary Care Diabetes Europe. Published by Elsevier Ltd. All rights reserved.

  12. Comparison of the variability of the onset and recovery from neuromuscular blockade with cisatracurium versus rocuronium in elderly patients under total intravenous anesthesia

    PubMed Central

    Xiaobo, Feng; Jianjuan, Ke; Yanlin, Wang

    2012-01-01

    This study was designed to compare the variability of the onset and offset of the effect of two neuromuscular blocking drugs with different elimination pathways in adult and elderly patients during total intravenous anesthesia (TIVA). After Ethics Committee approval and patients' informed consent, the drugs were compared in 40 adult and 40 elderly patients scheduled for elective surgery under TIVA with tracheal intubation who were randomized to receive a single bolus dose of 0.15 mg/kg cisatracurium or 0.9 mg/kg rocuronium. The time of onset of maximum depression, duration of action, and recovery index time were measured and recorded for each patient and variability is reported as means ± standard deviation. Time of onset was significantly shorter for rocuronium than cisatracurium for the adult and elderly groups (P = 0.000), but the variability of cisatracurium was significantly greater compared with rocuronium for the same age groups (93.25 vs 37.01 s in the adult group and 64.56 vs 33.75 s in the elderly group; P = 0.000). The duration of the effect in the elderly group receiving rocuronium was significantly longer than in the elderly group receiving cisatracurium, and the variability of the duration was significantly greater in the rocuronium group than in the cisatracurium group. Mean time of recovery was significantly longer for the elderly group receiving rocuronium than for the elderly group receiving cisatracurium (P = 0.022), and variability was also greater (P = 0.002). Both drugs favored good intubating conditions. In conclusion, cisatracurium showed less variability in these parameters than rocuronium, especially in the elderly, a fact that may be of particular clinical interest. PMID:22584638

  13. Application of artificial neural networks to establish a predictive mortality risk model in children admitted to a paediatric intensive care unit.

    PubMed

    Chan, C H; Chan, E Y; Ng, D K; Chow, P Y; Kwok, K L

    2006-11-01

    Paediatric risk of mortality and paediatric index of mortality (PIM) are the commonly used mortality prediction models (MPM) in children admitted to a paediatric intensive care unit (PICU). The current study was undertaken to develop a better MPM using artificial neural networks, a domain of artificial intelligence. The purpose of this retrospective case series was to compare an artificial neural network (ANN) model and PIM with the observed mortality in a cohort of patients admitted to a five-bed PICU in a Hong Kong non-teaching general hospital. The patients were under the age of 17 years and admitted to our PICU from April 2001 to December 2004. Data were collected from each patient admitted to our PICU. All data were randomly allocated to either the training or validation set. The data from the training set were used to construct a series of ANN models. The data from the validation set were used to validate the ANN and PIM models. The accuracy of ANN models and PIM was assessed by area under the receiver operating characteristic (ROC) curve and calibration. All data were randomly allocated to either the training (n=274) or validation set (n=273). Three ANN models were developed using the data from the training set, namely ANN8 (trained with variables required for PIM), ANN9 (trained with variables required for PIM and pre-ICU intubation) and ANN23 (trained with variables required for ANN9 and 14 principal ICU diagnoses). Three ANN models and PIM were used to predict mortality in the validation set. We found that PIM and ANN9 had a high area under the ROC curve (PIM: 0.808, 95 percent confidence interval 0.552 to 1.000, ANN9: 0.957, 95 percent confidence interval 0.915 to 1.000), whereas ANN8 and ANN23 gave a suboptimal area under the ROC curve. ANN8 required only five variables for the calculation of risk, compared with eight for PIM. The current study demonstrated the process of predictive mortality risk model development using ANN. Further multicentre studies are required to produce a representative ANN-based mortality prediction model for use in different PICUs.
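
    The train/validation split and AUC evaluation workflow described above can be sketched generically as follows; the synthetic data and the scikit-learn MLP stand in for the study's PICU records and ANN models, and the sample size is only roughly matched.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 547                                       # roughly the study's total sample size
        X = rng.normal(size=(n, 5))                   # five hypothetical predictor variables
        p = 1 / (1 + np.exp(-(X @ np.array([1.0, -0.8, 0.5, 0.0, 0.3]))))
        y = rng.binomial(1, p)                        # synthetic mortality outcome

        # Random allocation to training and validation sets, as in the study design.
        X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5, random_state=0)

        model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
        model.fit(X_tr, y_tr)

        # Discrimination assessed by area under the ROC curve on the validation set.
        auc = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
        print(f"validation AUC: {auc:.3f}")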

  14. Random function representation of stationary stochastic vector processes for probability density evolution analysis of wind-induced structures

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui

    2018-06-01

    This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. Based on this, a satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured through just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach makes it possible to implement dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulence wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structure have been conducted so as to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
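
    As background, the classical single-variate spectral representation that the hybrid approach builds on can be sketched as follows; the power spectral density and parameters are placeholders, and the paper's reduction to two elementary random variables is not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)

        def spectral_representation_sample(S, omega_max, N, t):
            """One sample of a zero-mean stationary Gaussian process with one-sided
            PSD S(omega), via the classical spectral representation
            x(t) = sum_k sqrt(2 S(w_k) dw) * cos(w_k t + phi_k) with random phases."""
            d_omega = omega_max / N
            omegas = (np.arange(N) + 0.5) * d_omega          # midpoint frequencies
            phases = rng.uniform(0.0, 2.0 * np.pi, N)        # independent uniform phases
            amps = np.sqrt(2.0 * S(omegas) * d_omega)
            return (amps[:, None] * np.cos(np.outer(omegas, t) + phases[:, None])).sum(axis=0)

        # Illustrative low-pass PSD and time grid (placeholders, not the wind-field model).
        S = lambda w: 1.0 / (1.0 + w**2)
        t = np.linspace(0.0, 60.0, 2048)
        x = spectral_representation_sample(S, omega_max=10.0, N=512, t=t)
        print(x.mean(), x.std())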

  15. A Combined Cognitive Stimulation and Physical Exercise Programme (MINDVital) in Early Dementia: Differential Effects on Single- and Dual-Task Gait Performance.

    PubMed

    Tay, Laura; Lim, Wee Shiong; Chan, Mark; Ali, Noorhazlina; Chong, Mei Sian

    2016-01-01

    Gait disorders are common in early dementia, with particularly pronounced dual-task deficits, contributing to the increased fall risk and mobility decline associated with cognitive impairment. This study examines the effects of a combined cognitive stimulation and physical exercise programme (MINDVital) on gait performance under single- and dual-task conditions in older adults with mild dementia. Thirty-nine patients with early dementia participated in a multi-disciplinary rehabilitation programme comprising both physical exercise and cognitive stimulation. The programme was conducted in 8-week cycles with participants attending once weekly, and all participants completed 2 successive cycles. Cognitive, functional performance and behavioural symptoms were assessed at baseline and at the end of each 8-week cycle. Gait speed was examined under both single- (Timed Up and Go and 6-metre walk tests) and dual-task (animal category and serial counting) conditions. A random effects model was performed for the independent effect of MINDVital on the primary outcome variable of gait speed under dual-task conditions. The mean age of patients enroled in the rehabilitation programme was 79 ± 6.2 years; 25 (64.1%) had a diagnosis of Alzheimer's dementia, and 26 (66.7%) were receiving a cognitive enhancer therapy. There was a significant improvement in cognitive performance [random effects coefficient (standard error) = 0.90 (0.31), p = 0.003] and gait speed under both dual-task situations [animal category: random effects coefficient = 0.04 (0.02), p = 0.039; serial counting: random effects coefficient = 0.05 (0.02), p = 0.013], with reduced dual-task cost for gait speed [serial counting: random effects coefficient = -4.05 (2.35), p = 0.086] following successive MINDVital cycles. No significant improvement in single-task gait speed was observed. Improved cognitive performance over time was a significant determinant of changes in dual-task gait speed [random effects coefficients = 0.01 (0.005), p = 0.048, and 0.02 (0.005), p = 0.003 for category fluency and counting backwards, respectively]. A combined physical and cognitive rehabilitation programme leads to significant improvements in dual-task walking in early dementia, which may be contributed by improvement in cognitive performance, as single-task gait performance remained stable. © 2016 S. Karger AG, Basel.

  16. The living Drake equation of the Tau Zero Foundation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-03-01

    The living Drake equation is our statistical generalization of the Drake equation such that it can take into account any number of factors. This new result opens up the possibility to enrich the equation by inserting more new factors as long as the scientific learning increases. The adjective "Living" refers just to this continuous enrichment of the Drake equation and is the goal of a new research project that the Tau Zero Foundation has entrusted to this author as the discoverer of the statistical Drake equation described hereafter. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the lognormal distribution. Then, the mean value, standard deviation, mode, median and all the moments of this lognormal N can be derived from the means and standard deviations of the seven input random variables. In fact, the seven factors in the ordinary Drake equation now become seven independent positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) distance between any two neighbouring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, this distance now becomes a new random variable. We derive the relevant probability density function, apparently previously unknown (dubbed "Maccone distribution" by Paul Davies). Data Enrichment Principle. It should be noticed that any positive number of random variables in the statistical Drake equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor will be known to the scientists. This capability to make room for more future factors in the statistical Drake equation we call the "Data Enrichment Principle", and regard as the key to more profound, future results in Astrobiology and SETI.
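
    In compact (and necessarily simplified) notation, the argument runs as follows: the product of positive random factors becomes a sum of logarithms, to which a CLT-type approximation is applied.

        % Statistical Drake equation: N is a product of positive random factors D_i,
        \[
        N = \prod_{i=1}^{7} D_i
        \quad\Longrightarrow\quad
        \ln N = \sum_{i=1}^{7} \ln D_i .
        \]
        % By the CLT (Lyapunov/Lindeberg form), the sum of the independent terms \ln D_i
        % is approximately normal, hence N is approximately lognormal:
        \[
        \ln N \;\sim\; \mathcal{N}\Bigl(\textstyle\sum_i \mu_i,\ \sum_i \sigma_i^2\Bigr)
        \quad\text{(approximately)},
        \qquad \mu_i = \mathbb{E}[\ln D_i],\; \sigma_i^2 = \mathrm{Var}[\ln D_i].
        \]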

  17. Automated retrieval of forest structure variables based on multi-scale texture analysis of VHR satellite imagery

    NASA Astrophysics Data System (ADS)

    Beguet, Benoit; Guyon, Dominique; Boukir, Samia; Chehata, Nesrine

    2014-10-01

    The main goal of this study is to design a method to describe the structure of forest stands from Very High Resolution satellite imagery, relying on some typical variables such as crown diameter, tree height, trunk diameter, tree density and tree spacing. The emphasis is placed on the automatization of the process of identification of the most relevant image features for the forest structure retrieval task, exploiting both spectral and spatial information. Our approach is based on linear regressions between the forest structure variables to be estimated and various spectral and Haralick's texture features. The main drawback of this well-known texture representation is its underlying parameters, which are extremely difficult to set due to the spatial complexity of the forest structure. To tackle this major issue, an automated feature selection process is proposed which is based on statistical modeling, exploring a wide range of parameter values. It provides texture measures with diverse spatial parameters, hence implicitly inducing a multi-scale texture analysis. A new feature selection technique, which we call Random PRiF, is proposed. It relies on random sampling in feature space and carefully addresses the multicollinearity issue in multiple linear regression while ensuring accurate prediction of forest variables. Our automated forest variable estimation scheme was tested on Quickbird and Pléiades panchromatic and multispectral images, acquired at different periods on the maritime pine stands of two sites in South-Western France. It outperforms two well-established variable subset selection techniques. It has been successfully applied to identify the best texture features in modeling the five considered forest structure variables. The RMSE of all predicted forest variables is improved by combining multispectral and panchromatic texture features, with various parameterizations, highlighting the potential of a multi-resolution approach for retrieving forest structure variables from VHR satellite images. Thus an average prediction error of ~1.1 m is expected on crown diameter, ~0.9 m on tree spacing, ~3 m on height and ~0.06 m on diameter at breast height.

  18. Predicting network modules of cell cycle regulators using relative protein abundance statistics.

    PubMed

    Oguz, Cihan; Watson, Layne T; Baumann, William T; Tyson, John J

    2017-02-28

    Parameter estimation in systems biology is typically done by enforcing experimental observations through an objective function as the parameter space of a model is explored by numerical simulations. Past studies have shown that one usually finds a set of "feasible" parameter vectors that fit the available experimental data equally well, and that these alternative vectors can make different predictions under novel experimental conditions. In this study, we characterize the feasible region of a complex model of the budding yeast cell cycle under a large set of discrete experimental constraints in order to test whether the statistical features of relative protein abundance predictions are influenced by the topology of the cell cycle regulatory network. Using differential evolution, we generate an ensemble of feasible parameter vectors that reproduce the phenotypes (viable or inviable) of wild-type yeast cells and 110 mutant strains. We use this ensemble to predict the phenotypes of 129 mutant strains for which experimental data is not available. We identify 86 novel mutants that are predicted to be viable and then rank the cell cycle proteins in terms of their contributions to cumulative variability of relative protein abundance predictions. Proteins involved in "regulation of cell size" and "regulation of G1/S transition" contribute most to predictive variability, whereas proteins involved in "positive regulation of transcription involved in exit from mitosis," "mitotic spindle assembly checkpoint" and "negative regulation of cyclin-dependent protein kinase by cyclin degradation" contribute the least. These results suggest that the statistics of these predictions may be generating patterns specific to individual network modules (START, S/G2/M, and EXIT). To test this hypothesis, we develop random forest models for predicting the network modules of cell cycle regulators using relative abundance statistics as model inputs. Predictive performance is assessed by the areas under receiver operating characteristics curves (AUC). Our models generate an AUC range of 0.83-0.87 as opposed to randomized models with AUC values around 0.50. By using differential evolution and random forest modeling, we show that the model prediction statistics generate distinct network module-specific patterns within the cell cycle network.
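
    A minimal sketch of generating an ensemble of parameter vectors with differential evolution is given below using SciPy on a toy objective; the yeast cell-cycle model and its phenotype constraints are far beyond this illustration, and the objective here is deliberately underdetermined so that different runs land on different feasible vectors.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Toy stand-in for the model-vs-experiment objective: penalize deviation of two
        # simulated "phenotype scores" from their observed values (three free parameters,
        # two constraints, hence a whole set of equally good solutions).
        observed = np.array([1.0, 0.5])

        def objective(theta):
            simulated = np.array([theta[0] + theta[1], theta[1] * theta[2]])
            return np.sum((simulated - observed) ** 2)

        bounds = [(0.0, 3.0)] * 3

        # Re-running the stochastic optimizer with different seeds yields an ensemble of
        # near-feasible parameter vectors, loosely mimicking the paper's ensemble construction.
        ensemble = [differential_evolution(objective, bounds, seed=s, tol=1e-8).x
                    for s in range(5)]
        print(np.array(ensemble).round(3))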

  19. Effectiveness of yoga program in the management of diabetes using community health workers in the urban slums of Bangalore city: A non-randomized controlled trial

    PubMed Central

    Dasappa, Hemavathi; Fathima, Farah Naaz; Prabhakar, Rugmani

    2016-01-01

    Trial Design: Nonrandomized controlled trial. Methods: This was an interventional study that was conducted in 4 slums of Bengaluru. Of the 256 participants with diabetes, only 109 agreed to participate in the program. Of these 109 people, 52 agreed to participate in the intervention (agreed to learn and practice Yoga) while the remaining 57 were assigned to the nonintervention group. Randomization and blinding could not be done. Objective and Outcome: The study was conducted with the objective of assessing the effectiveness of Yoga, Pranayama, and Sudarshan Kriya in the community-based management of diabetes mellitus. The primary outcome variable was HbA1c and the secondary outcome variables were systolic blood pressure (SBP), diastolic blood pressure (DBP), adherence to medication, and changes in lifestyle. Results: The study was conducted for 40 days. Community health workers made a total of 6 visits during the study. All 109 participants were available for weekly follow-up. There were no dropouts among the study population. Statistically significant change was seen in the consumption of vegetables (χ2 = 15.326, P < 0.005), fruits (χ2 = 16.207, P < 0.005), salty food (χ2 = 14.823, P < 0.005), bakery food (χ2 = 10.429, P < 0.005) and fried food (χ2 = 15.470, P < 0.005), adherence to metformin (χ2 = 41.780, P < 0.005) and other medication (χ2 = 21.871, P < 0.005), the proportion of patients with DBP under control (χ2 = 9.396, P < 0.005) and the proportion of people with random blood glucose under control (χ2 = 29.693, P < 0.005) between the two groups following the intervention. Statistically significant change was also seen in the proportion of people with SBP/DBP ≤140/90 (χ2 = 10.635, P < 0.005) between the two groups. Conclusion: The Yoga program was successful in improving dietary practices and medication adherence and in increasing the proportion of diabetic and hypertensive patients under control. PMID:28217594

  20. A Unifying Probability Example.

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.

    2002-01-01

    Presents an example from probability and statistics that ties together several topics including the mean and variance of a discrete random variable, the binomial distribution and its particular mean and variance, the sum of independent random variables, the mean and variance of the sum, and the central limit theorem. Uses Excel to illustrate these…
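
    The identities the example ties together are, in compact form:

        % Binomial mean and variance, for X \sim \mathrm{Bin}(n, p):
        \[
        \mathbb{E}[X] = np, \qquad \mathrm{Var}(X) = np(1-p).
        \]
        % Sums of independent random variables X_1,\dots,X_n:
        \[
        \mathbb{E}\Bigl[\sum_i X_i\Bigr] = \sum_i \mathbb{E}[X_i],
        \qquad
        \mathrm{Var}\Bigl(\sum_i X_i\Bigr) = \sum_i \mathrm{Var}(X_i),
        \]
        % and, by the central limit theorem, the standardized sum tends to \mathcal{N}(0,1).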

  1. Variable selection with random forest: Balancing stability, performance, and interpretation in ecological and environmental modeling

    EPA Science Inventory

    Random forest (RF) is popular in ecological and environmental modeling, in part, because of its insensitivity to correlated predictors and resistance to overfitting. Although variable selection has been proposed to improve both performance and interpretation of RF models, it is u...

  2. The Statistical Drake Equation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2010-12-01

    We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor will be known to the scientists. This capability to make room for more future factors in the statistical Drake equation, we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case, where each of the seven random variables is uniformly distributed around its own mean value and has a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billions with a standard deviation of (say) 1 billion. Then, the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is actually of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
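
    A quick numerical check of the lognormal claim can be run as follows; the seven factor means and half-widths are placeholders, not the values used by the author.

        import numpy as np

        rng = np.random.default_rng(0)

        # Seven independent uniform factors with placeholder means and 20% half-widths.
        means = np.array([3.5e11, 0.5, 2.0, 0.3, 0.2, 0.1, 1.0e4])
        halfwidths = 0.2 * means
        samples = rng.uniform(means - halfwidths, means + halfwidths, size=(100_000, 7))

        N = samples.prod(axis=1)          # statistical Drake equation: product of the factors
        logN = np.log(N)

        # If N is (approximately) lognormal, ln N should look Gaussian:
        print("mean of ln N:", logN.mean())
        print("std  of ln N:", logN.std())
        print("skewness of ln N (near 0 for a Gaussian):",
              ((logN - logN.mean()) ** 3).mean() / logN.std() ** 3)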

  3. Random effects coefficient of determination for mixed and meta-analysis models

    PubMed Central

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2011-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, Rr2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If Rr2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. The value of Rr2 apart from 0 indicates the evidence of the variance reduction in support of the mixed model. If random effects coefficient of determination is close to 1 the variance of random effects is very large and random effects turn into free fixed effects—the model can be estimated using the dummy variable approach. We derive explicit formulas for Rr2 in three special cases: the random intercept model, the growth curve model, and meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine. PMID:23750070
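
    For orientation only, in the random-intercept special case the proportion of conditional variance attributable to random effects reduces to a simple variance ratio, as sketched below; the paper's general definition of Rr2 is not reproduced here.

        % Random-intercept model as a simple special case:
        \[
        y_{ij} = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_i + \varepsilon_{ij},
        \qquad u_i \sim \mathcal{N}(0, \sigma_u^2),\;
        \varepsilon_{ij} \sim \mathcal{N}(0, \sigma_\varepsilon^2),
        \]
        % so the conditional variance of y splits into a random-effect part and a residual part,
        \[
        \mathrm{Var}(y_{ij}\mid \mathbf{x}_{ij}) = \sigma_u^2 + \sigma_\varepsilon^2,
        \qquad
        \frac{\sigma_u^2}{\sigma_u^2 + \sigma_\varepsilon^2}
        \]
        % being the share attributable to the random intercept (values near 0 suggest the
        % random effect can be dropped; values near 1 indicate very strong random effects).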

  4. Correlated resistive/capacitive state variability in solid TiO2 based memory devices

    NASA Astrophysics Data System (ADS)

    Li, Qingjiang; Salaoru, Iulia; Khiat, Ali; Xu, Hui; Prodromakis, Themistoklis

    2017-05-01

    In this work, we experimentally demonstrated the correlated resistive/capacitive switching and state variability in practical TiO2-based memory devices. Based on the filamentary functional mechanism, we argue that the impedance state variability stems from randomly distributed defects inside the oxide bulk. Finally, our assumption was verified via a current percolation circuit model, by taking into account the random distribution of defects and the coexistence of memristor and memcapacitor.

  5. Algebraic Functions of H-Functions with Specific Dependency Structure.

    DTIC Science & Technology

    1984-05-01

    a study of its characteristic function. Such analysis is reproduced in books by Springer (17), Anderson (23), Feller (34,35), Mood and Graybill (52...following linearity property for expectations of jointly distributed random variables is derived. Theorem 1.1: If X and Y are real random variables...appear in American Journal of Mathematical and Management Science. 13. Mathai, A.M., and R.K. Saxena, "On linear combinations of stochastic variables

  6. The Statistical Fermi Paradox

    NASA Astrophysics Data System (ADS)

    Maccone, C.

    In this paper is provided the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original and by now too simplistic Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for “Statistical Equation for Habitables”. The proof in this paper is based on the Central Limit Theorem (CLT) of Statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log-normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH it is shown that the (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D. The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in 2008. 4. A practical example is then given of how the SEH works numerically. Each of the ten random variables is uniformly distributed around its own mean value as given by Dole (1964) and a standard deviation of 10% is assumed. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ±200 million, and the average distance between any two nearby habitable planets should be about 88 light years ±40 light years. 5. The SEH results are matched against the results of the Statistical Drake Equation from reference 4. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). The average distance between any two nearby habitable planets is much smaller than the average distance between any two neighbouring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an ET average distance about 20 times greater than the average distance between any pair of adjacent habitable planets. 6. Finally, a statistical model of the Fermi Paradox is derived by applying the above results to the coral expansion model of Galactic colonization. The symbolic manipulator "Macsyma" is used to solve these difficult equations. A new random variable Tcol, representing the time needed to colonize a new planet, is introduced, which follows the lognormal distribution. Then the new quotient random variable Tcol/D is studied and its probability density function is derived by Macsyma. Finally, a linear transformation of random variables yields the overall time TGalaxy needed to colonize the whole Galaxy. We believe that our mathematical work in deriving this STATISTICAL Fermi Paradox is highly innovative and fruitful for the future.

  7. Is there any association between parental education and child mortality? A study in a rural area of Bangladesh.

    PubMed

    Akter, T; Hoque, D M E; Chowdhury, E K; Rahman, M; Russell, M; Arifeen, S E

    2015-12-01

    To assess the association between parental education and under-five mortality, using the Integrated Management of Childhood Illness (IMCI) data from rural Bangladesh. It also investigated whether the association of parental education with under-five mortality had changed over time. This study was nested in the IMCI cluster randomized controlled trial. Participants considered for the analysis were all children aged under five years from the baseline (1995-2000) and the final (2002-2007) IMCI household survey. The analysis sample included 39,875 and 38,544 live births from the baseline and the final survey respectively. The outcome variable was under-five mortality and the exposure variables were mother's and father's education. Data were analysed with logistic regression. In 2002-2007, the odds of under-five mortality were 38% lower for children whose mothers had secondary education, compared to children of uneducated mothers. For a similar educational difference for fathers over the same period, the odds of under-five mortality were 16% lower. The association of mother's education with under-five mortality was significantly stronger in 2002-2007 compared to 1995-2000. Mother's education appears to have a strong and significant association with under-five mortality, compared to father's education. The association of mother's education with under-five mortality appears to have increased over time. Our findings indicate that investing in girls' education is a good strategy to combat infant mortality in developing countries. Copyright © 2015 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  8. On the distribution of a product of N Gaussian random variables

    NASA Astrophysics Data System (ADS)

    Stojanac, Željka; Suess, Daniel; Kliesch, Martin

    2017-08-01

    The product of Gaussian random variables appears naturally in many applications in probability theory and statistics. It has been known that the distribution of a product of N such variables can be expressed in terms of a Meijer G-function. Here, we compute a similar representation for the corresponding cumulative distribution function (CDF) and provide a power-log series expansion of the CDF based on the theory of the more general Fox H-functions. Numerical computations show that for small values of the argument the CDF of products of Gaussians is well approximated by the lowest orders of this expansion. Analogous results are also shown for the absolute value as well as the square of such products of N Gaussian random variables. For the latter two settings, we also compute the moment generating functions in terms of Meijer G-functions.
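
    The simplest closed-form case (N = 2 independent standard normals) may help fix ideas; the general Meijer G / Fox H machinery of the paper is not reproduced.

        % Density of Z = X_1 X_2 for independent X_1, X_2 \sim \mathcal{N}(0,1):
        \[
        f_Z(z) \;=\; \frac{1}{\pi}\, K_0\!\left(\lvert z \rvert\right),
        \qquad z \neq 0,
        \]
        % where K_0 is the modified Bessel function of the second kind (the density has a
        % logarithmic singularity at z = 0 but remains integrable).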

  9. Random parameter models for accident prediction on two-lane undivided highways in India.

    PubMed

    Dinu, R R; Veeraragavan, A

    2011-02-01

    Generalized linear modeling (GLM), with the assumption of a Poisson or negative binomial error structure, has been widely employed in road accident modeling. A number of explanatory variables related to traffic, road geometry, and environment that contribute to accident occurrence have been identified and accident prediction models have been proposed. The accident prediction models reported in the literature largely employ the fixed parameter modeling approach, where the magnitude of influence of an explanatory variable is considered to be fixed for any observation in the population. Similar models have been proposed for Indian highways too, which include additional variables representing traffic composition. The mixed traffic on Indian highways involves considerable internal variability, ranging from differences in vehicle types to variability in driver behavior. This could result in variability in the effect of explanatory variables on accidents across locations. Random parameter models, which can capture some of this variability, are expected to be more appropriate for the Indian situation. The present study is an attempt to employ random parameter modeling for accident prediction on two-lane undivided rural highways in India. Three years of accident history, from nearly 200 km of highway segments, is used to calibrate and validate the models. The results of the analysis suggest that the model coefficients for traffic volume, proportion of cars, motorized two-wheelers and trucks in traffic, and driveway density and horizontal and vertical curvatures are randomly distributed across locations. The paper is concluded with a discussion on modeling results and the limitations of the present study. Copyright © 2010 Elsevier Ltd. All rights reserved.
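
    As a point of reference, the fixed-parameter negative binomial baseline that random-parameter models generalize can be sketched as follows; the data, covariates and dispersion value are synthetic assumptions, not the study's calibration.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 200

        # Synthetic segment-level covariates (stand-ins for traffic volume, curvature, etc.)
        log_aadt = rng.normal(8.0, 0.5, n)
        curvature = rng.uniform(0.0, 2.0, n)
        X = sm.add_constant(np.column_stack([log_aadt, curvature]))

        # Synthetic accident counts from a Poisson-gamma (negative binomial) process
        mu = np.exp(-6.0 + 0.8 * log_aadt + 0.3 * curvature)
        y = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))

        # Fixed-parameter negative binomial GLM (dispersion alpha fixed here for simplicity)
        model = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5))
        result = model.fit()
        print(result.params)          # intercept and fixed coefficients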

  10. Data-Enabled Quantification of Aluminum Microstructural Damage Under Tensile Loading

    NASA Astrophysics Data System (ADS)

    Wayne, Steven F.; Qi, G.; Zhang, L.

    2016-08-01

    The study of material failure with digital analytics is in its infancy and offers a new perspective to advance our understanding of damage initiation and evolution in metals. In this article, we study the failure of aluminum using data-enabled methods, statistics and data mining. Through the use of tension tests, we establish a multivariate acoustic-data matrix of random damage events, which typically are not visible and are very difficult to measure due to their variability, diversity and interactivity during damage processes. Aluminum alloy 6061-T651 and single-crystal aluminum with a (111) orientation were evaluated by comparing the collection of acoustic signals from damage events caused primarily by slip in the single crystal and multimode fracture of the alloy. We found the resulting acoustic damage-event data to be large semi-structured volumes of Big Data with the potential to be mined for information that describes the material's damage state under strain. Our data-enabled analyses have allowed us to determine statistical distributions of multiscale random damage that provide a means to quantify the material damage state.

  11. Random dopant fluctuations and statistical variability in n-channel junctionless FETs

    NASA Astrophysics Data System (ADS)

    Akhavan, N. D.; Umana-Membreno, G. A.; Gu, R.; Antoszewski, J.; Faraone, L.

    2018-01-01

    The influence of random dopant fluctuations on the statistical variability of the electrical characteristics of an n-channel silicon junctionless nanowire transistor (JNT) has been studied using three-dimensional quantum simulations based on the non-equilibrium Green’s function (NEGF) formalism. Average randomly distributed body doping densities of 2 × 10¹⁹, 6 × 10¹⁹ and 1 × 10²⁰ cm⁻³ have been considered, employing an atomistic model for JNTs with gate lengths of 5, 10 and 15 nm. We demonstrate that by properly adjusting the doping density in the JNT, near ideal statistical variability and electrical performance can be achieved, which can pave the way for the continuation of scaling in silicon CMOS technology.

  12. The contribution of simple random sampling to observed variations in faecal egg counts.

    PubMed

    Torgerson, Paul R; Paul, Michaela; Lewis, Fraser I

    2012-09-10

    It has been over 100 years since the classical paper published by Gosset in 1907, under the pseudonym "Student", demonstrated that yeast cells suspended in a fluid and measured by a haemocytometer conformed to a Poisson process. Similarly, parasite eggs in a faecal suspension also conform to a Poisson process. Despite this, there are common misconceptions about how to analyse or interpret observations from the McMaster or similar quantitative parasitic diagnostic techniques, widely used for evaluating parasite eggs in faeces. The McMaster technique can easily be shown, from a theoretical perspective, to give variable results that inevitably arise from the random distribution of parasite eggs in a well mixed faecal sample. The Poisson processes that lead to this variability are described, and illustrative examples are given of the potentially large confidence intervals that can arise from faecal egg counts calculated from the observations on a McMaster slide. Attempts to modify the McMaster technique, or indeed other quantitative techniques, to ensure uniform egg counts are doomed to failure and belie ignorance of Poisson processes. A simple method to immediately identify excess variation/poor sampling from replicate counts is provided. Copyright © 2012 Elsevier B.V. All rights reserved.
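
    As an illustration of the Poisson variability the authors describe, the sketch below (my own, with a hypothetical McMaster multiplication factor of 50) computes an exact Poisson confidence interval for a slide count and scales it to eggs per gram.

    ```python
    from scipy.stats import chi2

    def poisson_exact_ci(count, conf=0.95):
        """Exact (Garwood) confidence interval for a Poisson count."""
        alpha = 1.0 - conf
        lower = 0.0 if count == 0 else 0.5 * chi2.ppf(alpha / 2, 2 * count)
        upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (count + 1))
        return lower, upper

    # Each counted egg represents `factor` eggs per gram; the factor depends on the
    # dilution used and 50 is only an illustrative value here.
    count, factor = 12, 50
    lo, hi = poisson_exact_ci(count)
    print(f"counted {count} eggs -> {count * factor} EPG, 95% CI {lo * factor:.0f}-{hi * factor:.0f} EPG")
    ```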

  13. Nonverbal behavior correlated with the shaped verbal behavior of children

    PubMed Central

    Catania, A. Charles; Lowe, C. Fergus; Horne, Pauline

    1990-01-01

    Children under 6 years old pressed on response windows behind which stimuli appeared (star or tree). Presses occasionally lit lamps arranged in a column; a present was delivered when all lamps were lit. A random-ratio schedule in the presence of star alternated with a random-interval schedule in the presence of tree. These contingencies usually did not produce respective high and low response rates in the presence of star and tree, but the shaping of verbal behavior (e.g., “press a lot without stopping” or “press and wait”) was sometimes accompanied by corresponding changes in response rate. Verbal shaping was accomplished between schedule components during verbal interactions between the child and a hand-puppet, Garfield the Cat, and used social consequences such as enthusiastic reactions to what the child had said as well as concrete consequences such as delivery of extra presents. Variables that may constrain the shaping of verbal behavior in children seem to include the vocabulary available to the child and the functional properties of that vocabulary; the correlation between rates of pressing and what the child says about them may depend upon such variables. PMID:22477603

  14. Summer and winter habitat suitability of Marco Polo argali in southeastern Tajikistan: A modeling approach.

    PubMed

    Salas, Eric Ariel L; Valdez, Raul; Michel, Stefan

    2017-11-01

    We modeled summer and winter habitat suitability of Marco Polo argali in the Pamir Mountains in southeastern Tajikistan using these statistical algorithms: Generalized Linear Model, Random Forest, Boosted Regression Tree, Maxent, and Multivariate Adaptive Regression Splines. Using sheep occurrence data collected from 2009 to 2015 and a set of selected habitat predictors, we produced summer and winter habitat suitability maps and determined the important habitat suitability predictors for both seasons. Our results demonstrated that argali selected proximity to riparian areas and greenness as the two most relevant variables for summer, and the degree of slope (gentler slopes, between 0° and 20°) and the Landsat temperature band for winter. Terrain roughness was also among the most important variables in both summer and winter models. Aspect was only significant for winter habitat, with argali preferring south-facing mountain slopes. We evaluated various measures of model performance such as the Area Under the Curve (AUC) and the True Skill Statistic (TSS). Comparing the five algorithms, the AUC scored highest for Boosted Regression Tree in both summer (AUC = 0.94) and winter model runs (AUC = 0.94). In contrast, Random Forest underperformed in both model runs.

  15. Standard errors and confidence intervals for variable importance in random forest regression, classification, and survival.

    PubMed

    Ishwaran, Hemant; Lu, Min

    2018-06-04

    Random forests are a popular nonparametric tree ensemble procedure with broad applications to data analysis. While its widespread popularity stems from its prediction performance, an equally important feature is that it provides a fully nonparametric measure of variable importance (VIMP). A current limitation of VIMP, however, is that no systematic method exists for estimating its variance. As a solution, we propose a subsampling approach that can be used to estimate the variance of VIMP and for constructing confidence intervals. The method is general enough that it can be applied to many useful settings, including regression, classification, and survival problems. Using extensive simulations, we demonstrate the effectiveness of the subsampling estimator and in particular find that the delete-d jackknife variance estimator, a close cousin, is especially effective under low subsampling rates due to its bias correction properties. These 2 estimators are highly competitive when compared with the .164 bootstrap estimator, a modified bootstrap procedure designed to deal with ties in out-of-sample data. Most importantly, subsampling is computationally fast, thus making it especially attractive for big data settings. Copyright © 2018 John Wiley & Sons, Ltd.
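
    A rough illustration of the subsampling idea (not the authors' estimator, which includes a proper variance rescaling and a delete-d jackknife variant): repeatedly refit a forest on subsamples drawn without replacement, recompute a permutation-based VIMP, and summarize the spread.

    ```python
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance

    # Toy regression data standing in for a real problem.
    X, y = make_regression(n_samples=600, n_features=6, n_informative=3, noise=1.0, random_state=0)

    rng = np.random.default_rng(0)
    n, m, n_subsamples = X.shape[0], 200, 25          # subsample size m < n
    vimps = []
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=m, replace=False)     # subsample without replacement
        rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[idx], y[idx])
        res = permutation_importance(rf, X[idx], y[idx], n_repeats=5, random_state=0)
        vimps.append(res.importances_mean)

    vimps = np.array(vimps)                            # shape (n_subsamples, n_features)
    lo, hi = np.percentile(vimps, [2.5, 97.5], axis=0) # naive percentile intervals over subsamples
    for j in range(vimps.shape[1]):
        print(f"feature {j}: VIMP interval [{lo[j]:.3f}, {hi[j]:.3f}]")
    ```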

  16. Generalized Sample Size Determination Formulas for Investigating Contextual Effects by a Three-Level Random Intercept Model.

    PubMed

    Usami, Satoshi

    2017-03-01

    Behavioral and psychological researchers have shown strong interest in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed to investigate contextual effects according to the desired level of statistical power as well as the width of the confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level, to simultaneously cover the various kinds of contextual effects in which researchers may be interested. The relative influences of the indices included in the formulas on the standard errors of contextual effect estimates are investigated with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate the finite sample behavior of the calculated statistical power, showing that estimated sample sizes based on the derived formulas can be both positively and negatively biased due to complex effects of unreliability of contextual variables, multicollinearity, and violation of the assumption of known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of the indices and to evaluate their potential bias, as illustrated in the example.

  17. Evolving random fractal Cantor superlattices for the infrared using a genetic algorithm

    PubMed Central

    Bossard, Jeremy A.; Lin, Lan; Werner, Douglas H.

    2016-01-01

    Ordered and chaotic superlattices have been identified in Nature that give rise to a variety of colours reflected by the skin of various organisms. In particular, organisms such as silvery fish possess superlattices that reflect a broad range of light from the visible to the UV. Such superlattices have previously been identified as ‘chaotic’, but we propose that apparent ‘chaotic’ natural structures, which have been previously modelled as completely random structures, should have an underlying fractal geometry. Fractal geometry, often described as the geometry of Nature, can be used to mimic structures found in Nature, but deterministic fractals produce structures that are too ‘perfect’ to appear natural. Introducing variability into fractals produces structures that appear more natural. We suggest that the ‘chaotic’ (purely random) superlattices identified in Nature are more accurately modelled by multi-generator fractals. Furthermore, we introduce fractal random Cantor bars as a candidate for generating both ordered and ‘chaotic’ superlattices, such as the ones found in silvery fish. A genetic algorithm is used to evolve optimal fractal random Cantor bars with multiple generators targeting several desired optical functions in the mid-infrared and the near-infrared. We present optimized superlattices demonstrating broadband reflection as well as single and multiple pass bands in the near-infrared regime. PMID:26763335
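
    A minimal sketch of the multi-generator random Cantor bar construction described above (the generators and stage count are hypothetical, and the genetic-algorithm optimization and optical modelling are omitted): each material segment is recursively replaced by a randomly chosen generator.

    ```python
    import random

    # Two hypothetical Cantor-bar generators: 1 = material layer, 0 = gap.
    GENERATORS = [(1, 0, 1), (1, 1, 0)]

    def random_cantor_bar(stages, seed=0):
        """Grow a random Cantor superlattice by recursively replacing each material
        segment with a randomly chosen generator; gaps stay gaps."""
        rng = random.Random(seed)
        bar = [1]
        for _ in range(stages):
            new_bar = []
            for segment in bar:
                if segment == 1:
                    new_bar.extend(rng.choice(GENERATORS))
                else:
                    new_bar.extend([0] * len(GENERATORS[0]))
            bar = new_bar
        return bar

    print(random_cantor_bar(stages=3))  # 27 binary entries describing layers and gaps
    ```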

  18. Measures of Residual Risk with Connections to Regression, Risk Tracking, Surrogate Models, and Ambiguity

    DTIC Science & Technology

    2015-04-15

    The random variable of interest is viewed in concert with a related random vector that helps to manage, predict, and mitigate the risk in the original variable. Residual risk can be exemplified as a quantification of the improved situation faced...

  19. Sums and Products of Jointly Distributed Random Variables: A Simplified Approach

    ERIC Educational Resources Information Center

    Stein, Sheldon H.

    2005-01-01

    Three basic theorems concerning expected values and variances of sums and products of random variables play an important role in mathematical statistics and its applications in education, business, the social sciences, and the natural sciences. A solid understanding of these theorems requires that students be familiar with the proofs of these…

  20. A Strategy to Use Soft Data Effectively in Randomized Controlled Clinical Trials.

    ERIC Educational Resources Information Center

    Kraemer, Helena Chmura; Thiemann, Sue

    1989-01-01

    Sees soft data, measures having substantial intrasubject variability due to errors of measurement or response inconsistency, as important measures of response in randomized clinical trials. Shows that using intensive design and slope of response on time as outcome measure maximizes sample retention and decreases within-group variability, thus…

  1. Statistical Analysis for Multisite Trials Using Instrumental Variables with Random Coefficients

    ERIC Educational Resources Information Center

    Raudenbush, Stephen W.; Reardon, Sean F.; Nomi, Takako

    2012-01-01

    Multisite trials can clarify the average impact of a new program and the heterogeneity of impacts across sites. Unfortunately, in many applications, compliance with treatment assignment is imperfect. For these applications, we propose an instrumental variable (IV) model with person-specific and site-specific random coefficients. Site-specific IV…

  2. Bayesian approach to non-Gaussian field statistics for diffusive broadband terahertz pulses.

    PubMed

    Pearce, Jeremy; Jian, Zhongping; Mittleman, Daniel M

    2005-11-01

    We develop a closed-form expression for the probability distribution function for the field components of a diffusive broadband wave propagating through a random medium. We consider each spectral component to provide an individual observation of a random variable, the configurationally averaged spectral intensity. Since the intensity determines the variance of the field distribution at each frequency, this random variable serves as the Bayesian prior that determines the form of the non-Gaussian field statistics. This model agrees well with experimental results.
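
    The mechanism described above is a scale mixture of Gaussians: conditionally Gaussian field components whose variance is itself random. The sketch below (my own illustration; the exponential prior on the intensity is a hypothetical choice, not necessarily the authors') shows numerically that the resulting marginal distribution is heavier-tailed than a Gaussian.

    ```python
    import numpy as np
    from scipy.stats import kurtosis

    rng = np.random.default_rng(1)
    n = 500_000

    # The configurationally averaged spectral intensity plays the role of a random variance.
    intensity = rng.exponential(scale=1.0, size=n)
    field = rng.standard_normal(n) * np.sqrt(intensity)   # Gaussian given the intensity

    print("excess kurtosis, plain Gaussian:       ", kurtosis(rng.standard_normal(n)))  # ~ 0
    print("excess kurtosis, intensity-mixed field:", kurtosis(field))                   # clearly > 0
    ```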

  3. Analysis of backward error recovery for concurrent processes with recovery blocks

    NASA Technical Reports Server (NTRS)

    Shin, K. G.; Lee, Y. H.

    1982-01-01

    Three different methods of implementing recovery blocks (RBs) are considered: the asynchronous, synchronous, and pseudo recovery point implementations. Pseudo recovery points are proposed so that unbounded rollback may be avoided while maintaining process autonomy. Probabilistic models for analyzing these three methods were developed under standard assumptions in computer performance analysis, i.e., exponential distributions for the related random variables. The interval between two successive recovery lines for asynchronous RBs, the mean loss in computation power for the synchronized method, and the additional overhead and rollback distance in case PRPs are used were estimated.

  4. Random Effects: Variance Is the Spice of Life.

    PubMed

    Jupiter, Daniel C

    Covariates in regression analyses allow us to understand how independent variables of interest impact our dependent outcome variable. Often, we consider fixed effects covariates (e.g., gender or diabetes status) for which we examine subjects at each value of the covariate. We examine both men and women and, within each gender, examine both diabetic and nondiabetic patients. Occasionally, however, we consider random effects covariates for which we do not examine subjects at every value. For example, we examine patients from only a sample of hospitals and, within each hospital, examine both diabetic and nondiabetic patients. The random sampling of hospitals is in contrast to the complete coverage of all genders. In this column I explore the differences in meaning and analysis when thinking about fixed and random effects variables. Copyright © 2016 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  5. Optimal control in microgrid using multi-agent reinforcement learning.

    PubMed

    Li, Fu-Dong; Wu, Min; He, Yong; Chen, Xin

    2012-11-01

    This paper presents an improved reinforcement learning method to minimize electricity costs on the premise of satisfying the power balance and generation limits of units in a microgrid with grid-connected mode. Firstly, the microgrid control requirements are analyzed and the objective function of optimal control for the microgrid is proposed. Then, a state variable, "Average Electricity Price Trend", which is used to express the most probable transitions of the system, is developed so as to reduce the complexity and randomness of the microgrid, and a multi-agent architecture including agents, state variables, action variables and a reward function is formulated. Furthermore, dynamic hierarchical reinforcement learning, based on the change rate of a key state variable, is established to carry out optimal policy exploration. The analysis shows that the proposed method is beneficial for handling the problem of the "curse of dimensionality" and speeds up learning in an unknown large-scale world. Finally, the simulation results under JADE (Java Agent Development Framework) demonstrate the validity of the presented method in optimal control for a microgrid with grid-connected mode. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Can Process Understanding Help Elucidate The Structure Of The Critical Zone? Comparing Process-Based Soil Formation Models With Digital Soil Mapping.

    NASA Astrophysics Data System (ADS)

    Vanwalleghem, T.; Román, A.; Peña, A.; Laguna, A.; Giráldez, J. V.

    2017-12-01

    There is a need for better understanding of the processes influencing soil formation and the resulting distribution of soil properties in the critical zone. Soil properties can exhibit strong spatial variation, even at the small catchment scale. Soil carbon pools in semi-arid, mountainous areas are especially uncertain because bulk density and stoniness are very heterogeneous and rarely measured explicitly. In this study, we explore the spatial variability in key soil properties (soil carbon stocks, stoniness, bulk density and soil depth) as a function of the processes shaping the critical zone (weathering, erosion, soil water fluxes and vegetation patterns). We also compare the potential of traditional digital soil mapping versus a mechanistic soil formation model (MILESD) for predicting these key soil properties. Soil core samples were collected from 67 locations at 6 depths. Total soil organic carbon stocks were 4.38 kg m⁻². Solar radiation proved to be the key variable controlling soil carbon distribution. Stone content was mostly controlled by slope, indicating the importance of erosion. The spatial distribution of bulk density was found to be highly random. Finally, total carbon stocks were predicted using a random forest model whose main covariates were solar radiation and NDVI. The model predicts carbon stocks that are twice as high on north-facing versus south-facing slopes. However, validation showed that these covariates only explained 25% of the variation in the dataset. Apparently, present-day landscape and vegetation properties are not sufficient to fully explain the variability in soil carbon stocks in this complex terrain under natural vegetation. This is attributed to a high spatial variability in bulk density and stoniness, key variables controlling carbon stocks. Similar results were obtained with the mechanistic soil formation model MILESD, suggesting that more complex models might be needed to further explore this high spatial variability.

  7. Transition in the decay rates of stationary distributions of Lévy motion in an energy landscape.

    PubMed

    Kaleta, Kamil; Lőrinczi, József

    2016-02-01

    The time evolution of random variables with Lévy statistics has the ability to develop jumps, displaying very different behaviors from continuously fluctuating cases. Such patterns appear in an ever broadening range of examples including random lasers, non-Gaussian kinetics, or foraging strategies. The penalizing or reinforcing effect of the environment, however, has been little explored so far. We report a new phenomenon which manifests as a qualitative transition in the spatial decay behavior of the stationary measure of a jump process under an external potential, occurring on a combined change in the characteristics of the process and the lowest eigenvalue resulting from the effect of the potential. This also provides insight into the fundamental question of what is the mechanism of the spatial decay of a ground state.

  8. Sampling Strategies for Evaluating the Rate of Adventitious Transgene Presence in Non-Genetically Modified Crop Fields.

    PubMed

    Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine

    2017-09-01

    According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
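
    A minimal sketch of the ratio-reweighting step (my own toy construction: the auxiliary variable and the field are synthetic, not the gene-flow model used in the study): grains are sampled at random and the estimate is reweighted by the field-level mean of the auxiliary variable.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic field: auxiliary variable x mimics a modelled cross-pollination rate,
    # and y indicates whether a grain carries the transgene (correlated with x).
    n_grains = 10_000
    x = rng.gamma(shape=0.5, scale=0.02, size=n_grains)
    y = rng.binomial(1, np.clip(x, 0.0, 1.0))

    sample = rng.choice(n_grains, size=300, replace=False)     # simple random sample of grains
    y_s, x_s = y[sample], x[sample]

    simple_estimate = y_s.mean()                               # plain random-sampling estimate
    ratio_estimate = (y_s.mean() / x_s.mean()) * x.mean()      # ratio reweighting with auxiliary x

    print(f"true rate {y.mean():.4f} | simple {simple_estimate:.4f} | ratio-reweighted {ratio_estimate:.4f}")
    ```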

  9. Short-term and practice effects of metronome pacing in Parkinson's disease patients with gait freezing while in the 'on' state: randomized single blind evaluation.

    PubMed

    Cubo, Esther; Leurgans, Sue; Goetz, Christopher G

    2004-12-01

    In a randomized single blind parallel study, we tested the efficacy of an auditory metronome on walking speed and freezing in Parkinson's disease (PD) patients with freezing gait impairment during their 'on' function. No pharmacological treatment is effective in managing 'on' freezing in PD. Like visual cues that can help overcome freezing, rhythmic auditory pacing may provide cues that help normalize walking pace and overcome freezing. Non-demented PD patients with freezing during their 'on' state walked under two conditions, in randomized order: unassisted walking and walking with the use of an audiocassette with a metronome recording. The walking trials were randomized and gait variables were rated from videotapes by a blinded evaluator. Outcome measures were total walking time (total trial time-total freezing time), which was considered the time over a course of specified length, freezing time, average freeze duration and number of freezes. All outcomes were averaged across trials for each person and then compared across conditions using Signed Rank tests. Twelve non-demented PD patients with a mean age of 65.8 +/- 11.2 years, and mean PD duration of 12.4 +/- 7.3 years were included. The use of the metronome slowed ambulation and increased the total walking time (P < 0.0005) only during the first visit, without affecting any freezing variable. In the nine patients who took the metronome recording home and used it daily for 1 week while walking, freezing remained unimproved. Though advocated in prior publications as a walking aid for PD patients, auditory metronome pacing slows walking and is not a beneficial intervention for freezing during their 'on' periods.

  10. Random variable transformation for generalized stochastic radiative transfer in finite participating slab media

    NASA Astrophysics Data System (ADS)

    El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.

    2015-10-01

    The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specular- and diffusely-reflecting boundaries with linear anisotropic scattering. The random variable transformation (RVT) technique is used to obtain the complete average of the solution functions, which are represented by the probability-density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation is applied to the input stochastic process (the extinction function of the medium). This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). The transport equation is then solved deterministically to obtain a closed form for the solution as a function of x and L. This solution is used to obtain the PDF of the solution functions by applying the RVT technique between the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to get complete analytical averages for some interesting physical quantities, namely the reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the average partial heat fluxes for the generalized problem with an internal source of radiation are obtained and represented graphically.
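
    For reference, the generic identity behind the random variable transformation technique for a monotone change of variables is shown below in LaTeX (the paper applies a specific integral transformation of the extinction function; this is only the textbook form of the RVT rule).

    ```latex
    % If Y = g(X) with g monotone and X has density f_X, then
    f_Y(y) \;=\; f_X\!\bigl(g^{-1}(y)\bigr)\,
    \left|\frac{\mathrm{d}\,g^{-1}(y)}{\mathrm{d}y}\right| .
    ```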

  11. Image discrimination models predict detection in fixed but not random noise

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J. Jr; Beard, B. L.; Watson, A. B. (Principal Investigator)

    1997-01-01

    By means of a two-interval forced-choice procedure, contrast detection thresholds for an aircraft positioned on a simulated airport runway scene were measured with fixed and random white-noise masks. The term fixed noise refers to a constant, or unchanging, noise pattern for each stimulus presentation. The random noise was either the same or different in the two intervals. Contrary to simple image discrimination model predictions, the same random noise condition produced greater masking than the fixed noise. This suggests that observers seem unable to hold a new noisy image for comparison. Also, performance appeared limited by internal process variability rather than by external noise variability, since similar masking was obtained for both random noise types.

  12. Comparison of Random Forest and Parametric Imputation Models for Imputing Missing Data Using MICE: A CALIBER Study

    PubMed Central

    Shah, Anoop D.; Bartlett, Jonathan W.; Carpenter, James; Nicholas, Owen; Hemingway, Harry

    2014-01-01

    Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The “true” imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001–2010) with complete data on all covariates. Variables were artificially made “missing at random,” and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data. PMID:24589914

  13. Comparison of random forest and parametric imputation models for imputing missing data using MICE: a CALIBER study.

    PubMed

    Shah, Anoop D; Bartlett, Jonathan W; Carpenter, James; Nicholas, Owen; Hemingway, Harry

    2014-03-15

    Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The "true" imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001-2010) with complete data on all covariates. Variables were artificially made "missing at random," and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data.
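
    The two records above describe random-forest-based MICE as implemented in R; a rough scikit-learn analogue (my own sketch, not the authors' code) chains a random-forest regressor inside an iterative imputer on toy data with a nonlinear dependence.

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
    from sklearn.impute import IterativeImputer
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 500
    x1 = rng.normal(size=n)
    x2 = x1 ** 2 + rng.normal(scale=0.3, size=n)     # partially observed variable, nonlinear in x1
    X = np.column_stack([x1, x2])

    mask = rng.random(n) < 0.2                        # make ~20% of x2 missing at random
    X_missing = X.copy()
    X_missing[mask, 1] = np.nan

    imputer = IterativeImputer(estimator=RandomForestRegressor(n_estimators=100, random_state=0),
                               max_iter=10, random_state=0)
    X_imputed = imputer.fit_transform(X_missing)

    rmse = np.sqrt(np.mean((X_imputed[mask, 1] - X[mask, 1]) ** 2))
    print(f"RMSE of random-forest-imputed x2: {rmse:.3f}")
    ```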

  14. Comparison of Nerve Stimulation-guided Axillary Brachial Plexus Block, Single Injection versus Four Injections: A Prospective Randomized Double-blind Study.

    PubMed

    Badiger, Santoshi V; Desai, Sameer N

    2017-01-01

    A variety of techniques have been described for the axillary block using a nerve stimulator, with either a single injection or two, three, or four separate injections. Identification of all four nerves is more difficult and time-consuming than other methods. The aim of the present study was to compare the success rate, onset, and duration of sensory and motor anesthesia of axillary block using a nerve stimulator, either with a single injection after identification of any one of the four nerves or with four separate injections following identification of each nerve. Prospective, randomized, double-blind study. Patients undergoing forearm and hand surgeries under axillary block. One hundred patients, aged 18-75 years, were randomly allocated into two groups of 50 each. Axillary block was performed under the guidance of a nerve stimulator with a mixture of 18 ml of 1.5% lignocaine and 18 ml of 0.5% bupivacaine. In the first group (n = 50), all 36 ml of local anesthetic was injected after the identification of a motor response to any one of the nerves; in Group 2, all four nerves were identified by their motor responses, and 9 ml of local anesthetic was injected at each nerve. The success rate of the block and the onset and duration of sensory and motor block were assessed. Categorical variables were compared using the Chi-square test, and continuous variables were compared using the independent t-test. The success rate of the block with the four-injection technique was higher than with the single-injection technique (84% vs. 56%, P = 0.02). The four-injection group had a faster onset of sensory and motor block and a prolonged duration of analgesia compared to the single-injection group (P < 0.001). There were no significant differences in the incidence of accidental arterial puncture or in hemodynamic parameters between the groups. Identification of all four nerves produced a higher success rate and better quality of block compared to the single-injection technique.

  15. "Congratulations, you have been randomized into the control group!(?)": issues to consider when recruiting schools for matched-pair randomized control trials of prevention programs.

    PubMed

    Ji, Peter; DuBois, David L; Flay, Brian R; Brechling, Vanessa

    2008-03-01

    Recruiting schools into a matched-pair randomized control trial (MP-RCT) to evaluate the efficacy of a school-level prevention program presents challenges for researchers. We considered which of 2 procedures would be most effective for recruiting schools into the study and assigning them to conditions. In 1 procedure (recruit and match/randomize), we would recruit schools and match them prior to randomization, and in the other (match/randomize and recruit), we would match schools and randomize them prior to recruitment. We considered how each procedure impacted the randomization process and our ability to recruit schools into the study. After implementing the selected procedure, the equivalence of both treatment and control group schools and the participating and nonparticipating schools on school demographic variables was evaluated. We decided on the recruit and match/randomize procedure because we thought it would provide the opportunity to build rapport with the schools and prepare them for the randomization process, thereby increasing the likelihood that they would accept their randomly assigned conditions. Neither the treatment and control group schools nor the participating and nonparticipating schools exhibited statistically significant differences from each other on any of the school demographic variables. Recruitment of schools prior to matching and randomization in an MP-RCT may facilitate the recruitment of schools and thus enhance both the statistical power and the representativeness of study findings. Future research would benefit from the consideration of a broader range of variables (e.g., readiness to implement a comprehensive prevention program) both in matching schools and in evaluating their representativeness to nonparticipating schools.

  16. Random effects coefficient of determination for mixed and meta-analysis models.

    PubMed

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2012-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of [Formula: see text] away from 0 indicates evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of random effects is very large and random effects turn into free fixed effects; the model can then be estimated using the dummy variable approach. We derive explicit formulas for [Formula: see text] in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine.

  17. δ-exceedance records and random adaptive walks

    NASA Astrophysics Data System (ADS)

    Park, Su-Chan; Krug, Joachim

    2016-08-01

    We study a modified record process where the kth record in a series of independent and identically distributed random variables is defined recursively through the condition $Y_k > Y_{k-1} - \delta_{k-1}$ with a deterministic sequence $\delta_k > 0$ called the handicap. For constant $\delta_k \equiv \delta$ and exponentially distributed random variables it has been shown in previous work that the process displays a phase transition as a function of $\delta$ between a normal phase where the mean record value increases indefinitely and a stationary phase where the mean record value remains bounded and a finite fraction of all entries are records (Park et al 2015 Phys. Rev. E 91 042707). Here we explore the behavior for general probability distributions and decreasing and increasing sequences $\delta_k$, focusing in particular on the case when $\delta_k$ matches the typical spacing between subsequent records in the underlying simple record process without handicap. We find that a continuous phase transition occurs only in the exponential case, but a novel kind of first order transition emerges when $\delta_k$ is increasing. The problem is partly motivated by the dynamics of evolutionary adaptation in biological fitness landscapes, where $\delta_k$ corresponds to the change of the deterministic fitness component after k mutational steps. The results for the record process are used to compute the mean number of steps that a population performs in such a landscape before being trapped at a local fitness maximum.
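
    A minimal simulation of the handicapped record process for the constant-handicap, exponential case (parameter values are illustrative only): an entry is declared a record whenever it exceeds the previous record value minus $\delta$, and the fraction of records is reported.

    ```python
    import numpy as np

    def delta_record_fraction(n, delta, seed=0):
        """Fraction of delta-exceedance records in an i.i.d. exponential sequence."""
        rng = np.random.default_rng(seed)
        y = rng.exponential(size=n)
        records, current = 0, -np.inf
        for value in y:
            if value > current - delta:   # handicapped record condition Y_k > Y_{k-1} - delta
                current = value           # the record value may decrease by up to delta
                records += 1
        return records / n

    for delta in (0.0, 0.5, 1.0, 2.0):
        print(f"delta = {delta}: record fraction {delta_record_fraction(200_000, delta):.4f}")
    ```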

  18. Experimental studies of the effects of buffered particle dampers attached to a multi-degree-of-freedom system under dynamic loads

    NASA Astrophysics Data System (ADS)

    Lu, Zheng; Lu, Xilin; Lu, Wensheng; Masri, Sami F.

    2012-04-01

    This paper presents a systematic experimental investigation of the effects of buffered particle dampers attached to a multi-degree-of-freedom (mdof) system under different dynamic loads (free vibration, random excitation as well as real onsite earthquake excitations), and analytical/computational study of such a system. A series of shaking table tests of a three-storey steel frame with the buffered particle damper system are carried out to evaluate the performance and to verify the analysis method. It is shown that buffered particle dampers have good performance in reducing the response of structures under dynamic loads, especially under random excitation case. It can effectively control the fundamental mode of the mdof primary system; however, the control effect for higher modes is variable. It is also shown that, for a specific container geometry, a certain mass ratio leads to more efficient momentum transfer from the primary system to the particles with a better vibration attenuation effect, and that buffered particle dampers have better control effect than the conventional rigid ones. An analytical solution based on the discrete element method is also presented. Comparison between the experimental and computational results shows that reasonably accurate estimates of the response of a primary system can be obtained. Properly designed buffered particle dampers can effectively reduce the response of lightly damped mdof primary system with a small weight penalty, under different dynamic loads.

  19. Development of an ideal observer that incorporates nuisance parameters and processes list-mode data

    DOE PAGES

    MacGahan, Christopher Jonathan; Kupinski, Matthew Alan; Hilton, Nathan R.; ...

    2016-02-01

    Observer models were developed to process data in list-mode format in order to perform binary discrimination tasks for use in an arms-control-treaty context. Data used in this study were generated using GEANT4 Monte Carlo simulations for photons, using custom models of plutonium inspection objects and a radiation imaging system. Observer-model performance was evaluated and presented using the area under the receiver operating characteristic curve. Lastly, we studied the ideal observer under both signal-known-exactly conditions and in the presence of unknowns such as object orientation and absolute count-rate variability; when these additional sources of randomness were present, their incorporation into the observer yielded superior performance.

  20. Universal statistics of selected values

    NASA Astrophysics Data System (ADS)

    Smerlak, Matteo; Youssef, Ahmed

    2017-03-01

    Selection, the tendency of some traits to become more frequent than others under the influence of some (natural or artificial) agency, is a key component of Darwinian evolution and countless other natural and social phenomena. Yet a general theory of selection, analogous to the Fisher-Tippett-Gnedenko theory of extreme events, is lacking. Here we introduce a probabilistic definition of selection and show that selected values are attracted to a universal family of limiting distributions which generalize the log-normal distribution. The universality classes and scaling exponents are determined by the tail thickness of the random variable under selection. Our results provide a possible explanation for skewed distributions observed in diverse contexts where selection plays a key role, from molecular biology to agriculture and sport.

  1. Random vectors and spatial analysis by geostatistics for geotechnical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, D.S.

    1987-08-01

    Geostatistics is extended to the spatial analysis of vector variables by defining the estimation variance and vector variogram in terms of the magnitude of difference vectors. Many random variables in geotechnology are vectors rather than scalars, and their structural analysis requires sample variable interpolations to construct and characterize structural models. A better local estimator will result in higher-quality input models; geostatistics can provide such estimators: kriging estimators. The efficiency of geostatistics for vector variables is demonstrated in a case study of rock joint orientations in geological formations. The positive cross-validation encourages application of geostatistics to spatial analysis of random vectors in geoscience as well as various geotechnical fields, including optimum site characterization, rock mechanics for mining and civil structures, cavability analysis of block cavings, petroleum engineering, and hydrologic and hydraulic modeling.

  2. Working memory and intraindividual variability as neurocognitive indicators in ADHD: examining competing model predictions.

    PubMed

    Kofler, Michael J; Alderson, R Matt; Raiker, Joseph S; Bolden, Jennifer; Sarver, Dustin E; Rapport, Mark D

    2014-05-01

    The current study examined competing predictions of the default mode, cognitive neuroenergetic, and functional working memory models of attention-deficit/hyperactivity disorder (ADHD) regarding the relation between neurocognitive impairments in working memory and intraindividual variability. Twenty-two children with ADHD and 15 typically developing children were assessed on multiple tasks measuring intraindividual reaction time (RT) variability (ex-Gaussian: tau, sigma) and central executive (CE) working memory. Latent factor scores based on multiple, counterbalanced tasks were created for each construct of interest (CE, tau, sigma) to reflect reliable variance associated with each construct and remove task-specific, test-retest, and random error. Bias-corrected, bootstrapped mediation analyses revealed that CE working memory accounted for 88% to 100% of ADHD-related RT variability across models, and between-group differences in RT variability were no longer detectable after accounting for the mediating role of CE working memory. In contrast, RT variability accounted for 10% to 29% of between-group differences in CE working memory, and large magnitude CE working memory deficits remained after accounting for this partial mediation. Statistical comparison of effect size estimates across models suggests directionality of effects, such that the mediation effects of CE working memory on RT variability were significantly greater than the mediation effects of RT variability on CE working memory. The current findings question the role of RT variability as a primary neurocognitive indicator in ADHD and suggest that ADHD-related RT variability may be secondary to underlying deficits in CE working memory.

  3. Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin

    2006-01-01

    This study presents the random-effects rating scale model (RE-RSM) which takes into account randomness in the thresholds over persons by treating them as random-effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…

  4. Invited Review: A review of deterministic effects in cyclic variability of internal combustion engines

    DOE PAGES

    Finney, Charles E.; Kaul, Brian C.; Daw, C. Stuart; ...

    2015-02-18

    Here we review developments in the understanding of cycle-to-cycle variability in internal combustion engines, with a focus on spark-ignited and premixed combustion conditions. Much of the research on cyclic variability has focused on stochastic aspects, that is, features that can be modeled as inherently random with no short-term predictability. In some cases, models of this type appear to work very well at describing experimental observations, but the lack of predictability limits control options. Also, even when the statistical properties of the stochastic variations are known, it can be very difficult to discern their underlying physical causes and thus mitigate them. Some recent studies have demonstrated that under some conditions, cyclic combustion variations can have a relatively high degree of low-dimensional deterministic structure, which implies some degree of predictability and potential for real-time control. These deterministic effects are typically more pronounced near critical stability limits (e.g. near tipping points associated with ignition or flame propagation), such as during highly dilute fueling or near the onset of homogeneous charge compression ignition. We review recent progress in experimental and analytical characterization of cyclic variability where low-dimensional, deterministic effects have been observed. We describe some theories about the sources of these dynamical features and discuss prospects for interactive control and improved engine designs. Taken as a whole, the research summarized here implies that the deterministic component of cyclic variability will become a pivotal issue (and potential opportunity) as engine manufacturers strive to meet aggressive emissions and fuel economy regulations in the coming decades.

  5. Randomized trial of intermittent or continuous amnioinfusion for variable decelerations.

    PubMed

    Rinehart, B K; Terrone, D A; Barrow, J H; Isler, C M; Barrilleaux, P S; Roberts, W E

    2000-10-01

    To determine whether continuous or intermittent bolus amnioinfusion is more effective in relieving variable decelerations. Patients with repetitive variable decelerations were randomized to an intermittent bolus or continuous amnioinfusion. The intermittent bolus infusion group received boluses of 500 mL of normal saline, each over 30 minutes, with boluses repeated if variable decelerations recurred. The continuous infusion group received a bolus infusion of 500 mL of normal saline over 30 minutes and then 3 mL per minute until delivery occurred. The ability of the amnioinfusion to abolish variable decelerations was analyzed, as were maternal demographic and pregnancy outcome variables. Power analysis indicated that 64 patients would be required. Thirty-five patients were randomized to intermittent infusion and 30 to continuous infusion. There were no differences between groups in terms of maternal demographics, gestational age, delivery mode, neonatal outcome, median time to resolution of variable decelerations, or the number of times variable decelerations recurred. The median volume infused in the intermittent infusion group (500 mL) was significantly less than that in the continuous infusion group (905 mL, P =.003). Intermittent bolus amnioinfusion is as effective as continuous infusion in relieving variable decelerations in labor. Further investigation is necessary to determine whether either of these techniques is associated with increased occurrence of rare complications such as cord prolapse or uterine rupture.

  6. Data-driven Modeling of Metal-oxide Sensors with Dynamic Bayesian Networks

    NASA Astrophysics Data System (ADS)

    Gosangi, Rakesh; Gutierrez-Osuna, Ricardo

    2011-09-01

    We present a data-driven probabilistic framework to model the transient response of MOX sensors modulated with a sequence of voltage steps. Analytical models of MOX sensors are usually built based on the physico-chemical properties of the sensing materials. Although building these models provides an insight into the sensor behavior, they also require a thorough understanding of the underlying operating principles. Here we propose a data-driven approach to characterize the dynamical relationship between sensor inputs and outputs. Namely, we use dynamic Bayesian networks (DBNs), probabilistic models that represent temporal relations between a set of random variables. We identify a set of control variables that influence the sensor responses, create a graphical representation that captures the causal relations between these variables, and finally train the model with experimental data. We validated the approach on experimental data in terms of predictive accuracy and classification performance. Our results show that DBNs can accurately predict the dynamic response of MOX sensors, as well as capture the discriminatory information present in the sensor transients.

  7. Sex modifies the relationship between age and gait: a population-based study of older adults.

    PubMed

    Callisaya, Michele L; Blizzard, Leigh; Schmidt, Michael D; McGinley, Jennifer L; Srikanth, Velandai K

    2008-02-01

    Adequate mobility is essential to maintain an independent and active lifestyle. The aim of this cross-sectional study is to examine the associations of age with temporal and spatial gait variables in a population-based sample of older people, and whether these associations are modified by sex. Men and women aged 60-86 years were randomly selected from the Southern Tasmanian electoral roll (n = 223). Gait speed, step length, cadence, step width, and double-support phase were recorded with a GAITRite walkway. Regression analysis was used to model the relationship between age, sex, and gait variables. For men, after adjusting for height and weight, age was linearly associated with all gait variables (p <.05) except cadence (p =.11). For women, all variables demonstrated a curvilinear association, with age-related change in these variables commencing during the 7th decade. Significant interactions were found between age and sex for speed (p =.04), cadence (p =.01), and double-support phase (p =.03). Associations were observed between age and a broad range of temporal and spatial gait variables in this study. These associations differed by sex, suggesting that the aging process may affect gait in men and women differently. These results provide a basis for further research into sex differences and mechanisms underlying gait changes with advancing age.

  8. Assessment of Scientific Reasoning: the Effects of Task Context, Data, and Design on Student Reasoning in Control of Variables.

    PubMed

    Zhou, Shaona; Han, Jing; Koenig, Kathleen; Raplinger, Amy; Pi, Yuan; Li, Dan; Xiao, Hua; Fu, Zhao; Bao, Lei

    2016-03-01

    Scientific reasoning is an important component under the cognitive strand of the 21st century skills and is highly emphasized in the new science education standards. This study focuses on the assessment of student reasoning in control of variables (COV), which is a core sub-skill of scientific reasoning. The main research question is to investigate the extent to which the existence of experimental data in questions impacts student reasoning and performance. This study also explores the effects of task contexts on student reasoning as well as students' abilities to distinguish between testability and causal influences of variables in COV experiments. Data were collected with students from both USA and China. Students received randomly one of two test versions, one with experimental data and one without. The results show that students from both populations (1) perform better when experimental data are not provided, (2) perform better in physics contexts than in real-life contexts, and (3) students have a tendency to equate non-influential variables to non-testable variables. In addition, based on the analysis of both quantitative and qualitative data, a possible progression of developmental levels of student reasoning in control of variables is proposed, which can be used to inform future development of assessment and instruction.

  9. Assessment of Scientific Reasoning: the Effects of Task Context, Data, and Design on Student Reasoning in Control of Variables

    PubMed Central

    Zhou, Shaona; Han, Jing; Koenig, Kathleen; Raplinger, Amy; Pi, Yuan; Li, Dan; Xiao, Hua; Fu, Zhao

    2015-01-01

    Scientific reasoning is an important component under the cognitive strand of the 21st century skills and is highly emphasized in the new science education standards. This study focuses on the assessment of student reasoning in control of variables (COV), which is a core sub-skill of scientific reasoning. The main research question is to investigate the extent to which the existence of experimental data in questions impacts student reasoning and performance. This study also explores the effects of task contexts on student reasoning as well as students’ abilities to distinguish between testability and causal influences of variables in COV experiments. Data were collected with students from both USA and China. Students received randomly one of two test versions, one with experimental data and one without. The results show that students from both populations (1) perform better when experimental data are not provided, (2) perform better in physics contexts than in real-life contexts, and (3) students have a tendency to equate non-influential variables to non-testable variables. In addition, based on the analysis of both quantitative and qualitative data, a possible progression of developmental levels of student reasoning in control of variables is proposed, which can be used to inform future development of assessment and instruction. PMID:26949425

  10. Decision tree modeling using R.

    PubMed

    Zhang, Zhongheng

    2016-08-01

    In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable. The process continues until some stopping criteria are met. In the example I focus on conditional inference trees, which incorporate tree-structured regression models into conditional inference procedures. Because growing a single tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests come from random sampling and the restricted set of input variables to be selected at each split. Finally, I introduce R functions to perform model-based recursive partitioning. This method incorporates recursive partitioning into conventional parametric model building.
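
    The abstract's worked examples are in R (conditional inference trees, random forests, and model-based recursive partitioning); the sketch below is a rough Python/scikit-learn counterpart of the single-tree and random-forest steps only, on a stock dataset.

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Single tree: recursive binary partitioning on the strongest predictors.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
    print(export_text(tree))                                   # human-readable splits
    print("single tree accuracy:", tree.score(X_test, y_test))

    # Random forest: many trees grown on random samples with a restricted set of
    # candidate variables at each split, which stabilises the single-tree result.
    forest = RandomForestClassifier(n_estimators=300, max_features="sqrt", random_state=0)
    forest.fit(X_train, y_train)
    print("random forest accuracy:", forest.score(X_test, y_test))
    ```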

  11. Planckian Information (Ip): A New Measure of Order in Atoms, Enzymes, Cells, Brains, Human Societies, and the Cosmos

    NASA Astrophysics Data System (ADS)

    Ji, Sungchul

    A new mathematical formula referred to as the Planckian distribution equation (PDE) has been found to fit long-tailed histograms generated in various fields of studies, ranging from atomic physics to single-molecule enzymology, cell biology, brain neurobiology, glottometrics, econophysics, and to cosmology. PDE can be derived from a Gaussian-like equation (GLE) by non-linearly transforming its variable, x, while keeping the y coordinate constant. Assuming that GLE represents a random distribution (due to its symmetry), it is possible to define a binary logarithm of the ratio between the areas under the curves of PDE and GLE as a measure of the non-randomness (or order) underlying the biophysicochemical processes generating long-tailed histograms that fit PDE. This new function has been named the Planckian information, IP, which (i) may be a new measure of order that can be applied widely to both natural and human sciences and (ii) can serve as the opposite of the Boltzmann-Gibbs entropy, S, which is a measure of disorder. The possible rationales for the universality of PDE may include (i) the universality of the wave-particle duality embedded in PDE, (ii) the selection of subsets of random processes (thereby breaking the symmetry of GLE) as the basic mechanism of generating order, organization, and function, and (iii) the quantity-quality complementarity as the connection between PDE and Peircean semiotics.
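
    Taking only the definition quoted above, the Planckian information is the binary logarithm of the ratio of the areas under the fitted PDE and GLE curves. The sketch below shows how that number could be computed once two such curves are in hand; both curve functions are hypothetical placeholders, not the author's fitted equations.

```python
# Numerical sketch of I_P = log2( area under PDE / area under GLE ), per the
# definition in the abstract. Both curves are made-up placeholders standing in
# for functions fitted to a real long-tailed histogram.
import numpy as np
from scipy.integrate import quad

def gle(x, A=1.0, mu=5.0, sigma=1.5):
    # Gaussian-like equation (symmetric, taken here to represent randomness)
    return A * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def pde(x, a=1.0, b=4.0):
    # long-tailed placeholder playing the role of the fitted PDE
    return a * x ** 2 * np.exp(-x / b)

area_pde, _ = quad(pde, 0.0, 100.0)
area_gle, _ = quad(gle, 0.0, 100.0)
I_P = np.log2(area_pde / area_gle)   # Planckian information, in bits
print(f"I_P = {I_P:.3f} bits")
```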

  12. Perioperative bromelain reduces pain and swelling and improves quality of life measures after mandibular third molar surgery: a randomized, double-blind, placebo-controlled clinical trial.

    PubMed

    Majid, Omer Waleed; Al-Mashhadani, Bashar Adil

    2014-06-01

    The purpose of the present study was to compare the effect of oral bromelain (4 × 250 mg) versus oral diclofenac sodium (4 × 25 mg) on pain, swelling, trismus, and quality of life (QOL) after surgical removal of impacted lower third molars. A randomized, double-blind, placebo-controlled study was planned. The sample included patients requiring extraction under local anesthesia of a single partial bony impacted mandibular third molar. The patients were randomly distributed into 1 of 3 groups: bromelain, diclofenac, and placebo. Treatment started 1 day before surgery and continued for 4 days. The predictor variable was the type of the drug given to the patients. The outcome variables were pain, swelling, and trismus, which were measured at 1, 3, and 7 days postoperatively. The other study variables included QOL measures to assess the patients' perception regarding the effect of surgery on their well-being and daily activities. A validated questionnaire was used to measure QOL. The data were analyzed using analysis of variance, multiple measures analysis of variance, or Pearson's χ(2) test, as appropriate. P < .05 was considered significant. A total of 45 subjects requiring surgical removal of a single impacted mandibular third molar under local anesthesia were included in the present study. The bromelain and diclofenac groups both showed a significant reduction in pain compared with the placebo group at all intervals (P < .05). Diclofenac also resulted in a significant reduction of swelling at 3 and 7 days, and bromelain resulted in an insignificant reduction. A nonsignificant reduction in trismus occurred in both treatment groups compared with the placebo group. Both treatment groups also showed a significant difference in the effect on QOL in most subscales and total scores (P < .05). The effect was comparable between the 2 treatment groups for all parameters and at all intervals. The results of our study have shown that oral bromelain is an effective therapy to improve the QOL after surgical removal of impacted lower third molars, with an effect on the postoperative sequelae comparable to that of preemptive diclofenac sodium. Copyright © 2014 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  13. Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty

    DOE PAGES

    Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.

    2016-09-12

    Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
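
    For orientation, the sketch below estimates the classical first-order variance-based indices S_i = Var(E[Y|X_i]) / Var(Y) that the abstract says are used to rank input factors; the DSA index function itself is not reproduced. The three-input model is a toy, and the binning estimator is one simple choice among several.

```python
# Hedged sketch of a standard variance-based (first-order) sensitivity ranking,
# S_i = Var(E[Y | X_i]) / Var(Y), estimated by quantile-binning each input.
import numpy as np

rng = np.random.default_rng(0)
n, n_bins = 200_000, 50
X = rng.normal(size=(n, 3))                               # three toy input factors
Y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2] ** 2    # toy output model

var_y, mean_y = Y.var(), Y.mean()
for i in range(X.shape[1]):
    edges = np.quantile(X[:, i], np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(X[:, i], edges) - 1, 0, n_bins - 1)
    cond_mean = np.array([Y[idx == b].mean() for b in range(n_bins)])
    weights = np.array([(idx == b).mean() for b in range(n_bins)])
    S_i = np.sum(weights * (cond_mean - mean_y) ** 2) / var_y
    print(f"first-order index S_{i + 1} ~ {S_i:.3f}")
```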

  14. Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.

    Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.

  15. Stochastic isotropic hyperelastic materials: constitutive calibration and model selection

    NASA Astrophysics Data System (ADS)

    Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain

    2018-03-01

    Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.

  16. Thermal Runaway in Jammed Networks

    NASA Astrophysics Data System (ADS)

    Lechman, Jeremy; Yarrington, Cole; Bolintineanu, Dan

    2017-06-01

    The study of thermal explosion has a long history. Names such as Semenov and Frank-Kamenetskii are associated with classical model descriptions under particular assumptions. In this talk we revisit this problem with particular focus on the latter's model for conduction dominated thermal transport and Arrhenius-type reaction chemistry. We extend this description to the case of inhomogeneous microstructure generated by packing mono-sized spheres via a well-defined "Jamming" protocol. With these material structures in hand, we recast the Frank-Kamenetskii problem into a reduced-order network form for conduction in particle packs. With this model we can efficiently investigate the variability of the time to ignition due to the random microstructure. Additionally, we propose a modal decomposition and stability analysis of the model akin to stability of linear systems. We highlight the physical insights this approach can give with respect to questions of material dependent performance variability. Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corporation, a Lockheed-Martin Company, for the U. S. Department of Energy's National Nuclear Security Administration under Contract No. DE-AC04-94AL85000.
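
    A minimal sketch of the kind of reduced-order conduction network with an Arrhenius-type source described above is given below. The contact graph, conductances, kinetic constants, and loss term are all illustrative assumptions rather than the authors' model, so whether the simulated pack ignites or quenches depends entirely on those placeholder values.

```python
# Minimal sketch of a conduction network with an Arrhenius-type heat source on
# each node (particle). All parameters below are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
n = 30                                          # particles (network nodes)
K = rng.uniform(0.0, 1.0, size=(n, n))          # random stand-in for a jammed pack
K = np.triu((K > 0.8) * K, 1)                   # keep a sparse set of contacts
K = K + K.T                                     # symmetric contact conductances

A, E_over_R, T_amb, h = 1.0e4, 8.0e3, 300.0, 0.5   # assumed kinetics and losses

def rhs(t, T):
    conduction = K @ T - K.sum(axis=1) * T      # sum_j k_ij * (T_j - T_i)
    reaction = A * np.exp(-E_over_R / T)        # Arrhenius self-heating
    losses = -h * (T - T_amb)                   # linear heat loss to surroundings
    return conduction + reaction + losses

sol = solve_ivp(rhs, (0.0, 50.0), np.full(n, T_amb), max_step=0.5)
print("peak temperature in the pack:", sol.y.max())
```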

  17. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 2. Case study

    NASA Astrophysics Data System (ADS)

    Graham, Wendy D.; Neff, Christina R.

    1994-05-01

    The first-order analytical solution of the inverse problem for estimating spatially variable recharge and transmissivity under steady-state groundwater flow, developed in Part 1 is applied to the Upper Floridan Aquifer in NE Florida. Parameters characterizing the statistical structure of the log-transmissivity and head fields are estimated from 152 measurements of transmissivity and 146 measurements of hydraulic head available in the study region. Optimal estimates of the recharge, transmissivity and head fields are produced throughout the study region by conditioning on the nearest 10 available transmissivity measurements and the nearest 10 available head measurements. Head observations are shown to provide valuable information for estimating both the transmissivity and the recharge fields. Accurate numerical groundwater model predictions of the aquifer flow system are obtained using the optimal transmissivity and recharge fields as input parameters, and the optimal head field to define boundary conditions. For this case study, both the transmissivity field and the uncertainty of the transmissivity field prediction are poorly estimated, when the effects of random recharge are neglected.

  18. Adjusted Analyses in Studies Addressing Therapy and Harm: Users' Guides to the Medical Literature.

    PubMed

    Agoritsas, Thomas; Merglen, Arnaud; Shah, Nilay D; O'Donnell, Martin; Guyatt, Gordon H

    2017-02-21

    Observational studies almost always have bias because prognostic factors are unequally distributed between patients exposed or not exposed to an intervention. The standard approach to dealing with this problem is adjusted or stratified analysis. Its principle is to use measurement of risk factors to create prognostically homogeneous groups and to combine effect estimates across groups. The purpose of this Users' Guide is to introduce readers to fundamental concepts underlying adjustment as a way of dealing with prognostic imbalance and to the basic principles and relative trustworthiness of various adjustment strategies. One alternative to the standard approach is propensity analysis, in which groups are matched according to the likelihood of membership in exposed or unexposed groups. Propensity methods can deal with multiple prognostic factors, even if there are relatively few patients having outcome events. However, propensity methods do not address other limitations of traditional adjustment: investigators may not have measured all relevant prognostic factors (or not accurately), and unknown factors may bias the results. A second approach, instrumental variable analysis, relies on identifying a variable associated with the likelihood of receiving the intervention but not associated with any prognostic factor or with the outcome (other than through the intervention); this could mimic randomization. However, as with assumptions of other adjustment approaches, it is never certain if an instrumental variable analysis eliminates bias. Although all these approaches can reduce the risk of bias in observational studies, none replace the balance of both known and unknown prognostic factors offered by randomization.
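
    The propensity idea summarized above can be sketched in a few lines: model the probability of exposure from measured prognostic factors, then compare outcomes between exposed and unexposed patients matched on that probability. The simulated data, the logistic model, and the greedy 1:1 matching rule below are illustrative choices, not the guide's prescribed procedure.

```python
# Hedged propensity-score sketch on simulated data: exposure depends on prognosis
# (confounding by indication), the true treatment effect is -1.0.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(60, 10, n)
severity = rng.normal(0, 1, n)
p_exposed = 1 / (1 + np.exp(-(0.05 * (age - 60) + 0.8 * severity)))
exposed = rng.random(n) < p_exposed
outcome = 0.1 * age + 2.0 * severity - 1.0 * exposed + rng.normal(0, 1, n)

X = np.column_stack([age, severity])
ps = LogisticRegression().fit(X, exposed).predict_proba(X)[:, 1]   # propensity score

# greedy 1:1 nearest-neighbour matching on the propensity score
treated, controls = np.where(exposed)[0], np.where(~exposed)[0]
used, effects = set(), []
for i in treated:
    j = controls[np.argmin(np.abs(ps[controls] - ps[i]))]
    if j not in used:                      # drop the treated unit if its match is taken
        used.add(j)
        effects.append(outcome[i] - outcome[j])

print("naive difference  :", outcome[exposed].mean() - outcome[~exposed].mean())
print("matched difference:", np.mean(effects))
```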

  19. Moderation analysis with missing data in the predictors.

    PubMed

    Zhang, Qian; Wang, Lijuan

    2017-12-01

    The most widely used statistical model for conducting moderation analysis is the moderated multiple regression (MMR) model. In MMR modeling, missing data could pose a challenge, mainly because the interaction term is a product of two or more variables and thus is a nonlinear function of the involved variables. In this study, we consider a simple MMR model, where the effect of the focal predictor X on the outcome Y is moderated by a moderator U. The primary interest is to find ways of estimating and testing the moderation effect in the presence of missing data in X. We mainly focus on cases when X is missing completely at random (MCAR) and missing at random (MAR). Three methods are compared: (a) Normal-distribution-based maximum likelihood estimation (NML); (b) Normal-distribution-based multiple imputation (NMI); and (c) Bayesian estimation (BE). Via simulations, we found that NML and NMI could lead to biased estimates of moderation effects under the MAR missingness mechanism. The BE method outperformed NMI and NML for MMR modeling with missing data in the focal predictor, missingness depending on the moderator and/or auxiliary variables, and correctly specified distributions for the focal predictor. In addition, more robust BE methods are needed to address mis-specification of the focal predictor's distribution. An empirical example was used to illustrate the applications of the methods with a simple sensitivity analysis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
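
    The simple MMR model described above can be written as Y = b0 + b1*X + b2*U + b3*X*U + e, with interest centred on b3. The sketch below simulates such data, imposes MAR missingness in X that depends on the moderator U, and fits the model by ordinary least squares on the complete cases; this listwise-deletion fit is only a baseline for illustration and is not one of the three estimators (NML, NMI, BE) compared in the study.

```python
# Simulation sketch of the MMR model with MAR missingness in the focal predictor X.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 5000
U = rng.normal(size=n)
X = 0.5 * U + rng.normal(size=n)
Y = 1.0 + 0.4 * X + 0.3 * U + 0.25 * X * U + rng.normal(size=n)

# MAR: X is more likely to be missing when the moderator U is large
p_miss = 1 / (1 + np.exp(-(U - 0.5)))
X_obs = np.where(rng.random(n) < p_miss, np.nan, X)

df = pd.DataFrame({"Y": Y, "X": X_obs, "U": U})
fit = smf.ols("Y ~ X * U", data=df.dropna()).fit()   # complete-case baseline
print(fit.params[["X", "U", "X:U"]])                 # compare against 0.4, 0.3, 0.25
```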

  20. Modeled streamflow metrics on small, ungaged stream reaches in the Upper Colorado River Basin

    USGS Publications Warehouse

    Reynolds, Lindsay V.; Shafroth, Patrick B.

    2016-01-20

    Modeling streamflow is an important approach for understanding landscape-scale drivers of flow and estimating flows where there are no streamgage records. In this study conducted by the U.S. Geological Survey in cooperation with Colorado State University, the objectives were to model streamflow metrics on small, ungaged streams in the Upper Colorado River Basin and identify streams that are potentially threatened with becoming intermittent under drier climate conditions. The Upper Colorado River Basin is a region that is critical for water resources and also projected to experience large future climate shifts toward a drying climate. A random forest modeling approach was used to model the relationship between streamflow metrics and environmental variables. Flow metrics were then projected to ungaged reaches in the Upper Colorado River Basin using environmental variables for each stream, represented as raster cells, in the basin. Last, the projected random forest models of minimum flow coefficient of variation and specific mean daily flow were used to highlight streams that had greater than 61.84 percent minimum flow coefficient of variation and less than 0.096 specific mean daily flow and suggested that these streams will be most threatened to shift to intermittent flow regimes under drier climate conditions. Map projection products can help scientists, land managers, and policymakers understand current hydrology in the Upper Colorado River Basin and make informed decisions regarding water resources. With knowledge of which streams are likely to undergo significant drying in the future, managers and scientists can plan for stream-dependent ecosystems and human water users.

  1. Continuous glucose monitoring to assess the ecologic validity of dietary glycemic index and glycemic load

    PubMed Central

    Ebbeling, Cara B; Wadden, Thomas A; Ludwig, David S

    2011-01-01

    Background: The circumstances under which the glycemic index (GI) and glycemic load (GL) are derived do not reflect real-world eating behavior. Thus, the ecologic validity of these constructs is incompletely known. Objective: This study examined the relation of dietary intake to glycemic response when foods are consumed under free-living conditions. Design: Participants were 26 overweight or obese adults with type 2 diabetes who participated in a randomized trial of lifestyle modification. The current study includes baseline data, before initiation of the intervention. Participants wore a continuous glucose monitor and simultaneously kept a food diary for 3 d. The dietary variables included GI, GL, and intakes of energy, fat, protein, carbohydrate, sugars, and fiber. The glycemic response variables included AUC, mean and SD of continuous glucose monitoring (CGM) values, percentage of CGM values in euglycemic and hyperglycemic ranges, and mean amplitude of glycemic excursions. Relations between daily dietary intake and glycemic outcomes were examined. Results: Data were available from 41 d of monitoring. Partial correlations, controlled for energy intake, indicated that GI or GL was significantly associated with each glycemic response outcome. In multivariate analyses, dietary GI accounted for 10% to 18% of the variance in each glycemic variable, independent of energy and carbohydrate intakes (P < 0.01). Conclusions: The data support the ecologic validity of the GI and GL constructs in free-living obese adults with type 2 diabetes. GI was the strongest and most consistent independent predictor of glycemic stability and variability. PMID:22071699
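
    The partial-correlation step reported above (GI versus a glycemic outcome, controlling for energy intake) can be reproduced mechanically by residualizing both variables on energy intake and correlating the residuals. The sketch below does exactly that on simulated placeholder data; none of the numbers correspond to the study's CGM or food-diary records.

```python
# Partial correlation of GI with glucose AUC, controlling for energy intake,
# computed via residuals. All data below are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 41                                   # one record per monitored day
energy = rng.normal(2200, 400, n)        # kcal/d
gi = 55 + 0.004 * energy + rng.normal(0, 5, n)
auc = 2.0 * gi + 0.01 * energy + rng.normal(0, 10, n)

def residuals(y, x):
    slope, intercept, *_ = stats.linregress(x, y)
    return y - (intercept + slope * x)

r, p = stats.pearsonr(residuals(gi, energy), residuals(auc, energy))
print(f"partial r(GI, AUC | energy) = {r:.2f}, p = {p:.3g}")
```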

  2. Mapping SOC (Soil Organic Carbon) using LiDAR-derived vegetation indices in a random forest regression model

    NASA Astrophysics Data System (ADS)

    Will, R. M.; Glenn, N. F.; Benner, S. G.; Pierce, J. L.; Spaete, L.; Li, A.

    2015-12-01

    Quantifying SOC (Soil Organic Carbon) storage in complex terrain is challenging due to high spatial variability. Generally, the challenge is met by transforming point data to the entire landscape using surrogate, spatially-distributed, variables like elevation or precipitation. In many ecosystems, remotely sensed information on above-ground vegetation (e.g. NDVI) is a good predictor of below-ground carbon stocks. In this project, we are attempting to improve this predictive method by incorporating LiDAR-derived vegetation indices. LiDAR provides a mechanism for improved characterization of aboveground vegetation by providing structural parameters such as vegetation height and biomass. In this study, a random forest model is used to predict SOC using a suite of LiDAR-derived vegetation indices as predictor variables. The Reynolds Creek Experimental Watershed (RCEW) is an ideal location for a study of this type since it encompasses a strong elevation/precipitation gradient that supports lower biomass sagebrush ecosystems at low elevations and forests with more biomass at higher elevations. Sagebrush ecosystems composed of Wyoming, Low and Mountain Sagebrush have SOC values ranging from .4 to 1% (top 30 cm), while higher biomass ecosystems composed of aspen, juniper and fir have SOC values approaching 4% (top 30 cm). Large differences in SOC have been observed between canopy and interspace locations and high resolution vegetation information is likely to explain plot scale variability in SOC. Mapping of the SOC reservoir will help identify underlying controls on SOC distribution and provide insight into which processes are most important in determining SOC in semi-arid mountainous regions. In addition, airborne LiDAR has the potential to characterize vegetation communities at a high resolution and could be a tool for improving estimates of SOC at larger scales.

  3. Reliability Sensitivity Analysis and Design Optimization of Composite Structures Based on Response Surface Methodology

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2003-01-01

    This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.
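
    The core ingredient of the global strategy, a single second-order response-surface model fitted over the design space, can be sketched as a quadratic least-squares surrogate. The code below fits such a surrogate to a made-up response in three normalized design variables; the real problem involves 23 random variables and a buckling-load analysis, neither of which is represented here.

```python
# Second-order (quadratic) response-surface surrogate fitted by least squares.
# The "response" below is a placeholder, not a buckling-load computation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(7)
X = rng.uniform(-1.0, 1.0, size=(300, 3))            # normalized design variables
y = 5.0 + 2.0 * X[:, 0] - X[:, 1] + 0.8 * X[:, 0] * X[:, 2] \
    - 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.05, 300)  # placeholder response samples

rs_model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rs_model.fit(X, y)                                    # the global second-order RS

X_new = rng.uniform(-1.0, 1.0, size=(5, 3))
print("RS predictions at new design points:", rs_model.predict(X_new))
```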

  4. Low within- and between-day variability in exposure to new insulin glargine 300 U/ml.

    PubMed

    Becker, R H A; Nowotny, I; Teichert, L; Bergmann, K; Kapitza, C

    2015-03-01

    To characterize the variability in exposure and metabolic effect of insulin glargine 300 U/ml (Gla-300) at steady state in people with type 1 diabetes (T1DM). A total of 50 participants with T1DM underwent two 24-h euglycaemic clamps in steady-state conditions after six once-daily administrations of 0.4 U/kg Gla-300 in a double-blind, randomized, two-treatment, two-period, crossover clamp study. Participants were randomized to receive Gla-300 as a standard cartridge formulation in the first treatment period, and as a formulation with enhanced stability through polysorbate-20 addition in the second treatment period, or vice versa. This design allowed the assessment of bioequivalence between formulations and, subsequently, within- and between-day variability. The cumulative exposure and effect of Gla-300 developed linearly over 24 h, and were evenly distributed across 6- and 12-h intervals. Diurnal fluctuation in exposure (within-day variability) was low; the peak-to-trough ratio of insulin concentration profiles was <2, and both the swing and peak-to-trough fluctuation were <1. Day-to-day reproducibility of exposure was high: the between-day within-subject coefficients of variation for total systemic exposure (area under the serum insulin glargine concentration time curve from time 0 to 24 h after dosing) and maximum insulin concentration were 17.4% [95% confidence interval (CI) 15-21] and 33.4% (95% CI 28-41), respectively. Reproducibility of the metabolic effect was lower than that of exposure. Gla-300 provides predictable, evenly distributed 24-h coverage as a result of low fluctuation and high reproducibility in insulin exposure, and appears suitable for effective basal insulin use. © 2014 The Authors. Diabetes, Obesity and Metabolism published by John Wiley & Sons Ltd.

  5. Sampling design for spatially distributed hydrogeologic and environmental processes

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1992-01-01

    A methodology for the design of sampling networks over space is proposed. The methodology is based on spatial random field representations of nonhomogeneous natural processes, and on optimal spatial estimation techniques. One of the most important results of random field theory for physical sciences is its rationalization of correlations in spatial variability of natural processes. This correlation is extremely important both for interpreting spatially distributed observations and for predictive performance. The extent of site sampling and the types of data to be collected will depend on the relationship of subsurface variability to predictive uncertainty. While hypothesis formulation and initial identification of spatial variability characteristics are based on scientific understanding (such as knowledge of the physics of the underlying phenomena, geological interpretations, intuition and experience), the support offered by field data is statistically modelled. This model is not limited by the geometric nature of sampling and covers a wide range in subsurface uncertainties. A factorization scheme of the sampling error variance is derived, which possesses certain attractive properties allowing significant savings in computations. By means of this scheme, a practical sampling design procedure providing suitable indices of the sampling error variance is established. These indices can be used by way of multiobjective decision criteria to obtain the best sampling strategy. Neither the actual implementation of the in-situ sampling nor the solution of the large spatial estimation systems of equations are necessary. The required values of the accuracy parameters involved in the network design are derived using reference charts (readily available for various combinations of data configurations and spatial variability parameters) and certain simple yet accurate analytical formulas. Insight is gained by applying the proposed sampling procedure to realistic examples related to sampling problems in two dimensions. © 1992.

  6. Model of wealth and goods dynamics in a closed market

    NASA Astrophysics Data System (ADS)

    Ausloos, Marcel; Peķalski, Andrzej

    2007-01-01

    A simple computer simulation model of a closed market on a fixed network with free flow of goods and money is introduced. The model contains only two variables, the amounts of goods and money, besides the size of the system. An initially flat distribution of both variables is presupposed. We show that under completely random rules, i.e., random choice of the interacting agent pairs on the network and of the exchange rules, the market stabilizes in time and shows diversification of money and goods. We also indicate that the difference between poor and rich agents increases for small markets, as well as for systems in which money is steadily deducted from the market through taxation. It is also found that the price of goods decreases when taxes are introduced, likely due to the reduced availability of money.
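
    Because the model is described as containing only two agent variables and fully random pair-selection and exchange rules, a toy re-implementation is easy to sketch. The ring topology, unit-goods trade, and uniform price draw below are guesses standing in for the authors' exact rules; totals of money and goods are conserved, as required for a closed market.

```python
# Toy closed-market sketch: flat initial holdings, random pair selection on a
# fixed (ring) network, random exchange of one unit of goods for money.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_steps = 200, 200_000
money = np.full(n_agents, 100.0)          # initially flat distributions
goods = np.full(n_agents, 100.0)

for _ in range(n_steps):
    i = rng.integers(n_agents)            # seller
    j = (i + 1) % n_agents                # fixed ring network: trade with a neighbour
    price = rng.uniform(0.5, 1.5)         # random exchange rule (assumed)
    if goods[i] >= 1.0 and money[j] >= price:
        goods[i] -= 1.0; goods[j] += 1.0
        money[j] -= price; money[i] += price

print("totals conserved  :", money.sum(), goods.sum())
print("money spread (std):", money.std(), " goods spread (std):", goods.std())
```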

  7. AUTOCLASSIFICATION OF THE VARIABLE 3XMM SOURCES USING THE RANDOM FOREST MACHINE LEARNING ALGORITHM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, Sean A.; Murphy, Tara; Lo, Kitty K., E-mail: s.farrell@physics.usyd.edu.au

    In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outlier sources by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.

  8. A Probabilistic Design Method Applied to Smart Composite Structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1995-01-01

    A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.

  9. Computer Modelling and Simulation of Solar PV Array Characteristics

    NASA Astrophysics Data System (ADS)

    Gautam, Nalin Kumar

    2003-02-01

    The main objective of my PhD research work was to study the behaviour of inter-connected solar photovoltaic (PV) arrays. The approach involved the construction of mathematical models to investigate different types of research problems related to the energy yield, fault tolerance, efficiency and optimal sizing of inter-connected solar PV array systems. My research work can be divided into four different types of research problems: 1. Modeling of inter-connected solar PV array systems to investigate their electrical behavior, 2. Modeling of different inter-connected solar PV array networks to predict their expected operational lifetimes, 3. Modeling solar radiation estimation and its variability, and 4. Modeling of a coupled system to estimate the size of PV array and battery-bank in the stand-alone inter-connected solar PV system where the solar PV system depends on a system providing solar radiant energy. The successful application of mathematics to the above-mentioned problems entailed three phases: 1. The formulation of the problem in a mathematical form using numerical, optimization, probabilistic and statistical methods/techniques, 2. The translation of mathematical models using C++ to simulate them on a computer, and 3. The interpretation of the results to see how closely they correlated with the real data. Array is the most cost-intensive component of the solar PV system. Since the electrical performances as well as life properties of an array are highly sensitive to field conditions, different characteristics of the arrays, such as energy yield, operational lifetime, collector orientation, and optimal sizing were investigated in order to improve their efficiency, fault-tolerance and reliability. Three solar cell interconnection configurations in the array - series-parallel, total-cross-tied, and bridge-linked, were considered. The electrical characteristics of these configurations were investigated to find out one that is comparatively less susceptible to the mismatches due to manufacturer's tolerances in cell characteristics, shadowing, soiling and aging of solar cells. The current-voltage curves and the values of energy yield characterized by maximum-power points and fill factors for these arrays were also obtained. Two different mathematical models, one for smaller size arrays and the other for the larger size arrays, were developed. The first model takes account of the partial differential equations with boundary value conditions, whereas the second one involves the simple linear programming concept. Based on the initial information on the values of short-circuit current and open-circuit voltage of thirty-six single-crystalline silicon solar cells provided by a manufacturer, the values of these parameters for up to 14,400 solar cells were generated randomly. Thus, the investigations were done for three different cases of array sizes, i.e., (6 x 6), (36 x 8) and (720 x 20), for each configuration. The operational lifetimes of different interconnected solar PV arrays and the improvement in their life properties through different interconnection and modularized configurations were investigated using a reliability-index model. Under normal conditions, the efficiency of a solar cell degrades in an exponential manner, and its operational life above a lowest admissible efficiency may be considered as the upper bound of its lifetime. Under field conditions, the solar cell may fail any time due to environmental stresses, or it may function up to the end of its expected lifetime.
In view of this, the lifetime of a solar cell in an array was represented by an exponentially distributed random variable. At any instant of time t, this random variable was considered to have two states: (i) the cell functioned till time t, or (ii) the cell failed within time t. It was considered that the functioning of the solar cell included its operation at an efficiency decaying with time under normal conditions. It was assumed that the lifetime of a solar cell had the lack-of-memory (no-aging) property, which meant that no matter how long (say, t) the cell had been operational, the probability that it would last an additional time Δt was independent of t. The operational life of the solar cell above a lowest admissible efficiency was considered as the upper bound of its expected lifetime. The value of the upper bound on the expected life of a solar cell was evaluated using the information provided by the manufacturers of the single-crystalline silicon solar cells. Then on the basis of these lifetimes, the expected operational lifetimes of the array systems were obtained. Since the investigations of the effects of collector orientation on the performance of an array require the continuous values of global solar radiation on a surface, a method to estimate the global solar radiation on a surface (horizontal or tilted) was also proposed. The cloudiness index was defined as the fraction of extraterrestrial radiation that reached the earth's surface when the sky above the location of interest was obscured by the cloud cover. The cloud cover at the location of interest during any time interval of a day was assumed to follow the fuzzy random phenomenon. The cloudiness index, therefore, was considered as a fuzzy random variable that accounted for the cloud cover at the location of interest during any time interval of a day. This variable was assumed to depend on four other fuzzy random variables that, respectively, accounted for the cloud cover corresponding to the 1) type of cloud group, 2) climatic region, 3) season with most of the precipitation, and 4) type of precipitation at the location of interest during any time interval. All possible types of cloud covers were categorized into five types of cloud groups. Each cloud group was considered to be a fuzzy subset. In this model, the cloud cover at the location of interest during a time interval was considered to be the clouds that obscure the sky above the location. The cloud covers, with all possible types of clouds having transmissivities corresponding to values in the membership range of a fuzzy subset (i.e., a type of cloud group), were considered to be the membership elements of that fuzzy subset. The transmissivities of different types of cloud covers in a cloud group corresponded to the values in the membership range of that cloud group. Predicate logic (i.e., if---then---, else---, conditions) was used to set the relationship between all the fuzzy random variables. The values of the above-mentioned fuzzy random variables were evaluated to provide the value of cloudiness index for each time interval at the location of interest. For each case of the fuzzy random variable, a heuristic approach was used to identify subjectively the range ([a, b], where a and b were real numbers within [0, 1] such that a
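
    The memoryless property invoked above for the exponentially distributed cell lifetime can be checked numerically: P(T > t + Δt | T > t) should equal P(T > Δt) regardless of t. The mean lifetime in the sketch below is an arbitrary placeholder, not a value from the thesis.

```python
# Quick numerical check of the memoryless property of an exponential lifetime.
import numpy as np

rng = np.random.default_rng(0)
mean_life = 20.0                              # assumed expected lifetime (placeholder)
T = rng.exponential(mean_life, size=1_000_000)

t, dt = 5.0, 3.0
p_conditional = np.mean(T[T > t] > t + dt)    # P(T > t + dt | T > t)
p_fresh = np.mean(T > dt)                     # P(T > dt)
print(p_conditional, p_fresh, np.exp(-dt / mean_life))   # all three should agree
```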

  10. Adaptation of warrant price with Black Scholes model and historical volatility

    NASA Astrophysics Data System (ADS)

    Aziz, Khairu Azlan Abd; Idris, Mohd Fazril Izhar Mohd; Saian, Rizauddin; Daud, Wan Suhana Wan

    2015-05-01

    This project discusses warrant pricing in Malaysia. The Black Scholes model with a non-dividend approach and a linear interpolation technique was applied to price the call warrants. Three call warrants listed on Bursa Malaysia were selected randomly from UiTM's datastream. The findings show that the volatility of each call warrant differs from the others. We used the historical volatility, which describes the price movement by which an underlying share is expected to fluctuate within a period. The price obtained from the Black Scholes model is compared with the actual market price; mispricing the call warrants contributes to under- or overvaluation. Other variables such as the interest rate, time to maturity, exercise price and underlying stock price are involved in pricing call warrants as well as in measuring their moneyness.
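
    The two ingredients named above, annualized historical volatility and the non-dividend Black Scholes call price, are sketched below. The simulated price path, strike, maturity, and interest rate are placeholders rather than Bursa Malaysia warrant data, and the study's linear interpolation step is not reproduced.

```python
# Historical volatility from daily closes plus the non-dividend Black-Scholes
# call price. Contract terms and the price series are made-up placeholders.
import numpy as np
from scipy.stats import norm

def historical_volatility(closes, trading_days=252):
    log_returns = np.diff(np.log(closes))
    return log_returns.std(ddof=1) * np.sqrt(trading_days)   # annualized

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(1)
closes = 2.0 * np.exp(np.cumsum(rng.normal(0.0003, 0.015, 250)))  # placeholder prices
sigma = historical_volatility(closes)
price = bs_call(S=closes[-1], K=2.2, T=0.5, r=0.03, sigma=sigma)
print(f"historical volatility = {sigma:.2%}, model call price = {price:.4f}")
```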

  11. Score Estimating Equations from Embedded Likelihood Functions under Accelerated Failure Time Model

    PubMed Central

    NING, JING; QIN, JING; SHEN, YU

    2014-01-01

    SUMMARY The semiparametric accelerated failure time (AFT) model is one of the most popular models for analyzing time-to-event outcomes. One appealing feature of the AFT model is that the observed failure time data can be transformed to independent and identically distributed random variables without covariate effects. We describe a class of estimating equations based on the score functions for the transformed data, which are derived from the full likelihood function under commonly used semiparametric models such as the proportional hazards or proportional odds model. The methods of estimating regression parameters under the AFT model can be applied to traditional right-censored survival data as well as more complex time-to-event data subject to length-biased sampling. We establish the asymptotic properties and evaluate the small sample performance of the proposed estimators. We illustrate the proposed methods through applications in two examples. PMID:25663727

  12. Optimal allocation of testing resources for statistical simulations

    NASA Astrophysics Data System (ADS)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
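
    The resampling step described above (realizations of the population mean from a multivariate t-distribution and of the covariance from a Wishart distribution, given limited data) can be sketched as follows. The centring and scaling choices, and the toy data set, are simple assumptions that may differ from the paper's exact formulation.

```python
# Rough sketch: generate plausible population means (multivariate t) and
# covariances (Wishart) from a small sample of correlated input variables.
# Scalings below are simple placeholder choices.
import numpy as np
from scipy.stats import multivariate_t, wishart

rng = np.random.default_rng(0)
data = rng.normal(size=(15, 3)) @ np.diag([1.0, 2.0, 0.5])   # 15 obs of 3 inputs
n, d = data.shape
xbar, S = data.mean(axis=0), np.cov(data, rowvar=False)

n_real = 1000
cov_draws = wishart.rvs(df=n - 1, scale=S / (n - 1), size=n_real,
                        random_state=1)                      # E[draw] = S
mean_draws = multivariate_t.rvs(loc=xbar, shape=S / n, df=n - 1,
                                size=n_real, random_state=2)

idx = np.arange(d)
print("spread of sampled means (per input)    :", mean_draws.std(axis=0))
print("spread of sampled variances (per input):", cov_draws[:, idx, idx].std(axis=0))
```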

  13. Predicting Survival From Large Echocardiography and Electronic Health Record Datasets: Optimization With Machine Learning.

    PubMed

    Samad, Manar D; Ulloa, Alvaro; Wehner, Gregory J; Jing, Linyuan; Hartzel, Dustin; Good, Christopher W; Williams, Brent A; Haggerty, Christopher M; Fornwalt, Brandon K

    2018-06-09

    The goal of this study was to use machine learning to more accurately predict survival after echocardiography. Predicting patient outcomes (e.g., survival) following echocardiography is primarily based on ejection fraction (EF) and comorbidities. However, there may be significant predictive information within additional echocardiography-derived measurements combined with clinical electronic health record data. Mortality was studied in 171,510 unselected patients who underwent 331,317 echocardiograms in a large regional health system. We investigated the predictive performance of nonlinear machine learning models compared with that of linear logistic regression models using 3 different inputs: 1) clinical variables, including 90 cardiovascular-relevant International Classification of Diseases, Tenth Revision, codes, and age, sex, height, weight, heart rate, blood pressures, low-density lipoprotein, high-density lipoprotein, and smoking; 2) clinical variables plus physician-reported EF; and 3) clinical variables and EF, plus 57 additional echocardiographic measurements. Missing data were imputed with a multivariate imputation by using a chained equations algorithm (MICE). We compared models versus each other and baseline clinical scoring systems by using a mean area under the curve (AUC) over 10 cross-validation folds and across 10 survival durations (6 to 60 months). Machine learning models achieved significantly higher prediction accuracy (all AUC >0.82) over common clinical risk scores (AUC = 0.61 to 0.79), with the nonlinear random forest models outperforming logistic regression (p < 0.01). The random forest model including all echocardiographic measurements yielded the highest prediction accuracy (p < 0.01 across all models and survival durations). Only 10 variables were needed to achieve 96% of the maximum prediction accuracy, with 6 of these variables being derived from echocardiography. Tricuspid regurgitation velocity was more predictive of survival than LVEF. In a subset of studies with complete data for the top 10 variables, multivariate imputation by chained equations yielded slightly reduced predictive accuracies (difference in AUC of 0.003) compared with the original data. Machine learning can fully utilize large combinations of disparate input variables to predict survival after echocardiography with superior accuracy. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  14. Evaluating physical habitat and water chemistry data from statewide stream monitoring programs to establish least-impacted conditions in Washington State

    USGS Publications Warehouse

    Wilmoth, Siri K.; Irvine, Kathryn M.; Larson, Chad

    2015-01-01

    Various GIS-generated land-use predictor variables, physical habitat metrics, and water chemistry variables from 75 reference streams and 351 randomly sampled sites throughout Washington State were evaluated for effectiveness at discriminating reference from random sites within level III ecoregions. A combination of multivariate clustering and ordination techniques were used. We describe average observed conditions for a subset of predictor variables as well as proposing statistical criteria for establishing reference conditions for stream habitat in Washington. Using these criteria, we determined whether any of the random sites met expectations for reference condition and whether any of the established reference sites failed to meet expectations for reference condition. Establishing these criteria will set a benchmark from which future data will be compared.

  15. Commercialization of NESSUS: Status

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Millwater, Harry R.

    1991-01-01

    A plan was initiated in 1988 to commercialize the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) probabilistic structural analysis software. The goal of the on-going commercialization effort is to begin the transfer of Probabilistic Structural Analysis Method (PSAM) developed technology into industry and to develop additional funding resources in the general area of structural reliability. The commercialization effort is summarized. The SwRI NESSUS Software System is a general purpose probabilistic finite element computer program using state of the art methods for predicting stochastic structural response due to random loads, material properties, part geometry, and boundary conditions. NESSUS can be used to assess structural reliability, to compute probability of failure, to rank the input random variables by importance, and to provide a more cost effective design than traditional methods. The goal is to develop a general probabilistic structural analysis methodology to assist in the certification of critical components in the next generation Space Shuttle Main Engine.

  16. Assessing the significance of global and local correlations under spatial autocorrelation: a nonparametric approach.

    PubMed

    Viladomat, Júlia; Mazumder, Rahul; McInturff, Alex; McCauley, Douglas J; Hastie, Trevor

    2014-06-01

    We propose a method to test the correlation of two random fields when they are both spatially autocorrelated. In this scenario, the assumption of independence for the pair of observations in the standard test does not hold, and as a result we reject in many cases where there is no effect (the precision of the null distribution is overestimated). Our method recovers the null distribution taking into account the autocorrelation. It uses Monte-Carlo methods, and focuses on permuting, and then smoothing and scaling one of the variables to destroy the correlation with the other, while maintaining at the same time the initial autocorrelation. With this simulation model, any test based on the independence of two (or more) random fields can be constructed. This research was motivated by a project in biodiversity and conservation in the Biology Department at Stanford University. © 2014, The International Biometric Society.

  17. Using an Informative Missing Data Model to Predict the Ability to Assess Recovery of Balance Control after Spaceflight

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Wood, Scott J.; Jain, Varsha

    2008-01-01

    Astronauts show degraded balance control immediately after spaceflight. To assess this change, astronauts' ability to maintain a fixed stance under several challenging stimuli on a movable platform is quantified by "equilibrium" scores (EQs) on a scale of 0 to 100, where 100 represents perfect control (sway angle of 0) and 0 represents data loss where no sway angle is observed because the subject has to be restrained from falling. By comparing post- to pre-flight EQs for actual astronauts vs. controls, we built a classifier for deciding when an astronaut has recovered. Future diagnostic performance depends both on the sampling distribution of the classifier as well as the distribution of its input data. Taking this into consideration, we constructed a predictive ROC by simulation after modeling P(EQ = 0) in terms of a latent EQ-like beta-distributed random variable with random effects.

  18. Non-manipulation quantitative designs.

    PubMed

    Rumrill, Phillip D

    2004-01-01

    The article describes non-manipulation quantitative designs of two types, correlational and causal comparative studies. Both of these designs are characterized by the absence of random assignment of research participants to conditions or groups and non-manipulation of the independent variable. Without random selection or manipulation of the independent variable, no attempt is made to draw causal inferences regarding relationships between independent and dependent variables. Nonetheless, non-manipulation studies play an important role in rehabilitation research, as described in this article. Examples from the contemporary rehabilitation literature are included. Copyright 2004 IOS Press

  19. Morinda citrifolia (Noni) as an Anti-Inflammatory Treatment in Women with Primary Dysmenorrhoea: A Randomised Double-Blind Placebo-Controlled Trial.

    PubMed

    Fletcher, H M; Dawkins, J; Rattray, C; Wharfe, G; Reid, M; Gordon-Strachan, G

    2013-01-01

    Introduction. Noni (Morinda citrifolia) has been used for many years as an anti-inflammatory agent. We tested the efficacy of Noni in women with dysmenorrhea. Method. We did a prospective randomized double-blind placebo-controlled trial in 100 university students of 18 years and older over three menstrual cycles. Patients were invited to participate and randomly assigned to receive 400 mg Noni capsules or placebo. They were assessed for baseline demographic variables such as age, parity, and BMI. They were also assessed before and after treatment, for pain, menstrual blood loss, and laboratory variables: ESR, hemoglobin, and packed cell volume. Results. Of the 1027 women screened, 100 eligible women were randomized. Of the women completing the study, 42 women were randomized to Noni and 38 to placebo. There were no significant differences in any of the variables at randomization. There were also no significant differences in mean bleeding score or pain score at randomization. Both bleeding and pain scores gradually improved in both groups as the women were observed over three menstrual cycles; however, the improvement was not significantly different in the Noni group when compared to the controls. Conclusion. Noni did not show a reduction in menstrual pain or bleeding when compared to placebo.

  20. Morinda citrifolia (Noni) as an Anti-Inflammatory Treatment in Women with Primary Dysmenorrhoea: A Randomised Double-Blind Placebo-Controlled Trial

    PubMed Central

    Fletcher, H. M.; Dawkins, J.; Rattray, C.; Wharfe, G.; Reid, M.; Gordon-Strachan, G.

    2013-01-01

    Introduction. Noni (Morinda citrifolia) has been used for many years as an anti-inflammatory agent. We tested the efficacy of Noni in women with dysmenorrhea. Method. We did a prospective randomized double-blind placebo-controlled trial in 100 university students of 18 years and older over three menstrual cycles. Patients were invited to participate and randomly assigned to receive 400 mg Noni capsules or placebo. They were assessed for baseline demographic variables such as age, parity, and BMI. They were also assessed before and after treatment, for pain, menstrual blood loss, and laboratory variables: ESR, hemoglobin, and packed cell volume. Results. Of the 1027 women screened, 100 eligible women were randomized. Of the women completing the study, 42 women were randomized to Noni and 38 to placebo. There were no significant differences in any of the variables at randomization. There were also no significant differences in mean bleeding score or pain score at randomization. Both bleeding and pain scores gradually improved in both groups as the women were observed over three menstrual cycles; however, the improvement was not significantly different in the Noni group when compared to the controls. Conclusion. Noni did not show a reduction in menstrual pain or bleeding when compared to placebo. PMID:23431314

  1. K-Means Algorithm Performance Analysis With Determining The Value Of Starting Centroid With Random And KD-Tree Method

    NASA Astrophysics Data System (ADS)

    Sirait, Kamson; Tulus; Budhiarti Nababan, Erna

    2017-12-01

    Clustering methods that have high accuracy and time efficiency are necessary for the filtering process. One method that has been widely known and applied in clustering is K-Means Clustering. In its application, the determination of the beginning value of the cluster center greatly affects the results of the K-Means algorithm. This research discusses the results of K-Means Clustering with the starting centroid determined by a random method and by a KD-Tree method. On a data set of 1000 student academic records used to classify potential dropouts, random initial centroid determination gives an SSE value of 952972 for the quality variable and 232.48 for the GPA variable, whereas initial centroid determination by KD-Tree gives an SSE value of 504302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that K-Means Clustering with KD-Tree initial centroid selection has better accuracy than K-Means Clustering with random initial centroid selection.
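
    The comparison performed above, SSE under different centroid initializations, is easy to reproduce in outline: run K-Means with each initialization and compare the resulting sum of squared errors (scikit-learn's inertia_). The KD-Tree seeding procedure itself is not specified in the abstract and is not reproduced here; k-means++ stands in as the second, non-random strategy, and the blob data are synthetic placeholders rather than student records.

```python
# Compare K-Means SSE (inertia) under two initialization schemes on toy data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=1000, centers=4, cluster_std=2.0, random_state=0)

for init in ("random", "k-means++"):
    km = KMeans(n_clusters=4, init=init, n_init=1, random_state=0).fit(X)
    print(f"init={init:10s}  SSE (inertia) = {km.inertia_:.1f}")
```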

  2. Log-normal distribution from a process that is not multiplicative but is additive.

    PubMed

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
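
    The claim above is easy to probe by simulation: sum a modest number of positive, skewed random variables and compare how well a log-normal versus a Gaussian fits the resulting sums. The summand distribution and sample sizes below are placeholders, not the paper's analytical example.

```python
# Sums of a modest number of positive, skewed summands: compare goodness of fit
# (Kolmogorov-Smirnov distance) for normal vs. log-normal models of the sums.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_terms, n_sums = 10, 50_000
summands = rng.lognormal(mean=0.0, sigma=1.2, size=(n_sums, n_terms))
sums = summands.sum(axis=1)

norm_fit = stats.norm.fit(sums)
lognorm_fit = stats.lognorm.fit(sums, floc=0)
print("KS distance, normal fit    :", stats.kstest(sums, "norm", args=norm_fit).statistic)
print("KS distance, log-normal fit:", stats.kstest(sums, "lognorm", args=lognorm_fit).statistic)
```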

  3. Variable density randomized stack of spirals (VDR-SoS) for compressive sensing MRI.

    PubMed

    Valvano, Giuseppe; Martini, Nicola; Landini, Luigi; Santarelli, Maria Filomena

    2016-07-01

    To develop a 3D sampling strategy based on a stack of variable density spirals for compressive sensing MRI. A random sampling pattern was obtained by rotating each spiral by a random angle and by delaying for a few time steps the gradient waveforms of the different interleaves. A three-dimensional (3D) variable sampling density was obtained by designing different variable density spirals for each slice encoding. The proposed approach was tested with phantom simulations up to a five-fold undersampling factor. Fully sampled 3D datasets of a human knee and of a human brain were obtained from a healthy volunteer. The proposed approach was tested with off-line reconstructions of the knee dataset up to a four-fold acceleration and compared with other noncoherent trajectories. The proposed approach outperformed the standard stack of spirals for various undersampling factors. The level of coherence and the reconstruction quality of the proposed approach were similar to those of other trajectories that, however, require 3D gridding for the reconstruction. The variable density randomized stack of spirals (VDR-SoS) is an easily implementable trajectory that could represent a valid sampling strategy for 3D compressive sensing MRI. It guarantees low levels of coherence without requiring 3D gridding. Magn Reson Med 76:59-69, 2016. © 2015 Wiley Periodicals, Inc.

  4. Virtual Reality Rehabilitation from Social Cognitive and Motor Learning Theoretical Perspectives in Stroke Population

    PubMed Central

    Imam, Bita; Jarus, Tal

    2014-01-01

    Objectives. To identify the virtual reality (VR) interventions used for the lower extremity rehabilitation in stroke population and to explain their underlying training mechanisms using Social Cognitive (SCT) and Motor Learning (MLT) theoretical frameworks. Methods. Medline, Embase, Cinahl, and Cochrane databases were searched up to July 11, 2013. Randomized controlled trials that included a VR intervention for lower extremity rehabilitation in stroke population were included. The Physiotherapy Evidence Database (PEDro) scale was used to assess the quality of the included studies. The underlying training mechanisms involved in each VR intervention were explained according to the principles of SCT (vicarious learning, performance accomplishment, and verbal persuasion) and MLT (focus of attention, order and predictability of practice, augmented feedback, and feedback fading). Results. Eleven studies were included. PEDro scores varied from 3 to 7/10. All studies but one showed significant improvement in outcomes in favour of the VR group (P < 0.05). Ten VR interventions followed the principle of performance accomplishment. All the eleven VR interventions directed subject's attention externally, whereas nine provided training in an unpredictable and variable fashion. Conclusions. The results of this review suggest that VR applications used for lower extremity rehabilitation in stroke population predominantly mediate learning through providing a task-oriented and graduated learning under a variable and unpredictable practice. PMID:24523967

  5. Great influence of geographic isolation on the genetic differentiation of Myriophyllum spicatum under a steep environmental gradient

    PubMed Central

    Wu, Zhigang; Yu, Dan; Wang, Zhong; Li, Xing; Xu, Xinwei

    2015-01-01

    Understanding how natural processes affect population genetic structures is an important issue in evolutionary biology. One effective method is to assess the relative importance of environmental and geographical factors in the genetic structure of populations. In this study, we examined the spatial genetic variation of thirteen Myriophyllum spicatum populations from the Qinghai-Tibetan Plateau (QTP) and adjacent highlands (Yunnan-Guizhou Plateau, YGP) by using microsatellite loci and environmental and geographical factors. Bioclim layers, hydrological properties and elevation were considered as environmental variables and reduced by principal component analysis. The genetic isolation by geographic distance (IBD) was tested by Mantel tests and the relative importance of environmental variables on population genetic differentiation was determined by a partial Mantel test and multiple matrix regression with randomization (MMRR). Two genetic clusters corresponding to the QTP and YGP were identified. Both tests and MMRR revealed a significant and strong correlation between genetic divergence and geographic isolation under the influence of environmental heterogeneity at the overall and finer spatial scales. Our findings suggested the dominant role of geography on the evolution of M. spicatum under a steep environmental gradient in the alpine landscape as a result of dispersal limitation and genetic drift. PMID:26494202

  6. Random Item IRT Models

    ERIC Educational Resources Information Center

    De Boeck, Paul

    2008-01-01

    It is common practice in IRT to consider items as fixed and persons as random. Both continuous and categorical person parameters are most often random variables, whereas for items only continuous parameters are used and they are commonly of the fixed type, although exceptions occur. It is shown in the present article that random item parameters…

  7. Is the Non-Dipole Magnetic Field Random?

    NASA Technical Reports Server (NTRS)

    Walker, Andrew D.; Backus, George E.

    1996-01-01

    Statistical modelling of the Earth's magnetic field B has a long history. In particular, the spherical harmonic coefficients of scalar fields derived from B can be treated as Gaussian random variables. In this paper, we give examples of highly organized fields whose spherical harmonic coefficients pass tests for independent Gaussian random variables. The fact that coefficients at some depth may be usefully summarized as independent samples from a normal distribution need not imply that there really is some physical, random process at that depth. In fact, the field can be extremely structured and still be regarded for some purposes as random. In this paper, we examined the radial magnetic field B(sub r) produced by the core, but the results apply to any scalar field on the core-mantle boundary (CMB) which determines B outside the CMB.

  8. Diagnosis and management of small intestinal bacterial overgrowth.

    PubMed

    Bohm, Matthew; Siwiec, Robert M; Wo, John M

    2013-06-01

    Small intestinal bacterial overgrowth (SIBO) can result from failure of the gastric acid barrier, failure of small intestinal motility, anatomic alterations, or impairment of systemic and local immunity. The currently accepted criterion for the diagnosis of SIBO is the presence of coliform bacteria isolated from the proximal jejunum at >10⁵ colony-forming units/mL. A major concern with luminal aspiration is that it is only one random sampling of the small intestine and may not always be representative of the underlying microbiota. A new approach to examine the underlying microbiota uses rapid molecular sequencing, but its clinical utilization is still under active investigation. Clinical manifestations of SIBO are variable and include bloating, flatulence, abdominal distention, abdominal pain, and diarrhea. Severe cases may present with nutrition deficiencies due to malabsorption of micro- and macronutrients. The current management strategies for SIBO center on identifying and correcting underlying causes, addressing nutrition deficiencies, and judicious utilization of antibiotics to treat symptomatic SIBO.

  9. Are glucose levels, glucose variability and autonomic control influenced by inspiratory muscle exercise in patients with type 2 diabetes? Study protocol for a randomized controlled trial.

    PubMed

    Schein, Aso; Correa, Aps; Casali, Karina Rabello; Schaan, Beatriz D

    2016-01-20

    Physical exercise reduces glucose levels and glucose variability in patients with type 2 diabetes. Acute inspiratory muscle exercise has been shown to reduce these parameters in a small group of patients with type 2 diabetes, but these results have yet to be confirmed in a well-designed study. The aim of this study is to investigate the effect of acute inspiratory muscle exercise on glucose levels, glucose variability, and cardiovascular autonomic function in patients with type 2 diabetes. This study will use a randomized clinical trial crossover design. A total of 14 subjects will be recruited and randomly allocated to two groups to perform acute inspiratory muscle loading at 2 % of maximal inspiratory pressure (PImax, placebo load) or 60 % of PImax (experimental load). Inspiratory muscle training could be a novel exercise modality to be used to decrease glucose levels and glucose variability. ClinicalTrials.gov NCT02292810 .

  10. Testing Pairwise Association between Spatially Autocorrelated Variables: A New Approach Using Surrogate Lattice Data

    PubMed Central

    Deblauwe, Vincent; Kennel, Pol; Couteron, Pierre

    2012-01-01

    Background Independence between observations is a standard prerequisite of traditional statistical tests of association. This condition is, however, violated when autocorrelation is present within the data. In the case of variables that are regularly sampled in space (i.e. lattice data or images), such as those provided by remote-sensing or geographical databases, this problem is particularly acute. Because analytic derivation of the null probability distribution of the test statistic (e.g. Pearson's r) is not always possible when autocorrelation is present, we propose instead the use of a Monte Carlo simulation with surrogate data. Methodology/Principal Findings The null hypothesis that two observed mapped variables are the result of independent pattern generating processes is tested here by generating sets of random image data while preserving the autocorrelation function of the original images. Surrogates are generated by matching the dual-tree complex wavelet spectra (and hence the autocorrelation functions) of white noise images with the spectra of the original images. The generated images can then be used to build the probability distribution function of any statistic of association under the null hypothesis. We demonstrate the validity of a statistical test of association based on these surrogates with both actual and synthetic data and compare it with a corrected parametric test and three existing methods that generate surrogates (randomization, random rotations and shifts, and iterative amplitude adjusted Fourier transform). Type I error control was excellent, even with strong and long-range autocorrelation, which is not the case for alternative methods. Conclusions/Significance The wavelet-based surrogates are particularly appropriate in cases where autocorrelation appears at all scales or is direction-dependent (anisotropy). We explore the potential of the method for association tests involving a lattice of binary data and discuss its potential for validation of species distribution models. An implementation of the method in Java for the generation of wavelet-based surrogates is available online as supporting material. PMID:23144961
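
    The sketch below shows the general surrogate-testing idea in simplified form: the dual-tree complex wavelet surrogates of the paper are replaced by Fourier phase-randomized surrogates, which also preserve the autocorrelation function, and Pearson's r is used as the association statistic. The field sizes, smoothing, and number of surrogates are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(2)

    def fourier_surrogate(img, rng):
        # Keep the image's amplitude spectrum (hence its autocorrelation) but give it
        # the phases of a white-noise field, destroying any association with other maps.
        noise_phase = np.angle(np.fft.fft2(rng.normal(size=img.shape)))
        return np.real(np.fft.ifft2(np.abs(np.fft.fft2(img)) * np.exp(1j * noise_phase)))

    def surrogate_association_test(a, b, n_surrogates=499, rng=rng):
        r_obs = np.corrcoef(a.ravel(), b.ravel())[0, 1]
        r_null = np.array([np.corrcoef(fourier_surrogate(a, rng).ravel(), b.ravel())[0, 1]
                           for _ in range(n_surrogates)])
        p = (1 + np.sum(np.abs(r_null) >= abs(r_obs))) / (n_surrogates + 1)
        return r_obs, p

    # Two autocorrelated but independently generated fields: the test should not reject
    a = gaussian_filter(rng.normal(size=(64, 64)), sigma=3)
    b = gaussian_filter(rng.normal(size=(64, 64)), sigma=3)
    print(surrogate_association_test(a, b))
    ```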

  11. Maximum likelihood estimation for life distributions with competing failure modes

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1979-01-01

    Systems that are placed on test at time zero, function for a period, and die at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables the item is subject to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte-Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
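
    A minimal sketch of maximum likelihood fitting for the smallest extreme-value distribution is given below using SciPy's gumbel_l (the minimum form of the Gumbel distribution). The competing failure modes and stress covariates treated in the record are omitted, and the parameter values are invented.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # Simulated lifetimes from a smallest extreme-value distribution
    lifetimes = stats.gumbel_l.rvs(loc=100.0, scale=10.0, size=500, random_state=rng)

    loc_hat, scale_hat = stats.gumbel_l.fit(lifetimes)  # maximum likelihood estimates
    print("location estimate:", loc_hat)
    print("scale estimate:   ", scale_hat)
    ```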

  12. Design for cyclic loading endurance of composites

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Murthy, Pappu L. N.; Chamis, Christos C.; Liaw, Leslie D. G.

    1993-01-01

    The application of the computer code IPACS (Integrated Probabilistic Assessment of Composite Structures) to aircraft wing type structures is described. The code performs a complete probabilistic analysis for composites taking into account the uncertainties in geometry, boundary conditions, material properties, laminate lay-ups, and loads. Results of the analysis are presented in terms of cumulative distribution functions (CDF) and probability density function (PDF) of the fatigue life of a wing type composite structure under different hygrothermal environments subjected to the random pressure. The sensitivity of the fatigue life to a number of critical structural/material variables is also computed from the analysis.

  13. Molecular and Kinetic Models for High-rate Thermal Degradation of Polyethylene

    DOE PAGES

    Lane, J. Matthew; Moore, Nathan W.

    2018-02-01

    Thermal degradation of polyethylene is studied under the extremely high rate temperature ramps expected in laser-driven and X-ray ablation experiments—from 10¹⁰ to 10¹⁴ K/s in isochoric, condensed phases. The molecular evolution and macroscopic state variables are extracted as a function of density from reactive molecular dynamics simulations using the ReaxFF potential. The enthalpy, dissociation onset temperature, bond evolution, and observed cross-linking are shown to be rate dependent. These results are used to parametrize a kinetic rate model for the decomposition and coalescence of hydrocarbons as a function of temperature, temperature ramp rate, and density. In conclusion, the results are contrasted to first-order random-scission macrokinetic models often assumed for pyrolysis of linear polyethylene under ambient conditions.

  14. Molecular and Kinetic Models for High-rate Thermal Degradation of Polyethylene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lane, J. Matthew; Moore, Nathan W.

    Thermal degradation of polyethylene is studied under the extremely high rate temperature ramps expected in laser-driven and X-ray ablation experiments—from 10¹⁰ to 10¹⁴ K/s in isochoric, condensed phases. The molecular evolution and macroscopic state variables are extracted as a function of density from reactive molecular dynamics simulations using the ReaxFF potential. The enthalpy, dissociation onset temperature, bond evolution, and observed cross-linking are shown to be rate dependent. These results are used to parametrize a kinetic rate model for the decomposition and coalescence of hydrocarbons as a function of temperature, temperature ramp rate, and density. In conclusion, the results are contrasted to first-order random-scission macrokinetic models often assumed for pyrolysis of linear polyethylene under ambient conditions.

  15. Planetarium instructional efficacy: A research synthesis

    NASA Astrophysics Data System (ADS)

    Brazell, Bruce D.

    The purpose of the current study was to explore the instructional effectiveness of the planetarium in astronomy education using meta-analysis. A review of the literature revealed 46 studies related to planetarium efficacy. However, only 19 of the studies satisfied selection criteria for inclusion in the meta-analysis. Selected studies were then subjected to coding procedures, which extracted information such as subject characteristics, experimental design, and outcome measures. From these data, 24 effect sizes were calculated in the area of student achievement and five effect sizes were determined in the area of student attitudes using reported statistical information. Mean effect sizes were calculated for both the achievement and the attitude distributions. Additionally, each effect size distribution was subjected to homogeneity analysis. The attitude distribution was found to be homogeneous with a mean effect size of -0.09, which was not significant, p = .2535. The achievement distribution was found to be heterogeneous with a statistically significant mean effect size of +0.28, p < .05. Since the achievement distribution was heterogeneous, the analog to the ANOVA procedure was employed to explore variability in this distribution in terms of the coded variables. The analog to the ANOVA procedure revealed that the variability introduced by the coded variables did not fully explain the variability in the achievement distribution beyond subject-level sampling error under a fixed effects model. Therefore, a random effects model analysis was performed which resulted in a mean effect size of +0.18, which was not significant, p = .2363. However, a large random effect variance component was determined indicating that the differences between studies were systematic and yet to be revealed. The findings of this meta-analysis showed that the planetarium has been an effective instructional tool in astronomy education in terms of student achievement. However, the meta-analysis revealed that the planetarium has not been a very effective tool for improving student attitudes towards astronomy.
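
    For readers unfamiliar with the fixed- versus random-effects distinction used above, the sketch below pools hypothetical effect sizes with the DerSimonian-Laird random-effects method, which adds a between-study variance component when the homogeneity statistic Q exceeds its degrees of freedom. The numbers are made up and are not those of the meta-analysis.

    ```python
    import numpy as np

    def dersimonian_laird(effects, variances):
        """Random-effects pooling via the DerSimonian-Laird method-of-moments estimator."""
        w = 1.0 / variances
        fixed_mean = np.sum(w * effects) / np.sum(w)
        Q = np.sum(w * (effects - fixed_mean) ** 2)     # homogeneity statistic
        k = len(effects)
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (Q - (k - 1)) / c)              # between-study variance
        w_re = 1.0 / (variances + tau2)
        re_mean = np.sum(w_re * effects) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))
        return re_mean, se, tau2

    effects = np.array([0.45, 0.10, 0.30, 0.05, 0.60])    # hypothetical study effect sizes
    variances = np.array([0.04, 0.02, 0.05, 0.03, 0.06])  # hypothetical sampling variances
    print(dersimonian_laird(effects, variances))
    ```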

  16. Non-linear resonant coupling of tsunami edge waves using stochastic earthquake source models

    USGS Publications Warehouse

    Geist, Eric L.

    2016-01-01

    Non-linear resonant coupling of edge waves can occur with tsunamis generated by large-magnitude subduction zone earthquakes. Earthquake rupture zones that straddle beneath the coastline of continental margins are particularly efficient at generating tsunami edge waves. Using a stochastic model for earthquake slip, it is shown that a wide range of edge-wave modes and wavenumbers can be excited, depending on the variability of slip. If two modes are present that satisfy resonance conditions, then a third mode can gradually increase in amplitude over time, even if the earthquake did not originally excite that edge-wave mode. These three edge waves form a resonant triad that can cause unexpected variations in tsunami amplitude long after the first arrival. An M ∼ 9, 1100 km-long continental subduction zone earthquake is considered as a test case. For the least-variable slip examined involving a Gaussian random variable, the dominant resonant triad includes a high-amplitude fundamental mode wave with wavenumber associated with the along-strike dimension of rupture. The two other waves that make up this triad include subharmonic waves, one of fundamental mode and the other of mode 2 or 3. For the most variable slip examined involving a Cauchy-distributed random variable, the dominant triads involve higher wavenumbers and modes because subevents, rather than the overall rupture dimension, control the excitation of edge waves. Calculation of the resonant period for energy transfer determines which cases resonant coupling may be instrumentally observed. For low-mode triads, the maximum transfer of energy occurs approximately 20–30 wave periods after the first arrival and thus may be observed prior to the tsunami coda being completely attenuated. Therefore, under certain circumstances the necessary ingredients for resonant coupling of tsunami edge waves exist, indicating that resonant triads may be observable and implicated in late, large-amplitude tsunami arrivals.

  17. Patellar denervation with electrocautery in total knee arthroplasty without patellar resurfacing: a meta-analysis.

    PubMed

    Cheng, Tao; Zhu, Chen; Guo, Yongyuan; Shi, Sifeng; Chen, Desheng; Zhang, Xianlong

    2014-11-01

    The impact of patellar denervation with electrocautery in total knee arthroplasty (TKA) on post-operative outcomes has been under debate. This study aims to conduct a meta-analysis and systematic review to compare the benefits and risks of circumpatellar electrocautery with those of non-electrocautery in primary TKAs. Comparative and randomized clinical studies were identified by conducting an electronic search of articles dated up to September 2012 in PubMed, EMBASE, Scopus, and the Cochrane databases. Six studies involving a total of 849 knees were analysed. A random-effects model was conducted using the inverse-variance method for continuous variables and the Mantel-Haenszel method for dichotomous variables. There was no significant difference in the incidence of anterior knee pain between the electrocautery and non-electrocautery groups. In terms of patellar score and Knee Society Score, circumpatellar electrocautery improved clinical outcomes compared with non-electrocautery in TKAs. The statistical differences were in favour of the electrocautery group but had minimal clinical significance. In addition, the overall complications indicate no statistical significance between the two groups. This study shows no strong evidence either for or against electrocautery compared with non-electrocautery in TKAs. Therapeutic study (systematic review and meta-analysis), Level III.

  18. An Integrated Probabilistic-Fuzzy Assessment of Uncertainty Associated with Human Health Risk to MSW Landfill Leachate Contamination

    NASA Astrophysics Data System (ADS)

    Mishra, H.; Karmakar, S.; Kumar, R.

    2016-12-01

    Risk assessment does not remain simple when it involves multiple uncertain variables. Uncertainties in risk assessment result mainly from (1) lack of knowledge of the input variables (mostly random), and (2) data obtained from expert judgment or subjective interpretation of available information (non-random). An integrated probabilistic-fuzzy health risk approach has been proposed for simultaneous treatment of random and non-random uncertainties associated with input parameters of the health risk model. The LandSim 2.5 landfill simulator has been used to simulate the Turbhe landfill (Navi Mumbai, India) activities for various time horizons. The LandSim-simulated concentrations of six heavy metals in ground water have then been used in the health risk model. The water intake, exposure duration, exposure frequency, bioavailability and averaging time are treated as fuzzy variables, while the heavy metal concentrations and body weight are considered probabilistic variables. Identical alpha-cut and reliability levels are considered for the fuzzy and probabilistic variables, respectively, and uncertainty in non-carcinogenic human health risk is estimated using ten thousand Monte-Carlo simulations (MCS). This is the first effort in which all the health risk variables have been considered non-deterministic for the estimation of uncertainty in the risk output. The non-exceedance probability of the Hazard Index (HI), the sum of hazard quotients, of the heavy metals Co, Cu, Mn, Ni, Zn and Fe for the male and female populations has been quantified and found to be high (HI>1) for all the considered time horizons, which clearly shows the possibility of adverse health effects on the population residing near the Turbhe landfill.

  19. Optimizing a Sensor Network with Data from Hazard Mapping Demonstrated in a Heavy-Vehicle Manufacturing Facility.

    PubMed

    Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A

    2018-05-28

    To design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations to capture temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than RMC with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with random-removal order (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using an optimal-removal were not statistically different than random-removal when averaged over the entire facility. No statistical difference was observed for optimal- and random-removal methods for RMCs that were less variable in time and space than PNCs. Optimized removal performed better than random-removal in preserving high temporal variability and accuracy of hazard map for PNC, but not for the more spatially homogeneous RMC. These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.

  20. Intergenerational resource transfers with random offspring numbers

    PubMed Central

    Arrow, Kenneth J.; Levin, Simon A.

    2009-01-01

    A problem common to biology and economics is the transfer of resources from parents to children. We consider the issue under the assumption that the number of offspring is unknown and can be represented as a random variable. There are 3 basic assumptions. The first assumption is that a given body of resources can be divided into consumption (yielding satisfaction) and transfer to children. The second assumption is that the parents' welfare includes a concern for the welfare of their children; this is recursive in the sense that the children's welfares include concern for their children and so forth. However, the welfare of a child from a given consumption is counted somewhat differently (generally less) than that of the parent (the welfare of a child is “discounted”). The third assumption is that resources transferred may grow (or decline). In economic language, investment, including that in education or nutrition, is productive. Under suitable restrictions, precise formulas for the resulting allocation of resources are found, demonstrating that, depending on the shape of the utility curve, uncertainty regarding the number of offspring may or may not favor increased consumption. The results imply that wealth (stock of resources) will ultimately have a log-normal distribution. PMID:19617553

  1. Observational studies of patients in the emergency department: a comparison of 4 sampling methods.

    PubMed

    Valley, Morgan A; Heard, Kennon J; Ginde, Adit A; Lezotte, Dennis C; Lowenstein, Steven R

    2012-08-01

    We evaluate the ability of 4 sampling methods to generate representative samples of the emergency department (ED) population. We analyzed the electronic records of 21,662 consecutive patient visits at an urban, academic ED. From this population, we simulated different models of study recruitment in the ED by using 2 sample sizes (n=200 and n=400) and 4 sampling methods: true random, random 4-hour time blocks by exact sample size, random 4-hour time blocks by a predetermined number of blocks, and convenience or "business hours." For each method and sample size, we obtained 1,000 samples from the population. Using χ² tests, we measured the number of statistically significant differences between the sample and the population for 8 variables (age, sex, race/ethnicity, language, triage acuity, arrival mode, disposition, and payer source). Then, for each variable, method, and sample size, we compared the proportion of the 1,000 samples that differed from the overall ED population to the expected proportion (5%). Only the true random samples represented the population with respect to sex, race/ethnicity, triage acuity, mode of arrival, language, and payer source in at least 95% of the samples. Patient samples obtained using random 4-hour time blocks and business hours sampling systematically differed from the overall ED patient population for several important demographic and clinical variables. However, the magnitude of these differences was not large. Common sampling strategies selected for ED-based studies may affect parameter estimates for several representative population variables. However, the potential for bias for these variables appears small. Copyright © 2012. Published by Mosby, Inc.

  2. Key-Generation Algorithms for Linear Piece In Hand Matrix Method

    NASA Astrophysics Data System (ADS)

    Tadaki, Kohtaro; Tsujii, Shigeo

    The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription which can be applicable to any type of multivariate public-key cryptosystems for the purpose of enhancing their security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of multivariate public-key cryptosystems. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which is introduced by our previous work to explain the notion of the PH matrix method in general in an illustrative manner and not for a practical use to enhance the security of any given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has the substantial advantage over the plus method with respect to the security enhancement. In the linear PH matrix method with random variables, the three matrices, including the PH matrix, play a central role in the secret-key and public-key. In this paper, we clarify how to generate these matrices and thus present two probabilistic polynomial-time algorithms to generate these matrices. In particular, the second one has a concise form, and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.

  3. On the comparison of the strength of morphological integration across morphometric datasets.

    PubMed

    Adams, Dean C; Collyer, Michael L

    2016-11-01

    Evolutionary morphologists frequently wish to understand the extent to which organisms are integrated, and whether the strength of morphological integration among subsets of phenotypic variables differs among taxa or other groups. However, comparisons of the strength of integration across datasets are difficult, in part because the summary measures that characterize these patterns (the RV coefficient and r_PLS) are dependent both on sample size and on the number of variables. As a solution to this issue, we propose a standardized test statistic (a z-score) for measuring the degree of morphological integration between sets of variables. The approach is based on a partial least squares analysis of trait covariation, and its permutation-based sampling distribution. Under the null hypothesis of a random association of variables, the method displays a constant expected value and confidence intervals for datasets of differing sample sizes and variable number, thereby providing a consistent measure of integration suitable for comparisons across datasets. A two-sample test is also proposed to statistically determine whether levels of integration differ between datasets, and an empirical example examining cranial shape integration in Mediterranean wall lizards illustrates its use. Some extensions of the procedure are also discussed. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.

  4. The JCMT Transient Survey: Stochastic and Secular Variability of Protostars and Disks In the Submillimeter Region Observed over 18 Months

    NASA Astrophysics Data System (ADS)

    Johnstone, Doug; Herczeg, Gregory J.; Mairs, Steve; Hatchell, Jennifer; Bower, Geoffrey C.; Kirk, Helen; Lane, James; Bell, Graham S.; Graves, Sarah; Aikawa, Yuri; Chen, Huei-Ru Vivien; Chen, Wen-Ping; Kang, Miju; Kang, Sung-Ju; Lee, Jeong-Eun; Morata, Oscar; Pon, Andy; Scicluna, Peter; Scholz, Aleks; Takahashi, Satoko; Yoo, Hyunju; The JCMT Transient Team

    2018-02-01

    We analyze results from the first 18 months of monthly submillimeter monitoring of eight star-forming regions in the JCMT Transient Survey. In our search for stochastic variability in 1643 bright peaks, only the previously identified source, EC 53, shows behavior well above the expected measurement uncertainty. Another four sources—two disks and two protostars—show moderately enhanced standard deviations in brightness, as expected for stochastic variables. For the two protostars, this apparent variability is the result of single epochs that are much brighter than the mean. In our search for secular brightness variations that are linear in time, we measure the fractional brightness change per year for 150 bright peaks, 50 of which are protostellar. The ensemble distribution of slopes is well fit by a normal distribution with σ ∼ 0.023. Most sources are not rapidly brightening or fading at submillimeter wavelengths. Comparison against time-randomized realizations shows that the width of the distribution is dominated by the uncertainty in the individual brightness measurements of the sources. A toy model for secular variability reveals that an underlying Gaussian distribution of linear fractional brightness change σ = 0.005 would be unobservable in the present sample, whereas an underlying distribution with σ = 0.02 is ruled out. Five protostellar sources, 10% of the protostellar sample, are found to have robust secular measures deviating from a constant flux. The sensitivity to secular brightness variations will improve significantly with a sample over a longer time duration, with an improvement by factor of two expected by the conclusion of our 36 month survey.

  5. [The Brazilian Hospital Information System and the acute myocardial infarction hospital care].

    PubMed

    Escosteguy, Claudia Caminha; Portela, Margareth Crisóstomo; Medronho, Roberto de Andrade; de Vasconcellos, Maurício Teixeira Leite

    2002-08-01

    To analyze the applicability of the Brazilian Unified Health System's national hospital database for evaluating the quality of acute myocardial infarction hospital care. A total of 1,936 hospital admission forms with acute myocardial infarction (AMI) as the primary diagnosis in the municipal district of Rio de Janeiro, Brazil, in 1997 were evaluated. Data were collected from the national hospital database. A stratified random sample of 391 medical records was also evaluated. AMI diagnosis agreement followed the criteria of the literature. Variable accuracy was analyzed using the kappa agreement index. The quality of the AMI diagnosis registered in hospital admission forms was satisfactory according to the gold standard of the literature. In general, the accuracy of the demographic (sex, age group), process (medical procedures and interventions), and outcome (hospital death) variables was satisfactory. The accuracy of the demographic and outcome variables was higher than that of the process variables. Underregistration of secondary diagnoses in the forms was high and was the main limiting factor. Given the study findings and the widespread availability of the national hospital database, its use as an instrument in the evaluation of the quality of AMI medical care is pertinent.
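
    As a minimal illustration of the kappa agreement index mentioned above, the sketch below compares two hypothetical codings of a single variable; the data are invented and do not come from the study.

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical codings of one variable (e.g., sex) in the admission form vs. the medical record
    form_coded   = ["M", "F", "M", "M", "F", "F", "M", "F", "M", "M"]
    record_coded = ["M", "F", "M", "F", "F", "F", "M", "F", "M", "M"]
    print(cohen_kappa_score(form_coded, record_coded))  # 1.0 would indicate perfect agreement
    ```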

  6. Small area estimation for semicontinuous data.

    PubMed

    Chandra, Hukum; Chambers, Ray

    2016-03-01

    Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
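
    A stripped-down sketch of the two-part idea follows: a logit model for whether the response is strictly positive, and a linear model on the logarithmic scale for the positive values. The area-level random effects and the small area estimation machinery of the record are omitted, and the data are simulated.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 2000
    x = rng.normal(size=n)
    p_pos = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))              # chance of a nonzero response
    is_pos = rng.uniform(size=n) < p_pos
    y = np.where(is_pos, np.exp(1.0 + 0.8 * x + rng.normal(scale=0.5, size=n)), 0.0)

    X = sm.add_constant(x)
    part1 = sm.Logit((y > 0).astype(int), X).fit(disp=0)        # part 1: P(y > 0)
    part2 = sm.OLS(np.log(y[y > 0]), X[y > 0]).fit()            # part 2: log(y) given y > 0
    print(part1.params)
    print(part2.params)
    ```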

  7. Vulnerability and Tradeoffs of Dairy Farmers to the Impacts of Climate Variability and Change in India

    NASA Astrophysics Data System (ADS)

    Radhakrishnan, A.; Gupta, J.; R, D.

    2016-12-01

    In recent years climate variability has threatened the sustainability of dairy animals and dairy farming in India. The study aims at assessing the vulnerability and tradeoffs of dairy-based livelihoods to climate variability and change in the Western Ghat ecosystem. For this purpose, data were aggregated into an overall Livelihood Vulnerability Index (LVI) to climate change following the IPCC principles, using 28 indicators, and the trade-off between vulnerability and milk production was calculated. Data were collected through Participatory Rural Appraisal and personal interviews from 360 randomly selected dairy farmers of three states of the Western Ghat region, complemented by thirty years of gridded weather data and livestock data. The index scores of dairy-based livelihoods in many regions were negative. Lanja taluka of Maharashtra had the highest level of vulnerability, with an overall LVI value of -4.17 and 48% of farmers falling in the highly vulnerable category. There is also a significant tradeoff between milk production and components of the LVI. Thus our research provides an important basis for policy makers to develop appropriate adaptation strategies for this alarming situation and for farmers to make decisions that minimize the risk of the dairy sector to climate variability.

  8. Effects of mobile phone exposure (GSM 900 and WCDMA/UMTS) on polysomnography based sleep quality: An intra- and inter-individual perspective.

    PubMed

    Danker-Hopfe, Heidi; Dorn, Hans; Bolz, Thomas; Peter, Anita; Hansen, Marie-Luise; Eggert, Torsten; Sauter, Cornelia

    2016-02-01

    Studies on effects of radio frequency-electromagnetic fields (RF-EMF) on the macrostructure of sleep so far yielded inconsistent results. This study investigated whether possible effects of RF-EMF exposure differ between individuals. In a double-blind, randomized, sham-controlled cross-over study possible effects of electromagnetic fields emitted by pulsed Global System for Mobile Communications (GSM) 900 and Wideband Code-Division Multiple Access (WCDMA)/Universal Mobile Telecommunications System (WCDMA/UMTS) devices on sleep were analysed. Thirty healthy young men (range 18-30 years) were exposed three times per exposure condition while their sleep was recorded. Sleep was evaluated according to the American Academy of Sleep Medicine standard and eight basic sleep variables were considered. Data analyses at the individual level indicate that RF-EMF effects are observed in 90% of the individuals and that all sleep variables are affected in at least four subjects. While sleep of participants was affected in various numbers, combinations of sleep variables and in different directions, showing improvements but also deteriorations, the only consistent finding was an increase of stage R sleep under GSM 900MHz exposure (9 of 30 subjects) as well as under WCDMA/UMTS exposure (10 of 30 subjects). The results underline that sleep of individuals can be affected differently. The observations found here may indicate an underlying thermal mechanism of RF-EMF on human REM sleep. Nevertheless, the effect of an increase in stage R sleep in one third of the individuals does not necessarily indicate a disturbance of sleep. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Variable versus conventional lung protective mechanical ventilation during open abdominal surgery: study protocol for a randomized controlled trial.

    PubMed

    Spieth, Peter M; Güldner, Andreas; Uhlig, Christopher; Bluth, Thomas; Kiss, Thomas; Schultz, Marcus J; Pelosi, Paolo; Koch, Thea; Gama de Abreu, Marcelo

    2014-05-02

    General anesthesia usually requires mechanical ventilation, which is traditionally accomplished with constant tidal volumes in volume- or pressure-controlled modes. Experimental studies suggest that the use of variable tidal volumes (variable ventilation) recruits lung tissue, improves pulmonary function and reduces the systemic inflammatory response. However, it is currently not known whether patients undergoing open abdominal surgery might benefit from intraoperative variable ventilation. The PROtective VARiable ventilation trial ('PROVAR') is a single center, randomized controlled trial enrolling 50 patients scheduled for open abdominal surgery expected to last longer than 3 hours. PROVAR compares conventional (non-variable) lung protective ventilation (CV) with variable lung protective ventilation (VV) regarding pulmonary function and inflammatory response. The primary endpoint of the study is the forced vital capacity on the first postoperative day. Secondary endpoints include further lung function tests, plasma cytokine levels, spatial distribution of ventilation assessed by means of electrical impedance tomography and postoperative pulmonary complications. We hypothesize that VV improves lung function and reduces the systemic inflammatory response compared to CV in patients receiving mechanical ventilation during general anesthesia for open abdominal surgery lasting longer than 3 hours. PROVAR is the first randomized controlled trial aiming at intra- and postoperative effects of VV on lung function. This study may help to define the role of VV during general anesthesia requiring mechanical ventilation. Clinicaltrials.gov NCT01683578 (registered on September 3, 2012).

  10. A Random Variable Transformation Process.

    ERIC Educational Resources Information Center

    Scheuermann, Larry

    1989-01-01

    Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
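
    The BASIC listing itself is not reproduced in the record, but the same inverse-transform idea can be sketched for two of the listed distributions, exponential and triangular; the parameter values below are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    U = rng.uniform(size=100_000)

    # Exponential with rate lam: invert F(x) = 1 - exp(-lam * x)
    lam = 2.0
    exponential = -np.log(1.0 - U) / lam

    # Triangular with minimum a, mode c, maximum b: invert the piecewise CDF
    a, c, b = 0.0, 2.0, 5.0
    Fc = (c - a) / (b - a)
    triangular = np.where(
        U < Fc,
        a + np.sqrt(U * (b - a) * (c - a)),
        b - np.sqrt((1.0 - U) * (b - a) * (b - c)),
    )
    print(exponential.mean(), triangular.mean())  # should be near 1/lam and (a + b + c)/3
    ```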

  11. A Cautious Note on Auxiliary Variables That Can Increase Bias in Missing Data Problems.

    PubMed

    Thoemmes, Felix; Rose, Norman

    2014-01-01

    The treatment of missing data in the social sciences has changed tremendously during the last decade. Modern missing data techniques such as multiple imputation and full-information maximum likelihood are used much more frequently. These methods assume that data are missing at random. One very common approach to increase the likelihood that missing at random is achieved consists of including many covariates as so-called auxiliary variables. These variables are either included based on data considerations or in an inclusive fashion; that is, taking all available auxiliary variables. In this article, we point out that there are some instances in which auxiliary variables exhibit the surprising property of increasing bias in missing data problems. In a series of focused simulation studies, we highlight some situations in which this type of biasing behavior can occur. We briefly discuss possible ways how one can avoid selecting bias-inducing covariates as auxiliary variables.

  12. Detecting Random, Partially Random, and Nonrandom Minnesota Multiphasic Personality Inventory-2 Protocols

    ERIC Educational Resources Information Center

    Pinsoneault, Terry B.

    2007-01-01

    The ability of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2; J. N. Butcher et al., 2001) validity scales to detect random, partially random, and nonrandom MMPI-2 protocols was investigated. Investigations included the Variable Response Inconsistency scale (VRIN), F, several potentially useful new F and VRIN subscales, and F-sub(b) - F…

  13. Postural stability effects of random vibration at the feet of construction workers in simulated elevation.

    PubMed

    Simeonov, P; Hsiao, H; Powers, J; Ammons, D; Kau, T; Amendola, A

    2011-07-01

    The risk of falls from height on a construction site increases under conditions which degrade workers' postural control. At elevation, workers depend heavily on sensory information from their feet to maintain balance. The study tested two hypotheses: "sensory enhancement"--sub-sensory (undetectable) random mechanical vibrations at the plantar surface of the feet can improve worker's balance at elevation; and "sensory suppression"--supra-sensory (detectable) random mechanical vibrations can have a degrading effect on balance in the same experimental settings. Six young (age 20-35) and six aging (age 45-60) construction workers were tested while standing in standard and semi-tandem postures on instrumented gel insoles. The insoles applied sub- or supra-sensory levels of random mechanical vibrations to the feet. The tests were conducted in a surround-screen virtual reality system, which simulated a narrow plank at elevation on a construction site. Upper body kinematics was assessed with a motion-measurement system. Postural stability effects were evaluated by conventional and statistical mechanics sway measures, as well as trunk angular displacement parameters. Analysis of variance did not confirm the "sensory enhancement" hypothesis, but provided evidence for the "sensory suppression" hypothesis. The supra-sensory vibration had a destabilizing effect, which was considerably stronger in the semi-tandem posture and affected most of the sway variables. Sensory suppression associated with elevated vibration levels on a construction site may increase the danger of losing balance. Construction workers at elevation, e.g., on a beam or narrow plank might be at increased risk of fall if they can detect vibrations under their feet. To reduce the possibility of losing balance, mechanical vibration to supporting structures used as walking/working surfaces should be minimized when performing construction tasks at elevation. Published by Elsevier Ltd.

  14. Timing matters: change depends on the stage of treatment in cognitive behavioral therapy for panic disorder with agoraphobia.

    PubMed

    Gloster, Andrew T; Klotsche, Jens; Gerlach, Alexander L; Hamm, Alfons; Ströhle, Andreas; Gauggel, Siegfried; Kircher, Tilo; Alpers, Georg W; Deckert, Jürgen; Wittchen, Hans-Ulrich

    2014-02-01

    The mechanisms of action underlying treatment are inadequately understood. This study examined 5 variables implicated in the treatment of panic disorder with agoraphobia (PD/AG): catastrophic agoraphobic cognitions, anxiety about bodily sensations, agoraphobic avoidance, anxiety sensitivity, and psychological flexibility. The relative importance of these process variables was examined across treatment phases: (a) psychoeducation/interoceptive exposure, (b) in situ exposure, and (c) generalization/follow-up. Data came from a randomized controlled trial of cognitive behavioral therapy for PD/AG (n = 301). Outcomes were the Panic and Agoraphobia Scale (Bandelow, 1995) and functioning as measured in the Clinical Global Impression scale (Guy, 1976). The effect of process variables on subsequent change in outcome variables was calculated using bivariate latent difference score modeling. Change in panic symptomatology was preceded by catastrophic appraisal and agoraphobic avoidance across all phases of treatment, by anxiety sensitivity during generalization/follow-up, and by psychological flexibility during exposure in situ. Change in functioning was preceded by agoraphobic avoidance and psychological flexibility across all phases of treatment, by fear of bodily symptoms during generalization/follow-up, and by anxiety sensitivity during exposure. The effects of process variables on outcomes differ across treatment phases and outcomes (i.e., symptomatology vs. functioning). Agoraphobic avoidance and psychological flexibility should be investigated and therapeutically targeted in addition to cognitive variables. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  15. Reliability Overhaul Model

    DTIC Science & Technology

    1989-08-01

    Random variables for the conditional exponential distribution are generated using the inverse transform method: (1) generate U ~ U(0,1); (2) set s = -A ln U. Random variables from the conditional Weibull distribution are generated using the inverse transform method, by solving U = exp{-[(x+s-γ)/η]^β + [(x-γ)/η]^β} for the additional life s. …generated using a standard normal transformation and the inverse transform method (Appendix B: Distributions Supported by the Model).
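
    A sketch of the two conditional inverse-transform generators excerpted above is given below: the remaining life s of an item that has already survived to age x, for an exponential and for a Weibull life distribution. The parameter values are assumptions; the Weibull step simply solves the conditional survivor function for s.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    U = rng.uniform(size=100_000)

    # Conditional exponential (memoryless): additional life given survival to age x
    A = 2.0                       # scale (mean) of the exponential
    s_exponential = -A * np.log(U)

    # Conditional Weibull: solve exp(-((x+s-g)/eta)**beta + ((x-g)/eta)**beta) = U for s
    beta, eta, g, x = 1.8, 10.0, 0.0, 4.0
    s_weibull = eta * (((x - g) / eta) ** beta - np.log(U)) ** (1.0 / beta) - (x - g)

    print(s_exponential.mean(), s_weibull.mean())
    ```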

  16. Performance of DS/SSMA (Direct-Sequence Spread-Spectrum Multiple-Access) Communications in Impulsive Channels.

    DTIC Science & Technology

    1986-11-01

    …mother and my brother. Their support and encouragement made this research exciting and enjoyable. I am grateful to my advisor, Professor H. Vincent Poor… The variance of a random variable with the density given by (A.1) is given by (A.2). With the variance of the random variable…

  17. A New Approach to Extreme Value Estimation Applicable to a Wide Variety of Random Variables

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.

    1997-01-01

    Designing reliable structures requires an estimate of the maximum and minimum values (i.e., strength and load) that may be encountered in service. Yet designs based on very extreme values (to ensure safety) can result in extra material usage and hence uneconomic systems. In aerospace applications, severe over-design cannot be tolerated, making it almost mandatory to design closer to the assumed limits of the design random variables. The issue then is predicting extreme values that are practical, i.e., neither too conservative nor nonconservative. Obtaining design values by employing safety factors is well known to often result in overly conservative designs. Safety factor values have historically been selected rather arbitrarily, often lacking a sound rational basis. Answering the question of how safe a design needs to be has led design theorists to probabilistic and statistical methods. The so-called three-sigma approach is one such method and has been described as the first step in utilizing information about the data dispersion. However, this method is based on the assumption that the random variable is dispersed symmetrically about the mean and is essentially limited to normally distributed random variables. Use of this method can therefore result in unsafe or overly conservative design allowables if the common assumption of normality is incorrect.
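
    A small numerical illustration of the three-sigma design value discussed above, and of the normal-distribution quantile it implicitly targets, is sketched below; the strength data are simulated and normally distributed by construction, which is exactly the assumption the passage questions.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    strength = rng.normal(loc=100.0, scale=5.0, size=100_000)  # hypothetical strength data

    mu, sigma = strength.mean(), strength.std(ddof=1)
    allowable_3sigma = mu - 3.0 * sigma                  # classical three-sigma lower design value
    empirical_quantile = np.quantile(strength, 0.00135)  # the quantile 3-sigma targets under normality
    print(allowable_3sigma, empirical_quantile)          # close here, but only because the data are normal
    ```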

  18. On Probability Domains IV

    NASA Astrophysics Data System (ADS)

    Frič, Roman; Papčo, Martin

    2017-12-01

    Stressing a categorical approach, we continue our study of fuzzified domains of probability, in which classical random events are replaced by measurable fuzzy random events. In operational probability theory (S. Bugajski) classical random variables are replaced by statistical maps (generalized distribution maps induced by random variables) and in fuzzy probability theory (S. Gudder) the central role is played by observables (maps between probability domains). We show that to each of the two generalized probability theories there corresponds a suitable category and the two resulting categories are dually equivalent. Statistical maps and observables become morphisms. A statistical map can send a degenerated (pure) state to a non-degenerated one —a quantum phenomenon and, dually, an observable can map a crisp random event to a genuine fuzzy random event —a fuzzy phenomenon. The dual equivalence means that the operational probability theory and the fuzzy probability theory coincide and the resulting generalized probability theory has two dual aspects: quantum and fuzzy. We close with some notes on products and coproducts in the dual categories.

  19. A comparison of retrobulbar block, sub-Tenon block, and topical anesthesia during cataract surgery.

    PubMed

    Ryu, Jung-Hee; Kim, Minsuk; Bahk, Jae-Hyon; Do, Sang-Hwan; Cheong, Il-Young; Kim, Yong-Chul

    2009-01-01

    This randomized, double-blinded, prospective study was performed to compare the intraoperative hemodynamic variables and the patient-reported outcomes, such as intra- and postoperative analgesia and patient satisfaction, of retrobulbar block, sub-Tenon block, and topical anesthesia during cataract surgery under monitored anesthesia care. Eighty-one patients, ASA physical status I-III, undergoing elective cataract surgery under monitored anesthesia care, aged between 43 and 78 years, were randomly assigned to three groups: retrobulbar block (group R), sub-Tenon block (group S), or topical anesthesia (group T). Three minutes after the start of monitored anesthesia care with lidocaine-propofol-remifentanil mixture, an ophthalmologist performed regional anesthesia. Intraoperative hemodynamics, pain score, and patients' satisfaction with the anesthetic experiences were recorded by a study-blinded anesthesiologist. Mean arterial pressure and heart rate in group R were significantly higher than those in groups S and T during and just after the regional block (p<0.05). Group R required smaller dosage of patient controlled sedation and fewer supplemental bolus doses than groups S and T (p<0.05). On the other hand, group S showed the highest satisfaction scores among the three groups (p<0.05). Sub-Tenon block seems to be better than retrobulbar block and topical anesthesia in patient satisfaction though adequate analgesia was achieved after retrobulbar block during cataract surgery under monitored anesthesia care.

  20. Logit-normal mixed model for Indian Monsoon rainfall extremes

    NASA Astrophysics Data System (ADS)

    Dietz, L. R.; Chatterjee, S.

    2014-03-01

    Describing the nature and variability of Indian monsoon rainfall extremes is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Several GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data procured from the National Climatic Data Center. The logit-normal model was applied with fixed covariates of latitude, longitude, elevation, daily minimum and maximum temperatures with a random intercept by weather station. In general, the estimation methods concurred in their suggestion of a relationship between the El Niño Southern Oscillation (ENSO) and extreme rainfall variability estimates. This work provides a valuable starting point for extending GLMM to incorporate the intricate dependencies in extreme climate events.
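
    The structure of a logit-normal mixed model can be sketched as follows: a station-specific random intercept on the logit scale is added to fixed covariate effects, and daily extreme-rainfall indicators are drawn from the resulting probabilities. The covariate, coefficients, and variance below are invented, and no estimation algorithm is shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n_stations, n_days = 50, 365
    station_effect = rng.normal(scale=0.7, size=n_stations)  # random intercepts b_i ~ N(0, sigma^2)
    elevation = rng.uniform(0.0, 2.5, size=n_stations)       # one standardized fixed covariate

    beta0, beta1 = -3.0, 0.4
    logits = beta0 + beta1 * elevation[:, None] + station_effect[:, None]
    p_extreme = 1.0 / (1.0 + np.exp(-logits))                # logit-normal mixed structure

    extreme = rng.uniform(size=(n_stations, n_days)) < p_extreme  # daily extreme-rainfall indicators
    print(extreme.mean(axis=1)[:5])  # per-station frequencies of extreme days
    ```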

  1. Equity in specialist waiting times by socioeconomic groups: evidence from Spain.

    PubMed

    Abásolo, Ignacio; Negrín-Hernández, Miguel A; Pinilla, Jaime

    2014-04-01

    In countries with publicly financed health care systems, waiting time--rather than price--is the rationing mechanism for access to health care services. The normative statement underlying such a rationing device is that patients should wait according to need and irrespective of socioeconomic status or other non-need characteristics. The aim of this paper is to test empirically whether waiting times for publicly funded specialist care depend on patients' socioeconomic status. Waiting times for specialist care can vary according to the type of medical specialty, the type of consultation (review or diagnosis) and the region where patients reside. In order to take such variability into account, we use Bayesian random parameter models to explain waiting times for specialist care in terms of need and non-need variables. We find that individuals with lower education and income levels wait significantly longer than their counterparts.

  2. Spatiotemporal dynamics of random stimuli account for trial-to-trial variability in perceptual decision making

    PubMed Central

    Park, Hame; Lueckmann, Jan-Matthis; von Kriegstein, Katharina; Bitzer, Sebastian; Kiebel, Stefan J.

    2016-01-01

    Decisions in everyday life are prone to error. Standard models typically assume that errors during perceptual decisions are due to noise. However, it is unclear how noise in the sensory input affects the decision. Here we show that there are experimental tasks for which one can analyse the exact spatio-temporal details of a dynamic sensory noise and better understand variability in human perceptual decisions. Using a new experimental visual tracking task and a novel Bayesian decision making model, we found that the spatio-temporal noise fluctuations in the input of single trials explain a significant part of the observed responses. Our results show that modelling the precise internal representations of human participants helps predict when perceptual decisions go wrong. Furthermore, by modelling precisely the stimuli at the single-trial level, we were able to identify the underlying mechanism of perceptual decision making in more detail than standard models. PMID:26752272

  3. Uncertainty quantification in Eulerian-Lagrangian models for particle-laden flows

    NASA Astrophysics Data System (ADS)

    Fountoulakis, Vasileios; Jacobs, Gustaaf; Udaykumar, Hs

    2017-11-01

    A common approach to ameliorate the computational burden in simulations of particle-laden flows is to use a point-particle based Eulerian-Lagrangian model, which traces individual particles in their Lagrangian frame and models particles as mathematical points. The particle motion is determined by Stokes drag law, which is empirically corrected for Reynolds number, Mach number and other parameters. The empirical corrections are subject to uncertainty. Treating them as random variables renders the coupled system of PDEs and ODEs stochastic. An approach to quantify the propagation of this parametric uncertainty to the particle solution variables is proposed. The approach is based on averaging of the governing equations and allows for estimation of the first moments of the quantities of interest. We demonstrate the feasibility of our proposed methodology of uncertainty quantification of particle-laden flows on one-dimensional linear and nonlinear Eulerian-Lagrangian systems. This research is supported by AFOSR under Grant FA9550-16-1-0008.
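
    The propagation of parametric uncertainty can be sketched by brute-force Monte Carlo, even though the record describes an averaging approach: draw the empirical drag-correction factor from an assumed distribution, integrate the corrected Stokes-drag particle equation for each draw, and estimate the first moments of the particle velocity. Relaxation time, fluid velocity and the correction-factor distribution below are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(1)

        tau_p = 0.05          # particle response time [s] (assumed)
        u_fluid = 1.0         # constant carrier-fluid velocity [m/s] (assumed)
        dt, n_steps = 1e-3, 500
        n_samples = 2000      # Monte Carlo draws of the drag correction

        # uncertain empirical correction factor f ~ N(1.0, 0.1), truncated away from zero
        f = np.clip(rng.normal(1.0, 0.1, n_samples), 1e-3, None)

        v = np.zeros(n_samples)            # particle velocities, all starting from rest
        for _ in range(n_steps):
            # corrected Stokes drag: dv/dt = f * (u_fluid - v) / tau_p
            v += dt * f * (u_fluid - v) / tau_p

        print("mean particle velocity:", v.mean())
        print("std  particle velocity:", v.std())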

  4. The assisted prediction modelling frame with hybridisation and ensemble for business risk forecasting and an implementation

    NASA Astrophysics Data System (ADS)

    Li, Hui; Hong, Lu-Yao; Zhou, Qing; Yu, Hai-Jie

    2015-08-01

    The business failure of numerous companies results in financial crises. The high social costs associated with such crises have made people search for effective tools for business risk prediction, among which the support vector machine is very effective. Several modelling means, including single-technique modelling, hybrid modelling, and ensemble modelling, have been suggested for forecasting business risk with the support vector machine. However, the existing literature seldom focuses on a general modelling frame for business risk prediction, and seldom investigates performance differences among different modelling means. We reviewed research on forecasting business risk with the support vector machine, proposed the general assisted prediction modelling frame with hybridisation and ensemble (APMF-WHAE), and finally investigated the use of principal components analysis, support vector machine, random sampling, and group decision under the general frame in forecasting business risk. Under the APMF-WHAE frame with the support vector machine as the base predictive model, four specific predictive models were produced, namely, a pure support vector machine, a hybrid support vector machine involving principal components analysis, a support vector machine ensemble involving random sampling and group decision, and an ensemble of hybrid support vector machines using group decision to integrate various hybrid support vector machines built on variables produced from principal components analysis and samples from random sampling. The experimental results indicate that the hybrid support vector machine and the ensemble of hybrid support vector machines produced better performance than the pure support vector machine and the support vector machine ensemble.
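
    A scikit-learn sketch of two of the four model variants named above, a hybrid support vector machine preceded by principal components analysis and a bagged ensemble of such hybrids built from random samples with a group (majority) decision. Synthetic data stand in for the business-risk dataset, which is not available here.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.decomposition import PCA
        from sklearn.ensemble import BaggingClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # synthetic stand-in for financial-ratio data (assumption: 30 ratios, 2 classes)
        X, y = make_classification(n_samples=600, n_features=30, n_informative=10,
                                   random_state=0)

        # hybrid model: PCA feature extraction followed by an SVM
        hybrid_svm = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))

        # ensemble of hybrids: random sampling of cases + majority (group) decision
        ensemble = BaggingClassifier(hybrid_svm, n_estimators=25, max_samples=0.8,
                                     random_state=0)

        for name, model in [("hybrid SVM", hybrid_svm), ("ensemble of hybrids", ensemble)]:
            score = cross_val_score(model, X, y, cv=5).mean()
            print(f"{name:>20s}: CV accuracy = {score:.3f}")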

  5. Tests of Hypotheses Arising In the Correlated Random Coefficient Model*

    PubMed Central

    Heckman, James J.; Schmierer, Daniel

    2010-01-01

    This paper examines the correlated random coefficient model. It extends the analysis of Swamy (1971), who pioneered the uncorrelated random coefficient model in economics. We develop the properties of the correlated random coefficient model and derive a new representation of the variance of the instrumental variable estimator for that model. We develop tests of the validity of the correlated random coefficient model against the null hypothesis of the uncorrelated random coefficient model. PMID:21170148

  6. Latin Hypercube Sampling (LHS) UNIX Library/Standalone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2004-05-13

    The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions, so LHS UNIX Library/Standalone provides a way to generate multivariate samples. The LHS samples can be generated either through a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability, and a sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, the values obtained for each are paired in a random manner with the n values of the other variables; in some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty and sensitivity analysis, random values are drawn from the input parameter distributions and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
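
    The LHS library itself is distributed through OSTI/DAKOTA; as a stand-in, the sketch below reproduces the stratified idea with SciPy's qmc module (available in SciPy 1.7 and later): draw a Latin hypercube in the unit cube, map each column through the inverse CDF of its target distribution, run a toy simulation on the samples, and examine input-output correlations. The distribution choices and the toy model are assumptions.

        import numpy as np
        from scipy.stats import qmc, norm, uniform, lognorm

        n, d = 100, 3
        sampler = qmc.LatinHypercube(d=d, seed=0)
        u = sampler.random(n)          # n points in [0,1)^d, one equal-probability stratum per variable

        # map each column through an inverse CDF (distribution choices are illustrative)
        x1 = norm.ppf(u[:, 0], loc=10.0, scale=2.0)      # normal input
        x2 = uniform.ppf(u[:, 1], loc=0.0, scale=5.0)    # uniform input
        x3 = lognorm.ppf(u[:, 2], s=0.5)                 # lognormal input
        samples = np.column_stack([x1, x2, x3])

        # run a toy "simulation code" on each sample and study input/output correlation
        output = 2.0 * x1 - 0.5 * x2 + np.sqrt(x3)       # toy model (assumption)
        print(np.corrcoef(samples.T, output)[-1, :-1])   # correlation of output with each input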

  7. Assessing differences in groups randomized by recruitment chain in a respondent-driven sample of Seattle-area injection drug users.

    PubMed

    Burt, Richard D; Thiede, Hanne

    2014-11-01

    Respondent-driven sampling (RDS) is a form of peer-based study recruitment and analysis that incorporates features designed to limit and adjust for biases in traditional snowball sampling. It is being widely used in studies of hidden populations. We report an empirical evaluation of RDS's consistency and variability, comparing groups recruited contemporaneously, by identical methods and using identical survey instruments. We randomized recruitment chains from the RDS-based 2012 National HIV Behavioral Surveillance survey of injection drug users in the Seattle area into two groups and compared them in terms of sociodemographic characteristics, drug-associated risk behaviors, sexual risk behaviors, human immunodeficiency virus (HIV) status and HIV testing frequency. The two groups differed in five of the 18 variables examined (P ≤ .001): race (e.g., 60% white vs. 47%), gender (52% male vs. 67%), area of residence (32% downtown Seattle vs. 44%), an HIV test in the previous 12 months (51% vs. 38%). The difference in serologic HIV status was particularly pronounced (4% positive vs. 18%). In four further randomizations, differences in one to five variables attained this level of significance, although the specific variables involved differed. We found some material differences between the randomized groups. Although the variability of the present study was less than has been reported in serial RDS surveys, these findings indicate caution in the interpretation of RDS results. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Type 2 diabetes and cardiac autonomic neuropathy screening using dynamic pupillometry

    PubMed Central

    Lerner, Alana G.; Bernabé-Ortiz, Antonio; Ticse, Ray; Hernandez, Arturo; Huaylinos, Yvonne; Pinto, Miguel E.; Málaga, Germán; Checkley, William; Gilman, Robert H.; Miranda, J. Jaime

    2015-01-01

    Aim To determine if changes in pupillary response are useful as a screening tool for diabetes and to assess whether pupillometry is associated with cardiac autonomic neuropathy. Methods We conducted a cross-sectional study with participants drawn from two settings: a hospital and a community site. At the community site, individuals with newly diagnosed diabetes as well as a random sample of control individuals without diabetes, confirmed by oral glucose tolerance test, were selected. Participants underwent an LED light stimulus test and eight pupillometry variables were measured. Outcomes were diabetes, defined by oral glucose tolerance test, and cardiac autonomic dysfunction, determined by a positive readout on two of four diagnostic tests: heart rate response to the Valsalva manoeuvre; orthostatic hypotension; 30:15 ratio; and expiration-to-inspiration ratio. The area under the curve, best threshold, sensitivity and specificity of each pupillometry variable was calculated. Results Data from 384 people, 213 with diabetes, were analysed. The mean (±SD) age of the people with diabetes was 58.6 (±8.2) years and in the control subjects it was 56.1 (±8.6) years. When comparing individuals with and without diabetes, the amplitude of the pupil reaction had the highest area under the curve [0.69 (sensitivity: 78%; specificity: 55%)]. Cardiac autonomic neuropathy was present in 51 of the 138 people evaluated (37.0%; 95% CI 28.8–45.1). To diagnose cardiac autonomic neuropathy, two pupillometry variables had the highest area under the curve: baseline pupil radius [area under the curve: 0.71 (sensitivity: 51%; specificity: 84%)], and amplitude of the pupil reaction [area under the curve: 0.70 (sensitivity: 82%; specificity: 55%)]. Conclusions Pupillometry is an inexpensive technique to screen for diabetes and cardiac autonomic neuropathy, but it does not have sufficient accuracy for clinical use as a screening tool. PMID:25761508

  9. Type 2 diabetes and cardiac autonomic neuropathy screening using dynamic pupillometry.

    PubMed

    Lerner, A G; Bernabé-Ortiz, A; Ticse, R; Hernandez, A; Huaylinos, Y; Pinto, M E; Málaga, G; Checkley, W; Gilman, R H; Miranda, J J

    2015-11-01

    To determine if changes in pupillary response are useful as a screening tool for diabetes and to assess whether pupillometry is associated with cardiac autonomic neuropathy. We conducted a cross-sectional study with participants drawn from two settings: a hospital and a community site. At the community site, individuals with newly diagnosed diabetes as well as a random sample of control individuals without diabetes, confirmed by oral glucose tolerance test, were selected. Participants underwent an LED light stimulus test and eight pupillometry variables were measured. Outcomes were diabetes, defined by oral glucose tolerance test, and cardiac autonomic dysfunction, determined by a positive readout on two of four diagnostic tests: heart rate response to the Valsalva manoeuvre; orthostatic hypotension; 30:15 ratio; and expiration-to-inspiration ratio. The area under the curve, best threshold, sensitivity and specificity of each pupillometry variable was calculated. Data from 384 people, 213 with diabetes, were analysed. The mean (±sd) age of the people with diabetes was 58.6 (±8.2) years and in the control subjects it was 56.1 (±8.6) years. When comparing individuals with and without diabetes, the amplitude of the pupil reaction had the highest area under the curve [0.69 (sensitivity: 78%; specificity: 55%)]. Cardiac autonomic neuropathy was present in 51 of the 138 people evaluated (37.0%; 95% CI 28.8-45.1). To diagnose cardiac autonomic neuropathy, two pupillometry variables had the highest area under the curve: baseline pupil radius [area under the curve: 0.71 (sensitivity: 51%; specificity: 84%)], and amplitude of the pupil reaction [area under the curve: 0.70 (sensitivity: 82%; specificity: 55%)]. Pupillometry is an inexpensive technique to screen for diabetes and cardiac autonomic neuropathy, but it does not have sufficient accuracy for clinical use as a screening tool. © 2015 The Authors. Diabetic Medicine © 2015 Diabetes UK.

  10. Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes

    NASA Astrophysics Data System (ADS)

    Orsingher, Enzo; Polito, Federico

    2012-08-01

    In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes N_α(t), N_β(t), t > 0, we have that N_α(N_β(t)) =_d ∑_{j=1}^{N_β(t)} X_j, where the X_j are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form N_α(τ_k^ν), ν ∈ (0,1], where τ_k^ν is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form Θ(N(t)), t > 0, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.

  11. Variable selection under multiple imputation using the bootstrap in a prognostic study

    PubMed Central

    Heymans, Martijn W; van Buuren, Stef; Knol, Dirk L; van Mechelen, Willem; de Vet, Henrica CW

    2007-01-01

    Background Missing data is a challenging problem in many prognostic studies. Multiple imputation (MI) accounts for imputation uncertainty and allows for adequate statistical testing. We developed and tested a methodology combining MI with bootstrapping techniques for studying prognostic variable selection. Method In our prospective cohort study we merged data from three different randomized controlled trials (RCTs) to assess prognostic variables for chronicity of low back pain. Among the outcome and prognostic variables, the proportion of missing data ranged from 0% to 48.1%. We used four methods to investigate the influence of sampling and imputation variation, respectively: MI only, bootstrap only, and two methods that combine MI and bootstrapping. Variables were selected based on the inclusion frequency of each prognostic variable, i.e. the proportion of times that the variable appeared in the model. The discriminative and calibrative abilities of prognostic models developed by the four methods were assessed at different inclusion levels. Results We found that the effect of imputation variation on the inclusion frequency was larger than the effect of sampling variation. When MI and bootstrapping were combined over the range of 0% (full model) to 90% variable selection, bootstrap-corrected c-index values of 0.70 to 0.71 and slope values of 0.64 to 0.86 were found. Conclusion We recommend accounting for both imputation and sampling variation in data sets with missing values. The new procedure of combining MI with bootstrapping for variable selection results in multivariable prognostic models with good performance and is therefore attractive to apply to data sets with missing values. PMID:17629912
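
    A simplified sketch of the combined idea, impute, resample, refit, and record how often each predictor is selected, using scikit-learn. The original study uses chained-equation MI with stepwise selection; here a single stochastic imputer and an L1-penalised logistic model stand in for those steps, so this illustrates inclusion frequency rather than reproducing the published procedure.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X, y = make_classification(n_samples=300, n_features=8, n_informative=3, random_state=0)
        X[rng.random(X.shape) < 0.2] = np.nan     # ~20% missing values (assumption)

        n_rounds, inclusion = 50, np.zeros(X.shape[1])
        for b in range(n_rounds):
            # imputation variation: stochastic imputation with a different seed each round
            X_imp = IterativeImputer(sample_posterior=True, random_state=b).fit_transform(X)
            # sampling variation: bootstrap resample of the imputed data
            idx = rng.integers(0, len(y), len(y))
            # variable "selection": predictors kept by an L1-penalised logistic model
            model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
            model.fit(X_imp[idx], y[idx])
            inclusion += (np.abs(model.coef_[0]) > 1e-8)

        print("inclusion frequency per predictor:", (inclusion / n_rounds).round(2))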

  12. Spatiotemporal variability in wildfire patterns and analysis of the main drivers in Honduras using GIS and MODIS data

    NASA Astrophysics Data System (ADS)

    Valdez Vasquez, M. C.; Chen, C. F.

    2017-12-01

    Wildfires are unrestrained fires in areas of flammable vegetation and are one of the most frequent disasters in Honduras during the dry season. During this period, anthropogenic activity combined with harsh climatic conditions, dry vegetation and topographical variables causes a large number of wildfires. For this reason, there is a need to identify the drivers of wildfires and their susceptibility variations during the wildfire season. In this study, we combined the wildfire points for every 8-day period of 2010-2016 with a series of variables using the random forest (RF) algorithm. In addition to the wildfire points, we randomly generated a similar number of background points that we used as pseudo-absence data. To represent the human imprint, we included proximity to different types of roads, trails, settlements and agriculture sites. Other variables included are the Moderate Resolution Imaging Spectroradiometer (MODIS)-derived 8-day composites of land surface temperature (LST) and the normalized multi-band drought index (NMDI), derived from the MODIS surface reflectance data. We also included monthly average precipitation, solar radiation, and topographical variables. The exploratory analysis of the variables reveals that low precipitation combined with low NMDI and accessibility to non-paved roads were the major drivers of wildfires during the early months of the dry season. During April, which is the peak of the dry season, the relevant explanatory variables also included elevation and LST in addition to the proximity to paved and non-paved roads. During May, proximity to crops becomes relevant, in addition to the aforesaid variables. The average estimated area with high and very high wildfire susceptibility was 22% of the whole territory, located mainly in the central and eastern regions and drifting towards the northeast areas during May. We validated the results using the area under the receiver operating characteristic (ROC) curve (AUC) for each 8-day period, and the average AUC obtained on independent test data was acceptable. We suggest that the 8-day frequency spatiotemporal mapping of wildfire patterns and the identification of the most relevant drivers can lead to localized prevention and control actions in specific time-frames in areas of high wildfire susceptibility.

  13. Classification of California streams using combined deductive and inductive approaches: Setting the foundation for analysis of hydrologic alteration

    USGS Publications Warehouse

    Pyne, Matthew I.; Carlisle, Daren M.; Konrad, Christopher P.; Stein, Eric D.

    2017-01-01

    Regional classification of streams is an early step in the Ecological Limits of Hydrologic Alteration framework. Many stream classifications are based on an inductive approach using hydrologic data from minimally disturbed basins, but this approach may underrepresent streams from heavily disturbed basins or sparsely gaged arid regions. An alternative is a deductive approach, using watershed climate, land use, and geomorphology to classify streams, but this approach may miss important hydrological characteristics of streams. We classified all stream reaches in California using both approaches. First, we used Bayesian and hierarchical clustering to classify reaches according to watershed characteristics. Streams were clustered into seven classes according to elevation, sedimentary rock, and winter precipitation. Permutation-based analysis of variance and random forest analyses were used to determine which hydrologic variables best separate streams into their respective classes. Stream typology (i.e., the class that a stream reach is assigned to) is shaped mainly by patterns of high and mean flow behavior within the stream's landscape context. Additionally, random forest was used to determine which hydrologic variables best separate minimally disturbed reference streams from non-reference streams in each of the seven classes. In contrast to stream typology, deviation from reference conditions is more difficult to detect and is largely defined by changes in low-flow variables, average daily flow, and duration of flow. Our combined deductive/inductive approach allows us to estimate flow under minimally disturbed conditions based on the deductive analysis and compare to measured flow based on the inductive analysis in order to estimate hydrologic change.

  14. Components of variation of surface hoof strain with time.

    PubMed

    Thomason, J J; Bignell, W W; Sears, W

    2001-04-01

    The relative contribution of a number of random and fixed variables to variation in surface strain magnitudes on the hoof capsule was assessed for healthy feet under normal conditions. Principal strains were recorded in vivo from 5 rosette gauges glued around the circumference of the right forefeet of 4 horses on 4 occasions over a 9 month period. Recordings were made at every other trimming and reshoeing. During each session, gauges were positioned with a template for repeatability. Strains were recorded at the trot and canter (at consistent speeds), for straight motion and turns, and before and after the hoof was trimmed and reset. Up to 30 strides were recorded for each combination of these variables. ANOVAs were performed on midstance strains of 7008 strides to determine the relative contributions to strain variation of individual horse, test day, gait and direction combined as one factor, gauge position on the hoof, trimming, interstride variability and the interactions among these factors. The ANOVA model explained 87% of the variation, of which approximately 84% was due to fixed effects and 16% to random effects. Circumferential position of the gauges and several of the interactions including this term were by far the greatest contributors to strain variation. Differences among gauge positions, individuals and gait + direction are consistent with previous work. This study has added the relative effects, which are small but significant, of trimming on a regular basis and of time. The change in strain magnitudes with trimming was different for each horse, which leads to the possibility that over- and underuse may have to be quantified on an individual basis.

  15. What variables are important in predicting bovine viral diarrhea virus? A random forest approach.

    PubMed

    Machado, Gustavo; Mendoza, Mariana Recamonde; Corbellini, Luis Gustavo

    2015-07-24

    Bovine viral diarrhea virus (BVDV) causes one of the most economically important diseases in cattle, and the virus is found worldwide. A better understanding of the disease-associated factors is a crucial step towards the definition of strategies for control and eradication. In this study we trained a random forest (RF) prediction model and performed variable importance analysis to identify factors associated with BVDV occurrence. In addition, we assessed the influence of feature selection on RF performance and evaluated its predictive power relative to other popular classifiers and to logistic regression. We found that the RF classification model resulted in an average error rate of 32.03% for the negative class (negative for BVDV) and 36.78% for the positive class (positive for BVDV). The RF model presented an area under the ROC curve of 0.702. Variable importance analysis revealed that important predictors of BVDV occurrence were: a) who inseminates the animals, b) the number of neighboring farms that have cattle, and c) whether rectal palpation is performed routinely. Our results suggest that the use of machine learning algorithms, especially RF, is a promising methodology for the analysis of cross-sectional studies, presenting satisfactory predictive power and the ability to identify predictors that represent potential risk factors for BVDV investigation. We examined classical predictors and found some new and hard-to-control practices that may lead to the spread of this disease within and among farms, mainly regarding poor or neglected reproduction management, which should be considered for disease control and eradication.
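
    A short scikit-learn sketch of the workflow described: train a random forest on herd-level questionnaire variables, estimate its AUC, and rank the variables by importance. The data are synthetic placeholders whose column names merely mimic the kinds of predictors the study found important.

        import numpy as np
        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 500  # herds (synthetic)

        # hypothetical questionnaire predictors
        X = pd.DataFrame({
            "inseminator_external": rng.integers(0, 2, n),    # who inseminates the animals
            "neighbour_farms_cattle": rng.poisson(4, n),       # neighbouring farms with cattle
            "routine_rectal_palpation": rng.integers(0, 2, n),
            "herd_size": rng.lognormal(4, 0.5, n),
        })
        # synthetic outcome loosely driven by the first two predictors (assumption)
        logit = -1.0 + 0.8 * X["inseminator_external"] + 0.25 * X["neighbour_farms_cattle"]
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        rf = RandomForestClassifier(n_estimators=500, random_state=0)
        print("ROC AUC:", round(cross_val_score(rf, X, y, cv=5, scoring="roc_auc").mean(), 3))
        rf.fit(X, y)
        for name, imp in sorted(zip(X.columns, rf.feature_importances_), key=lambda t: -t[1]):
            print(f"{name:>25s}: {imp:.3f}")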

  16. A NARX damper model for virtual tuning of automotive suspension systems with high-frequency loading

    NASA Astrophysics Data System (ADS)

    Alghafir, M. N.; Dunne, J. F.

    2012-02-01

    A computationally efficient NARX-type neural network model is developed to characterise highly nonlinear frequency-dependent thermally sensitive hydraulic dampers for use in the virtual tuning of passive suspension systems with high-frequency loading. Three input variables are chosen to account for high-frequency kinematics and temperature variations arising from continuous vehicle operation over non-smooth surfaces such as stone-covered streets, rough or off-road conditions. Two additional input variables are chosen to represent tuneable valve parameters. To assist in the development of the NARX model, a highly accurate but computationally excessive physical damper model [originally proposed by S. Duym and K. Reybrouck, Physical characterization of non-linear shock absorber dynamics, Eur. J. Mech. Eng. M 43(4) (1998), pp. 181-188] is extended to allow for high-frequency input kinematics. Experimental verification of this extended version uses measured damper data obtained from an industrial damper test machine under near-isothermal conditions for fixed valve settings, with input kinematics corresponding to harmonic and random road profiles. The extended model is then used only for simulating data for training and testing the NARX model with specified temperature profiles and different valve parameters, both in isolation and within quarter-car vehicle simulations. A heat generation and dissipation model is also developed and experimentally verified for use within the simulations. Virtual tuning using the quarter-car simulation model then exploits the NARX damper to achieve a compromise between ride and handling under transient thermal conditions with harmonic and random road profiles. For quarter-car simulations, the paper shows that a single tuneable NARX damper makes virtual tuning computationally very attractive.
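
    A minimal sketch of the NARX idea, regressing the current damper force on lagged forces and lagged exogenous inputs with a small multilayer perceptron as the nonlinear map. The toy damper signal, lag orders and network size are assumptions; the model in the record additionally takes temperature and tuneable valve parameters as inputs.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        t = np.arange(0, 20, 0.005)
        velocity = np.sin(2 * np.pi * 1.5 * t) + 0.3 * rng.standard_normal(t.size)  # input kinematics
        force = np.tanh(3.0 * velocity) + 0.2 * np.roll(velocity, 1)                # toy damper (assumption)

        def narx_features(u, y, nu=3, ny=3):
            """Stack lagged exogenous inputs u and lagged outputs y into a NARX regressor matrix."""
            n = max(nu, ny)
            cols = [u[n - k:len(u) - k] for k in range(1, nu + 1)]
            cols += [y[n - k:len(y) - k] for k in range(1, ny + 1)]
            return np.column_stack(cols), y[n:]

        X, target = narx_features(velocity, force)
        split = int(0.7 * len(target))
        model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
        model.fit(X[:split], target[:split])
        print("one-step-ahead R^2 on held-out data:", round(model.score(X[split:], target[split:]), 3))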

  17. Normalization of flow-mediated dilation to shear stress area under the curve eliminates the impact of variable hyperemic stimulus.

    PubMed

    Padilla, Jaume; Johnson, Blair D; Newcomer, Sean C; Wilhite, Daniel P; Mickleborough, Timothy D; Fly, Alyce D; Mather, Kieren J; Wallace, Janet P

    2008-09-04

    Normalization of brachial artery flow-mediated dilation (FMD) to individual shear stress area under the curve (peak FMD:SSAUC ratio) has recently been proposed as an approach to control for the large inter-subject variability in reactive hyperemia-induced shear stress; however, the adoption of this approach among researchers has been slow. The present study was designed to further examine the efficacy of FMD normalization to shear stress in reducing measurement variability. Five different magnitudes of reactive hyperemia-induced shear stress were applied to 20 healthy, physically active young adults (25.3 +/- 0.6 yrs; 10 men, 10 women) by manipulating forearm cuff occlusion duration: 1, 2, 3, 4, and 5 min, in a randomized order. A venous blood draw was performed for determination of baseline whole blood viscosity and hematocrit. The magnitude of occlusion-induced forearm ischemia was quantified by dual-wavelength near-infrared spectrometry (NIRS). Brachial artery diameters and velocities were obtained via high-resolution ultrasound. The SSAUC was individually calculated for the duration of time-to-peak dilation. One-way repeated measures ANOVA demonstrated distinct magnitudes of occlusion-induced ischemia (volume and peak), hyperemic shear stress, and peak FMD responses (all p < 0.0001) across forearm occlusion durations. Differences in peak FMD were abolished when normalizing FMD to SSAUC (p = 0.785). Our data confirm that normalization of FMD to SSAUC eliminates the influences of variable shear stress and solidifies the utility of FMD:SSAUC ratio as an index of endothelial function.
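
    The normalisation itself is a one-line computation once the time series are available: integrate the shear stress curve up to the time of peak dilation (the SSAUC) and divide the peak FMD by it. The waveform and numbers below are invented purely to show the arithmetic.

        import numpy as np

        # synthetic post-occlusion time course (assumptions, not study data)
        t = np.linspace(0, 120, 601)                  # seconds after cuff release
        shear_stress = 60 * np.exp(-t / 40) + 15      # dyn/cm^2, decaying hyperemic stimulus
        peak_fmd = 7.5                                # percent dilation
        t_peak = 55.0                                 # time-to-peak dilation, seconds

        mask = t <= t_peak
        s = shear_stress[mask]
        # shear stress area under the curve up to time-to-peak dilation (trapezoidal rule)
        ss_auc = np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(t[mask]))
        normalized_fmd = peak_fmd / ss_auc

        print(f"SSAUC = {ss_auc:.1f} (dyn/cm^2)*s, FMD:SSAUC = {normalized_fmd:.5f}")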

  18. Naltrexone and Cognitive Behavioral Therapy for the Treatment of Alcohol Dependence

    PubMed Central

    Baros, AM; Latham, PK; Anton, RF

    2008-01-01

    Background Sex differences with regard to pharmacotherapy for alcoholism are a topic of concern following publications suggesting that naltrexone, one of the longest-approved treatments for alcoholism, is not as effective in women as in men. This study was conducted by combining two randomized placebo-controlled clinical trials that utilized similar methodologies and personnel, with the data amalgamated to evaluate sex effects in a reasonably sized sample. Methods 211 alcoholics (57 female; 154 male) were randomized to the naltrexone/CBT or placebo/CBT arm of the two clinical trials analyzed. Baseline variables were examined for differences between sex and treatment groups via analysis of variance (ANOVA) for continuous variables or the chi-square test for categorical variables. All initial outcome analysis was conducted under an intent-to-treat analysis plan. Effect sizes for naltrexone over placebo were determined by Cohen's d. Results The effect size of naltrexone over placebo for the following outcome variables was similar in men and women (%days abstinent (PDA) d=0.36, %heavy drinking days (PHDD) d=0.36 and total standard drinks (TSD) d=0.36). Only for men were the differences significant, secondary to the larger sample size (PDA p=0.03; PHDD p=0.03; TSD p=0.04). There were a few variables (GGT change from baseline to week 12: men d=0.36, p=0.05; women d=0.20, p=0.45 and drinks per drinking day: men d=0.36, p=0.05; women d=0.28, p=0.34) where the naltrexone effect size for men was greater than for women. In women, naltrexone tended to increase continuous abstinent days before a first drink (women d=0.46, p=0.09; men d=0.00, p=0.44). Conclusions The effect size of naltrexone over placebo appeared similar in women and men in our hands, suggesting that the findings of sex differences in naltrexone response might have to do with sample size and/or endpoint drinking variables rather than any inherent pharmacological or biological differences in response. PMID:18336635

  19. Source Localization in Wireless Sensor Networks with Randomly Distributed Elements under Multipath Propagation Conditions

    DTIC Science & Technology

    2009-03-01

    Engineer's Thesis by Georgios Tsivgoulis, March 2009, 111 pages. Title: Source Localization in Wireless Sensor Networks with Randomly Distributed Elements under Multipath Propagation Conditions. Subject terms: Wireless Sensor Network, Direction of Arrival, DOA, Random. The surviving abstract fragment refers to use of the non-line-of-sight information.

  20. Screening large-scale association study data: exploiting interactions using random forests.

    PubMed

    Lunetta, Kathryn L; Hayward, L Brooke; Segal, Jonathan; Van Eerdewegh, Paul

    2004-12-10

    Genome-wide association studies for complex diseases will produce genotypes on hundreds of thousands of single nucleotide polymorphisms (SNPs). A logical first approach to dealing with massive numbers of SNPs is to use some test to screen the SNPs, retaining only those that meet some criterion for further study. For example, SNPs can be ranked by p-value, and those with the lowest p-values retained. When SNPs have large interaction effects but small marginal effects in a population, they are unlikely to be retained when univariate tests are used for screening. However, model-based screens that pre-specify interactions are impractical for data sets with thousands of SNPs. Random forest analysis is an alternative method that produces a single measure of importance for each predictor variable that takes into account interactions among variables without requiring model specification. Interactions increase the importance for the individual interacting variables, making them more likely to be given high importance relative to other variables. We test the performance of random forests as a screening procedure to identify small numbers of risk-associated SNPs from among large numbers of unassociated SNPs using complex disease models with up to 32 loci, incorporating both genetic heterogeneity and multi-locus interaction. Keeping other factors constant, if risk SNPs interact, the random forest importance measure significantly outperforms the Fisher Exact test as a screening tool. As the number of interacting SNPs increases, the improvement in performance of random forest analysis relative to Fisher Exact test for screening also increases. Random forests perform similarly to the univariate Fisher Exact test as a screening tool when SNPs in the analysis do not interact. In the context of large-scale genetic association studies where unknown interactions exist among true risk-associated SNPs or SNPs and environmental covariates, screening SNPs using random forest analyses can significantly reduce the number of SNPs that need to be retained for further study compared to standard univariate screening methods.
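
    A compact sketch of the comparison described: rank simulated SNPs once by Fisher's exact test p-value and once by random forest importance, in a setting where the two risk SNPs act only through their interaction. Marker counts and the effect structure are assumptions chosen for illustration.

        import numpy as np
        from scipy.stats import fisher_exact
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        n, n_snps = 1000, 100
        snps = rng.binomial(2, 0.3, size=(n, n_snps))        # genotypes coded 0/1/2

        # disease risk driven only by the interaction of SNP 0 and SNP 1 (assumption)
        risk = 0.1 + 0.5 * ((snps[:, 0] > 0) & (snps[:, 1] > 0))
        y = rng.binomial(1, risk)

        # univariate screen: Fisher exact test on carrier status vs case/control
        fisher_p = []
        for j in range(n_snps):
            carrier = snps[:, j] > 0
            table = [[np.sum(carrier & (y == 1)), np.sum(carrier & (y == 0))],
                     [np.sum(~carrier & (y == 1)), np.sum(~carrier & (y == 0))]]
            fisher_p.append(fisher_exact(table)[1])

        # multivariate screen: random forest importance accounts for interactions
        rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(snps, y)

        print("Fisher ranks of SNP0, SNP1:", np.argsort(fisher_p).argsort()[[0, 1]])
        print("RF     ranks of SNP0, SNP1:", np.argsort(-rf.feature_importances_).argsort()[[0, 1]])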

  1. Circularly-symmetric complex normal ratio distribution for scalar transmissibility functions. Part I: Fundamentals

    NASA Astrophysics Data System (ADS)

    Yan, Wang-Ji; Ren, Wei-Xin

    2016-12-01

    Recent advances in signal processing and structural dynamics have spurred the adoption of transmissibility functions in academia and industry alike. Due to the inherent randomness of measurement and variability of environmental conditions, uncertainty impacts its applications. This study is focused on statistical inference for raw scalar transmissibility functions modeled as complex ratio random variables. The goal is achieved through companion papers. This paper (Part I) is dedicated to dealing with a formal mathematical proof. New theorems on multivariate circularly-symmetric complex normal ratio distribution are proved on the basis of principle of probabilistic transformation of continuous random vectors. The closed-form distributional formulas for multivariate ratios of correlated circularly-symmetric complex normal random variables are analytically derived. Afterwards, several properties are deduced as corollaries and lemmas to the new theorems. Monte Carlo simulation (MCS) is utilized to verify the accuracy of some representative cases. This work lays the mathematical groundwork to find probabilistic models for raw scalar transmissibility functions, which are to be expounded in detail in Part II of this study.

  2. Random variability explains apparent global clustering of large earthquakes

    USGS Publications Warehouse

    Michael, A.J.

    2011-01-01

    The occurrence of 5 Mw ≥ 8.5 earthquakes since 2004 has created a debate over whether or not we are in a global cluster of large earthquakes, temporarily raising risks above long-term levels. I use three classes of statistical tests to determine if the record of M ≥ 7 earthquakes since 1900 can reject a null hypothesis of independent random events with a constant rate plus localized aftershock sequences. The data cannot reject this null hypothesis. Thus, the temporal distribution of large global earthquakes is well-described by a random process, plus localized aftershocks, and apparent clustering is due to random variability. Therefore the risk of future events has not increased, except within ongoing aftershock sequences, and should be estimated from the longest possible record of events.

  3. The Effect of Live Spontaneous Harp Music on Patients in the Intensive Care Unit

    PubMed Central

    Chiasson, Ann Marie; Linda Baldwin, Ann; Mclaughlin, Carrol; Cook, Paula; Sethi, Gulshan

    2013-01-01

    This study was performed to investigate the effect of live, spontaneous harp music on individual patients in an intensive care unit (ICU), either pre- or postoperatively. The purpose was to determine whether this intervention would serve as a relaxation or healing modality, as evidenced by the effect on patients' pain, heart rate, respiratory rate, blood pressure, oxygen saturation, and heart rate variability. Each consenting patient was randomly assigned to receive either a live 10-minute concert of spontaneous music played by an expert harpist or a 10-minute rest period. Spontaneous harp music significantly decreased patient perception of pain by 27% but did not significantly affect heart rate, respiratory rate, oxygen saturation, blood pressure, or heart rate variability. Nonsignificant trends toward increased systolic blood pressure and decreased heart rate variability also emerged. These findings may invoke patient engagement, as opposed to relaxation, as the underlying mechanism of the decrease in the patients' pain and of the healing benefit that arises from the relationship between healer, healing modality, and patient. PMID:24371459

  4. Probabilistic evaluation of fuselage-type composite structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1992-01-01

    A methodology is developed to computationally simulate the uncertain behavior of composite structures. The uncertain behavior includes buckling loads, natural frequencies, displacements, stress/strain etc., which are the consequences of the random variation (scatter) of the primitive (independent random) variables in the constituent, ply, laminate and structural levels. This methodology is implemented in the IPACS (Integrated Probabilistic Assessment of Composite Structures) computer code. A fuselage-type composite structure is analyzed to demonstrate the code's capability. The probability distribution functions of the buckling loads, natural frequency, displacement, strain and stress are computed. The sensitivity of each primitive (independent random) variable to a given structural response is also identified from the analyses.

  5. Bayesian dynamic modeling of time series of dengue disease case counts.

    PubMed

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology shows dynamic Poisson log link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov Chain Monte Carlo simulations for parameter estimation, and deviance information criterion statistic (DIC) for model selection. We assessed the short-term predictive performance of the selected final model, at several time points within the study period using the mean absolute percentage error. The results showed the best model including first-order random walk time-varying coefficients for calendar trend and first-order random walk time-varying coefficients for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage errors at one or two weeks out-of-sample predictions for most prediction points, associated with low volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.

  6. Financial Management of a Large Multi-site Randomized Clinical Trial

    PubMed Central

    Sheffet, Alice J.; Flaxman, Linda; Tom, MeeLee; Hughes, Susan E.; Longbottom, Mary E.; Howard, Virginia J.; Marler, John R.; Brott, Thomas G.

    2014-01-01

    Background The Carotid Revascularization Endarterectomy versus Stenting Trial (CREST) received five years’ funding ($21,112,866) from the National Institutes of Health to compare carotid stenting to surgery for stroke prevention in 2,500 randomized participants at 40 sites. Aims Herein we evaluate the change in the CREST budget from a fixed to variable-cost model and recommend strategies for the financial management of large-scale clinical trials. Methods Projections of the original grant’s fixed-cost model were compared to the actual costs of the revised variable-cost model. The original grant’s fixed-cost budget included salaries, fringe benefits, and other direct and indirect costs. For the variable-cost model, the costs were actual payments to the clinical sites and core centers based upon actual trial enrollment. We compared annual direct and indirect costs and per-patient cost for both the fixed and variable models. Differences between clinical site and core center expenditures were also calculated. Results Using a variable-cost budget for clinical sites, funding was extended by no-cost extension from five to eight years. Randomizing sites tripled from 34 to 109. Of the 2,500 targeted sample size, 138 (5.5%) were randomized during the first five years and 1,387 (55.5%) during the no-cost extension. The actual per-patient costs of the variable model were 9% ($13,845) of the projected per-patient costs ($152,992) of the fixed model. Conclusions Performance-based budgets conserve funding, promote compliance, and allow for additional sites at modest additional cost. Costs of large-scale clinical trials can thus be reduced through effective management without compromising scientific integrity. PMID:24661748

  7. Financial management of a large multisite randomized clinical trial.

    PubMed

    Sheffet, Alice J; Flaxman, Linda; Tom, MeeLee; Hughes, Susan E; Longbottom, Mary E; Howard, Virginia J; Marler, John R; Brott, Thomas G

    2014-08-01

    The Carotid Revascularization Endarterectomy versus Stenting Trial (CREST) received five years' funding ($21 112 866) from the National Institutes of Health to compare carotid stenting to surgery for stroke prevention in 2500 randomized participants at 40 sites. Herein we evaluate the change in the CREST budget from a fixed to variable-cost model and recommend strategies for the financial management of large-scale clinical trials. Projections of the original grant's fixed-cost model were compared to the actual costs of the revised variable-cost model. The original grant's fixed-cost budget included salaries, fringe benefits, and other direct and indirect costs. For the variable-cost model, the costs were actual payments to the clinical sites and core centers based upon actual trial enrollment. We compared annual direct and indirect costs and per-patient cost for both the fixed and variable models. Differences between clinical site and core center expenditures were also calculated. Using a variable-cost budget for clinical sites, funding was extended by no-cost extension from five to eight years. Randomizing sites tripled from 34 to 109. Of the 2500 targeted sample size, 138 (5·5%) were randomized during the first five years and 1387 (55·5%) during the no-cost extension. The actual per-patient costs of the variable model were 9% ($13 845) of the projected per-patient costs ($152 992) of the fixed model. Performance-based budgets conserve funding, promote compliance, and allow for additional sites at modest additional cost. Costs of large-scale clinical trials can thus be reduced through effective management without compromising scientific integrity. © 2014 The Authors. International Journal of Stroke © 2014 World Stroke Organization.

  8. The comparison of landslide ratio-based and general logistic regression landslide susceptibility models in the Chishan watershed after 2009 Typhoon Morakot

    NASA Astrophysics Data System (ADS)

    WU, Chunhung

    2015-04-01

    The research built the original logistic regression landslide susceptibility model (abbreviated as or-LRLSM) and the landslide ratio-based logistic regression landslide susceptibility model (abbreviated as lr-LRLSM), compared the performance of the two models, and explained their error sources. The research assumes that the logistic regression model performs better when the distribution of the weighted value of each variable is similar to the distribution of landslide ratio. Landslide ratio is the ratio of landslide area to total area in a specific area and a useful index to evaluate the seriousness of landslide disasters in Taiwan. The research adopted the landslide inventory induced by 2009 Typhoon Morakot in the Chishan watershed, the most serious disaster event of the last decade in Taiwan. The research adopted the 20 m grid as the basic unit in building the LRLSM, and six variables, including elevation, slope, aspect, geological formation, accumulated rainfall, and bank erosion, were included in the two models. The six variables were divided into continuous variables, including elevation, slope, and accumulated rainfall, and categorical variables, including aspect, geological formation and bank erosion, in building the or-LRLSM, while all variables, which were classified based on landslide ratio, were categorical variables in building the lr-LRLSM. Because the number of basic units in the whole Chishan watershed was too large to process with commercial software, the research used random sampling rather than the complete set of basic units. The research adopted equal proportions of landslide units and non-landslide units in the logistic regression analysis. The research performed 10 rounds of random sampling and selected the group with the best Cox & Snell R2 value and Nagelkerke R2 value as the database for the following analysis. Based on the best result from the 10 random sampling groups, the or-LRLSM (lr-LRLSM) is significant at the 1% level with Cox & Snell R2 = 0.190 (0.196) and Nagelkerke R2 = 0.253 (0.260). A unit with a landslide susceptibility value > 0.5 (≦ 0.5) is classified as a predicted landslide unit (non-landslide unit). The AUC, i.e. the area under the relative operating characteristic curve, of the or-LRLSM in the Chishan watershed is 0.72, while that of the lr-LRLSM is 0.77. Furthermore, the average correct ratio of the lr-LRLSM (73.3%) is better than that of the or-LRLSM (68.3%). The research analyzed in detail the error sources of the two models. For continuous variables, the landslide ratio-based classification used in building the lr-LRLSM makes the distribution of the weighted value more similar to the distribution of landslide ratio over the range of the continuous variable than in building the or-LRLSM. For categorical variables, the landslide ratio-based classification used in building the lr-LRLSM gathers the classes with similar landslide ratios together. The mean correct ratio for continuous variables (categorical variables) using the lr-LRLSM is better than that of the or-LRLSM by 0.6 ~ 2.6% (1.7% ~ 6.0%). Building the landslide susceptibility model using landslide ratio-based classification is practical and performs better than the original logistic regression.
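
    A simplified sketch of the lr-LRLSM idea: recode each variable class by its observed landslide ratio, sample equal numbers of landslide and non-landslide units, and fit a logistic regression whose susceptibility scores are then evaluated with the AUC. The grid units, bin cuts and coefficients are synthetic assumptions, not the Chishan watershed data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 20000                                     # 20 m grid units (synthetic stand-ins)
        slope = rng.uniform(0, 60, n)                 # continuous variable
        geology = rng.integers(0, 5, n)               # categorical formation code
        p = 1 / (1 + np.exp(-(-4 + 0.05 * slope + 1.0 * (geology == 2))))
        landslide = rng.binomial(1, p)

        def landslide_ratio_code(classes, y):
            """Replace each class label by the proportion of landslide units in that class
            (a stand-in for the landslide ratio of the class)."""
            labels = np.unique(classes)
            lookup = {c: y[classes == c].mean() for c in labels}
            return np.array([lookup[c] for c in classes])

        slope_class = np.digitize(slope, bins=[10, 20, 30, 40, 50])   # assumed class cuts
        X = np.column_stack([landslide_ratio_code(slope_class, landslide),
                             landslide_ratio_code(geology, landslide)])

        # equal proportions of landslide and non-landslide units, as in the study
        pos = np.flatnonzero(landslide == 1)
        neg = rng.choice(np.flatnonzero(landslide == 0), size=pos.size, replace=False)
        idx = np.concatenate([pos, neg])

        model = LogisticRegression().fit(X[idx], landslide[idx])
        print("AUC over all units:", round(roc_auc_score(landslide, model.predict_proba(X)[:, 1]), 3))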

  9. Visualizing Time-Varying Distribution Data in EOS Application

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei

    2004-01-01

    In this research, we have developed several novel visualization methods for spatial probability density function data. Our focus has been on 2D spatial datasets, where each pixel is a random variable, and has multiple samples which are the results of experiments on that random variable. We developed novel clustering algorithms as a means to reduce the information contained in these datasets; and investigated different ways of interpreting and clustering the data.

  10. Tacholess order-tracking approach for wind turbine gearbox fault detection

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Xie, Yong; Xu, Guanghua; Zhang, Sicong; Hou, Chenggang

    2017-09-01

    Monitoring of wind turbines under variable-speed operating conditions has become an important issue in recent years. The gearbox of a wind turbine is the most important transmission unit; it generally exhibits complex vibration signatures due to random variations in operating conditions. Spectral analysis is one of the main approaches in vibration signal processing. However, spectral analysis is based on a stationary assumption and thus inapplicable to the fault diagnosis of wind turbines under variable-speed operating conditions. This constraint limits the application of spectral analysis to wind turbine diagnosis in industrial applications. Although order-tracking methods have been proposed for wind turbine fault detection in recent years, current methods are only applicable to cases in which the instantaneous shaft phase is available. For wind turbines with limited structural spaces, collecting phase signals with tachometers or encoders is difficult. In this study, a tacholess order-tracking method for wind turbines is proposed to overcome the limitations of traditional techniques. The proposed method extracts the instantaneous phase from the vibration signal, resamples the signal at equiangular increments, and calculates the order spectrum for wind turbine fault identification. The effectiveness of the proposed method is experimentally validated with the vibration signals of wind turbines.
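
    A bare-bones sketch of the resampling step at the heart of order tracking: given a vibration signal and an instantaneous shaft phase, interpolate the signal onto equal angular increments and take the spectrum of the angle-domain signal so that speed-varying components collapse onto fixed orders. Here the phase comes from a known simulated speed profile; in the tacholess method of the record it would be extracted from the vibration signal itself.

        import numpy as np

        fs, duration = 5000, 10.0
        t = np.arange(0, duration, 1 / fs)

        # variable shaft speed: ramp from 10 Hz to 25 Hz (assumed profile)
        f_shaft = 10 + 1.5 * t
        phase = 2 * np.pi * np.cumsum(f_shaft) / fs          # instantaneous phase [rad]

        # vibration: gear-mesh component at order 12 plus noise (assumption)
        signal = np.sin(12 * phase) + 0.5 * np.random.default_rng(0).standard_normal(t.size)

        # resample at equal angular increments (here 64 samples per revolution)
        samples_per_rev = 64
        phase_grid = np.arange(phase[0], phase[-1], 2 * np.pi / samples_per_rev)
        signal_angle = np.interp(phase_grid, phase, signal)

        # order spectrum: FFT of the angle-domain signal
        spectrum = np.abs(np.fft.rfft(signal_angle)) / len(signal_angle)
        orders = np.fft.rfftfreq(len(signal_angle), d=1 / samples_per_rev)
        print("dominant order:", orders[np.argmax(spectrum[1:]) + 1])   # expect ~12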

  11. Complexity of mechanisms among human proprotein convertase subtilisin-kexin type 9 variants.

    PubMed

    Dron, Jacqueline S; Hegele, Robert A

    2017-04-01

    There are many reports of human variants in proprotein convertase subtilisin-kexin type 9 (PCSK9) that are either gain-of-function (GOF) or loss-of-function (LOF), with downstream effects on LDL cholesterol and cardiovascular disease (CVD) risk. However, data on particular mechanisms have only been minimally curated. GOF variants are individually ultrarare, affect all domains of the protein, act to reduce LDL receptor expression through several mechanisms, are a minor cause of familial hypercholesterolemia, have been reported mainly within families, have variable LDL cholesterol-raising effects, and are associated with increased CVD risk mainly through observational studies in families and small cohorts. In contrast, LOF variants can be either ultrarare mutations or relatively more common polymorphisms seen in populations, affect all domains of the protein, act to increase LDL receptor expression through several mechanisms, have variable LDL cholesterol-lowering effects, and have been associated with decreased CVD risk mainly through Mendelian randomization studies in epidemiologic populations. There is considerable complexity underlying the clinical concept of both LOF and GOF variants of PCSK9. But despite the underlying mechanistic heterogeneity, altered PCSK9 secretion or function is ultimately correlated with plasma LDL cholesterol level, which is also the driver of CVD outcomes.

  12. Transfer effects of fall training on balance performance and spatiotemporal gait parameters in healthy community-dwelling older adults: a pilot study.

    PubMed

    Donath, Lars; Faude, Oliver; Bridenbaugh, Stephanie A; Roth, Ralf; Soltermann, Martin; Kressig, Reto W; Zahner, Lukas

    2014-07-01

    This study examined transfer effects of fall training on fear of falling (Falls Efficacy Scale-International [FES-I]), balance performance, and spatiotemporal gait characteristics in older adults. Eighteen community-dwelling older adults (ages 65-85) were randomly assigned to an intervention or control group. The intervention group completed 12 training sessions (60 min, 6 weeks). During pre- and posttesting, we measured FES-I, balance performance (double limb, closed eyes; single limb, open eyes; double limb, open eyes with motor-interfered task), and gait parameters (e.g., velocity; cadence; stride time, stride width, and stride length; variability of stride time and stride length) under single- and motor-interfered tasks. Dual tasks were applied to appraise improvements of cognitive processing during balance and gait. FES-I (p = .33) and postural sway did not significantly change (.36 < p < .79). Trends toward significant interaction effects were found for step width during normal walking and stride length variability during the motor dual task (p = .05, ηp² = .22). Fall training did not sufficiently improve fear of falling, balance, or gait performance under single- or dual-task conditions in healthy older adults.

  13. Circadian and Sex Differences After Acute High-Altitude Exposure: Are Early Acclimation Responses Improved by Blue Light?

    PubMed

    Silva-Urra, Juan A; Núñez-Espinosa, Cristian A; Niño-Mendez, Oscar A; Gaitán-Peñas, Héctor; Altavilla, Cesare; Toro-Salinas, Andrés; Torrella, Joan R; Pagès, Teresa; Javierre, Casimiro F; Behn, Claus; Viscor, Ginés

    2015-12-01

    The possible effects of blue light during acute hypoxia and the circadian rhythm on several physiological and cognitive parameters were studied. Fifty-seven volunteers were randomly assigned to 2 groups: nocturnal (2200-0230 hours) or diurnal (0900-1330 hours) and exposed to acute hypoxia (4000 m simulated altitude) in a hypobaric chamber. The participants were illuminated by blue LEDs or common artificial light on 2 different days. During each session, arterial oxygen saturation (Spo2), blood pressure, heart rate variability, and cognitive parameters were measured at sea level, after reaching the simulated altitude of 4000 m, and after 3 hours at this altitude. The circadian rhythm caused significant differences in blood pressure and heart rate variability. A 4% to 9% decrease in waking nocturnal Spo2 under acute hypoxia was observed. Acute hypoxia also induced a significant reduction (4%-8%) in systolic pressure, slightly more marked (up to 13%) under blue lighting. Women had significantly increased systolic (4%) and diastolic (12%) pressures under acute hypoxia at night compared with daytime pressure; this was not observed in men. Some tendencies toward better cognitive performance (d2 attention test) were seen under blue illumination, although when considered together with physiological parameters and reaction time, there was no conclusive favorable effect of blue light on cognitive fatigue suppression after 3 hours of acute hypobaric hypoxia. It remains to be seen whether longer exposure to blue light under hypobaric hypoxic conditions would induce favorable effects against fatigue. Copyright © 2015 Wilderness Medical Society. Published by Elsevier Inc. All rights reserved.

  14. Probabilistic solutions of nonlinear oscillators excited by combined colored and white noise excitations

    NASA Astrophysics Data System (ADS)

    Siu-Siu, Guo; Qingxuan, Shi

    2017-03-01

    In this paper, single-degree-of-freedom (SDOF) systems subjected to combined Gaussian white noise and Gaussian/non-Gaussian colored noise excitations are investigated. By expressing the colored noise excitation as a second-order filtered white noise process and introducing the colored noise as an additional state variable, the equation of motion for the SDOF system under colored noise is transformed into that of a multi-degree-of-freedom (MDOF) system under white noise excitations, governed by four coupled first-order differential equations. As a consequence, the corresponding Fokker-Planck-Kolmogorov (FPK) equation governing the joint probability density function (PDF) of the state variables becomes four-dimensional (4-D), and the solution procedure and computer programme become much more sophisticated. The exponential-polynomial closure (EPC) method, widely applied to SDOF systems under white noise excitations, is developed and improved for systems under colored noise excitations and for solving the complex 4-D FPK equation. On the other hand, the Monte Carlo simulation (MCS) method is performed to test the approximate EPC solutions. Two examples associated with Gaussian and non-Gaussian colored noise excitations are considered. The corresponding band-limited power spectral densities (PSDs) for the colored noise excitations are given separately. Numerical studies show that the developed EPC method provides relatively accurate estimates of the stationary probabilistic solutions, especially in the tail regions of the PDFs. Moreover, the statistical parameter of mean up-crossing rate (MCR) is taken into account, which is important for reliability and failure analysis. Hopefully, our present work could provide insights into the investigation of structures under random loadings.
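
    A quick Euler-Maruyama sketch of the state-augmentation step described above: a second-order linear filter driven by Gaussian white noise generates the colored excitation, and its two states are appended to the oscillator's displacement and velocity, giving the four coupled first-order equations. All parameter values are illustrative, and the EPC solution of the resulting 4-D FPK equation is not attempted.

        import numpy as np

        rng = np.random.default_rng(0)

        # SDOF oscillator (assumed): x'' + 2*zeta*wn*x' + wn^2*x = s(t)
        wn, zeta = 2 * np.pi, 0.05
        # second-order filter generating colored noise s from white noise w (assumed):
        # s'' + 2*zf*wf*s' + wf^2*s = w(t)
        wf, zf, D = 4 * np.pi, 0.3, 1.0          # filter frequency, damping, white-noise intensity

        dt, n_steps, n_paths = 1e-3, 60_000, 500
        x = np.zeros((4, n_paths))               # augmented state: [x, x_dot, s, s_dot]

        for _ in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dt), n_paths)
            x_, v_, s_, sd_ = x.copy()           # snapshot so all updates use the old state
            x[0] += v_ * dt
            x[1] += (-2 * zeta * wn * v_ - wn**2 * x_ + s_) * dt
            x[2] += sd_ * dt
            x[3] += (-2 * zf * wf * sd_ - wf**2 * s_) * dt + np.sqrt(2 * D) * dW

        print("stationary response: mean =", x[0].mean().round(4), " std =", x[0].std().round(4))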

  15. Strategic Use of Random Subsample Replication and a Coefficient of Factor Replicability

    ERIC Educational Resources Information Center

    Katzenmeyer, William G.; Stenner, A. Jackson

    1975-01-01

    The problem of demonstrating replicability of factor structure across random variables is addressed. Procedures are outlined which combine the use of random subsample replication strategies with the correlations between factor score estimates across replicate pairs to generate a coefficient of replicability and confidence intervals associated with…

  16. Simulation of the Effects of Random Measurement Errors

    ERIC Educational Resources Information Center

    Kinsella, I. A.; Hannaidh, P. B. O.

    1978-01-01

Describes a simulation method for studying random measurement errors that requires only calculators and tables of random digits. Each student simulates the random behaviour of the component variables in a function of measured quantities, and by combining the results of all students, the outline of the sampling distribution of the function can be obtained. (GA)
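
    A programmatic analogue of the classroom exercise, assuming a hypothetical function f(a, b) = a / b of two measured quantities; random normal draws stand in for the random-digit tables, and the pooled draws sketch the sampling distribution of f.

```python
import numpy as np

# Programmatic analogue of the classroom exercise: propagate random measurement
# errors through a hypothetical function f(a, b) = a / b. Each draw plays the
# role of one student's hand simulation with random digits.

rng = np.random.default_rng(1)

a_true, b_true = 10.0, 4.0        # assumed "true" values of the measured quantities
sigma_a, sigma_b = 0.2, 0.1       # assumed standard deviations of the measurement errors
n_draws = 5000

a = a_true + rng.normal(0.0, sigma_a, n_draws)
b = b_true + rng.normal(0.0, sigma_b, n_draws)
f = a / b

print("sampling distribution of f = a/b: mean %.4f, std %.4f" % (f.mean(), f.std()))

# First-order error-propagation estimate, for comparison with the simulation.
std_approx = (a_true / b_true) * np.hypot(sigma_a / a_true, sigma_b / b_true)
print("first-order propagation estimate of std: %.4f" % std_approx)
```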

  17. Stochastic analysis of uncertain thermal parameters for random thermal regime of frozen soil around a single freezing pipe

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Zhou, Guoqing; Wang, Jianzhou; Zhou, Lei

    2018-03-01

The artificial ground freezing method (AGF) is widely used in civil and mining engineering, and the thermal regime of the frozen soil around the freezing pipe affects the safety of design and construction. The thermal parameters can be truly random due to the heterogeneity of soil properties, which leads to randomness in the thermal regime of the frozen soil around the freezing pipe. The purpose of this paper is to study the one-dimensional (1D) random thermal regime problem on the basis of a stochastic analysis model and the Monte Carlo (MC) method. Treating the uncertain thermal parameters of frozen soil as random variables, stochastic processes and random fields, the corresponding stochastic thermal regimes of the frozen soil around a single freezing pipe are obtained and analyzed. Taking the variability of each stochastic parameter into account individually, the influence of each stochastic thermal parameter on the stochastic thermal regime is investigated. The results show that the mean temperatures of the frozen soil around the single freezing pipe computed with the three analogous methods are the same, while the standard deviations are different. The distributions of the standard deviation differ greatly at different radial coordinate locations, and the larger standard deviations occur mainly in the phase change area. The results computed with the random variable and stochastic process methods differ greatly from the measured data, while the results computed with the random field method agree well with the measured data. Each uncertain thermal parameter has a different effect on the standard deviation of the frozen soil temperature around the single freezing pipe. These results can provide a theoretical basis for the design and construction of AGF.

  18. Compiling probabilistic, bio-inspired circuits on a field programmable analog array

    PubMed Central

    Marr, Bo; Hasler, Jennifer

    2014-01-01

A field programmable analog array (FPAA) is presented as an energy and computational efficiency engine: a mixed mode processor for which functions can be compiled at significantly lower energy cost using probabilistic computing circuits. More specifically, it is shown that the core computation of any dynamical system can be performed on the FPAA at significantly less energy per operation than in a digital implementation. A stochastic system that is dynamically controllable via voltage-controlled amplifier and comparator thresholds is implemented, which computes Bernoulli random variables. It is then shown that, from Bernoulli variables, exponentially distributed random variables and random variables of an arbitrary distribution can be computed. The Gillespie algorithm is simulated to show the utility of this system by calculating the trajectory of a biological system stochastically with this probabilistic hardware, where over a 127X performance improvement over current software approaches is shown. The relevance of this approach extends to any dynamical system. The initial circuits and ideas for this work were generated at the 2008 Telluride Neuromorphic Workshop. PMID:24847199
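
    A software illustration (not the FPAA hardware) of one standard route from Bernoulli variables to exponentially distributed ones: count Bernoulli(p) trials until the first success and scale by the time step, so that as p becomes small the waiting time approaches an exponential with rate p/dt. The rate and time step below are arbitrary choices.

```python
import numpy as np

# Software illustration (not the FPAA hardware): exponential variates from a
# stream of Bernoulli variables. Count Bernoulli(p) trials to the first success
# (a geometric variable) and scale by the time step; as p -> 0 with lam = p/dt
# fixed, the waiting time converges to Exp(lam). Rate and step are assumptions.

rng = np.random.default_rng(2)

lam = 3.0               # target exponential rate (assumed)
dt = 1e-3               # time step, so p = lam * dt is small
p = lam * dt
n = 100_000

trials = rng.geometric(p, size=n)       # number of Bernoulli trials up to the first success
waiting_times = trials * dt

print("empirical mean %.4f vs 1/lam = %.4f" % (waiting_times.mean(), 1.0 / lam))
print("empirical std  %.4f vs 1/lam = %.4f" % (waiting_times.std(), 1.0 / lam))
```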

  19. Can adverse maternal and perinatal outcomes be predicted when blood pressure becomes elevated? Secondary analyses from the CHIPS (Control of Hypertension In Pregnancy Study) randomized controlled trial.

    PubMed

    Magee, Laura A; von Dadelszen, Peter; Singer, Joel; Lee, Terry; Rey, Evelyne; Ross, Susan; Asztalos, Elizabeth; Murphy, Kellie E; Menzies, Jennifer; Sanchez, Johanna; Gafni, Amiram; Gruslin, Andrée; Helewa, Michael; Hutton, Eileen; Lee, Shoo K; Logan, Alexander G; Ganzevoort, Wessel; Welch, Ross; Thornton, Jim G; Moutquin, Jean Marie

    2016-07-01

    For women with chronic or gestational hypertension in CHIPS (Control of Hypertension In Pregnancy Study, NCT01192412), we aimed to examine whether clinical predictors collected at randomization could predict adverse outcomes. This was a planned, secondary analysis of data from the 987 women in the CHIPS Trial. Logistic regression was used to examine the impact of 19 candidate predictors on the probability of adverse perinatal (pregnancy loss or high level neonatal care for >48 h, or birthweight <10th percentile) or maternal outcomes (severe hypertension, preeclampsia, or delivery at <34 or <37 weeks). A model containing all candidate predictors was used to start the stepwise regression process based on goodness of fit as measured by the Akaike information criterion. For face validity, these variables were forced into the model: treatment group ("less tight" or "tight" control), antihypertensive type at randomization, and blood pressure within 1 week before randomization. Continuous variables were represented continuously or dichotomized based on the smaller p-value in univariate analyses. An area-under-the-receiver-operating-curve (AUC ROC) of ≥0.70 was taken to reflect a potentially useful model. Point estimates for AUC ROC were <0.70 for all but severe hypertension (0.70, 95% CI 0.67-0.74) and delivery at <34 weeks (0.71, 95% CI 0.66-0.75). Therefore, no model warranted further assessment of performance. CHIPS data suggest that when women with chronic hypertension develop an elevated blood pressure in pregnancy, or formerly normotensive women develop new gestational hypertension, maternal and current pregnancy clinical characteristics cannot predict adverse outcomes in the index pregnancy. © 2016 The Authors. Acta Obstetricia et Gynecologica Scandinavica published by John Wiley & Sons Ltd on behalf of Nordic Federation of Societies of Obstetrics and Gynecology (NFOG).
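
    A synthetic sketch (not the CHIPS data or the authors' stepwise procedure) of the kind of screening described above: fit a logistic model for a binary adverse outcome from candidate predictors and judge it against the AUC ROC of at least 0.70 usefulness threshold. Sample size and predictor count are borrowed from the abstract; the data-generating process is invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic illustration (not CHIPS data, not the authors' stepwise procedure):
# fit a logistic model for a binary adverse outcome from candidate predictors
# and judge it against the AUC ROC >= 0.70 usefulness threshold.

rng = np.random.default_rng(3)
n, n_predictors = 987, 19                    # sizes borrowed from the abstract
X = rng.normal(size=(n, n_predictors))
# Assumed weak signal, so the achievable AUC is modest.
score = 0.3 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(scale=1.0, size=n)
y = (score > np.quantile(score, 0.7)).astype(int)    # roughly 30% adverse outcomes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print("AUC ROC = %.2f -> %s" % (auc, "potentially useful" if auc >= 0.70 else "not useful"))
```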

  20. Evaluation of Electromyographic Biofeedback for the Quadriceps Femoris: A Systematic Review

    PubMed Central

    Wasielewski, Noah J.; Parker, Tonya M.; Kotsko, Kevin M.

    2011-01-01

    Objective: To critically review evidence for the effectiveness of electromyographic biofeedback (EMGB) of the quadriceps femoris muscle in treating various knee conditions. Data Sources: Databases used to locate randomized controlled trials included PubMed (1980–2010), Cumulative Index of Nursing and Allied Health Literature (CINAHL, 1995–2007), Web of Science (1986–2010), SPORTDiscus (1990–2007), and Physiotherapy Evidence Database (PEDro). Key words were knee and biofeedback. Study Selection: The criteria for selection were clinical randomized controlled trials in which EMGB of the quadriceps femoris was used for various knee conditions of musculoskeletal origin. Trials were excluded because of research designs other than randomized controlled trials, articles published in a non-English language, inclusion of healthy research participants, inability to identify EMGB as the source of clinical improvement, and lack of pain, functional outcome, or quadriceps torque as outcome measures. Data Extraction: Twenty specific data points were abstracted from each clinical trial under the broad categories of attributes of the patient and injury, treatment variables for the EMGB group, treatment variables for the control group, and attributes of the research design. Data Synthesis: Eight trials yielded a total of 319 participants with patellofemoral pain syndrome (n = 86), anterior cruciate ligament reconstruction (n = 52), arthroscopic surgery (n = 91), or osteoarthritis (n = 90). The average methodologic score of the included studies was 4.6/10 based on PEDro criteria. Pooled analyses demonstrated heterogeneity of the included studies, rendering the interpretation of the pooled data inappropriate. The EMGB appeared to benefit short-term postsurgical pain or quadriceps strength in 3 of 4 postsurgical investigations but was ineffective for chronic knee conditions such as patellofemoral pain and osteoarthritis in all 4 studies. Because the findings are based on limited data, caution is warranted until more randomized controlled trials are conducted to support or refute the general trends observed in this report. PMID:22488142

  1. SETI and SEH (Statistical Equation for Habitables)

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-01-01

The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book "Habitable planets for man" (1964). In this paper, we first provide the statistical generalization of the original and by now too simplistic Dole equation. In other words, a product of ten positive numbers is now turned into the product of ten positive random variables. This we call the SEH, an acronym standing for "Statistical Equation for Habitables". The mathematical structure of the SEH is then derived. The proof is based on the central limit theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that the new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the lognormal distribution. By construction, the mean value of this lognormal distribution is the total number of habitable planets as given by the statistical Dole equation. But now we also derive the standard deviation, the mode, the median and all the moments of this new lognormal NHab random variable. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (which both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. An application of our SEH then follows. The (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cubic root of NHab. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies in 2008. Data Enrichment Principle. It should be noticed that ANY positive number of random variables in the SEH is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as more refined scientific knowledge about each factor becomes available. This capability to make room for more future factors in the SEH we call the "Data Enrichment Principle", and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. A practical example is then given of how our SEH works numerically. We work out in detail the case where each of the ten random variables is uniformly distributed around its own mean value as given by Dole back in 1964 and has an assumed standard deviation of 10%. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million±200 million, and the average distance between any two nearby habitable planets should be about 88 light years±40 light years. Finally, we match our SEH results against the results of the Statistical Drake Equation that we introduced in our 2008 IAC presentation.
As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization for every 10,000 habitable planets). And the average distance between any two nearby habitable planets turns out to be much smaller than the average distance between any two neighboring ET civilizations: 88 light years vs. 2000 light years, respectively. This means the average distance between ET civilizations is about 20 times larger than the average distance between any two adjacent habitable planets.
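
    A quick numerical check of the CLT-on-logarithms argument that underlies the SEH: the product of several positive, independent random variables is approximately lognormal because the sum of their logarithms is approximately normal. The ten factor means below are placeholders, not Dole's values; each factor is uniform within a 10% spread around its mean, loosely mirroring the worked example in the abstract.

```python
import numpy as np
from scipy import stats

# Numerical check of the SEH's key step: a product of positive, independent
# random variables is approximately lognormal because the sum of their logs is
# approximately normal (CLT). The ten means below are placeholders, not Dole's
# values; each factor is uniform within +/-10% of its mean, loosely mirroring
# the worked example in the abstract.

rng = np.random.default_rng(4)

means = np.array([0.6, 1.2, 0.3, 2.0, 0.8, 1.5, 0.4, 0.9, 1.1, 0.7])
n_draws = 200_000
factors = rng.uniform(means * 0.9, means * 1.1, size=(n_draws, means.size))
product = factors.prod(axis=1)

# If the product is lognormal, its logarithm should look Gaussian.
log_prod = np.log(product)
print("skewness of log-product (near 0 if lognormal): %.4f" % stats.skew(log_prod))
print("excess kurtosis of log-product (near 0 if lognormal): %.4f" % stats.kurtosis(log_prod))
```

    The approximation improves as more factors are multiplied in, which is the point of the Data Enrichment Principle.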

  2. Honest Importance Sampling with Multiple Markov Chains

    PubMed Central

    Tan, Aixin; Doss, Hani; Hobert, James P.

    2017-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855

  3. Honest Importance Sampling with Multiple Markov Chains.

    PubMed

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.
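
    A minimal iid sketch of the classical importance sampling estimator and its CLT-based standard error; the paper's actual contribution (regenerative standard errors for estimators built from multiple Markov chains) is not reproduced here. Target and proposal densities are illustrative choices with a known answer for checking.

```python
import numpy as np
from scipy import stats

# Minimal iid sketch of the classical importance sampling estimator and its
# CLT-based standard error (the regenerative, multiple-chain machinery of the
# paper is not reproduced). Target pi is N(0, 1); proposal pi1 is the
# heavier-tailed N(0, 2^2), so the weight moment conditions hold. The estimand
# E_pi[X^2] = 1 is known, which lets us check the answer.

rng = np.random.default_rng(5)
n = 100_000

x = rng.normal(0.0, 2.0, n)                                     # draws from pi1
w = stats.norm.pdf(x, 0.0, 1.0) / stats.norm.pdf(x, 0.0, 2.0)   # weights pi / pi1
g = (x ** 2) * w

estimate = g.mean()
std_err = g.std(ddof=1) / np.sqrt(n)                            # consistent SE from the CLT
print("E_pi[X^2] ~ %.4f +/- %.4f (true value 1)" % (estimate, 1.96 * std_err))
```

    Using a heavier-tailed proposal than the target is what keeps the weight moments finite, the same kind of moment condition the abstract refers to.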

  4. Design and implementation of a dental caries prevention trial in remote Canadian Aboriginal communities.

    PubMed

    Harrison, Rosamund; Veronneau, Jacques; Leroux, Brian

    2010-05-13

    The goal of this cluster randomized trial is to test the effectiveness of a counseling approach, Motivational Interviewing, to control dental caries in young Aboriginal children. Motivational Interviewing, a client-centred, directive counseling style, has not yet been evaluated as an approach for promotion of behaviour change in indigenous communities in remote settings. Aboriginal women were hired from the 9 communities to recruit expectant and new mothers to the trial, administer questionnaires and deliver the counseling to mothers in the test communities. The goal is for mothers to receive the intervention during pregnancy and at their child's immunization visits. Data on children's dental health status and family dental health practices will be collected when children are 30-months of age. The communities were randomly allocated to test or control group by a random "draw" over community radio. Sample size and power were determined based on an anticipated 20% reduction in caries prevalence. Randomization checks were conducted between groups. In the 5 test and 4 control communities, 272 of the original target sample size of 309 mothers have been recruited over a two-and-a-half year period. A power calculation using the actual attained sample size showed power to be 79% to detect a treatment effect. If an attrition fraction of 4% per year is maintained, power will remain at 80%. Power will still be > 90% to detect a 25% reduction in caries prevalence. The distribution of most baseline variables was similar for the two randomized groups of mothers. However, despite the random assignment of communities to treatment conditions, group differences exist for stage of pregnancy and prior tooth extractions in the family. Because of the group imbalances on certain variables, control of baseline variables will be done in the analyses of treatment effects. This paper explains the challenges of conducting randomized trials in remote settings, the importance of thorough community collaboration, and also illustrates the likelihood that some baseline variables that may be clinically important will be unevenly split in group-randomized trials when the number of groups is small. This trial is registered as ISRCTN41467632.

  5. Design and implementation of a dental caries prevention trial in remote Canadian Aboriginal communities

    PubMed Central

    2010-01-01

    Background The goal of this cluster randomized trial is to test the effectiveness of a counseling approach, Motivational Interviewing, to control dental caries in young Aboriginal children. Motivational Interviewing, a client-centred, directive counseling style, has not yet been evaluated as an approach for promotion of behaviour change in indigenous communities in remote settings. Methods/design Aboriginal women were hired from the 9 communities to recruit expectant and new mothers to the trial, administer questionnaires and deliver the counseling to mothers in the test communities. The goal is for mothers to receive the intervention during pregnancy and at their child's immunization visits. Data on children's dental health status and family dental health practices will be collected when children are 30-months of age. The communities were randomly allocated to test or control group by a random "draw" over community radio. Sample size and power were determined based on an anticipated 20% reduction in caries prevalence. Randomization checks were conducted between groups. Discussion In the 5 test and 4 control communities, 272 of the original target sample size of 309 mothers have been recruited over a two-and-a-half year period. A power calculation using the actual attained sample size showed power to be 79% to detect a treatment effect. If an attrition fraction of 4% per year is maintained, power will remain at 80%. Power will still be > 90% to detect a 25% reduction in caries prevalence. The distribution of most baseline variables was similar for the two randomized groups of mothers. However, despite the random assignment of communities to treatment conditions, group differences exist for stage of pregnancy and prior tooth extractions in the family. Because of the group imbalances on certain variables, control of baseline variables will be done in the analyses of treatment effects. This paper explains the challenges of conducting randomized trials in remote settings, the importance of thorough community collaboration, and also illustrates the likelihood that some baseline variables that may be clinically important will be unevenly split in group-randomized trials when the number of groups is small. Trial registration This trial is registered as ISRCTN41467632. PMID:20465831

  6. Variable training does not lead to better motor learning compared to repetitive training in children with and without DCD when exposed to active video games.

    PubMed

    Bonney, Emmanuel; Jelsma, Dorothee; Ferguson, Gillian; Smits-Engelsman, Bouwien

    2017-03-01

    Little is known about the influence of practice schedules on motor learning and skills transfer in children with and without developmental coordination disorder (DCD). Understanding how practice schedules affect motor learning is necessary for motor skills development and rehabilitation. The study investigated whether active video games (exergames) training delivered under variable practice led to better learning and transfer than repetitive practice. 111 children aged 6-10 years (M=8.0, SD=1.0) with no active exergaming experience were randomized to receive exergames training delivered under variable (Variable Game Group (VGG), n=56) or repetitive practice schedule (Repetitive Game Group (RGG), n=55). Half the participants were identified as DCD using the DSM-5 criteria, while the rest were typically developing (TD), age-matched children. Both groups participated in two 20min sessions per week for 5 weeks. Both participant groups (TD and DCD) improved equally well on game performance. There was no significant difference in positive transfer to balance tasks between practice schedules (Repetitive and Variable) and participant groups (TD and DCD). Children with and without DCD learn balance skills quite well when exposed to exergames. Gains in learning and transfer are similar regardless of the form of practice schedule employed. This is the first paper to compare the effect of practice schedules on learning in children with DCD and those with typical development. No differences in motor learning were found between repetitive and variable practice schedules. When children with and without DCD spend the same amount of time on exergames, they do not show any differences in acquisition of motor skills. Transfer of motor skills is similar in children with and without DCD regardless of differences in practice schedules. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Random trinomial tree models and vanilla options

    NASA Astrophysics Data System (ADS)

    Ganikhodjaev, Nasir; Bayram, Kamola

    2013-09-01

In this paper we introduce and study the random trinomial model. The usual trinomial model is prescribed by a triple of numbers (u, d, m); we call the triple (u, d, m) the environment of the trinomial model. A triple (Un, Dn, Mn), where {Un}, {Dn} and {Mn} are sequences of independent, identically distributed random variables with 0 < Dn < 1 < Un and Mn = 1 for all n, is called a random environment, and a trinomial tree model with a random environment is called a random trinomial model. The random trinomial model is considered to produce more accurate results than the random binomial model or the usual trinomial model.
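
    A sketch of a random trinomial tree under the stated assumptions: at each step the environment (Un, Dn, Mn) is redrawn with Un and Dn i.i.d., 0 < Dn < 1 < Un and Mn = 1, and the price moves by factor Un, 1, or Dn. The branch probabilities and the uniform laws for Un and Dn are invented for illustration; the payoff average is a plain Monte Carlo statistic, not a calibrated risk-neutral price.

```python
import numpy as np

# Random trinomial tree sketch: at every step a fresh environment (U_n, D_n, M_n)
# is drawn with U_n, D_n i.i.d., 0 < D_n < 1 < U_n and M_n = 1, and the price
# moves by factor U_n, 1 or D_n. Branch probabilities and the uniform laws for
# U_n and D_n are illustrative assumptions; the payoff average below is a plain
# Monte Carlo statistic, not a calibrated risk-neutral price.

rng = np.random.default_rng(6)

s0, strike = 100.0, 100.0
n_steps, n_paths = 50, 100_000
p_up, p_mid, p_down = 0.3, 0.4, 0.3                    # assumed branch probabilities

u = rng.uniform(1.01, 1.05, size=(n_paths, n_steps))   # random up factors, U_n > 1
d = rng.uniform(0.95, 0.99, size=(n_paths, n_steps))   # random down factors, D_n < 1
branch = rng.choice([0, 1, 2], size=(n_paths, n_steps), p=[p_up, p_mid, p_down])

factors = np.where(branch == 0, u, np.where(branch == 2, d, 1.0))
terminal = s0 * factors.prod(axis=1)
payoff = np.maximum(terminal - strike, 0.0)

print("terminal price: mean %.2f; vanilla call payoff: mean %.2f"
      % (terminal.mean(), payoff.mean()))
```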

  8. Characterizing ISI and sub-threshold membrane potential distributions: Ensemble of IF neurons with random squared-noise intensity.

    PubMed

    Kumar, Sanjeev; Karmeshu

    2018-04-01

A theoretical investigation is presented that characterizes the emerging sub-threshold membrane potential and inter-spike interval (ISI) distributions of an ensemble of IF neurons that group together and fire together. The squared-noise intensity σ² of the ensemble of neurons is treated as a random variable to account for the electrophysiological variations across a population of nearly identical neurons. Employing a superstatistical framework, both the ISI distribution and the sub-threshold membrane potential distribution of the neuronal ensemble are obtained in terms of a generalized K-distribution. The resulting distributions exhibit asymptotic behavior akin to the stretched exponential family. Extensive simulations of the underlying SDE with random σ² are carried out. The results are found to be in excellent agreement with the analytical results. The analysis has been extended to cover the case corresponding to independent random fluctuations in drift in addition to random squared-noise intensity. The novelty of the proposed analytical investigation for the ensemble of IF neurons is that it yields closed-form expressions for the probability distributions in terms of a generalized K-distribution. Based on a record of the spiking activity of thousands of neurons, the findings of the proposed model are validated. The squared-noise intensity σ² of identified neurons from the data is found to follow a gamma distribution. The proposed generalized K-distribution is found to be in excellent agreement with the empirically obtained ISI distribution of the neuronal ensemble. Copyright © 2018 Elsevier B.V. All rights reserved.
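
    A superstatistical simulation sketch of the setup described above, with invented parameters: a leaky integrate-and-fire neuron whose squared-noise intensity σ² is drawn from a gamma distribution on each trial, so that the pooled ISIs mix the conditional ISI law over the gamma weighting (the mechanism the paper captures analytically with a generalized K-distribution).

```python
import numpy as np

# Superstatistical simulation sketch with invented parameters: a leaky
# integrate-and-fire neuron whose squared-noise intensity sigma^2 is drawn from
# a gamma distribution on every trial, so the pooled ISIs mix the conditional
# ISI law over the gamma weighting.

rng = np.random.default_rng(7)

tau, mu = 20e-3, 0.8            # membrane time constant (s) and drift target (assumed)
v_th, v_reset = 1.0, 0.0        # threshold and reset (assumed)
dt, t_max, n_trials = 1e-4, 2.0, 5000

sigma = np.sqrt(rng.gamma(shape=3.0, scale=0.05, size=n_trials))   # random noise intensity

v = np.full(n_trials, v_reset)
isi = np.full(n_trials, np.nan)
active = np.ones(n_trials, dtype=bool)

for step in range(1, int(t_max / dt) + 1):
    idx = np.flatnonzero(active)
    if idx.size == 0:
        break
    dW = rng.normal(size=idx.size) * np.sqrt(dt / tau)
    v[idx] += ((mu - v[idx]) / tau) * dt + sigma[idx] * dW
    fired = idx[v[idx] >= v_th]
    isi[fired] = step * dt
    active[fired] = False

isis = isi[~np.isnan(isi)]
print("spikes: %d / %d trials; ISI mean %.3f s; ISI CV %.2f"
      % (isis.size, n_trials, isis.mean(), isis.std() / isis.mean()))
```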

  9. Simultaneous escaping of explicit and hidden free energy barriers: application of the orthogonal space random walk strategy in generalized ensemble based conformational sampling.

    PubMed

    Zheng, Lianqing; Chen, Mengen; Yang, Wei

    2009-06-21

    To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures, etc. As usually observed, in generalized ensemble simulations, hidden barriers are likely to exist in the space perpendicular to the collective variable direction and these residual free energy barriers could greatly abolish the sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimension subset of the target system; then the "Hamiltonian lagging" problem, which reveals the fact that necessary structural relaxation falls behind the move of the collective variable, may be likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments.

  10. Effectiveness of a primary care-based intervention to reduce sitting time in overweight and obese patients (SEDESTACTIV): a randomized controlled trial; rationale and study design

    PubMed Central

    2014-01-01

Background There is growing evidence suggesting that prolonged sitting has negative effects on people’s weight, chronic diseases and mortality. Interventions to reduce sedentary time can be an effective strategy to increase daily energy expenditure. The purpose of this study is to evaluate the effectiveness of a six-month primary care intervention to reduce daily sitting time in overweight and mildly obese sedentary patients. Method/Design The study is a randomized controlled trial (RCT). Professionals from thirteen primary health care centers (PHC) will randomly invite mildly obese or overweight patients of both genders, aged between 25 and 65 years, who spend at least 6 hours a day sitting, to participate. A total of 232 subjects will be randomly allocated to an intervention (IG) and control group (CG) (116 individuals in each group). In addition, 50 subjects with fibromyalgia will be included. The primary outcome is: (1) sitting time using the activPAL device and the Marshall questionnaire. The following parameters will also be assessed: (2) sitting time in the workplace (Occupational Sitting and Physical Activity Questionnaire), (3) health-related quality of life (EQ-5D), (4) evolution of stage of change (Prochaska and DiClemente's Stages of Change Model), (5) physical inactivity (Catalan version of the Brief Physical Activity Assessment Tool), (6) number of steps walked (pedometer and activPAL), (7) control based on blood analysis (triglycerides, total cholesterol, HDL, LDL, glycemia and glycated haemoglobin in diabetic patients) and (8) blood pressure and anthropometric variables. All parameters will be assessed pre and post intervention, and there will be a follow-up three, six and twelve months after the intervention. A descriptive analysis of all variables and a multivariate analysis to assess differences among groups will be undertaken. Multivariate analysis will also be carried out to assess time changes of dependent variables. All analyses will be done under the intention-to-treat principle. Discussion If the SEDESTACTIV intervention shows its effectiveness in reducing sitting time, health professionals would have a low-cost intervention tool for the management of sedentary overweight and obese patients. Trial registration ClinicalTrials.gov NCT01729936 PMID:24597534

  11. Effectiveness of a primary care-based intervention to reduce sitting time in overweight and obese patients (SEDESTACTIV): a randomized controlled trial; rationale and study design.

    PubMed

    Martín-Borràs, Carme; Giné-Garriga, Maria; Martínez, Elena; Martín-Cantera, Carlos; Puigdoménech, Elisa; Solà, Mercè; Castillo, Eva; Beltrán, Angela Ma; Puig-Ribera, Anna; Trujillo, José Manuel; Pueyo, Olga; Pueyo, Javier; Rodríguez, Beatriz; Serra-Paya, Noemí

    2014-03-05

There is growing evidence suggesting that prolonged sitting has negative effects on people's weight, chronic diseases and mortality. Interventions to reduce sedentary time can be an effective strategy to increase daily energy expenditure. The purpose of this study is to evaluate the effectiveness of a six-month primary care intervention to reduce daily sitting time in overweight and mildly obese sedentary patients. The study is a randomized controlled trial (RCT). Professionals from thirteen primary health care centers (PHC) will randomly invite mildly obese or overweight patients of both genders, aged between 25 and 65 years, who spend at least 6 hours a day sitting, to participate. A total of 232 subjects will be randomly allocated to an intervention (IG) and control group (CG) (116 individuals in each group). In addition, 50 subjects with fibromyalgia will be included. The primary outcome is: (1) sitting time using the activPAL device and the Marshall questionnaire. The following parameters will also be assessed: (2) sitting time in the workplace (Occupational Sitting and Physical Activity Questionnaire), (3) health-related quality of life (EQ-5D), (4) evolution of stage of change (Prochaska and DiClemente's Stages of Change Model), (5) physical inactivity (Catalan version of the Brief Physical Activity Assessment Tool), (6) number of steps walked (pedometer and activPAL), (7) control based on blood analysis (triglycerides, total cholesterol, HDL, LDL, glycemia and glycated haemoglobin in diabetic patients) and (8) blood pressure and anthropometric variables. All parameters will be assessed pre and post intervention, and there will be a follow-up three, six and twelve months after the intervention. A descriptive analysis of all variables and a multivariate analysis to assess differences among groups will be undertaken. Multivariate analysis will also be carried out to assess time changes of dependent variables. All analyses will be done under the intention-to-treat principle. If the SEDESTACTIV intervention shows its effectiveness in reducing sitting time, health professionals would have a low-cost intervention tool for the management of sedentary overweight and obese patients. ClinicalTrials.gov NCT01729936.

  12. Branching random walk with step size coming from a power law

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Ayan; Subhra Hazra, Rajat; Roy, Parthanil

    2015-09-01

In their seminal work, Brunet and Derrida made predictions on the random point configurations associated with branching random walks. We shall discuss the limiting behavior of such point configurations when the displacement random variables come from a power law. In particular, we establish that two of their predictions remain valid in this setup and investigate various other issues mentioned in their paper.

  13. Effect of study design on the reported effect of cardiac resynchronization therapy (CRT) on quantitative physiological measures: stratified meta-analysis in narrow-QRS heart failure and implications for planning future studies.

    PubMed

    Jabbour, Richard J; Shun-Shin, Matthew J; Finegold, Judith A; Afzal Sohaib, S M; Cook, Christopher; Nijjer, Sukhjinder S; Whinnett, Zachary I; Manisty, Charlotte H; Brugada, Josep; Francis, Darrel P

    2015-01-06

    Biventricular pacing (CRT) shows clear benefits in heart failure with wide QRS, but results in narrow QRS have appeared conflicting. We tested the hypothesis that study design might have influenced findings. We identified all reports of CRT-P/D therapy in subjects with narrow QRS reporting effects on continuous physiological variables. Twelve studies (2074 patients) met these criteria. Studies were stratified by presence of bias-resistance steps: the presence of a randomized control arm over a single arm, and blinded outcome measurement. Change in each endpoint was quantified using a standardized effect size (Cohen's d). We conducted separate meta-analyses for each variable in turn, stratified by trial quality. In non-randomized, non-blinded studies, the majority of variables (10 of 12, 83%) showed significant improvement, ranging from a standardized mean effect size of +1.57 (95%CI +0.43 to +2.7) for ejection fraction to +2.87 (+1.78 to +3.95) for NYHA class. In the randomized, non-blinded study, only 3 out of 6 variables (50%) showed improvement. For the randomized blinded studies, 0 out of 9 variables (0%) showed benefit, ranging from -0.04 (-0.31 to +0.22) for ejection fraction to -0.1 (-0.73 to +0.53) for 6-minute walk test. Differences in degrees of resistance to bias, rather than choice of endpoint, explain the variation between studies of CRT in narrow-QRS heart failure addressing physiological variables. When bias-resistance features are implemented, it becomes clear that these patients do not improve in any tested physiological variable. Guidance from studies without careful planning to resist bias may be far less useful than commonly perceived. © 2015 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
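
    For reference, the standardized mean effect size used to pool heterogeneous physiological endpoints is Cohen's d; the helper below shows the textbook pooled-SD, two-group form on synthetic numbers. The meta-analysis may have computed d from change scores or reported summary statistics, so treat this purely as an illustration.

```python
import numpy as np

# Textbook pooled-SD form of Cohen's d, the standardized mean effect size used
# to put heterogeneous endpoints on one scale; the meta-analysis may have used
# change scores or reported summaries, so this is purely illustrative.

def cohens_d(group_a, group_b):
    """Standardized mean difference (a - b) / pooled standard deviation."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = a.size, b.size
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(8)
after = rng.normal(58, 10, 40)     # hypothetical ejection fraction after treatment
before = rng.normal(55, 10, 40)    # and at baseline (synthetic numbers)
print("Cohen's d = %.2f" % cohens_d(after, before))
```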

  14. Resistance controllability and variability improvement in a TaO{sub x}-based resistive memory for multilevel storage application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prakash, A., E-mail: amitknp@postech.ac.kr, E-mail: amit.knp02@gmail.com, E-mail: hwanghs@postech.ac.kr; Song, J.; Hwang, H., E-mail: amitknp@postech.ac.kr, E-mail: amit.knp02@gmail.com, E-mail: hwanghs@postech.ac.kr

In order to obtain reliable multilevel cell (MLC) characteristics, resistance controllability between the different resistance levels is required especially in resistive random access memory (RRAM), which is prone to resistance variability mainly due to its intrinsic random nature of defect generation and filament formation. In this study, we have thoroughly investigated the multilevel resistance variability in a TaO{sub x}-based nanoscale (<30 nm) RRAM operated in MLC mode. It is found that the resistance variability not only depends on the conductive filament size but also is a strong function of oxygen vacancy concentration in it. Based on the gained insights through experimental observations and simulation, it is suggested that forming thinner but denser conductive filament may greatly improve the temporal resistance variability even at low operation current despite the inherent stochastic nature of resistance switching process.

  15. Instrument Selection for Randomized Controlled Trials Why This and Not That?

    PubMed Central

    Records, Kathie; Keller, Colleen; Ainsworth, Barbara; Permana, Paska

    2011-01-01

    A fundamental linchpin for obtaining rigorous findings in quantitative research involves the selection of survey instruments. Psychometric recommendations are available for the processes for scale development and testing and guidance for selection of established scales. These processes are necessary to address the validity link between the phenomena under investigation, the empirical measures and, ultimately, the theoretical ties between these and the world views of the participants. Detailed information is most often provided about study design and protocols, but far less frequently is a detailed theoretical explanation provided for why specific instruments are chosen. Guidance to inform choices is often difficult to find when scales are needed for specific cultural, ethnic, or racial groups. This paper details the rationale underlying instrument selection for measurement of the major processes (intervention, mediator and moderator variables, outcome variables) in an ongoing study of postpartum Latinas, Madres para la Salud [Mothers for Health]. The rationale underpinning our choices includes a discussion of alternatives, when appropriate. These exemplars may provide direction for other intervention researchers who are working with specific cultural, racial, or ethnic groups or for other investigators who are seeking to select the ‘best’ instrument. Thoughtful consideration of measurement and articulation of the rationale underlying our choices facilitates the maintenance of rigor within the study design and improves our ability to assess study outcomes. PMID:21986392

  16. Quasi-experimental study designs series-paper 7: assessing the assumptions.

    PubMed

    Bärnighausen, Till; Oldenburg, Catherine; Tugwell, Peter; Bommer, Christian; Ebert, Cara; Barreto, Mauricio; Djimeu, Eric; Haber, Noah; Waddington, Hugh; Rockers, Peter; Sianesi, Barbara; Bor, Jacob; Fink, Günther; Valentine, Jeffrey; Tanner, Jeffrey; Stanley, Tom; Sierra, Eduardo; Tchetgen, Eric Tchetgen; Atun, Rifat; Vollmer, Sebastian

    2017-09-01

Quasi-experimental designs are gaining popularity in epidemiology and health systems research, in particular for the evaluation of health care practice, programs, and policy, because they allow strong causal inferences without randomized controlled experiments. We describe the concepts underlying five important quasi-experimental designs: Instrumental Variables, Regression Discontinuity, Interrupted Time Series, Fixed Effects, and Difference-in-Differences designs. We illustrate each of the designs with an example from health research. We then describe the assumptions required for each of the designs to ensure valid causal inference and discuss the tests available to examine the assumptions. Copyright © 2017 Elsevier Inc. All rights reserved.
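
    An illustration of one of the five designs, Difference-in-Differences, on synthetic data: the treatment effect is the coefficient on the treated-by-post interaction, which is valid only under the parallel-trends assumption the paper discusses. The data and the true effect size (2.0) are made up.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Difference-in-Differences on synthetic data: the causal effect is the
# coefficient on the treated-by-post interaction, valid only under the
# parallel-trends assumption. The true effect is set to 2.0 here.

rng = np.random.default_rng(9)
n = 2000
treated = rng.integers(0, 2, n)          # treated group indicator
post = rng.integers(0, 2, n)             # post-intervention period indicator
y = 10 + 1.5 * treated + 0.8 * post + 2.0 * treated * post + rng.normal(0.0, 1.0, n)

df = pd.DataFrame({"y": y, "treated": treated, "post": post})
res = smf.ols("y ~ treated * post", data=df).fit()
print("DiD estimate (treated:post): %.3f" % res.params["treated:post"])   # ~ 2.0
```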

  17. Survey mode matters: adults' self-reported statistical confidence, ability to obtain health information, and perceptions of patient-health-care provider communication.

    PubMed

    Wallace, Lorraine S; Chisolm, Deena J; Abdel-Rasoul, Mahmoud; DeVoe, Jennifer E

    2013-08-01

This study examined adults' self-reported understanding and formatting preferences of medical statistics, confidence in self-care and ability to obtain health advice or information, and perceptions of patient-health-care provider communication measured through dual survey modes (random-digit dial and mail). Even while controlling for sociodemographic characteristics, significant differences in adults' responses to survey variables emerged as a function of survey mode. While the analyses do not allow us to pinpoint the underlying causes of the differences observed, they do suggest that mode of administration should be carefully adjusted for and considered.

  18. Probabilistic evaluation of SSME structural components

    NASA Astrophysics Data System (ADS)

    Rajagopal, K. R.; Newell, J. F.; Ho, H.

    1991-05-01

    The application is described of Composite Load Spectra (CLS) and Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) family of computer codes to the probabilistic structural analysis of four Space Shuttle Main Engine (SSME) space propulsion system components. These components are subjected to environments that are influenced by many random variables. The applications consider a wide breadth of uncertainties encountered in practice, while simultaneously covering a wide area of structural mechanics. This has been done consistent with the primary design requirement for each component. The probabilistic application studies are discussed using finite element models that have been typically used in the past in deterministic analysis studies.

  19. Kernel-Correlated Levy Field Driven Forward Rate and Application to Derivative Pricing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bo Lijun; Wang Yongjin; Yang Xuewei, E-mail: xwyangnk@yahoo.com.cn

    2013-08-01

    We propose a term structure of forward rates driven by a kernel-correlated Levy random field under the HJM framework. The kernel-correlated Levy random field is composed of a kernel-correlated Gaussian random field and a centered Poisson random measure. We shall give a criterion to preclude arbitrage under the risk-neutral pricing measure. As applications, an interest rate derivative with general payoff functional is priced under this pricing measure.

  20. Random Assignment of Schools to Groups in the Drug Resistance Strategies Rural Project: Some New Methodological Twists

    PubMed Central

    Pettigrew, Jonathan; Miller-Day, Michelle; Krieger, Janice L.; Zhou, Jiangxiu; Hecht, Michael L.

    2014-01-01

    Random assignment to groups is the foundation for scientifically rigorous clinical trials. But assignment is challenging in group randomized trials when only a few units (schools) are assigned to each condition. In the DRSR project, we assigned 39 rural Pennsylvania and Ohio schools to three conditions (rural, classic, control). But even with 13 schools per condition, achieving pretest equivalence on important variables is not guaranteed. We collected data on six important school-level variables: rurality, number of grades in the school, enrollment per grade, percent white, percent receiving free/assisted lunch, and test scores. Key to our procedure was the inclusion of school-level drug use data, available for a subset of the schools. Also, key was that we handled the partial data with modern missing data techniques. We chose to create one composite stratifying variable based on the seven school-level variables available. Principal components analysis with the seven variables yielded two factors, which were averaged to form the composite inflate-suppress (CIS) score which was the basis of stratification. The CIS score was broken into three strata within each state; schools were assigned at random to the three program conditions from within each stratum, within each state. Results showed that program group membership was unrelated to the CIS score, the two factors making up the CIS score, and the seven items making up the factors. Program group membership was not significantly related to pretest measures of drug use (alcohol, cigarettes, marijuana, chewing tobacco; smallest p>.15), thus verifying that pretest equivalence was achieved. PMID:23722619
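
    A sketch of the assignment procedure described above, on synthetic school data: standardize the school-level covariates, extract two principal components, average them into a CIS-like composite, cut the composite into three strata, and randomize schools to the three conditions within each stratum. Variable names and values are invented, and the trial's handling of partially missing drug-use data is not reproduced.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Sketch of the assignment procedure on synthetic school data: standardize the
# school-level covariates, extract two principal components, average them into
# a CIS-like composite, cut the composite into three strata, and randomize
# schools to the three conditions within each stratum. Names and values are
# invented; the trial's multiple-imputation handling of missing drug-use data
# is not reproduced.

rng = np.random.default_rng(10)
n_schools = 39
covariates = pd.DataFrame(
    rng.normal(size=(n_schools, 7)),
    columns=["rurality", "n_grades", "enroll_per_grade", "pct_white",
             "pct_free_lunch", "test_scores", "drug_use"],
)

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(covariates))
composite = scores.mean(axis=1)                  # average of the two factors
strata = pd.qcut(composite, 3, labels=False)     # three strata on the composite

conditions = np.array(["rural", "classic", "control"])
assignment = np.empty(n_schools, dtype=object)
for s in range(3):
    idx = rng.permutation(np.flatnonzero(strata == s))
    assignment[idx] = np.resize(conditions, idx.size)   # deal conditions out in rotation

print(pd.crosstab(strata, assignment))
```

    The cross-tabulation at the end is the kind of balance check the abstract describes: condition membership should be unrelated to the composite score and its components.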

  1. CAUSAL INFERENCE WITH A GRAPHICAL HIERARCHY OF INTERVENTIONS

    PubMed Central

    Shpitser, Ilya; Tchetgen, Eric Tchetgen

    2017-01-01

    Identifying causal parameters from observational data is fraught with subtleties due to the issues of selection bias and confounding. In addition, more complex questions of interest, such as effects of treatment on the treated and mediated effects may not always be identified even in data where treatment assignment is known and under investigator control, or may be identified under one causal model but not another. Increasingly complex effects of interest, coupled with a diversity of causal models in use resulted in a fragmented view of identification. This fragmentation makes it unnecessarily difficult to determine if a given parameter is identified (and in what model), and what assumptions must hold for this to be the case. This, in turn, complicates the development of estimation theory and sensitivity analysis procedures. In this paper, we give a unifying view of a large class of causal effects of interest, including novel effects not previously considered, in terms of a hierarchy of interventions, and show that identification theory for this large class reduces to an identification theory of random variables under interventions from this hierarchy. Moreover, we show that one type of intervention in the hierarchy is naturally associated with queries identified under the Finest Fully Randomized Causally Interpretable Structure Tree Graph (FFRCISTG) model of Robins (via the extended g-formula), and another is naturally associated with queries identified under the Non-Parametric Structural Equation Model with Independent Errors (NPSEM-IE) of Pearl, via a more general functional we call the edge g-formula. Our results motivate the study of estimation theory for the edge g-formula, since we show it arises both in mediation analysis, and in settings where treatment assignment has unobserved causes, such as models associated with Pearl’s front-door criterion. PMID:28919652

  2. CAUSAL INFERENCE WITH A GRAPHICAL HIERARCHY OF INTERVENTIONS.

    PubMed

    Shpitser, Ilya; Tchetgen, Eric Tchetgen

    2016-12-01

    Identifying causal parameters from observational data is fraught with subtleties due to the issues of selection bias and confounding. In addition, more complex questions of interest, such as effects of treatment on the treated and mediated effects may not always be identified even in data where treatment assignment is known and under investigator control, or may be identified under one causal model but not another. Increasingly complex effects of interest, coupled with a diversity of causal models in use resulted in a fragmented view of identification. This fragmentation makes it unnecessarily difficult to determine if a given parameter is identified (and in what model), and what assumptions must hold for this to be the case. This, in turn, complicates the development of estimation theory and sensitivity analysis procedures. In this paper, we give a unifying view of a large class of causal effects of interest, including novel effects not previously considered, in terms of a hierarchy of interventions, and show that identification theory for this large class reduces to an identification theory of random variables under interventions from this hierarchy. Moreover, we show that one type of intervention in the hierarchy is naturally associated with queries identified under the Finest Fully Randomized Causally Interpretable Structure Tree Graph (FFRCISTG) model of Robins (via the extended g-formula), and another is naturally associated with queries identified under the Non-Parametric Structural Equation Model with Independent Errors (NPSEM-IE) of Pearl, via a more general functional we call the edge g-formula. Our results motivate the study of estimation theory for the edge g-formula, since we show it arises both in mediation analysis, and in settings where treatment assignment has unobserved causes, such as models associated with Pearl's front-door criterion.

  3. Estimating degradation in real time and accelerated stability tests with random lot-to-lot variation: a simulation study.

    PubMed

    Magari, Robert T

    2002-03-01

The effect of different lot-to-lot variability levels on the prediction of stability is studied based on two statistical models for estimating degradation in real-time and accelerated stability tests. Lot-to-lot variability is considered as random in both models and is attributed to two sources: variability at time zero and variability of the degradation rate. Real-time stability tests are modeled as a function of time, while accelerated stability tests are modeled as a function of time and temperature. Several data sets were simulated, and a maximum likelihood approach was used for estimation. The 95% confidence intervals for the degradation rate depend on the amount of lot-to-lot variability. When lot-to-lot degradation rate variability is relatively large (CV ≥ 8%), the estimated confidence intervals do not represent the trend for individual lots. In such cases it is recommended to analyze each lot individually. Copyright 2002 Wiley-Liss, Inc. and the American Pharmaceutical Association. J Pharm Sci 91: 893-899, 2002.

  4. Pathogen-Host Associations and Predicted Range Shifts of Human Monkeypox in Response to Climate Change in Central Africa

    PubMed Central

    Thomassen, Henri A.; Fuller, Trevon; Asefi-Najafabady, Salvi; Shiplacoff, Julia A. G.; Mulembakani, Prime M.; Blumberg, Seth; Johnston, Sara C.; Kisalu, Neville K.; Kinkela, Timothée L.; Fair, Joseph N.; Wolfe, Nathan D.; Shongo, Robert L.; LeBreton, Matthew; Meyer, Hermann; Wright, Linda L.; Muyembe, Jean-Jacques; Buermann, Wolfgang; Okitolonda, Emile; Hensley, Lisa E.; Lloyd-Smith, James O.; Smith, Thomas B.; Rimoin, Anne W.

    2013-01-01

    Climate change is predicted to result in changes in the geographic ranges and local prevalence of infectious diseases, either through direct effects on the pathogen, or indirectly through range shifts in vector and reservoir species. To better understand the occurrence of monkeypox virus (MPXV), an emerging Orthopoxvirus in humans, under contemporary and future climate conditions, we used ecological niche modeling techniques in conjunction with climate and remote-sensing variables. We first created spatially explicit probability distributions of its candidate reservoir species in Africa's Congo Basin. Reservoir species distributions were subsequently used to model current and projected future distributions of human monkeypox (MPX). Results indicate that forest clearing and climate are significant driving factors of the transmission of MPX from wildlife to humans under current climate conditions. Models under contemporary climate conditions performed well, as indicated by high values for the area under the receiver operator curve (AUC), and tests on spatially randomly and non-randomly omitted test data. Future projections were made on IPCC 4th Assessment climate change scenarios for 2050 and 2080, ranging from more conservative to more aggressive, and representing the potential variation within which range shifts can be expected to occur. Future projections showed range shifts into regions where MPX has not been recorded previously. Increased suitability for MPX was predicted in eastern Democratic Republic of Congo. Models developed here are useful for identifying areas where environmental conditions may become more suitable for human MPX; targeting candidate reservoir species for future screening efforts; and prioritizing regions for future MPX surveillance efforts. PMID:23935820

  5. Predictors for Physical Activity in Adolescent Girls Using Statistical Shrinkage Techniques for Hierarchical Longitudinal Mixed Effects Models

    PubMed Central

    Grant, Edward M.; Young, Deborah Rohm; Wu, Tong Tong

    2015-01-01

We examined associations among longitudinal, multilevel variables and girls' physical activity to determine the important predictors for physical activity change at different adolescent ages. The Trial of Activity for Adolescent Girls 2 study (Maryland) contributed participants from 8th (2009) to 11th grade (2011) (n=561). Questionnaires were used to obtain demographic and psychosocial information (individual- and social-level variables); height, weight, and triceps skinfold to assess body composition; interviews and surveys for school-level data; and self-report for neighborhood-level variables. Moderate to vigorous physical activity minutes were assessed from accelerometers. A doubly regularized linear mixed effects model was used for the longitudinal multilevel data to identify the most important covariates for physical activity. Three fixed effects at the individual level and one random effect at the school level were chosen from an initial total of 66 variables, consisting of 47 fixed effects and 19 random effects variables, in addition to the time effect. Self-management strategies, perceived barriers, and social support from friends were the three selected fixed effects, and whether intramural or interscholastic programs were offered in middle school was the selected random effect. Psychosocial factors and friend support, plus a school's physical activity environment, affect adolescent girls' moderate to vigorous physical activity longitudinally. PMID:25928064

  6. Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?

    ERIC Educational Resources Information Center

    Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.

    2005-01-01

    Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…

  7. An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions

    ERIC Educational Resources Information Center

    Radhakrishnan, R.; Choudhury, Askar

    2009-01-01

    Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…
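
    A sketch of the transformation idea in the abstract: a multivariate normal vector can be written as an affine map of independent standard normal variables (here via the Cholesky factor of the covariance matrix), after which the mean and covariance follow directly; the numerical values are arbitrary.

```python
import numpy as np

# Sketch of the transformation idea: a multivariate normal vector is an affine
# map of independent standard normal variables (here via the Cholesky factor of
# the covariance), so its mean and covariance follow directly. Values arbitrary.

rng = np.random.default_rng(11)

mu = np.array([1.0, -2.0, 0.5])
cov = np.array([[2.0, 0.6, 0.3],
                [0.6, 1.0, 0.2],
                [0.3, 0.2, 1.5]])

L = np.linalg.cholesky(cov)              # cov = L @ L.T
z = rng.normal(size=(100_000, 3))        # independent standard normal components
x = mu + z @ L.T                         # correlated multivariate normal draws

print("sample mean      :", np.round(x.mean(axis=0), 3))
print("sample covariance:\n", np.round(np.cov(x, rowvar=False), 3))
```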

  8. An analytic solution of the stochastic storage problem applicable to soil water

    USGS Publications Warehouse

    Milly, P.C.D.

    1993-01-01

    The accumulation of soil water during rainfall events and the subsequent depletion of soil water by evaporation between storms can be described, to first order, by simple accounting models. When the alternating supplies (precipitation) and demands (potential evaporation) are viewed as random variables, it follows that soil-water storage, evaporation, and runoff are also random variables. If the forcing (supply and demand) processes are stationary for a sufficiently long period of time, an asymptotic regime should eventually be reached where the probability distribution functions of storage, evaporation, and runoff are stationary and uniquely determined by the distribution functions of the forcing. Under the assumptions that the potential evaporation rate is constant, storm arrivals are Poisson-distributed, rainfall is instantaneous, and storm depth follows an exponential distribution, it is possible to derive the asymptotic distributions of storage, evaporation, and runoff analytically for a simple balance model. A particular result is that the fraction of rainfall converted to runoff is given by (1 - R^-1)/(exp[α(1 - R^-1)] - R^-1), in which R is the ratio of mean potential evaporation to mean rainfall and α is the ratio of soil water-holding capacity to mean storm depth. The problem considered here is analogous to the well-known problem of storage in a reservoir behind a dam, for which the present work offers a new solution for reservoirs of finite capacity. A simple application of the results of this analysis suggests that random, intraseasonal fluctuations of precipitation cannot by themselves explain the observed dependence of the annual water balance on annual totals of precipitation and potential evaporation.
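
    The closed-form runoff fraction quoted above is straightforward to evaluate numerically. The values of R and α used below are arbitrary illustrations, not results from the paper.

    ```python
    import math

    def runoff_fraction(R, alpha):
        """Fraction of rainfall converted to runoff: (1 - R^-1) / (exp[alpha(1 - R^-1)] - R^-1).

        R     : mean potential evaporation / mean rainfall
        alpha : soil water-holding capacity / mean storm depth
        """
        r_inv = 1.0 / R
        return (1.0 - r_inv) / (math.exp(alpha * (1.0 - r_inv)) - r_inv)

    # Illustrative values only: drier climates (larger R) and larger storage
    # capacity (larger alpha) both reduce the fraction of rainfall that runs off.
    for R in (0.5, 1.5, 3.0):
        for alpha in (1.0, 5.0):
            print(f"R={R:3.1f}, alpha={alpha:3.1f}: runoff fraction = {runoff_fraction(R, alpha):.3f}")
    ```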

  9. A randomized trial in a massive online open course shows people don't know what a statistically significant relationship looks like, but they can learn.

    PubMed

    Fisher, Aaron; Anderson, G Brooke; Peng, Roger; Leek, Jeff

    2014-01-01

    Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%-49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%-76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/.

  10. A randomized trial in a massive online open course shows people don’t know what a statistically significant relationship looks like, but they can learn

    PubMed Central

    Fisher, Aaron; Anderson, G. Brooke; Peng, Roger

    2014-01-01

    Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%–49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%–76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/. PMID:25337457

  11. Investigation of advanced UQ for CRUD prediction with VIPRE.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldred, Michael Scott

    2011-09-01

    This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely-differentiable) in L^2 (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10^0)-O(10^1) random variables to O(10^2) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.
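
    As a minimal illustration of the non-intrusive polynomial chaos idea (not the DAKOTA/VIPRE workflow, and only in one random dimension), the sketch below projects a toy response onto probabilists' Hermite polynomials with Gauss-Hermite quadrature and reads the mean and variance off the coefficients. The response function and expansion order are assumptions made for the example.

    ```python
    import math
    import numpy as np
    from numpy.polynomial import hermite_e as He

    # Toy quantity of interest depending on a single standard-normal input xi.
    f = lambda xi: np.exp(0.3 * xi) + 0.1 * xi**2

    order = 8                                    # expansion order (assumed)
    nodes, weights = He.hermegauss(order + 4)    # quadrature rule for weight exp(-x^2/2)
    weights = weights / np.sqrt(2.0 * np.pi)     # normalize so sums approximate E[.] under N(0, 1)

    # Coefficients c_n = E[f(xi) He_n(xi)] / n!  (He_n: probabilists' Hermite polynomials)
    coeffs = [np.sum(weights * f(nodes) * He.hermeval(nodes, [0.0] * n + [1.0])) / math.factorial(n)
              for n in range(order + 1)]

    mean = coeffs[0]
    variance = sum(c**2 * math.factorial(n) for n, c in enumerate(coeffs) if n > 0)
    print(f"PCE  mean ~ {mean:.4f}, variance ~ {variance:.4f}")

    # Monte Carlo check of the same statistics.
    xi = np.random.default_rng(1).standard_normal(200_000)
    print(f"MC   mean ~ {f(xi).mean():.4f}, variance ~ {f(xi).var():.4f}")
    ```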

  12. An AUC-based permutation variable importance measure for random forests

    PubMed Central

    2013-01-01

    Background The random forest (RF) method is a commonly used tool for classification with high dimensional data as well as for ranking candidate predictors based on the so-called random forest variable importance measures (VIMs). However the classification performance of RF is known to be suboptimal in case of strongly unbalanced data, i.e. data where response class sizes differ considerably. Suggestions were made to obtain better classification performance based either on sampling procedures or on cost sensitivity analyses. However to our knowledge the performance of the VIMs has not yet been examined in the case of unbalanced response classes. In this paper we explore the performance of the permutation VIM for unbalanced data settings and introduce an alternative permutation VIM based on the area under the curve (AUC) that is expected to be more robust towards class imbalance. Results We investigated the performance of the standard permutation VIM and of our novel AUC-based permutation VIM for different class imbalance levels using simulated data and real data. The results suggest that the new AUC-based permutation VIM outperforms the standard permutation VIM for unbalanced data settings while both permutation VIMs have equal performance for balanced data settings. Conclusions The standard permutation VIM loses its ability to discriminate between associated predictors and predictors not associated with the response for increasing class imbalance. It is outperformed by our new AUC-based permutation VIM for unbalanced data settings, while the performance of both VIMs is very similar in the case of balanced classes. The new AUC-based VIM is implemented in the R package party for the unbiased RF variant based on conditional inference trees. The codes implementing our study are available from the companion website: http://www.ibe.med.uni-muenchen.de/organisation/mitarbeiter/070_drittmittel/janitza/index.html. PMID:23560875

  13. An AUC-based permutation variable importance measure for random forests.

    PubMed

    Janitza, Silke; Strobl, Carolin; Boulesteix, Anne-Laure

    2013-04-05

    The random forest (RF) method is a commonly used tool for classification with high dimensional data as well as for ranking candidate predictors based on the so-called random forest variable importance measures (VIMs). However the classification performance of RF is known to be suboptimal in case of strongly unbalanced data, i.e. data where response class sizes differ considerably. Suggestions were made to obtain better classification performance based either on sampling procedures or on cost sensitivity analyses. However to our knowledge the performance of the VIMs has not yet been examined in the case of unbalanced response classes. In this paper we explore the performance of the permutation VIM for unbalanced data settings and introduce an alternative permutation VIM based on the area under the curve (AUC) that is expected to be more robust towards class imbalance. We investigated the performance of the standard permutation VIM and of our novel AUC-based permutation VIM for different class imbalance levels using simulated data and real data. The results suggest that the new AUC-based permutation VIM outperforms the standard permutation VIM for unbalanced data settings while both permutation VIMs have equal performance for balanced data settings. The standard permutation VIM loses its ability to discriminate between associated predictors and predictors not associated with the response for increasing class imbalance. It is outperformed by our new AUC-based permutation VIM for unbalanced data settings, while the performance of both VIMs is very similar in the case of balanced classes. The new AUC-based VIM is implemented in the R package party for the unbiased RF variant based on conditional inference trees. The codes implementing our study are available from the companion website: http://www.ibe.med.uni-muenchen.de/organisation/mitarbeiter/070_drittmittel/janitza/index.html.
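
    The authors' implementation lives in the R package party; as a rough analogue only (an assumption, not the paper's code), scikit-learn's permutation_importance can be scored with AUC on an unbalanced synthetic problem to show the same idea.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Unbalanced binary problem: roughly 5% positives, a few informative predictors.
    X, y = make_classification(n_samples=4000, n_features=10, n_informative=3,
                               weights=[0.95, 0.05], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

    # Permutation importance scored by AUC rather than accuracy: permuting an
    # informative predictor noticeably lowers the ranking quality even when
    # accuracy barely moves on such unbalanced data.
    imp = permutation_importance(rf, X_te, y_te, scoring="roc_auc",
                                 n_repeats=20, random_state=0)
    for i in np.argsort(imp.importances_mean)[::-1][:5]:
        print(f"feature {i}: AUC drop = {imp.importances_mean[i]:.4f}")
    ```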

  14. Causal Inference and Omitted Variable Bias in Financial Aid Research: Assessing Solutions

    ERIC Educational Resources Information Center

    Riegg, Stephanie K.

    2008-01-01

    This article highlights the problem of omitted variable bias in research on the causal effect of financial aid on college-going. I first describe the problem of self-selection and the resulting bias from omitted variables. I then assess and explore the strengths and weaknesses of random assignment, multivariate regression, proxy variables, fixed…

  15. A Dynamic Bayesian Network Model for the Production and Inventory Control

    NASA Astrophysics Data System (ADS)

    Shin, Ji-Sun; Takazaki, Noriyuki; Lee, Tae-Hong; Kim, Jin-Il; Lee, Hee-Hyol

    In general, production quantities and deliveries vary randomly, and therefore the total stock also varies randomly. This paper deals with production and inventory control using the Dynamic Bayesian Network. A Bayesian Network is a probabilistic model which represents the qualitative dependence between two or more random variables by the graph structure, and indicates the quantitative relations between individual variables by conditional probabilities. The probabilistic distribution of the total stock is calculated through the propagation of the probability on the network. Moreover, an adjustment rule for the production quantities is shown that keeps the probability of the total stock remaining between a lower limit and a ceiling at certain values.
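
    A toy version of the propagation step described above is sketched below with made-up numbers: given independent discrete distributions for production and delivery, the next-period stock distribution follows by enumerating (convolving) the combinations, which is the kind of calculation the network performs.

    ```python
    # Discrete distributions over units produced and units delivered in one
    # period -- illustrative numbers only, not from the paper.
    production = {8: 0.2, 10: 0.5, 12: 0.3}
    delivery   = {7: 0.3,  9: 0.4, 11: 0.3}

    initial_stock = 20
    stock_dist = {}
    # Assuming production and delivery are independent, propagate the probabilities:
    for q_prod, p_prod in production.items():
        for q_del, p_del in delivery.items():
            s = initial_stock + q_prod - q_del
            stock_dist[s] = stock_dist.get(s, 0.0) + p_prod * p_del

    for s in sorted(stock_dist):
        print(f"P(stock = {s:2d}) = {stock_dist[s]:.2f}")

    # Probability that the stock stays between a lower limit and a ceiling, the
    # quantity the paper's adjustment rule tries to hold at a target value.
    lo, hi = 18, 24
    print("P(lo <= stock <= hi) =", sum(p for s, p in stock_dist.items() if lo <= s <= hi))
    ```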

  16. Arbitrary-step randomly delayed robust filter with application to boost phase tracking

    NASA Astrophysics Data System (ADS)

    Qin, Wutao; Wang, Xiaogang; Bai, Yuliang; Cui, Naigang

    2018-04-01

    Conventional filters such as the extended Kalman filter, unscented Kalman filter and cubature Kalman filter assume that the measurement is available in real time and that the measurement noise is Gaussian white noise. In practice, both assumptions can be invalid. To solve this problem, a novel algorithm is proposed in four steps. First, the measurement model is modified with Bernoulli random variables to describe the random delay. Then, the expressions for the predicted measurement and covariance are reformulated, which removes the restriction that the maximum delay must be one or two steps and the assumption that the probabilities of the Bernoulli random variables taking the value one are equal. Next, the arbitrary-step randomly delayed high-degree cubature Kalman filter is derived based on the 5th-degree spherical-radial rule and the reformulated expressions. Finally, it is modified into the arbitrary-step randomly delayed high-degree cubature Huber-based filter using the Huber technique, which is essentially an M-estimator. The proposed filter is therefore robust not only to randomly delayed measurements but also to glint noise. Application to a boost-phase tracking example demonstrates the superiority of the proposed algorithms.
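
    As a small hedged illustration of the measurement model (the one-step special case only; the paper's contribution is precisely to allow arbitrary delay steps and unequal Bernoulli probabilities), the sketch below generates a measurement stream in which each sample is replaced by its predecessor with some probability.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # True scalar state following a random walk, measured with Gaussian noise.
    n_steps = 200
    x = np.cumsum(rng.normal(0.0, 0.1, n_steps))
    z = x + rng.normal(0.0, 0.5, n_steps)

    # One-step randomly delayed measurement model (a common special case): with
    # probability p the filter receives the previous measurement instead of the
    # current one. The paper chains Bernoulli variables to allow arbitrary delays.
    p_delay = 0.3
    beta = rng.binomial(1, p_delay, n_steps)          # Bernoulli delay indicators
    y = np.where(beta == 1, np.roll(z, 1), z)         # delayed measurement stream
    y[0] = z[0]                                       # no earlier sample to fall back on

    print("fraction of delayed measurements:", beta[1:].mean().round(2))
    ```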

  17. School system evaluation by value added analysis under endogeneity.

    PubMed

    Manzi, Jorge; San Martín, Ernesto; Van Bellegem, Sébastien

    2014-01-01

    Value added is a common tool in educational research on effectiveness. It is often modeled as a (prediction of a) random effect in a specific hierarchical linear model. This paper shows that this modeling strategy is not valid when endogeneity is present. Endogeneity stems, for instance, from a correlation between the random effect in the hierarchical model and some of its covariates. This paper shows that this phenomenon is far from exceptional and can even be a generic problem when the covariates contain the prior score attainments, a typical situation in value added modeling. Starting from a general, model-free definition of value added, the paper derives an explicit expression of the value added in an endogenous hierarchical linear Gaussian model. Inference on value added is proposed using an instrumental variable approach. The impact of endogeneity on the value added and the estimated value added is calculated accurately. This is also illustrated on a large data set of individual scores of about 200,000 students in Chile.

  18. Measuring multivariate association and beyond

    PubMed Central

    Josse, Julie; Holmes, Susan

    2017-01-01

    Simple correlation coefficients between two variables have been generalized to measure association between two matrices in many ways. Coefficients such as the RV coefficient, the distance covariance (dCov) coefficient and kernel based coefficients are being used by different research communities. Scientists use these coefficients to test whether two random vectors are linked. Once it has been ascertained that there is such association through testing, then a next step, often ignored, is to explore and uncover the association’s underlying patterns. This article provides a survey of various measures of dependence between random vectors and tests of independence and emphasizes the connections and differences between the various approaches. After providing definitions of the coefficients and associated tests, we present the recent improvements that enhance their statistical properties and ease of interpretation. We summarize multi-table approaches and provide scenarii where the indices can provide useful summaries of heterogeneous multi-block data. We illustrate these different strategies on several examples of real data and suggest directions for future research. PMID:29081877
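
    For the simplest of the coefficients surveyed, a plain RV coefficient between two column-centered tables can be computed directly from cross-product matrices. The sketch below uses simulated data and does not cover the modified or kernel-based variants discussed in the article.

    ```python
    import numpy as np

    def rv_coefficient(X, Y):
        """Plain RV coefficient between two data matrices with matched rows."""
        X = X - X.mean(axis=0)          # column-center both tables
        Y = Y - Y.mean(axis=0)
        Sxy = X.T @ Y
        Sxx = X.T @ X
        Syy = Y.T @ Y
        return np.trace(Sxy @ Sxy.T) / np.sqrt(np.trace(Sxx @ Sxx) * np.trace(Syy @ Syy))

    rng = np.random.default_rng(3)
    X = rng.normal(size=(100, 4))
    Y_linked = X @ rng.normal(size=(4, 3)) + 0.5 * rng.normal(size=(100, 3))
    Y_indep = rng.normal(size=(100, 3))

    print("RV (linked tables):     ", round(rv_coefficient(X, Y_linked), 3))
    print("RV (independent tables):", round(rv_coefficient(X, Y_indep), 3))
    ```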

  19. Contingency and statistical laws in replicate microbial closed ecosystems.

    PubMed

    Hekstra, Doeke R; Leibler, Stanislas

    2012-05-25

    Contingency, the persistent influence of past random events, pervades biology. To what extent, then, is each course of ecological or evolutionary dynamics unique, and to what extent are these dynamics subject to a common statistical structure? Addressing this question requires replicate measurements to search for emergent statistical laws. We establish a readily replicated microbial closed ecosystem (CES), sustaining its three species for years. We precisely measure the local population density of each species in many CES replicates, started from the same initial conditions and kept under constant light and temperature. The covariation among replicates of the three species densities acquires a stable structure, which could be decomposed into discrete eigenvectors, or "ecomodes." The largest ecomode dominates population density fluctuations around the replicate-average dynamics. These fluctuations follow simple power laws consistent with a geometric random walk. Thus, variability in ecological dynamics can be studied with CES replicates and described by simple statistical laws. Copyright © 2012 Elsevier Inc. All rights reserved.
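
    The statistical law mentioned, fluctuations following a geometric random walk, implies that the across-replicate spread of log density grows as the square root of time. A minimal simulation with arbitrary parameters (not the CES data) is sketched below.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    n_replicates, n_steps = 500, 100
    sigma = 0.05                       # per-step multiplicative noise (illustrative)

    # Geometric random walk: log density takes i.i.d. Gaussian steps.
    log_density = np.cumsum(rng.normal(0.0, sigma, (n_replicates, n_steps)), axis=1)

    # The across-replicate standard deviation of log density grows ~ sqrt(t),
    # i.e. the variance grows linearly in time -- a simple power law.
    for t in (10, 40, 90):
        print(f"t={t:3d}: std of log density = {log_density[:, t].std():.3f} "
              f"(sqrt-law prediction {sigma * np.sqrt(t + 1):.3f})")
    ```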

  20. Models of multidimensional discrete distribution of probabilities of random variables in information systems

    NASA Astrophysics Data System (ADS)

    Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.

    2018-03-01

    Multidimensional discrete probability distributions of independent random variables were obtained. Their one-dimensional counterparts are widely used in probability theory. Generating functions of those multidimensional distributions were also obtained.

  1. Physiological and behavioral indices of emotion dysregulation as predictors of outcome from cognitive behavioral therapy and acceptance and commitment therapy for anxiety.

    PubMed

    Davies, Carolyn D; Niles, Andrea N; Pittig, Andre; Arch, Joanna J; Craske, Michelle G

    2015-03-01

    Identifying for whom and under what conditions a treatment is most effective is an essential step toward personalized medicine. The current study examined pre-treatment physiological and behavioral variables as predictors and moderators of outcome in a randomized clinical trial comparing cognitive behavioral therapy (CBT) and acceptance and commitment therapy (ACT) for anxiety disorders. Sixty individuals with a DSM-IV defined principal anxiety disorder completed 12 sessions of either CBT or ACT. Baseline physiological and behavioral variables were measured prior to entering treatment. Self-reported anxiety symptoms were assessed at pre-treatment, post-treatment, and 6- and 12-month follow-up from baseline. Higher pre-treatment heart rate variability was associated with worse outcome across ACT and CBT. ACT outperformed CBT for individuals with high behavioral avoidance. Subjective anxiety levels during laboratory tasks did not predict or moderate treatment outcome. Due to small sample sizes of each disorder, disorder-specific predictors were not tested. Future research should examine these predictors in larger samples and across other outcome variables. Lower heart rate variability was identified as a prognostic indicator of overall outcome, whereas high behavioral avoidance was identified as a prescriptive indicator of superior outcome from ACT versus CBT. Investigation of pre-treatment physiological and behavioral variables as predictors and moderators of outcome may help guide future treatment-matching efforts. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. An integrated supply chain model for new products with imprecise production and supply under scenario dependent fuzzy random demand

    NASA Astrophysics Data System (ADS)

    Nagar, Lokesh; Dutta, Pankaj; Jain, Karuna

    2014-05-01

    In the present-day business scenario, instant changes in market demand and different sources of materials and manufacturing technologies force many companies to change their supply chain planning in order to tackle real-world uncertainty. The purpose of this paper is to develop a multi-objective two-stage stochastic programming supply chain model that incorporates imprecise production rate and supplier capacity under scenario-dependent fuzzy random demand associated with new product supply chains. The objectives are to maximise the supply chain profit, achieve the desired service level and minimise financial risk. The proposed model allows simultaneous determination of optimum supply chain design, procurement and production quantities across the different plants, and trade-offs between inventory and transportation modes for both inbound and outbound logistics. Analogous to chance constraints, we have used the possibility measure to quantify the demand uncertainties and the model is solved using a fuzzy linear programming approach. An illustration is presented to demonstrate the effectiveness of the proposed model. Sensitivity analysis is performed for maximisation of the supply chain profit with respect to different confidence levels of service, risk and possibility measure. It is found that when the service level and risk are treated as robustness measures, the variability in profit is reduced.

  3. Stochastic population dynamics under resource constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavane, Ajinkya S., E-mail: ajinkyagavane@gmail.com; Nigam, Rahul, E-mail: rahul.nigam@hyderabad.bits-pilani.ac.in

    This paper investigates the population growth of a certain species in which every generation reproduces thrice over a period of predefined time, under certain constraints of resources needed for survival of the population. We study the survival period of a species by randomizing the reproduction probabilities within a window at the same predefined ages, while the resources are produced by the working force of the population at a variable rate. This randomness in the reproduction rate makes the population growth stochastic in nature, and one cannot predict the exact form of evolution. Hence we study the growth by running simulations for such a population and taking an ensemble average over 500 to 5000 such simulations as per the need. While the population reproduces in a stochastic manner, we have implemented a constraint on the amount of resources available for the population. This is important to make the simulations more realistic. The rate of resource production is then tuned to find the rate which suits the survival of the species. We also compute the mean lifetime of the species corresponding to different resource production rates. A study of these outcomes in the parameter space defined by the reproduction probabilities and the rate of resource production is carried out.
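
    In the same spirit, and only as an assumed toy rather than the authors' simulation, the sketch below runs an ensemble of stochastic population trajectories with a randomized reproduction probability, resources produced in proportion to the population and consumed per head, and reports the mean survival time for a few production rates.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def survival_time(resource_rate, max_steps=200):
        """One stochastic run: step at which resources are exhausted (or max_steps)."""
        population, resources = 20, 200.0
        for t in range(max_steps):
            p_repro = rng.uniform(0.3, 0.7)                 # randomized reproduction probability
            births = rng.binomial(population, p_repro)
            survivors = rng.binomial(population, 0.6)       # simple mortality keeps growth bounded
            population = survivors + births
            resources += resource_rate * 0.5 * population   # resources produced by the working force
            resources -= population                         # one unit consumed per individual
            if resources <= 0 or population == 0:
                return t
        return max_steps

    # Ensemble average over many runs for a few production rates.
    for rate in (1.6, 1.9, 2.2):
        times = [survival_time(rate) for _ in range(500)]
        print(f"production rate {rate}: mean survival time ~ {np.mean(times):.1f} steps (capped at 200)")
    ```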

  4. Generalized Riemann hypothesis and stochastic time series

    NASA Astrophysics Data System (ADS)

    Mussardo, Giuseppe; LeClair, André

    2018-06-01

    Using the Dirichlet theorem on the equidistribution of residue classes modulo q and the Lemke Oliver–Soundararajan conjecture on the distribution of pairs of residues on consecutive primes, we show that the domain of convergence of the infinite product of Dirichlet L-functions of non-principal characters can be extended from Re(s) > 1 down to Re(s) > 1/2, without encountering any zeros before reaching this critical line. The possibility of doing so can be traced back to a universal diffusive random walk behavior of a series C_N over the primes which underlies the convergence of the infinite product of the Dirichlet functions. The series C_N presents several aspects in common with stochastic time series, and controlling it requires addressing a problem similar to the single Brownian trajectory problem in statistical mechanics. In the case of the Dirichlet functions of non-principal characters, we show that this problem can be solved in terms of a self-averaging procedure based on an ensemble of block variables computed on extended intervals of primes. Those intervals, called inertial intervals, ensure the ergodicity and stationarity of the time series underlying the quantity C_N. The infinity of primes also ensures the absence of rare events which would have been responsible for a scaling behavior different from the universal law of random walks.

  5. Evaluation of direct and indirect effects of seasonal malaria chemoprevention in Mali.

    PubMed

    Druetz, Thomas

    2018-05-25

    Randomized controlled trials have established that seasonal malaria chemoprevention (SMC) in children is a promising strategy to reduce malaria transmission in Sahelian West Africa. This strategy was recently introduced in a dozen countries, and about 12 million children received SMC in 2016. However, evidence on SMC effectiveness under routine programme conditions is sparse. We aim to measure the effects of the nationwide SMC programme in Mali on the prevalence of malaria and anemia in children 6-59 months. We used data from the 2015 nationally representative malaria indicator survey. A post-test only with non-randomized control group study was designed. We fitted a generalized structural equation model that controlled for potential bias on observed and non-observed variables (endogenous treatment effect model). Having received SMC reduced by 44% (95% CI [0.39-0.49]) the risk of having a positive rapid diagnostic test for malaria. In addition, the programme indirectly reduced by 18% the risk of moderate-to-severe anemia (95% CI [0.15-0.21]). SMC in Mali has substantial protective effects under routine nationwide programme conditions. Endogenous treatment effects analyses can contribute to rigorously measuring the effectiveness of health programmes and to bridging a widening gap in evaluation methods to measure progress towards achieving malaria elimination.

  6. Echocardiographic evaluation during weaning from mechanical ventilation.

    PubMed

    Schifelbain, Luciele Medianeira; Vieira, Silvia Regina Rios; Brauner, Janete Salles; Pacheco, Deise Mota; Naujorks, Alexandre Antonio

    2011-01-01

    Echocardiographic, electrocardiographic and other cardiorespiratory variables can change during weaning from mechanical ventilation. To analyze changes in cardiac function, using Doppler echocardiogram, in critical patients during weaning from mechanical ventilation, using two different weaning methods: pressure support ventilation and T-tube; and comparing patient subgroups: success vs. failure in weaning. Randomized crossover clinical trial including patients under mechanical ventilation for more than 48 h and considered ready for weaning. Cardiorespiratory variables, oxygenation, electrocardiogram and Doppler echocardiogram findings were analyzed at baseline and after 30 min in pressure support ventilation and T-tube. Pressure support ventilation vs. T-tube and weaning success vs. failure were compared using ANOVA and Student's t-test. The level of significance was p<0.05. Twenty-four adult patients were evaluated. Seven patients failed at the first weaning attempt. No echocardiographic or electrocardiographic differences were observed between pressure support ventilation and T-tube. Weaning failure patients presented increases in left atrium, intraventricular septum thickness, posterior wall thickness and diameter of left ventricle and shorter isovolumetric relaxation time. Successfully weaned patients had higher levels of oxygenation. No differences were observed between Doppler echocardiographic variables and electrocardiographic and other cardiorespiratory variables during pressure support ventilation and T-tube. However cardiac structures were smaller, isovolumetric relaxation time was larger, and oxygenation level was greater in successfully weaned patients.

  7. Statistical Modelling of Temperature and Moisture Uptake of Biochars Exposed to Selected Relative Humidity of Air.

    PubMed

    Bastistella, Luciane; Rousset, Patrick; Aviz, Antonio; Caldeira-Pires, Armando; Humbert, Gilles; Nogueira, Manoel

    2018-02-09

    New experimental techniques, as well as modern variants on known methods, have recently been employed to investigate the fundamental reactions underlying the oxidation of biochar. The purpose of this paper was to experimentally and statistically study how the relative humidity of air, mass, and particle size of four biochars influenced the adsorption of water and the increase in temperature. A random factorial design was employed using the intuitive statistical software Xlstat. A simple linear regression model and an analysis of variance with a pairwise comparison were performed. The experimental study was carried out on the wood of Quercus pubescens, Cyclobalanopsis glauca, Trigonostemon huangmosun, and Bambusa vulgaris, and involved five relative humidity conditions (22, 43, 75, 84, and 90%), two mass samples (0.1 and 1 g), and two particle sizes (powder and piece). Two response variables including water adsorption and temperature increase were analyzed and discussed. The temperature did not increase linearly with the adsorption of water. Temperature was modeled by nine explanatory variables, while water adsorption was modeled by eight. Five variables, including factors and their interactions, were found to be common to the two models. Sample mass and relative humidity influenced the two qualitative variables, while particle size and biochar type only influenced the temperature.
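
    A hedged sketch of the kind of factorial analysis described, an ordinary least squares fit with an ANOVA table, is given below using statsmodels on synthetic data. The column names, factor levels and effect sizes are assumptions for illustration, not the authors' Xlstat workflow.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(6)

    # Hypothetical data frame mimicking the design: relative humidity, sample mass,
    # particle size and biochar type as factors, water adsorption as the response.
    n = 160
    df = pd.DataFrame({
        "humidity": rng.choice([22, 43, 75, 84, 90], n),
        "mass": rng.choice([0.1, 1.0], n),
        "size": rng.choice(["powder", "piece"], n),
        "biochar": rng.choice(["oak", "bamboo"], n),
    })
    df["adsorption"] = (0.05 * df["humidity"] + 2.0 * df["mass"]
                        + rng.normal(0, 0.5, n))          # synthetic response

    # Linear model with main effects and all two-way interactions, plus an ANOVA table.
    model = smf.ols("adsorption ~ (C(humidity) + C(mass) + C(size) + C(biochar))**2", df).fit()
    print(anova_lm(model, typ=2))
    ```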

  8. A Gaussian Mixture Model Representation of Endmember Variability in Hyperspectral Unmixing

    NASA Astrophysics Data System (ADS)

    Zhou, Yuan; Rangarajan, Anand; Gader, Paul D.

    2018-05-01

    Hyperspectral unmixing while considering endmember variability is usually performed by the normal compositional model (NCM), where the endmembers for each pixel are assumed to be sampled from unimodal Gaussian distributions. However, in real applications, the distribution of a material is often not Gaussian. In this paper, we use Gaussian mixture models (GMM) to represent the endmember variability. We show, given the GMM starting premise, that the distribution of the mixed pixel (under the linear mixing model) is also a GMM (and this is shown from two perspectives). The first perspective originates from the random variable transformation and gives a conditional density function of the pixels given the abundances and GMM parameters. With proper smoothness and sparsity prior constraints on the abundances, the conditional density function leads to a standard maximum a posteriori (MAP) problem which can be solved using generalized expectation maximization. The second perspective originates from marginalizing over the endmembers in the GMM, which provides us with a foundation to solve for the endmembers at each pixel. Hence, our model can not only estimate the abundances and distribution parameters, but also the distinct endmember set for each pixel. We tested the proposed GMM on several synthetic and real datasets, and showed its potential by comparing it to current popular methods.
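
    The central claim, that a linear mixture of independent GMM-distributed endmembers is again a GMM, can be written out directly: the mixed pixel's components enumerate all combinations of endmember components, with abundance-weighted means and abundance-squared-weighted covariances. The sketch below uses made-up parameters for two endmembers in two bands.

    ```python
    import itertools
    import numpy as np

    # Two endmembers in two spectral bands, each a 2-component Gaussian mixture
    # (made-up parameters), mixed linearly with fixed abundances.
    endmembers = [
        {"w": [0.6, 0.4], "mu": [np.array([0.8, 0.2]), np.array([0.7, 0.3])],
         "cov": [0.01 * np.eye(2), 0.02 * np.eye(2)]},
        {"w": [0.5, 0.5], "mu": [np.array([0.2, 0.9]), np.array([0.3, 0.8])],
         "cov": [0.015 * np.eye(2), 0.01 * np.eye(2)]},
    ]
    abundances = np.array([0.7, 0.3])

    # If the endmembers are independent GMMs, the mixed pixel is itself a GMM whose
    # components run over all combinations of the endmember components.
    mixed_components = []
    for combo in itertools.product(*(range(len(e["w"])) for e in endmembers)):
        weight = np.prod([e["w"][k] for e, k in zip(endmembers, combo)])
        mean = sum(a * e["mu"][k] for a, e, k in zip(abundances, endmembers, combo))
        cov = sum(a**2 * e["cov"][k] for a, e, k in zip(abundances, endmembers, combo))
        mixed_components.append((weight, mean, cov))

    for w, m, c in mixed_components:
        print(f"weight {w:.2f}, mean {np.round(m, 3)}, cov diag {np.round(np.diag(c), 4)}")
    ```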

  9. Wavevector-Frequency Analysis with Applications to Acoustics

    DTIC Science & Technology

    1994-01-01

    Turbulent Boundary Layer Pressure Measured by Microphone Arrays," Journal of the Acoustical Society of America, vol. 49, no. 3, March 1971 , pp. 862-877. 1...ARplications of Green’s FuntionsinScie,.-and Enginlering, Prentice-Hall, Inc., Englewood Hills, NJ, 1971 . 9. 3. Ffowcs-Williams et al., Modern Methods for...variables of a random process are kalled Joint w.merit ,. The m,n-th joint moment of the random variables, v and w, iz flefined by E ,N 1 f (aB) do d- where

  10. Evaluating the Bias of Alternative Cost Progress Models: Tests Using Aerospace Industry Acquisition Programs

    DTIC Science & Technology

    1992-12-01

    ...suspect that, to the extent prediction bias was positively correlated among the various models, the random walk, learning curve, fixed-variable and Bemis... Functions, Production Rate Adjustment Model, Learning Curve Model, Random Walk Model, Bemis Model, Evaluating Model Bias, Cost Prediction Bias. Cost...of four cost progress models--a random walk model, the traditional learning curve model, a production rate model (fixed-variable model), and a model

  11. Factors affecting the adoption of home-heating energy-conservation measures: a behavioral approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macey, S.M.

    1982-01-01

    The basic aim of this research is to better understand homeowners' adoption of home-heating energy-conservation measures by analyzing a number of factors that are thought to be underlying determinants of adoption behavior. The basic approach is behavioral, drawing on the knowledge built up in behavioral geography through studies on natural hazards and innovation diffusion, and borrowing from psychological theories of attitude formation and decision making. In particular, six factors (information, environmental personality, socio-economic and demographic factors, dwelling unit characteristics, psychological variables, and past experience) are shown to directly and indirectly affect adoption behavior. By this means, differences between adopters and nonadopters in the underlying cognitive structures and in the situational factors that affect their decisions are identified. The study focuses on the adoption of three measures: reducing winter night-time thermostat settings, changing or cleaning furnace filters, and installing an automatic setback thermostat. Personal interviews with a random sample of 159 homeowners in Decatur, Illinois, serve as the main database. Results indicate that adoption behavior is determined more by past experience than by intention. Beliefs, attitudes, and social influences affect behavior indirectly through intention. These psychological variables also act as mediators between information, knowledge, environmental personality, situational variables and behavior. In particular, respondent's age, previous home ownership, and length of residence act indirectly on adoption behavior. Each of these reflects the amount of past experience the respondent is likely to have.

  12. Efficacy of a comfrey root extract ointment in comparison to a diclofenac gel in the treatment of ankle distortions: results of an observer-blind, randomized, multicenter study.

    PubMed

    Predel, H G; Giannetti, B; Koll, R; Bulitta, M; Staiger, C

    2005-11-01

    In the treatment of minor blunt injuries several topical drugs are known to have anti-inflammatory and analgesic properties. They represent, however, two fundamentally different major pharmacological therapy approaches: the "chemical-synthetical" and the "phytotherapeutical" approach. The main objective of this trial (CODEC_2004) was to compare the efficacy and tolerability of an ointment of Comfrey extract (Extr. Rad. Symphyti) with that of a Diclofenac gel in the treatment of acute unilateral ankle sprain (distortion). In a single-blind, controlled, randomized, parallel-group, multicenter and confirmatory clinical trial outpatients with acute unilateral ankle sprains (n=164, mean age 29.0 years, 47.6% female) received either a 6 cm long ointment layer of Kytta-Salbe f (Comfrey extract) (n=82) or of Diclofenac gel containing 1.16 g of diclofenac diethylamine salt (n=82) for 7 +/- 1 days, four times a day. Primary variable was the area-under-the-curve (AUC) of the pain reaction to pressure on the injured area measured by a calibrated caliper (tonometer). Secondary variables were the circumference of the joint (swelling; figure-of-eight method), the individual spontaneous pain sensation at rest and at movement according to a Visual Analogue Scale (VAS), the judgment of impaired movements of the injured joint by the method of "neutral-zero", consumption of rescue medication (paracetamol), as well as the global efficacy evaluation and the global assessment of tolerability (both by physician and patient, 4 ranks). In this study the primary variable was also to be validated prospectively. It was confirmatorily shown that Comfrey extract is non-inferior to diclofenac. The 95% confidence interval for the AUC (Comfrey extract minus Diclofenac gel) was 19.01-103.09 h·N/cm² and was completely above the margin of non-inferiority. Moreover, the results of the primary and secondary variables indicate that Comfrey extract may be superior to Diclofenac gel.

  13. A 1-year randomized study to evaluate the effects of a dose reduction in oral contraceptives on lipids and carbohydrate metabolism: 20 microg ethinyl estradiol combined with 100 microg levonorgestrel.

    PubMed

    Skouby, Sven O; Endrikat, Jan; Düsterberg, Bernd; Schmidt, Werner; Gerlinger, Christoph; Wessel, Jens; Goldstein, Henri; Jespersen, Joergen

    2005-02-01

    To evaluate the impact on lipid and carbohydrate variables of a combined one-third ethinyl estradiol (EE)/levonorgestrel (LNG) dose reduction in oral contraceptives. In an open-label, randomized study, a dose-reduced oral contraceptive containing 20 microg EE and 100 microg LNG (20 EE/100 LNG) was compared with a reference preparation containing 30 microg EE and 150 microg LNG (30 EE/150 LNG). One-year data from 48 volunteers were obtained. We found a decrease of HDL2 cholesterol and increases of low-density lipoprotein cholesterol, very low-density lipoprotein cholesterol and total triglycerides in both treatment groups from baseline to the 13th treatment cycle. Although for four of six variables, the changes in the 20 EE group were lower compared with the 30 EE group, none of the differences between the two treatments were statistically significant. The median values for the fasting levels of insulin, C-peptide and free fatty acids slightly increased or remained unchanged while the fasting glucose levels slightly decreased after 13 treatment cycles. While the glucose area under the curve (AUC) (0-3 h) was similar in both groups during the OGTT, the insulin AUC(0-3 h) was less increased in the 20 EE/100 LNG group compared with the 30 EE/150 LNG group. None of the differences between the treatment groups for any of the carbohydrate metabolism variables were statistically significant at any time point. Both study treatments were safe and well tolerated by the volunteers. Similar effects on the lipid and carbohydrate profiles were found for both preparations. The balanced one-third EE dose reduction in this new oral contraceptive caused slightly lower, but insignificant, changes in the lipid and carbohydrate variables compared with the reference treatment.

  14. Insecticide-Treated Nets Utilization and Associated Factors among under-5 Years Old Children in Mirab-Abaya District, Gamo-Gofa Zone, Ethiopia.

    PubMed

    Admasie, Amha; Zemba, Amanuel; Paulos, Wondimagegn

    2018-01-01

    Malaria can be prevented using cost-effective interventions, chief among them the use of insecticide-treated mosquito nets (ITNs). The use of ITNs decreases malaria mortality rates by 55% in under-5 years old children in Africa. Realizing this effectiveness, Ethiopia is scaling up distribution and utilization of ITNs to cover 100% of children less than 5 years of age. However, little is known yet about ITNs utilization and its associated factors among under-5 years old children in the study area. The purpose of this study was to assess the level of ITNs utilization in under-5 years old children, and its associated factors, among households with under-5 years old children of Mirab Abaya District, Gamo Gofa Zone, Ethiopia. A community-based cross-sectional study was conducted during August-September, 2016. Six study Kebeles were identified by simple random sampling technique and 398 households with at least one under-5 years old child were selected by random sampling technique using computer-generated random numbers from health post family folders. A structured, interviewer-administered questionnaire was completed with mothers or caregivers of the children. Data were entered into Epi Info Version 3.5 and analyzed in SPSS version 21 statistical software. Bivariate and multivariate logistic regression analyses were done. P-value <0.05 and odds ratios with 95% confidence intervals were used to determine associations between dependent and predictor variables. Results were presented in narrations, tables, and a graph. Among 398 under-5 years old children assessed, the majority, 362 (91.0%), had access to an ITN, but only 137 (37.2%) of the children had slept under an ITN during the night prior to the survey. Households with age of mothers or caretakers 31-44 years, AOR = 0.03, 95% CI (0.01-0.07), and ≥45 years of age, AOR = 0.05, 95% CI (0.01-0.58); households with family size ≤5 members, AOR = 11.23, 95% CI (4.31-29.24); and households with sleeping space ≥2, AOR = 13.59, 95% CI (4.40-41.93) were found to be significantly associated with under-5 years old children's ITN utilization. Even though a significant proportion of under-5 years old children had access to an ITN, only one-third of the participating children used it properly.

  15. Medicare Part D Roulette: Potential Implications of Random Assignment and Plan Restrictions

    PubMed Central

    Patel, Rajul A.; Walberg, Mark P.; Woelfel, Joseph A.; Amaral, Michelle M.; Varu, Paresh

    2013-01-01

    Background Dual-eligible (Medicare/Medicaid) beneficiaries are randomly assigned to a benchmark plan, which provides prescription drug coverage under the Part D benefit without consideration of their prescription drug profile. To date, the potential for beneficiary assignment to a plan with poor formulary coverage has been minimally studied and the resultant financial impact to beneficiaries unknown. Objective We sought to determine cost variability and drug use restrictions under each available 2010 California benchmark plan. Methods Dual-eligible beneficiaries were provided Part D plan assistance during the 2010 annual election period. The Medicare Web site was used to determine benchmark plan costs and prescription utilization restrictions for each of the six California benchmark plans available for random assignment in 2010. A standardized survey was used to record all de-identified beneficiary demographic and plan specific data. For each low-income subsidy-recipient (n = 113), cost, rank, number of non-formulary medications, and prescription utilization restrictions were recorded for each available 2010 California benchmark plan. Formulary matching rates (percent of beneficiary's medications on plan formulary) were calculated for each benchmark plan. Results Auto-assigned beneficiaries had only a 34% chance of being assigned to the lowest cost plan; the remainder faced potentially significant avoidable out-of-pocket costs. Wide variations between benchmark plans were observed for plan cost, formulary coverage, formulary matching rates, and prescription utilization restrictions. Conclusions Beneficiaries had a 66% chance of being assigned to a sub-optimal plan; thereby, they faced significant avoidable out-of-pocket costs. Alternative methods of beneficiary assignment could decrease beneficiary and Medicare costs while also reducing medication non-compliance. PMID:24753963

  16. Network Mendelian randomization: using genetic variants as instrumental variables to investigate mediation in causal pathways

    PubMed Central

    Burgess, Stephen; Daniel, Rhian M; Butterworth, Adam S; Thompson, Simon G

    2015-01-01

    Background: Mendelian randomization uses genetic variants, assumed to be instrumental variables for a particular exposure, to estimate the causal effect of that exposure on an outcome. If the instrumental variable criteria are satisfied, the resulting estimator is consistent even in the presence of unmeasured confounding and reverse causation. Methods: We extend the Mendelian randomization paradigm to investigate more complex networks of relationships between variables, in particular where some of the effect of an exposure on the outcome may operate through an intermediate variable (a mediator). If instrumental variables for the exposure and mediator are available, direct and indirect effects of the exposure on the outcome can be estimated, for example using either a regression-based method or structural equation models. The direction of effect between the exposure and a possible mediator can also be assessed. Methods are illustrated in an applied example considering causal relationships between body mass index, C-reactive protein and uric acid. Results: These estimators are consistent in the presence of unmeasured confounding if, in addition to the instrumental variable assumptions, the effects of both the exposure on the mediator and the mediator on the outcome are homogeneous across individuals and linear without interactions. Nevertheless, a simulation study demonstrates that even considerable heterogeneity in these effects does not lead to bias in the estimates. Conclusions: These methods can be used to estimate direct and indirect causal effects in a mediation setting, and have potential for the investigation of more complex networks between multiple interrelated exposures and disease outcomes. PMID:25150977
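
    The basic ingredient of this approach, an instrumental-variable estimate of a causal effect, can be illustrated on simulated data with a simple Wald ratio (a hedged toy only; the paper's network and mediation estimators combine several such regressions or use structural equation models).

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 50_000

    # Simulated data: G is a genetic instrument, U an unmeasured confounder,
    # X the exposure and Y the outcome with true causal effect 0.5.
    G = rng.binomial(2, 0.3, n)                  # e.g. allele count
    U = rng.normal(size=n)
    X = 0.4 * G + U + rng.normal(size=n)
    Y = 0.5 * X + 1.2 * U + rng.normal(size=n)

    def slope(x, y):
        """Slope from a simple linear regression of y on x."""
        return np.polyfit(x, y, 1)[0]

    naive = slope(X, Y)                          # biased by the confounder U
    wald = slope(G, Y) / slope(G, X)             # instrumental-variable (ratio) estimate
    print(f"naive regression estimate: {naive:.3f}")
    print(f"Wald ratio (MR) estimate:  {wald:.3f}   (true effect 0.5)")
    ```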

  17. Research on photodiode detector-based spatial transient light detection and processing system

    NASA Astrophysics Data System (ADS)

    Liu, Meiying; Wang, Hu; Liu, Yang; Zhao, Hui; Nan, Meng

    2016-10-01

    In order to realize real-time signal identification and processing of spatial transient light, the features and the energy of the captured target light signal are first described and quantitatively calculated. Considering that the transient light signal occurs randomly, has a short duration, and has an evident beginning and ending, a photodiode detector based spatial transient light detection and processing system is proposed and designed in this paper. This system has a large field of view and is used to realize non-imaging energy detection of random, transient and weak point targets under the complex background of the space environment. Extracting a weak signal from a strong background is difficult. In this paper, considering that the background signal changes slowly while the target signal changes quickly, a filter is adopted for background subtraction. Variable-speed sampling is realized by sampling data points at gradually increasing intervals. This resolves two difficulties: the real-time processing of a large amount of data, and the power consumption that storing such a large amount of data would require. Test results with a self-made simulated signal demonstrate the effectiveness of the design scheme. The practical system could be operated reliably. The detection and processing of the target signal under a strong sunlight background was realized. The results indicate that the system can realize real-time detection of the target signal's characteristic waveform and monitor the system working parameters. The prototype design could be used in a variety of engineering applications.
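
    A minimal sketch of the slow/fast separation described above: an exponential moving average tracks the slowly varying background, and subtracting it from the raw photodiode trace exposes the brief transient. The trace, filter constant and threshold below are assumed values for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Synthetic photodiode trace: slowly drifting background plus a short transient.
    n = 2000
    t = np.arange(n)
    background = 1.0 + 0.3 * np.sin(2 * np.pi * t / n)        # slow variation
    signal = np.zeros(n)
    signal[1200:1212] = 0.8                                    # brief transient event
    trace = background + signal + rng.normal(0, 0.02, n)

    # Background changes slowly, target changes quickly: track the background with
    # an exponential moving average and subtract it from the raw trace.
    alpha = 0.01                     # small alpha -> slow background tracker
    bg_est = np.empty(n)
    bg_est[0] = trace[0]
    for i in range(1, n):
        bg_est[i] = (1 - alpha) * bg_est[i - 1] + alpha * trace[i]

    residual = trace - bg_est
    threshold = 5 * residual[:1000].std()       # threshold set from a quiet stretch
    detections = np.flatnonzero(residual > threshold)
    print("samples flagged as transient:", detections[:15])
    ```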

  18. Iyengar-Yoga Compared to Exercise as a Therapeutic Intervention during (Neo)adjuvant Therapy in Women with Stage I-III Breast Cancer: Health-Related Quality of Life, Mindfulness, Spirituality, Life Satisfaction, and Cancer-Related Fatigue.

    PubMed

    Lötzke, Désirée; Wiedemann, Florian; Rodrigues Recchia, Daniela; Ostermann, Thomas; Sattler, Daniel; Ettl, Johannes; Kiechle, Marion; Büssing, Arndt

    2016-01-01

    This study aims to test the effects of yoga on health-related quality of life, life satisfaction, cancer-related fatigue, mindfulness, and spirituality compared to conventional therapeutic exercises during (neo)adjuvant cytotoxic and endocrine therapy in women with breast cancer. In a randomized controlled trial 92 women with breast cancer undergoing oncological treatment were randomly enrolled for a yoga intervention (YI) (n = 45) or for a physical exercise intervention (PEI) (n = 47). Measurements were obtained before (t0) and after the intervention (t1) as well as 3 months after finishing the intervention (t2) using standardized questionnaires. Life satisfaction and fatigue improved under PEI (p < 0.05) but not under YI (t0 to t2). Regarding quality of life (EORTC QLQ-C30) a direct effect (t0 to t1; p < 0.001) of YI was found on role and emotional functioning, while under PEI only emotional functioning improved. Significant improvements (p < 0.001) were also observed at both t1 and t2 for symptom scales in both groups: dyspnea, appetite loss, constipation, and diarrhea. There were no significant differences between therapies for any of the analyzed variables, either at t1 or at t2. During chemotherapy, yoga was not seen as more helpful than conventional therapeutic exercises. This does not argue against its use in the recovery phase.

  19. Iyengar-Yoga Compared to Exercise as a Therapeutic Intervention during (Neo)adjuvant Therapy in Women with Stage I–III Breast Cancer: Health-Related Quality of Life, Mindfulness, Spirituality, Life Satisfaction, and Cancer-Related Fatigue

    PubMed Central

    Lötzke, Désirée; Wiedemann, Florian; Rodrigues Recchia, Daniela; Ostermann, Thomas; Sattler, Daniel; Ettl, Johannes; Kiechle, Marion; Büssing, Arndt

    2016-01-01

    This study aims to test the effects of yoga on health-related quality of life, life satisfaction, cancer-related fatigue, mindfulness, and spirituality compared to conventional therapeutic exercises during (neo)adjuvant cytotoxic and endocrine therapy in women with breast cancer. In a randomized controlled trial 92 women with breast cancer undergoing oncological treatment were randomly enrolled for a yoga intervention (YI) (n = 45) or for a physical exercise intervention (PEI) (n = 47). Measurements were obtained before (t0) and after the intervention (t1) as well as 3 months after finishing the intervention (t2) using standardized questionnaires. Life satisfaction and fatigue improved under PEI (p < 0.05) but not under YI (t0 to t2). Regarding quality of life (EORTC QLQ-C30) a direct effect (t0 to t1; p < 0.001) of YI was found on role and emotional functioning, while under PEI only emotional functioning improved. Significant improvements (p < 0.001) were also observed at both t1 and t2 for symptom scales in both groups: dyspnea, appetite loss, constipation, and diarrhea. There were no significant differences between therapies for any of the analyzed variables, either at t1 or at t2. During chemotherapy, yoga was not seen as more helpful than conventional therapeutic exercises. This does not argue against its use in the recovery phase. PMID:27019663

  20. Design of a factorial experiment with randomization restrictions to assess medical device performance on vascular tissue.

    PubMed

    Diestelkamp, Wiebke S; Krane, Carissa M; Pinnell, Margaret F

    2011-05-20

    Energy-based surgical scalpels are designed to efficiently transect and seal blood vessels using thermal energy to promote protein denaturation and coagulation. Assessment and design improvement of ultrasonic scalpel performance relies on both in vivo and ex vivo testing. The objective of this work was to design and implement a robust, experimental test matrix with randomization restrictions and predictive statistical power, which allowed for identification of those experimental variables that may affect the quality of the seal obtained ex vivo. The design of the experiment included three factors: temperature (two levels); the type of solution used to perfuse the artery during transection (three types); and artery type (two types) resulting in a total of twelve possible treatment combinations. Burst pressures of porcine carotid and renal arteries sealed ex vivo were assigned as the response variable. The experimental test matrix was designed and carried out as a split-plot experiment in order to assess the contributions of several variables and their interactions while accounting for randomization restrictions present in the experimental setup. The statistical software package SAS was utilized and PROC MIXED was used to account for the randomization restrictions in the split-plot design. The combination of temperature, solution, and vessel type had a statistically significant impact on seal quality. The design and implementation of a split-plot experimental test-matrix provided a mechanism for addressing the existing technical randomization restrictions of ex vivo ultrasonic scalpel performance testing, while preserving the ability to examine the potential effects of independent factors or variables. This method for generating the experimental design and the statistical analyses of the resulting data are adaptable to a wide variety of experimental problems involving large-scale tissue-based studies of medical or experimental device efficacy and performance.
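
    The authors used SAS PROC MIXED; a rough Python analogue (an assumption, not the study's code) is a linear mixed model with a random intercept for each whole-plot run, which captures the randomization restriction while the fixed factors and their interactions are tested as usual. The data, factor levels and effect sizes below are hypothetical.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(9)

    # Hypothetical split-plot data: temperature is applied to whole plots (runs),
    # solution and vessel type vary within each run; response is burst pressure.
    rows = []
    for run in range(12):
        temp = ["low", "high"][run % 2]
        run_effect = rng.normal(0, 30)                     # whole-plot (run-to-run) error
        for solution in ["saline", "heparin", "albumin"]:
            for vessel in ["carotid", "renal"]:
                burst = (400 + (50 if temp == "high" else 0)
                         + {"saline": 0, "heparin": -20, "albumin": 10}[solution]
                         + (30 if vessel == "carotid" else 0)
                         + run_effect + rng.normal(0, 25))
                rows.append({"run": run, "temp": temp, "solution": solution,
                             "vessel": vessel, "burst": burst})
    df = pd.DataFrame(rows)

    # Random intercept per whole-plot run accounts for the randomization restriction.
    model = smf.mixedlm("burst ~ temp * solution * vessel", df, groups=df["run"]).fit()
    print(model.summary())
    ```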

  1. Applications of Geostatistics in Plant Nematology

    PubMed Central

    Wallace, M. K.; Hawkins, D. M.

    1994-01-01

    The application of geostatistics to plant nematology was made by evaluating soil and nematode data acquired from 200 soil samples collected from the Ap horizon of a reed canary-grass field in northern Minnesota. Geostatistical concepts relevant to nematology include semi-variogram modelling, kriging, and change of support calculations. Soil and nematode data generally followed a spherical semi-variogram model, with little random variability associated with soil data and large inherent variability for nematode data. Block kriging of soil and nematode data provided useful contour maps of the data. Change of support calculations indicated that most of the random variation in nematode data was due to short-range spatial variability in the nematode population densities. PMID:19279938

  2. Applications of geostatistics in plant nematology.

    PubMed

    Wallace, M K; Hawkins, D M

    1994-12-01

    The application of geostatistics to plant nematology was made by evaluating soil and nematode data acquired from 200 soil samples collected from the A(p) horizon of a reed canary-grass field in northern Minnesota. Geostatistical concepts relevant to nematology include semi-variogram modelling, kriging, and change of support calculations. Soil and nematode data generally followed a spherical semi-variogram model, with little random variability associated with soil data and large inherent variability for nematode data. Block kriging of soil and nematode data provided useful contour maps of the data. Change of support calculations indicated that most of the random variation in nematode data was due to short-range spatial variability in the nematode population densities.
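
    The first step behind the semi-variogram modelling and kriging mentioned in these two records is an empirical semivariogram, which only needs pairwise distances and squared differences. The sketch below uses synthetic coordinates and values, not the Minnesota data.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    # Synthetic field sampled at random locations: a smooth trend plus noise,
    # standing in for nematode counts or a soil property.
    n = 200
    coords = rng.uniform(0, 100, size=(n, 2))
    values = np.sin(coords[:, 0] / 15.0) + 0.3 * rng.normal(size=n)

    # Empirical semivariogram: gamma(h) = mean of 0.5 * (z_i - z_j)^2 over pairs
    # whose separation distance falls in the bin around lag h.
    i, j = np.triu_indices(n, k=1)
    dists = np.linalg.norm(coords[i] - coords[j], axis=1)
    sqdiff = 0.5 * (values[i] - values[j]) ** 2

    bins = np.arange(0, 60, 5)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (dists >= lo) & (dists < hi)
        if mask.any():
            print(f"lag {lo:2d}-{hi:2d}: gamma = {sqdiff[mask].mean():.3f}  (n pairs = {mask.sum()})")
    ```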

  3. Mathematical and physical meaning of the Bell inequalities

    NASA Astrophysics Data System (ADS)

    Santos, Emilio

    2016-09-01

    It is shown that the Bell inequalities are closely related to the triangle inequalities involving distance functions amongst pairs of random variables with values {0, 1}. A hidden variables model may be defined as a mapping between a set of quantum projection operators and a set of random variables. The model is noncontextual if there is a joint probability distribution. The Bell inequalities are necessary conditions for its existence. The inequalities are most relevant when measurements are performed at space-like separation, thus showing a conflict between quantum mechanics and local realism (Bell's theorem). The relations of the Bell inequalities with contextuality, the Kochen-Specker theorem, and quantum entanglement are briefly discussed.
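
    To make the link concrete, the distance d(X, Y) = P(X ≠ Y) between {0, 1}-valued random variables satisfies the triangle inequality whenever the variables have a joint distribution, which is the structural fact the abstract refers to. The Monte Carlo sketch below checks this numerically for three binary variables generated from a common hidden variable; the construction and thresholds are arbitrary illustrations, not a derivation of a Bell inequality.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000

      # Three correlated binary random variables driven by a common hidden variable
      hidden = rng.random(n)
      a = (hidden + rng.normal(0, 0.2, n) > 0.5).astype(int)
      b = (hidden + rng.normal(0, 0.2, n) > 0.4).astype(int)
      c = (hidden + rng.normal(0, 0.2, n) > 0.6).astype(int)

      d = lambda x, y: np.mean(x != y)   # d(X, Y) = P(X != Y) is a metric on {0,1} variables

      # Triangle inequality d(a,c) <= d(a,b) + d(b,c): holds when a joint distribution exists
      print(d(a, c), "<=", d(a, b) + d(b, c))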

  4. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    ERIC Educational Resources Information Center

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to account for the interdependencies of the multivariate…

  5. Pattern Prediction of Academic Success.

    ERIC Educational Resources Information Center

    Lunneborg, Clifford E.; Lunneborg, Patricia W.

    A technique of pattern analysis which emphasizes the development of more effective ways of scoring a given set of variables was formulated. To the original variables were successively added two, three, and four variable patterns and the increase in predictive efficiency assessed. Randomly selected high school seniors who had participated in the…

  6. Measurement variability error for estimates of volume change

    Treesearch

    James A. Westfall; Paul L. Patterson

    2007-01-01

    Using quality assurance data, measurement variability distributions were developed for attributes that affect tree volume prediction. Random deviations from the measurement variability distributions were applied to 19381 remeasured sample trees in Maine. The additional error due to measurement variation and measurement bias was estimated via a simulation study for...
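
    The record is truncated, but the simulation idea it describes, drawing random deviations from measurement-variability distributions and propagating them through volume predictions, can be sketched as follows. The volume equation, error standard deviations, and simulated stand below are all invented for illustration and are not the distributions developed from the quality assurance data.

      import numpy as np

      rng = np.random.default_rng(2)

      def stem_volume(dbh_cm, height_m):
          # Hypothetical combined-variable volume equation, for illustration only
          return 0.00005 * dbh_cm**2 * height_m

      dbh = rng.uniform(15, 45, 1000)        # simulated stand: diameters, cm
      height = rng.uniform(10, 25, 1000)     # heights, m
      true_vol = stem_volume(dbh, height).sum()

      # Apply random measurement deviations (error distributions here are assumptions)
      sims = []
      for _ in range(500):
          dbh_obs = dbh + rng.normal(0, 0.5, dbh.size)       # +/- 0.5 cm dbh error
          ht_obs = height + rng.normal(0, 1.0, height.size)  # +/- 1.0 m height error
          sims.append(stem_volume(dbh_obs, ht_obs).sum())

      print("relative volume error due to measurement variation:", np.std(sims) / true_vol)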

  7. Effectiveness of sensor-augmented pump therapy in children and adolescents with type 1 diabetes in the STAR 3 study.

    PubMed

    Slover, Robert H; Welsh, John B; Criego, Amy; Weinzimer, Stuart A; Willi, Steven M; Wood, Michael A; Tamborlane, William V

    2012-02-01

    Maintenance of appropriate A1C values and minimization of hyperglycemic excursions are difficult for many pediatric patients with type 1 diabetes. Continuous glucose monitoring (CGM) sensor-augmented pump (SAP) therapy is an alternative to multiple daily injection (MDI) therapy in this population. Sensor-augmented pump therapy for A1C reduction (STAR 3) was a 1-yr trial that included 82 children (aged 7-12) and 74 adolescents (aged 13-18) with A1C values ranging from 7.4 to 9.5% who were randomized to either SAP or MDI therapy. Quarterly A1C values were obtained from all subjects. CGM studies were carried out at baseline, 6 months, and 12 months to quantify glycemic excursions [calculated as area under the glucose concentration-time curve (AUC)] and variability. In the SAP group, sensor compliance was recorded. Baseline A1C values were similar in subjects randomized to the SAP (8.26 ± 0.55%) and MDI groups (8.30 ± 0.53%). All subsequent A1C values showed significant (p < 0.05) treatment group differences favoring SAP therapy. Compared with the MDI group, subjects in the SAP group were more likely to meet age-specific A1C targets and had lower AUC values for hyperglycemia with no increased risk of hypoglycemia. Glucose variability improved in the SAP group compared to the MDI group. Children wore CGM sensors more often and were more likely to reach age-specific A1C targets than adolescents. SAP therapy allows both children and adolescents with marginally or inadequately controlled type 1 diabetes to reduce A1C values, hyperglycemic excursions, and glycemic variability in a rapid, sustainable, and safe manner. © 2011 John Wiley & Sons A/S.

  8. Impact of perioperative blood pressure variability on health resource utilization after cardiac surgery: an analysis of the ECLIPSE trials.

    PubMed

    Aronson, Solomon; Levy, Jerrold H; Lumb, Philip D; Fontes, Manuel; Wang, Yamei; Crothers, Tracy A; Sulham, Katherine A; Navetta, Marco S

    2014-06-01

    To examine the impact of blood pressure control on hospital health resource utilization using data from the ECLIPSE trials. Post-hoc analysis of data from 3 prospective, open-label, randomized clinical trials (ECLIPSE trials). Sixty-one medical centers in the United States. Patients 18 years or older undergoing cardiac surgery. Clevidipine was compared with nitroglycerin, sodium nitroprusside, and nicardipine. The ECLIPSE trials included 3 individual randomized open-label studies comparing clevidipine to nitroglycerin, sodium nitroprusside, and nicardipine. Blood pressure control was assessed as the integral of the cumulative area under the curve (AUC) outside specified systolic blood pressure ranges, such that lower AUC represents less variability. This analysis examined surgery duration, time to extubation, as well as intensive care unit (ICU) and hospital length of stay (LOS) in patients with AUC≤10 mmHg×min/h compared to patients with AUC>10 mmHg×min/h. One thousand four hundred ten patients were included for analysis; 736 patients (52%) had an AUC≤10 mmHg×min/h, and 674 (48%) had an AUC>10 mmHg×min/h. The duration of surgery and ICU LOS were similar between groups. Time to extubation and postoperative LOS were both significantly shorter (p = 0.05 and p<0.0001, respectively) in patients with AUC≤10. Multivariate analysis demonstrates AUC≤10 was significantly and independently associated with decreased time to extubation (hazard ratio 1.132, p = 0.0261) and postoperative LOS (hazard ratio 1.221, p = 0.0006). Based on data derived from the ECLIPSE studies, increased perioperative BP variability is associated with delayed time to extubation and increased postoperative LOS. Copyright © 2014 Elsevier Inc. All rights reserved.
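
    A minimal sketch of the blood-pressure-control metric described above: the cumulative area of systolic excursions outside a specified range, expressed in mmHg×min and normalized per hour of monitoring. The range limits and the synthetic pressure trace are assumptions for illustration; the trials used protocol-specified ranges not reproduced here.

      import numpy as np

      def auc_outside_range(t_minutes, sbp, lo=75.0, hi=145.0):
          """Cumulative area (mmHg x min) of systolic excursions outside [lo, hi],
          normalized per hour of monitoring. Range limits are illustrative."""
          excess = np.maximum(sbp - hi, 0.0) + np.maximum(lo - sbp, 0.0)
          area = np.trapz(excess, t_minutes)            # mmHg x min
          hours = (t_minutes[-1] - t_minutes[0]) / 60.0
          return area / hours

      t = np.arange(0, 240, 1.0)                        # 4 hours, 1-min samples
      sbp = 120 + 20 * np.sin(t / 15.0)                 # synthetic systolic trace
      print(auc_outside_range(t, sbp), "mmHg x min per hour")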

  9. Variable order fractional Fokker-Planck equations derived from Continuous Time Random Walks

    NASA Astrophysics Data System (ADS)

    Straka, Peter

    2018-08-01

    Continuous Time Random Walk models (CTRW) of anomalous diffusion are studied, where the anomalous exponent β(x) ∈ (0, 1) varies in space. This type of situation occurs e.g. in biophysics, where the density of the intracellular matrix varies throughout a cell. Scaling limits of CTRWs are known to have probability distributions which solve fractional Fokker-Planck type equations (FFPE). This correspondence between stochastic processes and FFPE solutions has many useful extensions e.g. to nonlinear particle interactions and reactions, but has not yet been sufficiently developed for FFPEs of the "variable order" type with non-constant β(x). In this article, variable order FFPEs (VOFFPE) are derived from scaling limits of CTRWs. The key mathematical tool is the 1-1 correspondence of a CTRW scaling limit to a bivariate Langevin process, which tracks the cumulative sum of jumps in one component and the cumulative sum of waiting times in the other. The spatially varying anomalous exponent is modelled by spatially varying β(x)-stable Lévy noise in the waiting time component. The VOFFPE displays a spatially heterogeneous temporal scaling behaviour, with generalized diffusivity and drift coefficients whose units are length^2/time^β(x) and length/time^β(x), respectively. A global change of the time scale results in a spatially varying change in diffusivity and drift. A consequence of the mathematical derivation of a VOFFPE from CTRW limits in this article is that a solution of a VOFFPE can be approximated via Monte Carlo simulations. Based on such simulations, we are able to confirm that the VOFFPE is consistent under a change of the global time scale.
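
    The abstract notes that VOFFPE solutions can be approximated by Monte Carlo simulation of the underlying CTRW. The sketch below is one such approximation under simplifying assumptions: Pareto-tailed waiting times with a position-dependent tail exponent β(x) (which lie in the domain of attraction of the one-sided β-stable laws), symmetric nearest-neighbour jumps, and an arbitrary β(x) profile. It is not the paper's construction, only an illustration of the simulation idea.

      import numpy as np

      rng = np.random.default_rng(3)

      def beta(x):
          # Assumed spatial profile of the anomalous exponent, kept inside (0, 1)
          return 0.4 + 0.4 / (1.0 + np.exp(-x))

      def ctrw_positions(n_paths=10_000, t_max=100.0, dx=0.1):
          """Monte Carlo CTRW: heavy-tailed waits with local exponent beta(x),
          symmetric jumps of size dx. Returns particle positions at time t_max."""
          x = np.zeros(n_paths)
          t = np.zeros(n_paths)
          active = np.ones(n_paths, dtype=bool)
          while active.any():
              idx = np.flatnonzero(active)
              b = beta(x[idx])
              # Pareto-tailed waits, P(tau > s) ~ s**(-b): beta-stable domain of attraction
              tau = rng.random(idx.size) ** (-1.0 / b) - 1.0
              t[idx] += tau
              done = t[idx] >= t_max          # wait overshoots the horizon: freeze particle
              active[idx[done]] = False
              jumpers = idx[~done]
              x[jumpers] += dx * rng.choice([-1.0, 1.0], size=jumpers.size)
          return x

      positions = ctrw_positions()
      print("sample mean/std of positions:", positions.mean(), positions.std())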

  10. Predictors of Poor School Readiness in Children Without Developmental Delay at Age 2

    PubMed Central

    Dudovitz, Rebecca N.; Coker, Tumaini R.; Barnert, Elizabeth S.; Biely, Christopher; Li, Ning; Szilagyi, Peter G.; Larson, Kandyce; Halfon, Neal; Zimmerman, Frederick J.; Chung, Paul J.

    2016-01-01

    BACKGROUND AND OBJECTIVES: Current recommendations emphasize developmental screening and surveillance to identify developmental delays (DDs) for referral to early intervention (EI) services. Many young children without DDs, however, are at high risk for poor developmental and behavioral outcomes by school entry but are ineligible for EI. We developed models for 2-year-olds without DD that predict, at kindergarten entry, poor academic performance and high problem behaviors. METHODS: Data from the Early Childhood Longitudinal Study, Birth Cohort (ECLS-B), were used for this study. The analytic sample excluded children likely eligible for EI because of DDs or very low birth weight. Dependent variables included low academic scores and high problem behaviors at the kindergarten wave. Regression models were developed by using candidate predictors feasibly obtainable during typical 2-year well-child visits. Models were cross-validated internally on randomly selected subsamples. RESULTS: Approximately 24% of all 2-year-old children were ineligible for EI at 2 years of age but still had poor academic or behavioral outcomes at school entry. Prediction models each contain 9 variables, almost entirely parental, social, or economic. Four variables were associated with both academic and behavioral risk: parental education below bachelor’s degree, little/no shared reading at home, food insecurity, and fair/poor parental health. Areas under the receiver-operating characteristic curve were 0.76 for academic risk and 0.71 for behavioral risk. Adding the mental scale score from the Bayley Short Form–Research Edition did not improve areas under the receiver-operating characteristic curve for either model. CONCLUSIONS: Among children ineligible for EI services, a small set of clinically available variables at age 2 years predicted academic and behavioral outcomes at school entry. PMID:27432845
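
    A minimal sketch of the modelling workflow described above: a regression model built from a handful of binary predictors available at a 2-year visit, internally validated on random subsamples, and judged by the area under the receiver-operating characteristic curve. The predictors, effect sizes, and data are synthetic stand-ins, not the ECLS-B variables.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(4)
      n = 5000

      # Synthetic stand-ins for parental/social/economic predictors (names are assumptions)
      X = np.column_stack([
          rng.integers(0, 2, n),   # parental education below bachelor's degree
          rng.integers(0, 2, n),   # little/no shared reading at home
          rng.integers(0, 2, n),   # food insecurity
          rng.integers(0, 2, n),   # fair/poor parental health
      ])
      logit = -2.0 + X @ np.array([0.6, 0.5, 0.7, 0.4])
      y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # poor school-readiness indicator

      model = LogisticRegression()
      # Internal validation on random subsamples, scored by area under the ROC curve
      auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
      print("cross-validated AUC:", auc.mean().round(3))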

  11. Derivation and Internal Validation of a Clinical Prediction Tool for 30-Day Mortality in Lower Gastrointestinal Bleeding.

    PubMed

    Sengupta, Neil; Tapper, Elliot B

    2017-05-01

    There are limited data to predict which patients with lower gastrointestinal bleeding are at risk for adverse outcomes. We aimed to develop a clinical tool based on admission variables to predict 30-day mortality in lower gastrointestinal bleeding. We used a validated machine learning algorithm to identify adult patients hospitalized with lower gastrointestinal bleeding at an academic medical center between 2008 and 2015. The cohort was split randomly into derivation and validation cohorts. In the derivation cohort, we used multiple logistic regression on all candidate admission variables to create a prediction model for 30-day mortality, using area under the receiver operating characteristic curve and misclassification rate to estimate prediction accuracy. Regression coefficients were used to derive an integer score, and mortality risk associated with point totals was assessed. In the derivation cohort (n = 4044), 8 variables were most associated with 30-day mortality: age, dementia, metastatic cancer, chronic kidney disease, chronic pulmonary disease, anticoagulant use, admission hematocrit, and albumin. The model yielded a misclassification rate of 0.06 and area under the curve of 0.81. The integer score ranged from -10 to 26 in the derivation cohort, with a misclassification rate of 0.11 and area under the curve of 0.74. In the validation cohort (n = 2060), the score had an area under the curve of 0.72 with a misclassification rate of 0.12. After dividing the score into 4 quartiles of risk, 30-day mortality in the derivation and validation sets was 3.6% and 4.4% in quartile 1, 4.9% and 7.3% in quartile 2, 9.9% and 9.1% in quartile 3, and 24% and 26% in quartile 4, respectively. A clinical tool can be used to predict 30-day mortality in patients hospitalized with lower gastrointestinal bleeding. Copyright © 2017 Elsevier Inc. All rights reserved.
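
    A minimal sketch of the scoring step described above: converting fitted regression coefficients into an integer point score and splitting a cohort into quartiles of risk. The coefficients, variable names, rounding recipe, and synthetic cohort are assumptions for illustration and do not reproduce the published tool.

      import numpy as np
      import pandas as pd

      # Hypothetical log-odds coefficients for binary admission variables (illustration only)
      coefs = {"age_over_80": 0.35, "dementia": 0.60, "metastatic_cancer": 1.10,
               "chronic_kidney_disease": 0.45, "chronic_pulmonary_disease": 0.40,
               "anticoagulant_use": 0.30, "low_hematocrit": 0.50, "low_albumin": 0.80}

      # One common recipe: divide each coefficient by the smallest one and round to an integer
      base = min(abs(v) for v in coefs.values())
      points = {k: int(round(v / base)) for k, v in coefs.items()}
      print(points)

      # Score a synthetic cohort and split it into quartiles of risk
      rng = np.random.default_rng(5)
      cohort = pd.DataFrame(rng.integers(0, 2, size=(2000, len(coefs))), columns=list(coefs))
      cohort["score"] = cohort.mul(pd.Series(points)).sum(axis=1)
      cohort["risk_quartile"] = pd.qcut(cohort["score"], 4, labels=False, duplicates="drop")
      print(cohort.groupby("risk_quartile")["score"].agg(["min", "max", "count"]))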

  12. Bias and Bias Correction in Multi-Site Instrumental Variables Analysis of Heterogeneous Mediator Effects

    ERIC Educational Resources Information Center

    Reardon, Sean F.; Unlu, Faith; Zhu, Pei; Bloom, Howard

    2013-01-01

    We explore the use of instrumental variables (IV) analysis with a multi-site randomized trial to estimate the effect of a mediating variable on an outcome in cases where it can be assumed that the observed mediator is the only mechanism linking treatment assignment to outcomes, an assumption known in the instrumental variables literature as the…
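
    The record is truncated, but the core estimation device, using randomized treatment assignment as an instrument for a mediator, can be sketched with a plain two-stage least squares fit on simulated data. This ignores the multi-site and heterogeneity questions the paper addresses, and the second-stage standard errors from a manual two-stage fit would need correction; all effect sizes and names are invented.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      n = 4000

      z = rng.integers(0, 2, n)                       # randomized assignment (instrument)
      u = rng.normal(size=n)                          # unobserved confounder
      m = 0.8 * z + 0.5 * u + rng.normal(size=n)      # mediator
      y = 1.5 * m + 0.7 * u + rng.normal(size=n)      # outcome; true mediator effect = 1.5

      # Stage 1: predict the mediator from the instrument
      m_hat = sm.OLS(m, sm.add_constant(z)).fit().predict()
      # Stage 2: regress the outcome on the predicted mediator
      stage2 = sm.OLS(y, sm.add_constant(m_hat)).fit()
      print("2SLS estimate of mediator effect:", round(stage2.params[1], 3))
      # Naive OLS of y on m would be biased upward by the confounder u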

  13. Testing homogeneity in Weibull-regression models.

    PubMed

    Bolfarine, Heleno; Valença, Dione M

    2005-10-01

    In survival studies with families or geographical units it may be of interest to test whether such groups are homogeneous for given explanatory variables. In this paper we consider score-type tests for group homogeneity based on a mixing model in which the group effect is modelled as a random variable. As opposed to hazard-based frailty models, this model yields survival times that, conditioned on the random effect, have an accelerated failure time representation. The test statistic requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model, and in the uncensored situation a closed form is obtained for the test statistic. A simulation study is used to compare the power of the tests. The proposed tests are applied to real data sets with censored data.

  14. Condensation with two constraints and disorder

    NASA Astrophysics Data System (ADS)

    Barré, J.; Mangeolle, L.

    2018-04-01

    We consider a set of positive random variables obeying two additive constraints, a linear and a quadratic one; these constraints mimic the conservation laws of a dynamical system. In the simplest setting, without disorder, it is known that such a system may undergo a ‘condensation’ transition, whereby one random variable becomes much larger than the others; this transition has been related to the spontaneous appearance of nonlinear localized excitations, called breathers, in certain nonlinear chains. Motivated by the study of breathers in a disordered discrete nonlinear Schrödinger equation, we study different instances of this problem in the presence of quenched disorder. Unless the disorder is too strong, the phase diagram looks like the one without disorder, with a transition separating a fluid phase, where all variables have the same order of magnitude, and a condensed phase, where one variable is much larger than the others. We then show that the condensed phase exhibits various degrees of ‘intermediate symmetry breaking’: the site hosting the condensate is chosen neither uniformly at random, nor is it fixed by the disorder realization. Throughout the article, our heuristic arguments are complemented with direct Monte Carlo simulations.

  15. Free variable selection QSPR study to predict 19F chemical shifts of some fluorinated organic compounds using Random Forest and RBF-PLS methods

    NASA Astrophysics Data System (ADS)

    Goudarzi, Nasser

    2016-04-01

    In this work, two new and powerful chemometrics methods are applied to the modeling and prediction of the 19F chemical shift values of some fluorinated organic compounds. The radial basis function-partial least squares (RBF-PLS) and random forest (RF) methods are employed to construct models that predict the 19F chemical shifts. No separate variable selection method was used, since the RF method can itself serve as both a variable selection and a modeling technique. The effects of important parameters governing the predictive power of RF, such as the number of trees (nt) and the number of randomly selected variables used to split each node (m), were investigated. The root-mean-square errors of prediction (RMSEP) for the training set and the prediction set for the RBF-PLS and RF models were 44.70, 23.86, 29.77, and 23.69, respectively. Also, the correlation coefficients of the prediction set for the RBF-PLS and RF models were 0.8684 and 0.9313, respectively. The results reveal that the RF model can be used as a powerful chemometrics tool for quantitative structure-property relationship (QSPR) studies.
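
    A minimal sketch of the random-forest part of the workflow above, with the two tuning parameters the abstract names, the number of trees (nt) and the number of randomly selected variables tried at each split (m), mapped to scikit-learn's n_estimators and max_features, and performance summarized by RMSEP on a held-out prediction set. The descriptors and shift values are synthetic.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(7)
      X = rng.normal(size=(300, 50))                               # synthetic descriptors
      y = X[:, :5] @ rng.normal(size=5) + rng.normal(0, 0.5, 300)  # synthetic 19F shifts

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

      # nt -> n_estimators, m -> max_features (variables tried at each split)
      rf = RandomForestRegressor(n_estimators=500, max_features=10, random_state=0)
      rf.fit(X_tr, y_tr)

      rmsep = mean_squared_error(y_te, rf.predict(X_te)) ** 0.5
      print("RMSEP on the prediction set:", round(rmsep, 3))
      # Feature importances double as a built-in variable-selection signal
      print("top descriptors:", np.argsort(rf.feature_importances_)[::-1][:5])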

  16. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    PubMed Central

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
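
    A minimal simulation sketch of the special case described above: in a randomized trial, the treatment coefficient of a main-terms Poisson working model is compared with the unadjusted marginal log rate ratio. The data below are generated from a Poisson model whose mean is not log-linear in the baseline covariate, so the working model is misspecified; parameter values are made up, and this only illustrates the unbiasedness claim, not the paper's targeted maximum likelihood machinery.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(8)
      n = 20_000

      treat = rng.integers(0, 2, n)            # randomized treatment indicator
      x = rng.normal(size=n)                   # baseline covariate
      # True mean is NOT log-linear in x, so the main-terms working model is misspecified
      mu = np.exp(0.2 + 0.5 * treat) * (1.0 + x**2)
      y = rng.poisson(mu)

      # Main-terms Poisson working model: intercept + treatment + baseline covariate
      X = sm.add_constant(np.column_stack([treat, x]))
      fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

      marginal_log_rr = np.log(y[treat == 1].mean() / y[treat == 0].mean())
      print("Poisson treatment coefficient:", round(fit.params[1], 3))
      print("unadjusted marginal log rate ratio:", round(marginal_log_rr, 3))
      # Both should be close to 0.5, the log of the true marginal rate ratio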

  17. Dissociable effects of practice variability on learning motor and timing skills.

    PubMed

    Caramiaux, Baptiste; Bevilacqua, Frédéric; Wanderley, Marcelo M; Palmer, Caroline

    2018-01-01

    Motor skill acquisition inherently depends on the way one practices the motor task. The amount of motor task variability during practice has been shown to foster transfer of the learned skill to other similar motor tasks. In addition, variability in a learning schedule, in which a task and its variations are interweaved during practice, has been shown to help the transfer of learning in motor skill acquisition. However, there is little evidence on how motor task variations and variability schedules during practice act on the acquisition of complex motor skills such as music performance, in which a performer learns both the right movements (motor skill) and the right time to perform them (timing skill). This study investigated the impact of rate (tempo) variability and the schedule of tempo change during practice on timing and motor skill acquisition. Complete novices, with no musical training, practiced a simple musical sequence on a piano keyboard at different rates. Each novice was assigned to one of four learning conditions designed to manipulate the amount of tempo variability across trials (large or small tempo set) and the schedule of tempo change (randomized or non-randomized order) during practice. At test, the novices performed the same musical sequence at a familiar tempo and at novel tempi (testing tempo transfer), as well as two novel (but related) sequences at a familiar tempo (testing spatial transfer). We found that practice conditions had little effect on learning and transfer performance of timing skill. Interestingly, practice conditions influenced motor skill learning (reduction of movement variability): lower temporal variability during practice facilitated transfer to new tempi and new sequences; non-randomized learning schedule improved transfer to new tempi and new sequences. Tempo (rate) and the sequence difficulty (spatial manipulation) affected performance variability in both timing and movement. These findings suggest that there is a dissociable effect of practice variability on learning complex skills that involve both motor and timing constraints.

  18. Soil variability in engineering applications

    NASA Astrophysics Data System (ADS)

    Vessia, Giovanna

    2014-05-01

    Natural geomaterials, such as soils and rocks, show spatial variability and heterogeneity of physical and mechanical properties, which can be measured by field and laboratory testing. Heterogeneity concerns different values of litho-technical parameters pertaining to similar lithological units placed close to each other. Variability, on the contrary, is inherent to the formation and evolution processes experienced by each geological unit (homogeneous geomaterials on average) and is captured as a spatial structure of fluctuation of physical property values about their mean trend, e.g. the unit weight, the hydraulic permeability, the friction angle, and the cohesion, among others. These spatial variations must be handled by engineering models to accomplish reliable design of structures and infrastructures. Matheron (1962) introduced Geostatistics as the most comprehensive tool to manage the spatial correlation of parameter measurements used in a wide range of earth science applications. In the field of engineering geology, Vanmarcke (1977) made the first pioneering attempts to describe and manage the inherent variability of geomaterials, although Terzaghi (1943) had already highlighted that spatial fluctuations of the physical and mechanical parameters used in geotechnical design cannot be neglected. A few years later, Mandelbrot (1983) and Turcotte (1986) interpreted the internal arrangement of geomaterials according to Fractal Theory. In the same years, Vanmarcke (1983) proposed the Random Field Theory, providing mathematical tools to deal with the inherent variability of each geological unit or stratigraphic succession that can be regarded as a single material. In this approach, measurement fluctuations of physical parameters are interpreted through a spatial variability structure consisting of the correlation function and the scale of fluctuation. Fenton and Griffiths (1992) combined random field simulation with the finite element method to produce the Random Finite Element Method (RFEM). This method has been used to investigate the random behavior of soils in the context of a variety of classical geotechnical problems. Later studies collected the worldwide variability values of many technical parameters of soils (Phoon and Kulhawy 1999a) and their spatial correlation functions (Phoon and Kulhawy 1999b). In Italy, Cherubini et al. (2007) calculated the spatial variability structure of sandy and clayey soils from standard cone penetration test readings. The large extent of the worldwide measured spatial variability of soils and rocks heavily affects the reliability of geotechnical design, as do other uncertainties introduced by testing devices and engineering models. Several methods have been proposed to deal with these sources of uncertainty in engineering design models (e.g. the First Order Reliability Method, Second Order Reliability Method, Response Surface Method, High Dimensional Model Representation, etc.). Nowadays, efforts in this field focus on (1) measuring the spatial variability of different rocks and soils and (2) developing numerical models that take spatial variability into account as an additional physical variable. References: Cherubini C., Vessia G. and Pula W. 2007. Statistical soil characterization of Italian sites for reliability analyses. Proc. 2nd Int. Workshop on Characterization and Engineering Properties of Natural Soils, 3-4: 2681-2706. Griffiths D.V. and Fenton G.A. 1993. Seepage beneath water retaining structures founded on spatially random soil. Géotechnique, 43(6): 577-587. Mandelbrot B.B. 1983. The Fractal Geometry of Nature. San Francisco: W.H. Freeman. Matheron G. 1962. Traité de Géostatistique appliquée. Tome 1, Editions Technip, Paris, 334 p. Phoon K.K. and Kulhawy F.H. 1999a. Characterization of geotechnical variability. Can Geotech J, 36(4): 612-624. Phoon K.K. and Kulhawy F.H. 1999b. Evaluation of geotechnical property variability. Can Geotech J, 36(4): 625-639. Terzaghi K. 1943. Theoretical Soil Mechanics. New York: John Wiley and Sons. Turcotte D.L. 1986. Fractals and fragmentation. J Geophys Res, 91: 1921-1926. Vanmarcke E.H. 1977. Probabilistic modeling of soil profiles. J Geotech Eng Div, ASCE, 103: 1227-1246. Vanmarcke E.H. 1983. Random fields: analysis and synthesis. MIT Press, Cambridge.
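
    A minimal sketch of the random-field description referred to above: a one-dimensional stationary Gaussian fluctuation about a mean trend, generated from an exponential autocorrelation whose scale of fluctuation is an explicit parameter (one common convention following Vanmarcke). All numerical values, including the mean trend and the soil property being modelled, are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(9)

      depth = np.linspace(0, 20, 201)          # depth coordinate, m
      mean_trend = 18.0 + 0.1 * depth          # e.g. unit weight increasing with depth (kN/m^3)
      sigma = 0.8                              # standard deviation of the fluctuation
      theta = 2.0                              # scale of fluctuation, m

      # Exponential autocorrelation rho(h) = exp(-2|h|/theta), so theta is the scale of fluctuation
      h = np.abs(depth[:, None] - depth[None, :])
      cov = sigma**2 * np.exp(-2.0 * h / theta)
      cov += 1e-10 * np.eye(depth.size)        # small jitter for numerical stability

      # One realization of the random field: mean trend plus correlated Gaussian fluctuation
      field = mean_trend + np.linalg.cholesky(cov) @ rng.standard_normal(depth.size)
      print(field[:5])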

  19. Risk assessment in man and mouse.

    PubMed

    Balci, Fuat; Freestone, David; Gallistel, Charles R

    2009-02-17

    Human and mouse subjects tried to anticipate at which of 2 locations a reward would appear. On a randomly scheduled fraction of the trials, it appeared with a short latency at one location; on the complementary fraction, it appeared after a longer latency at the other location. Subjects of both species accurately assessed the exogenous uncertainty (the probability of a short versus a long trial) and the endogenous uncertainty (from the scalar variability in their estimates of an elapsed duration) to compute the optimal target latency for a switch from the short- to the long-latency location. The optimal latency was arrived at so rapidly that there was no reliably discernible improvement over trials. Under these nonverbal conditions, humans and mice accurately assess risks and behave nearly optimally. That this capacity is well-developed in the mouse opens up the possibility of a genetic approach to the neurobiological mechanisms underlying risk assessment.

  20. Risk assessment in man and mouse

    PubMed Central

    Balci, Fuat; Freestone, David; Gallistel, Charles R.

    2009-01-01

    Human and mouse subjects tried to anticipate at which of 2 locations a reward would appear. On a randomly scheduled fraction of the trials, it appeared with a short latency at one location; on the complementary fraction, it appeared after a longer latency at the other location. Subjects of both species accurately assessed the exogenous uncertainty (the probability of a short versus a long trial) and the endogenous uncertainty (from the scalar variability in their estimates of an elapsed duration) to compute the optimal target latency for a switch from the short- to the long-latency location. The optimal latency was arrived at so rapidly that there was no reliably discernible improvement over trials. Under these nonverbal conditions, humans and mice accurately assess risks and behave nearly optimally. That this capacity is well-developed in the mouse opens up the possibility of a genetic approach to the neurobiological mechanisms underlying risk assessment. PMID:19188592
